a_id | a_body | a_creation_date | a_last_activity_date | a_last_edit_date | a_tags | q_id | q_body | q_creation_date | q_last_activity_date | q_last_edit_date | q_tags | _arxiv_links | _n_arxiv_links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
47,658,623 | <p>You can switch to approximate nearest neighbors (ANN) algorithms, which usually take advantage of sophisticated hashing or proximity-graph techniques to index your data quickly and perform faster queries. One example is Spotify's <a href="https://github.com/spotify/annoy" rel="noreferrer">Annoy</a>. Annoy's README includes a plot which shows a precision-performance tradeoff comparison of various ANN algorithms published in recent years. The top-performing algorithm (at the time this comment was posted), <a href="https://arxiv.org/abs/1603.09320" rel="noreferrer">hnsw</a>, has a Python implementation under the <a href="https://github.com/searchivarius/nmslib" rel="noreferrer">Non-Metric Space Library (NMSLIB)</a>.</p> | 2017-12-05 16:46:42.363000+00:00 | 2019-11-27 06:03:04.403000+00:00 | 2019-11-27 06:03:04.403000+00:00 | null | 39,660,968 | <p>I have code which calculates the nearest voxel (which is unassigned) to a voxel (which is assigned). That is, I have an array of voxels; a few voxels already have a scalar value (1, 2, 3, 4, etc.) assigned, and a few voxels are empty (let's say a value of '0'). The code below finds the nearest assigned voxel to an unassigned voxel and assigns that voxel the same scalar. So, a voxel with a scalar '0' will be assigned a value (1 or 2 or 3, ...) based on the nearest voxel. The code below works, but it takes too much time.
Is there an alternative to this, or do you have any feedback on how to improve it further?</p>
<p>""" #self.voxels is a 3D numpy array""" </p>
<pre><code>def fill_empty_voxel1(self, argx, argy, argz):
    """argx, argy, argz are the voxel locations where the voxel value is zero.
    Requires numpy (as np) and scipy.spatial.cKDTree to be imported."""
    argx1, argy1, argz1 = np.where(self.voxels != 0)  # find the non-zero voxels
    a = np.column_stack((argx1, argy1, argz1))
    b = np.column_stack((argx, argy, argz))
    tree = cKDTree(a, leafsize=a.shape[0] + 1)
    distances, ndx = tree.query(b, k=1, distance_upper_bound=self.mean)  # self.mean is a mean radius search value
    argx2, argy2, argz2 = a[ndx][:, 0], a[ndx][:, 1], a[ndx][:, 2]
    self.voxels[argx, argy, argz] = self.voxels[argx2, argy2, argz2]  # update the voxel array
</code></pre>
<h1>Example</h1>
<p>""" Here is a small example with small dataset:"""</p>
<pre><code>import numpy as np
from scipy.spatial import cKDTree
import timeit
voxels = np.zeros((10,10,5), dtype=np.uint8)
voxels[1:2,:,:] = 5.
voxels[5:6,:,:] = 2.
voxels[:,3:4,:] = 1.
voxels[:,8:9,:] = 4.
argx, argy, argz = np.where(voxels==0)
tic=timeit.default_timer()
argx1, argy1, argz1 = np.where(voxels!=0) # non zero voxels
a = np.column_stack((argx1, argy1, argz1))
b = np.column_stack((argx, argy, argz))
tree = cKDTree(a, leafsize=a.shape[0]+1)
distances, ndx = tree.query(b, k=1, distance_upper_bound= 5.)
argx2, argy2, argz2 = a[ndx][:, 0], a[ndx][:, 1], a[ndx][:, 2]
voxels[argx,argy,argz] = voxels[argx2,argy2,argz2]
toc=timeit.default_timer()
timetaken = toc - tic #elapsed time in seconds
print('\nTime to fill empty voxels', timetaken)
</code></pre>
<h1>for visualization:</h1>
<pre><code>from mayavi import mlab
data = voxels.astype('float')
scalar_field = mlab.pipeline.scalar_field(data)
iso_surf = mlab.pipeline.iso_surface(scalar_field)
surf = mlab.pipeline.surface(scalar_field)
vol = mlab.pipeline.volume(scalar_field,vmin=0,vmax=data.max())
mlab.outline()
mlab.show()
</code></pre>
<p>Now, if I have the dimension of the voxels array as something like (500,500,500), then the time it takes to compute the nearest search is no longer efficient. How can I overcome this? Could parallel computation reduce the time (I have no idea whether I can parallelize the code, if you do, please let me know)?</p>
<h1>A potential fix:</h1>
<p>I could substantially improve the computation time by adding the n_jobs = -1 parameter in the cKDTree query.</p>
<pre><code>distances, ndx = tree.query(b, k=1, distance_upper_bound= 5., n_jobs=-1)
</code></pre>
<p>I was able to compute the distances in less than an hour for an array of (400,100,100) on a 13-core CPU. I tried with 1 processor and it took around 18 hours to complete the same array.
Thanks to @gsamaras for the answer!</p> | 2016-09-23 12:38:06.830000+00:00 | 2019-11-27 06:03:04.403000+00:00 | 2016-10-09 08:22:10.327000+00:00 | python|performance|parallel-processing|nearest-neighbor|kdtree | ['https://github.com/spotify/annoy', 'https://arxiv.org/abs/1603.09320', 'https://github.com/searchivarius/nmslib'] | 3 |
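<p>For reference, a minimal sketch of the Annoy workflow suggested in the answer above; the dimension, metric, and tree count are illustrative assumptions, not values from that answer:</p>
<pre><code>from annoy import AnnoyIndex
import numpy as np

dim = 3  # e.g. voxel coordinates
index = AnnoyIndex(dim, 'euclidean')
points = np.random.rand(1000, dim)
for i, p in enumerate(points):  # index the known (non-empty) points
    index.add_item(i, p)
index.build(10)  # 10 trees; more trees give higher precision but slower builds

query = np.random.rand(dim)
nearest_ids = index.get_nns_by_vector(query, 1)  # approximate nearest neighbour
</code></pre>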
55,490,671 | <p>I tend to see <strong>correlation versus covariance</strong> as an opposition between a quick <strong>dry mathematical relation overview</strong> and a more <strong>raw relation analysis</strong>. Imagine yourself joining a project in a field you know approximately nothing about:</p>
<ul>
<li>if a team member gives you a correlation coefficient for two key variables/indicators associated with the project, you will be able to extract all the information out of this coefficient immediately, without knowing the samples' respective scales</li>
<li>if they give you a covariance instead, you will probably want to have a look at the data to appreciate exactly what it implies</li>
</ul>
<p>Covariance is easily understood when the samples being compared live on a similar scale/have a similar nature, since the value you'll be considering will not try to compare two completely different things with an intuitively absurd compromise in nature/scale (remember that to compute the covariance you use the products of two things possibly very different with <code>(x-mean(x))(y-mean(y))</code>). Correlation being standardized, issues associated with varying scales and nature in the data are simply absent from your indicator, leading to an "easier interpretation" feeling. </p>
<p>One should therefore realize that <strong>while correlation can make it easier to understand a mathematical relationship, it obfuscates the actual nature of the data you're playing with.</strong> Looking at both can only help you appreciate what's going on with your samples, and that's probably why you'd want to consider both. If you aren't convinced, you can also read this <a href="https://stats.stackexchange.com/questions/53/pca-on-correlation-or-covariance">related stats.stackexchange question</a>.</p>
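<p>A quick numpy illustration of the scale (in)sensitivity discussed above (the sample data is made up):</p>
<pre><code>import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])

print(np.cov(x, y)[0, 1])        # covariance, scale-dependent
print(np.corrcoef(x, y)[0, 1])   # correlation, dimensionless, in [-1, 1]

x_cm = 100 * x                   # the same variable on another scale
print(np.cov(x_cm, y)[0, 1])     # covariance grows 100-fold
print(np.corrcoef(x_cm, y)[0, 1])  # correlation unchanged
</code></pre>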
<p>In case you wonder why you would want to keep close to the nature and scales of your data while trying to highlight relations between samples, a good example would be the efforts deployed in AI to extract useful features from images to feed models: you want to emphasize discriminatory descriptions of the data, without filtering out other potentially interesting information with a standardization. See for example <a href="https://arxiv.org/abs/1409.0083" rel="nofollow noreferrer">this paper</a> that uses covariance matrices to build a dictionary on images.</p> | 2019-04-03 08:38:18.473000+00:00 | 2019-04-03 08:44:38.110000+00:00 | 2019-04-03 08:44:38.110000+00:00 | null | 54,743,529 | <p>If I have calculated the coefficient of correlation, I already have an idea of the covariance. But I have seen many data scientists calculate the covariance after it. If I have the coefficient of correlation with me, I can say that the data is positively or negatively correlated, along with the strength, while covariance gives the same thing without the strength. Then what is the importance of covariance if I have the coefficient of correlation?</p>
<p>Please suggest, apologies if my question is of low importance.</p> | 2019-02-18 08:56:33.193000+00:00 | 2019-04-03 08:44:38.110000+00:00 | 2019-04-03 08:42:03.350000+00:00 | correlation | ['https://stats.stackexchange.com/questions/53/pca-on-correlation-or-covariance', 'https://arxiv.org/abs/1409.0083'] | 2 |
49,210,026 | <p>The reason is that FC layers have a ton of parameters, accounting for the majority of the network's parameters in some architectures. The authors of SqueezeNet removed the FCs, replacing them with a convolutional layer and global average pooling. </p>
<p>The conv layer has a number of filters equal to the number of classes, processing the output of the previous layer into (roughly) one map per class. The pooling averages the response of each of these maps. They end up with a flattened vector with dimension equal to the number of classes, which is then fed to the SoftMax layer. </p>
<p>With these modifications (not forgetting the Fire modules they proposed) they were able to significantly reduce memory footprint. </p>
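<p>A minimal PyTorch sketch of that conv + global-average-pooling head (the 512 input channels and 1000 classes are illustrative assumptions, not values from the discussion above):</p>
<pre><code>import torch
import torch.nn as nn

in_channels, num_classes = 512, 1000
head = nn.Sequential(
    nn.Conv2d(in_channels, num_classes, kernel_size=1),  # one map per class
    nn.AdaptiveAvgPool2d(1),                             # global average pooling
    nn.Flatten(),                                        # -> (batch, num_classes)
)
logits = head(torch.randn(2, in_channels, 13, 13))  # feed these to softmax / cross-entropy
print(logits.shape)  # torch.Size([2, 1000])
</code></pre>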
<p>I strongly recommend that you read the <a href="https://arxiv.org/abs/1602.07360" rel="nofollow noreferrer">SqueezeNet paper</a>.</p> | 2018-03-10 14:11:40.523000+00:00 | 2019-01-09 10:12:07.430000+00:00 | 2019-01-09 10:12:07.430000+00:00 | null | 49,209,158 | <p>I am working on caffe SqueezeNet prototxt <a href="https://github.com/DeepScale/SqueezeNet/blob/master/SqueezeNet_v1.1/train_val.prototxt" rel="nofollow noreferrer">link</a>.</p>
<p>I am just wondering where is the FC layer? (I only see type: data, conv, relu, pooling, concat, SoftmaxWithLoss and accuracy)</p> | 2018-03-10 12:39:07.773000+00:00 | 2019-01-09 10:12:07.430000+00:00 | null | neural-network|deep-learning|caffe|conv-neural-network|pycaffe | ['https://arxiv.org/abs/1602.07360'] | 1 |
71,903,778 | <p>I don't find the <code>Applicative</code> interface of applying lifted functions (namely, <code>(<*>)</code>) a good intuition. Functions are more complicated to conceptualize for various reasons.</p>
<p>I prefer thinking of <code>Applicative</code> as lifting an <em>n</em>-ary function</p>
<pre class="lang-hs prettyprint-override"><code>liftA0 :: Applicative f => (a) -> (f a)
liftA :: Functor f => (a -> b) -> (f a -> f b)
liftA2 :: Applicative f => (a -> b -> c) -> (f a -> f b -> f c)
liftA3 :: Applicative f => (a -> b -> c -> d) -> (f a -> f b -> f c -> f d)
</code></pre>
<p>where <code>liftA0 = pure</code> and <code>liftA</code> already exists as <code>fmap</code> defined in terms of <code>Applicative</code>.</p>
<p>The thing is that 0-ary and 1-ary liftings</p>
<pre><code>liftA0 @f @a :: Applicative f => a -> f a
liftA @f @a @b :: Applicative f => (a -> b) -> (f a -> f b)
</code></pre>
<p>can both take an <code>a -> b</code> function if we instantiate <code>liftA0 = pure</code> at a function type:</p>
<pre><code>liftA0 @f @(a->b) :: Applicative f => (a -> b) -> f (a->b)
liftA @f @a @b :: Applicative f => (a -> b) -> (f a -> f b)
</code></pre>
<p>So <code>pure @f @(a->b)</code> already has that type.</p>
<p>And <code>pure</code> has plenty of purposes, theoretical ones which turn out to be practical in Haskell: it is the unit when <code>Applicative</code> is viewed as a <code>Monoid</code> in the category of natural transformations (see <a href="https://arxiv.org/pdf/1406.4823.pdf" rel="nofollow noreferrer">Notions of Computation as Monoids</a>), with <a href="https://hackage.haskell.org/package/kan-extensions-5.2.3/docs/Data-Functor-Day.html" rel="nofollow noreferrer"><code>Day</code></a> as the tensor:</p>
<pre><code>type Mempty :: Type -> Type
type Mempty = Identity
type Mappend :: (Type -> Type) -> (Type -> Type) -> (Type -> Type)
type Mappend = Day
mempty :: Applicative f => Mempty ~> f
mempty (Identity a) = pure a
mappend :: Mappend f f ~> f
mappend (LiftA2 (·) as bs) = liftA2 (·) as bs
</code></pre>
<p>I just released a library that works with <code>Applicative</code> homomorphisms, that are polymorphic functions that respect the applicative structure. It defines a type class for such structures</p>
<pre class="lang-hs prettyprint-override"><code>type Idiom :: k -> (Type -> Type) -> (Type -> Type) -> Constraint
class (Applicative f, Applicative g) => Idiom tag f g where
idiom :: f ~> g
</code></pre>
<p>where <code>pure</code> is the <a href="https://hackage.haskell.org/package/idiomatic-0.1.1.0/docs/Generic-Applicative-Idiom.html#t:Initial" rel="nofollow noreferrer">initial applicative morphism</a>.</p>
<pre class="lang-hs prettyprint-override"><code>-- https://chrisdone.com/posts/haskell-constraint-trick/
instance (Identity ~ id, Applicative f) => Idiom Initial id f where
idiom :: Identity ~> f
idiom (Identity a) = pure a
</code></pre>
<p><code>pure</code> is then frequently used as a unit for a computation. It is the driving force in <code>Traversable</code>, one of the success stories of Haskell:</p>
<pre class="lang-hs prettyprint-override"><code>instance Traversable [] where
traverse :: Applicative f => (a -> f b) -> ([a] -> f [b])
traverse f [] = pure []
traverse f (x:xs) = ...
</code></pre>
<p>we require <code>pure</code> because our only argument that produces an <code>Applicative</code>-action is <code>f x</code>, but with an empty list we don't have an <code>x :: a</code> to feed it. Thus, we need 0-ary lifting.</p> | 2022-04-17 16:55:36.447000+00:00 | 2022-04-20 20:19:07.093000+00:00 | 2022-04-20 20:19:07.093000+00:00 | null | 71,902,595 | <p>Pure is used to transform a normal function into a function in an <code>Applicative</code> container. With this, any multi-parameter operation can be used on <code>Applicative</code>. In this context, pure is not desired to be of <code>a -> f a</code> type; it is just desired to be of <code>(a -> b) -> f (a -> b)</code> type. But the type of <code>pure</code> is <code>a -> f a</code>. Why should normal values be transformable into <code>Applicative</code>? Is there more purpose for <code>pure</code> than transforming functions?</p> | 2022-04-17 14:10:28.653000+00:00 | 2022-04-20 20:19:07.093000+00:00 | null | haskell|applicative | ['https://arxiv.org/pdf/1406.4823.pdf', 'https://hackage.haskell.org/package/kan-extensions-5.2.3/docs/Data-Functor-Day.html', 'https://hackage.haskell.org/package/idiomatic-0.1.1.0/docs/Generic-Applicative-Idiom.html#t:Initial'] | 3
48,599,663 | <p>Clustering in the way I specified in the question is not correct for facial images. It is better to use a convolutional neural network to learn the features instead of manually computing distances from facial landmarks. </p>
<p>On top of these learned features, we can apply any of the popular clustering algorithms, as shown here: <a href="https://arxiv.org/pdf/1604.00989.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1604.00989.pdf</a>,
or, as @sascha suggested, Approximate Nearest Neighbour, or, as @Davis King suggested, Chinese Whispers, depending on your needs. </p>
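<p>A minimal dlib sketch of that descriptor-plus-clustering pipeline; the model file names, the <code>images</code> list, and the 0.5 distance threshold are illustrative assumptions:</p>
<pre><code>import dlib

detector = dlib.get_frontal_face_detector()
sp = dlib.shape_predictor('shape_predictor_5_face_landmarks.dat')
facerec = dlib.face_recognition_model_v1('dlib_face_recognition_resnet_model_v1.dat')

descriptors = []
for img in images:  # images: RGB numpy arrays, assumed loaded elsewhere
    for box in detector(img, 1):
        shape = sp(img, box)
        descriptors.append(facerec.compute_face_descriptor(img, shape))

labels = dlib.chinese_whispers_clustering(descriptors, 0.5)  # one cluster label per face
</code></pre>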
<p>As @sascha suggested, there are many deep learning libraries, like OpenFace, that do this on top of Torch/TensorFlow. </p> | 2018-02-03 16:48:53.523000+00:00 | 2018-02-03 17:02:42.200000+00:00 | 2018-02-03 17:02:42.200000+00:00 | null | 48,598,568 | <p>I am getting facial landmarks using dlib. I have a dataset of more than 1000 faces. I want to compare these 1000 images with some unknown image. To decrease the database search time, I want to cluster these 1000 images into 10 different clusters based on the 68 facial landmark features of dlib. Currently, I am clustering based on the chin-to-nose distance of different face images. </p>
<p>Problem: Each image of the same person generates slightly different facial landmarks, which affects the distance calculated from chin to nose tip. Please find the screenshot of the CSV: <a href="https://i.stack.imgur.com/4e2cn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4e2cn.png" alt="enter image description here"></a></p>
<ol>
<li>1st column - Face Image names (same person face with around 25
samples) </li>
<li>2nd,3rd columns - Kmeans clustered labels and centroids of column
4 </li>
<li>4th - Face chin to nose tip euclidean distances</li>
<li>5th - 68 long dlib facial landmarks seperated as chin, eye ....</li>
</ol>
<p>Questions:</p>
<ol>
<li>Is it the right way to cluster images based on facial landmarks? If not, what is the best way to cluster face images/do face grouping to make database search more efficient for more images?</li>
</ol>
<p>I tried gender classification, but the accuracy is not good. I tried face color/ethnicity classification, but this limits my scope. For instance, with only Asian/European faces I would again have to search the whole database.</p>
<p>I am not able to identify which is the right factor to cluster on. Any references to articles or ideas are much appreciated. </p> | 2018-02-03 14:44:25.073000+00:00 | 2018-02-03 17:02:42.200000+00:00 | 2018-02-03 14:52:51.360000+00:00 | image-processing|cluster-analysis|dlib|biometrics|facial-identification | ['https://arxiv.org/pdf/1604.00989.pdf'] | 1
47,654,214 | <p>Probably you are referring to the <a href="http://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">scientific paper</a> by Ronneberger et al. in which the U-Net architecture was published. There, the architecture diagram shows these numbers. </p>
<p><a href="https://i.stack.imgur.com/DjXVU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DjXVU.png" alt="U-Net architecture"></a></p>
<p>The explanation is a bit hidden in section "<strong>3. Training</strong>" of the paper:</p>
<blockquote>
<p>Due to the unpadded convolutions, the output image is smaller than the input by a constant border width.</p>
</blockquote>
<p>This means that during each convolution, part of the image is "cropped", since the convolution will only start at a coordinate where the kernel fully overlaps with the input image / input blob of the layer. In the case of 3x3 convolutions, this is always one pixel at each side. For a more visual explanation of kernels/convolutions see e.g. <a href="http://setosa.io/ev/image-kernels/" rel="nofollow noreferrer">here</a>.
<strong>The output is smaller because, due to the cropping that occurs during unpadded convolutions, only the inner part of the image gets a result.</strong></p>
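<p>A quick way to verify those numbers is to walk the size arithmetic of the U-Net figure (each unpadded 3x3 convolution removes 2 pixels, each 2x2 max-pool halves the size, each up-convolution doubles it):</p>
<pre><code>size = 572
for _ in range(4):       # contracting path
    size -= 4            # two 3x3 unpadded convolutions
    size //= 2           # 2x2 max-pool
size -= 4                # bottleneck convolutions
for _ in range(4):       # expansive path
    size *= 2            # 2x2 up-convolution
    size -= 4            # two 3x3 unpadded convolutions
print(size)  # 388
</code></pre>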
<p>It is not a general characteristic of the architecture, but something inherent to (unpadded) convolutions and can be avoided with padding. Probably the most common strategy is mirroring at the image borders, so that each convolution can start at the very edge of an image (and sees mirrored pixels in places where it's kernel overlaps). Then the input size can be preserved and the full image will be segmented.</p> | 2017-12-05 12:58:30.310000+00:00 | 2017-12-05 12:58:30.310000+00:00 | null | null | 44,014,534 | <p>The input image size of u-net is 572*572, but the output mask size is 388*388. How could the image get masked with a smaller mask?</p> | 2017-05-17 02:31:23+00:00 | 2022-08-04 09:08:46.930000+00:00 | 2022-08-04 09:08:46.930000+00:00 | deep-learning|neural-network|conv-neural-network|semantic-segmentation|unet-neural-network | ['http://arxiv.org/abs/1505.04597', 'https://i.stack.imgur.com/DjXVU.png', 'http://setosa.io/ev/image-kernels/'] | 3 |
24,263,968 | <p>Hashing is usually an efficient technique for storage and retrieval of multidimensional data. The problem here is that the number of attributes is variable and potentially very large, right? I googled it a bit and found <a href="https://en.wikipedia.org/wiki/Feature_hashing" rel="nofollow noreferrer">Feature Hashing</a> on Wikipedia. The idea is basically the following:</p>
<ul>
<li>Construct a hash of fixed length from each data entry (aka feature vector)</li>
<li>The length of the hash must be much smaller than the number of available features. The length is important for the performance.</li>
</ul>
<p>On the wikipedia page there is an implementation in pseudocode (create hash for each feature contained in entry, then increase feature-vector-hash at this index position (modulo length) by one) and links to other implementations.</p>
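<p>A minimal Python sketch of that pseudocode (the vector length of 16 is an arbitrary illustrative choice; note that Python salts <code>str</code> hashes per process, so use a stable hash such as hashlib if the vectors must be persisted):</p>
<pre><code>def feature_hash(features, length=16):
    """Map a variable-size collection of feature strings to a fixed-length vector."""
    vector = [0] * length
    for feature in features:
        vector[hash(feature) % length] += 1
    return vector

print(feature_hash(['alpha', 'delta']))
print(feature_hash(['alpha', 'beta', 'gamma', 'delta']))
</code></pre>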
<p>Also here on SO is a question about <a href="https://stackoverflow.com/questions/8673035/what-is-feature-hashing-hashing-trick">feature hashing</a> and amongst others a reference to a scientific paper about <a href="http://arxiv.org/pdf/0902.2206" rel="nofollow noreferrer">Feature Hashing for Large Scale Multitask Learning</a>.</p>
<p>I cannot give a complete solution but you didn't want one. I'm quite convinced this is a good approach. You'll have to play around with the length of the hash as well as with different hashing functions (bloom filter being another keyword) to optimize the speed for your special case. Also there might still be even more efficient approaches if for example retrieval speed is more important than storage (balanced trees maybe?).</p> | 2014-06-17 12:35:00.397000+00:00 | 2014-06-17 12:35:00.397000+00:00 | 2017-05-23 11:57:34.993000+00:00 | null | 24,035,133 | <p>I need a way of storing sets of arbitrary size for fast query later on.
I'll be needing to query the resulting data structure for subsets or sets that are already stored.</p>
<p>===
Later edit: To clarify, an accepted answer to this question would be a link to a study that proposes a solution to this problem. I'm not expecting people to develop the algorithm themselves.
I've been looking over the tuple clustering algorithm found <a href="http://www.cs.uoi.gr/~tsap/publications/limbo.pdf" rel="nofollow noreferrer">here</a>, but it's not exactly what I want since from what I understand it 'clusters' the tuples into simpler, discrete/approximate forms and loses the original tuples.</p>
<p>Now, an even simpler example:</p>
<p><code>[alpha, beta, gamma, delta] [alpha, epsilon, delta] [gamma, niu, omega] [omega, beta]</code></p>
<p>Query:</p>
<p><code>[alpha, delta]</code></p>
<p>Result:</p>
<p><code>[alpha, beta, gamma, delta] [alpha, epsilon, delta]</code></p>
<p>So the set elements are just that, unique, unrelated elements. Forget about types and values. The elements can be tested among them for equality and that's it. I'm looking for an established algorithm (which probably has a name and a scientific paper on it) more than just creating one now, on the spot.</p>
<p>==
Original examples:</p>
<p>For example, say the database contains these sets<br></p>
<pre><code>[A1, B1, C1, D1], [A2, B2, C1], [A3, D3], [A1, D3, C1]
</code></pre>
<p>If I use <code>[A1, C1]</code> as a query, these two sets should be returned as a result:<br></p>
<pre><code>[A1, B1, C1, D1], [A1, D3, C1]
</code></pre>
<p>Example 2:</p>
<p>Database:</p>
<pre><code>[Gasoline amount: 5L, Distance to Berlin: 240km, car paint: red]
[Distance to Berlin: 240km, car paint: blue, number of car seats: 2]
[number of car seats: 2, Gasoline amount: 2L]
</code></pre>
<p>Query:</p>
<pre><code>[Distance to berlin: 240km]
</code></pre>
<p>Result</p>
<pre><code>[Gasoline amount: 5L, Distance to Berlin: 240km, car paint: red]
[Distance to Berlin: 240km, car paint: blue, number of car seats: 2]
</code></pre>
<p>There can be an unlimited number of 'fields' such as <code>Gasoline amount</code>. A solution would probably involve the database grouping and linking sets having common states (such as <code>Distance to Berlin: 240km</code>) in such a way that the query is as efficient as possible.</p>
<p>What algorithms are there for such needs?</p>
<p>I am hoping there is already an established solution to this problem instead of just trying to find my own on the spot, which might not be as efficient as one tested and improved upon by other people over time.</p>
<p>Clarifications:</p>
<ul>
<li>If it helps answer the question, I'm intending on using them for storing states:
Simple example:
[Has milk, Doesn't have eggs, Has Sugar]</li>
<li>I'm thinking such a requirement might require graphs or multidimensional arrays, but I'm not sure</li>
</ul>
<p><strong>Conclusion</strong>
I've implemented the two algorithms proposed in the answers, that is, <em>Set-Trie</em> and <em>Inverted Index</em>, and did some rudimentary profiling on them. Illustrated below is the duration of a query for a given set for each algorithm. Both algorithms worked on the same randomly generated data set consisting of sets of integers. The algorithms seem (almost) equivalent performance-wise:</p>
<p><img src="https://i.stack.imgur.com/pVj7o.png" alt="enter image description here"></p> | 2014-06-04 10:33:28.580000+00:00 | 2014-06-26 11:07:22.310000+00:00 | 2014-06-26 11:07:22.310000+00:00 | database|algorithm|search|graph|data-partitioning | ['https://en.wikipedia.org/wiki/Feature_hashing', 'https://stackoverflow.com/questions/8673035/what-is-feature-hashing-hashing-trick', 'http://arxiv.org/pdf/0902.2206'] | 3 |
62,313,484 | <p>The tff package provides basic <a href="https://arxiv.org/abs/1602.05629" rel="nofollow noreferrer">Federated Averaging</a> tutorials.
You can add secure model aggregation techniques by combining TF Encrypted with TFF.</p> | 2020-06-10 21:33:39.753000+00:00 | 2020-06-11 02:33:39.800000+00:00 | 2020-06-11 02:33:39.800000+00:00 | null | 62,311,087 | <p>Does federated learning provide privacy guarantees for the model being trained? </p> | 2020-06-10 18:54:57.843000+00:00 | 2020-06-11 02:33:39.800000+00:00 | null | python|deep-learning|tensorflow-federated | ['https://arxiv.org/abs/1602.05629'] | 1
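<p>A minimal Federated Averaging sketch in the spirit of the tutorials mentioned in the answer above (TFF ~0.16-era API; <code>build_keras_model</code>, <code>example_dataset</code>, and <code>federated_train_data</code> are hypothetical placeholders):</p>
<pre><code>import tensorflow as tf
import tensorflow_federated as tff

def model_fn():
    # wrap an uncompiled Keras model for TFF
    return tff.learning.from_keras_model(
        build_keras_model(),
        input_spec=example_dataset.element_spec,
        loss=tf.keras.losses.SparseCategoricalCrossentropy())

process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))

state = process.initialize()
state, metrics = process.next(state, federated_train_data)  # one federated round
</code></pre>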
68,559,867 | <pre><code>encoder_inputs = Input(shape=(max_len_text,))
</code></pre>
<p>So you must set <code>max_len_text</code>.</p>
<p>As far as I can see from (<a href="https://arxiv.org/pdf/1409.0473.pdf" rel="nofollow noreferrer">Bahdanau et al., 2015</a>), there is no restriction on the input length of the attention layer. The rest is just collecting LSTM intermediate states, which should not depend on the input length either.</p>
<p>Have you tried setting a different <code>max_len_text</code> during inference than during model building? (set it dynamically for every inference, i.e. for every input text you are summarizing)</p> | 2021-07-28 11:51:17.707000+00:00 | 2021-07-28 11:51:17.707000+00:00 | null | null | 68,559,547 | <p>I'm using Tensorflow keras library in python3 for text summarization of unknown text size.</p>
<p>I'm using the code explained in <a href="https://www.analyticsvidhya.com/blog/2019/06/comprehensive-guide-text-summarization-using-deep-learning-python/" rel="nofollow noreferrer">this link</a> for text summarization, but it looks like the code has a preset value for the maximum size of the input text to be summarized, because it already knows what text size it's going to summarize. But what if I don't know? I mean, what if I have to summarize many texts whose total size I don't know??</p>
<p>The error text was too long, so I was not successful in finding something relevant to my case.</p>
<p>so the error is :</p>
<blockquote>
<p>indices[0,0] = 30 is not in [0, 13) [[node
model_2/embedding_1/embedding_lookup (defined at
C:\Users\f_pc\Desktop\class_python.py:314) ]]
[Op:__inference_predict_function_19765]</p>
<p>Errors may have originated from an input operation. Input Source
operations connected to node model_2/embedding_1/embedding_lookup:
model_2/embedding_1/embedding_lookup/19252 (defined at
D:\obj\windows-release\37amd64_Release\msi_python\zip_amd64\contextlib.py:112)</p>
<p>Function call stack: predict_function</p>
</blockquote>
<p>I also tried:</p>
<pre><code>max_text_len=800
max_summary_len=500
</code></pre>
<p>but adding up this size, the analysis time increases but there was also</p> | 2021-07-28 11:27:57.603000+00:00 | 2021-07-28 11:51:17.707000+00:00 | null | python|web-scraping|web-scraping-language | ['https://arxiv.org/pdf/1409.0473.pdf'] | 1 |
37,622,182 | <p>The reason why the problem persists might be simply because your <code>lib3j6j9j.a</code> does not include the necessary files (such as d1mach). Actually, we can compile the necessary files rather directly, so I will summarize the procedure below:</p>
<p>1) Download <code>drc3jm.f</code> (which calculates 3j-symbols) and dependencies from the Netlib/Slatec page (<a href="http://www.netlib.org/slatec/src/" rel="nofollow">here</a> or <a href="http://www.netlib.org/cgi-bin/netlibfiles.pl?filename=/slatec/src/drc3jm.f" rel="nofollow">here</a>). Unpack the archive file to get Fortran files (*.f).</p>
<pre><code>tar xvf netlibfiles.tgz
</code></pre>
<p>2) Remove <code>d1mach.f</code>, <code>i1mach.f</code>, and <code>r1mach.f</code> (if any). Instead, download their alternative versions from Netlib/blas (*):</p>
<pre><code>rm -f i1mach.f r1mach.f d1mach.f
wget http://www.netlib.org/blas/i1mach.f
wget http://www.netlib.org/blas/r1mach.f
wget http://www.netlib.org/blas/d1mach.f
</code></pre>
<p>3) Compile all *.f files</p>
<pre><code>gfortran testf.f90 *.f
</code></pre>
<p>together with a main program testf.f90 (in free-format), e.g.,</p>
<pre><code>program main
implicit none
integer, parameter :: N = 1000
double precision coef( N ), M2min, M2max, M2
integer ier
ier = 0 ; coef(:) = 0.0d0
call DRC3JM( 15.0d0, 30.0d0, 40.d0, 2.0d0, M2min, M2max, coef, N, ier )
print *, "M2min, M2max, ier = ", M2min, M2max, ier
M2 = 2.0d0
print "(a, f20.15)", "coef = ", coef( nint(M2 - M2min+1) ) !! -0.019081579799192
end
</code></pre>
<p>Then running the executable gives the desired result.</p>
<hr>
<p>3-a) We can also make these *.f as a library and link with C++ codes, e.g., as follows:</p>
<pre><code>gfortran -c *.f
ar rv mylib.a *.o
g++ testc.cpp mylib.a -lgfortran
</code></pre>
<p>with a main program (testc.cpp)</p>
<pre><code>#include <cstdio>
extern "C"
double drc3jm_ (double*, double*, double*,
double*, double*, double*, double*, int*, int*);
int main()
{
double* coef;
double L1, L2, L3, M1, M2min, M2max, M2;
int ier, k, N = 1000;
coef = new double [ N ];
L1 = 15.0; L2 = 30.0; L3 = 40.0; M1 = 2.0;
drc3jm_ ( &L1, &L2, &L3,
&M1, &M2min, &M2max, coef, &N, &ier );
printf( "M2min, M2max, ierr = %10.5f%10.5f%d\n", M2min, M2max, ier );
M2 = 2.0;
k = (int)(M2 - M2min + 1.0e-3);
printf( "coef = %20.15f\n", coef[ k ] ); // -0.019081579799192
return 0;
}
</code></pre>
<p>We can see that the two programs give the same coefficient (-0.019081579799192) for</p>
<pre><code>j1=15, j2=30, j3=40, m1=2, m2=2, m3=-4
</code></pre>
<p>You can also get the same result with an online tool, e.g., <a href="http://www-stone.ch.cam.ac.uk/cgi-bin/wigner.cgi?symbol=3j&j1=15&j2=30&j3=40&m1=2&m2=2&m3=-4" rel="nofollow">here</a>.</p>
<hr>
<p>But depending on cases, it may be simpler to use other libraries. One approach is to use the corresponding GSL routines (<a href="https://www.gnu.org/software/gsl/manual/html_node/3_002dj-Symbols.html" rel="nofollow">here</a>) as</p>
<pre><code>#include <cstdio>
extern "C"
double gsl_sf_coupling_3j (int two_ja, int two_jb, int two_jc,
int two_ma, int two_mb, int two_mc);
int main()
{
double coef;
coef = gsl_sf_coupling_3j( 30, 60, 80, 4, 4, -8 ); // -0.019081579799205
// NOTE: all j's and m's need to be doubled.
printf( "coef = %20.15f\n", coef );
return 0;
}
</code></pre>
<p>Here you need to link necessary GSL libraries (e.g., <code>g++ test.cpp -lgsl</code> or <code>g++ test.cpp /usr/lib64/libgsl.so.0 /usr/lib64/libgslcblas.so.0</code> etc). </p>
<hr>
<p>Yet another approach is to use a recent program, <a href="http://fy.chalmers.se/subatom/wigxjpf/" rel="nofollow">WIGXJPF</a> (the related paper is <a href="http://arxiv.org/abs/1504.08329" rel="nofollow">here</a>). I tried this a bit and it seems extremely easy to install (only one <code>make</code>) and use. For example, enter the <code>example/</code> directory and try <code>gcc -I../inc csimple.c ../lib/libwigxjpf.a</code>.
According to the above paper, this program may offer some accuracy and performance advantages.</p>
<hr>
<p>(*) For more details, please see <a href="http://www.netlib.org/misc/faq.html#2.17" rel="nofollow">the Netlib/FAQ page</a> (thanks to @VladimirF in the comment). We could utilize the original d1mach.f etc in Slatec, but we need to modify them so as to obtain correct machine-dependent constants. The above BLAS versions of d1mach.f etc handle this automatically, so they are more convenient. </p> | 2016-06-03 19:38:28.217000+00:00 | 2016-06-03 19:44:37.730000+00:00 | 2016-06-03 19:44:37.730000+00:00 | null | 37,601,714 | <p>I'm trying to link a fortran subroutine with c++, but can't quite figure out what exactly is wrong here:
The Fortran subroutine calls some functions, e.g. d1mach or xermsg, which aren't defined in the Fortran subroutine but are called externally. When compiling, the error is "undefined reference to d1mach_" (or xermsg).
I tried linking a library I think might contain said functions (there seem to be files called d1mach.o and xermsg.o inside the library), but the same error still persists. What might I be doing wrong?</p>
<pre><code>extern"C" {
void drc3jm_(double *L1,double *L2,double *L3,double *M1,double *M2MIN,
double *M2MAX,double *THRCOF,int *NDIM,int *IER);
}
</code></pre>
<p>This is the function I use to call the subroutine, and haven't used any new headers beside iostream</p>
<pre><code>*DECK DRC3JM
SUBROUTINE DRC3JM (L1, L2, L3, M1, M2MIN, M2MAX, THRCOF, NDIM,
+ IER)
CALL XERMSG('SLATEC','DRC3JM','L1-ABS(M1) less than zero or '//
+ 'L1+ABS(M1) not integer.',IER,1)
</code></pre>
<p>This is the declaration of the fortran subroutine which calls the undeclared function xermsg.</p>
<p>I link the library using the -L/<em>path</em>/lib option, but to no avail.
The subroutine calculates a mathematical function and is part of the SLATEC codes.</p>
<p>Please let me know what other information you might need.</p> | 2016-06-02 20:39:41.777000+00:00 | 2019-10-10 01:58:05.677000+00:00 | null | c++|c|fortran|gfortran | ['http://www.netlib.org/slatec/src/', 'http://www.netlib.org/cgi-bin/netlibfiles.pl?filename=/slatec/src/drc3jm.f', 'http://www-stone.ch.cam.ac.uk/cgi-bin/wigner.cgi?symbol=3j&j1=15&j2=30&j3=40&m1=2&m2=2&m3=-4', 'https://www.gnu.org/software/gsl/manual/html_node/3_002dj-Symbols.html', 'http://fy.chalmers.se/subatom/wigxjpf/', 'http://arxiv.org/abs/1504.08329', 'http://www.netlib.org/misc/faq.html#2.17'] | 7 |
44,379,359 | <p>Most of your poor performance is because you're creating a new random number generator for every random number. I got a 7x speed increase by only creating one. I also merged all the code into a single function to maximize the optimization from .net. The final result is just under 10x faster (117 seconds to 12 for 80,000,000 items on my 32-bit Win7 virtual machine). </p>
<pre><code>Friend Sub Shuffle(ByRef Buffer As Integer())
Dim _random As New System.Random
For max As Integer = Buffer.Length - 1 To 1 Step -1
Dim random As Integer = _random.Next(0, max + 1) ' lower bound inclusive, upper bound exclusive: covers 0..max
Dim temp As Integer = Buffer(max)
Buffer(max) = Buffer(random)
Buffer(random) = temp
Next
End Sub
</code></pre>
<p>Note a lock is not needed for the random number generator because each call to Shuffle has its own copy and isn't shared across threads.</p>
<p>70% of the time is spent reading the value from Buffer(random). This is because it's likely not in the CPU cache and has to hit main memory whereas reading Buffer(max) is only .5% because it's being read sequentially descending and the CPU can pre-read it into cache. 20% is in calculating the random number so with a faster generator, you can save a little more time but not much.</p>
<p>The big win would be an algorithm that didn't randomly access the array for every element. I followed the advice of Hans Passant in the question comments and googled <a href="https://arxiv.org/pdf/1508.03167.pdf" rel="nofollow noreferrer">MergeShuffle</a>. It seems to be just such an algorithm (multiple sequential accesses instead of a random single pass). And in addition to allowing partitioning, it does not require more RAM.</p>
<p>I prototyped it and I saw a 20% speed up with 80M items switching to Fisher-Yates at 2.5M items.</p> | 2017-06-05 23:50:00.400000+00:00 | 2017-06-08 01:16:36.623000+00:00 | 2017-06-08 01:16:36.623000+00:00 | null | 44,340,271 | <p>I need to scramble HUGE integer arrays (from 1,000,000 to 80,000,000 elements).</p>
<p>These arrays are sequentially created (1 to n step 1) and I need to ensure that no sequential number is lost; because of that, I cannot simply create arrays of random numbers.</p>
<p>So, I create them using a single FOR-NEXT (it takes very little time, about 2 ms) and I use the Fisher-Yates-Durstenfeld shuffle algorithm, as follows.</p>
<p>First I create the array:</p>
<pre><code> Dim Huge(0 To 80000000) As Integer
 For x As Integer = 1 To 79999999
     Huge(x) = x
 Next
</code></pre>
<p>It takes really few miliseconds to be created...</p>
<p>Afterwards, I scramble the numbers:</p>
<pre><code> Shuffle(Huge)
</code></pre>
<p>where Shuffle is <strong>a routine called in a Parallel/Multitasking environment</strong>. Because of this, I need to LOCK the RANDOM function (to provide good entropy in swapping values and locations) used in the Fisher-Yates-Durstenfeld routine:</p>
<h1>The Fisher-Yates Shuffle</h1>
<pre class="lang-vb prettyprint-override"><code>Friend Sub Shuffle(ByRef Buffer As Integer())
For max As Integer = Buffer.Length - 1 To 1 Step -1
ExtractRandomItem(Buffer, max)
Next
End Sub
Friend Sub ExtractRandomItem(ByRef buffer As Integer(), max As Integer)
Dim random As Integer = GetRandomNumber(max + 1)
If random = max Then
random -= 1
End If
Dim temp As Integer = buffer(max)
buffer(max) = buffer(random)
buffer(random) = temp
End Sub
Friend Function GetRandomNumber(max As Integer) As Integer
Dim _random As New System.Random
SyncLock _random
Return _random.Next(1,max)
End SyncLock
End Function
</code></pre>
<h2>My problem</h2>
<p>This shuffle routine is pretty good at randomly scrambling the array, but while it takes only 2 ms to scramble 8192 elements, for <strong>24,000,000 elements it takes 32 seconds</strong>, too much time for my needs. </p>
<p><strong>Do you know any way to enhance this process and make it faster?</strong></p>
<p><strong>Or do you know another non-deterministic shuffle algorithm to work faster than that I have?</strong></p>
<p>Note: <em>I had already tried PARALLEL.FOR and got no appreciable gain anyway (just a few milliseconds, in fact)</em>.</p>
<p>And I don't really need a "random-like" shuffle, just a non-deterministic ordering.</p>
<p>Thanks for any help</p>
<p><strong>UPDATE</strong></p>
<p>Due to the relevant comments of Jdweng and Hans, I feel it's worth adding a note:</p>
<ul>
<li><p>The routine works with 2 arrays at the same time (an input and an output) and each of them may have up to 32 mega-elements (> 33 million each), which causes .NET to allocate almost 1 GB of RAM. That's not my choice but, sometimes, a system requirement. </p></li>
<li><p>So, I need to avoid creating a new array, i.e. I cannot simply replace the SWAP operation with an in -> out move...</p></li>
</ul>
<p><strong>UPDATE II</strong></p>
<p>Some people recommended good practices, but they do not solve the problem. Let me discuss each of them:</p>
<ul>
<li><p><strong>Using different arrays</strong>: this procedure may save some clock cycles per operation by replacing the swap (3 ops) with a move between arrays A and B. But it requires doubling the RAM (two identical arrays), which is not an option with huge arrays under the .NET Framework memory allocation scheme (which requires free consecutive blocks of memory).</p></li>
<li><p><strong>Using the MergeShuffle algorithm</strong>: it seems to be the most desirable solution, but I'm afraid of having the shuffle separated "by pieces of the array", since this scheme works on pieces of the array based on their middle points. I need to study it more deeply to understand the whole process.</p></li>
<li><p><strong>Using parallelism on pieces of the main array</strong>: it is not an option, since each shuffle will only cover its imposed range; in other words, I won't have the last elements shuffled with the initial elements of the array. It can be seen as a kind of deterministic scheme (not an option).</p></li>
</ul> | 2017-06-03 04:30:07.710000+00:00 | 2017-06-08 01:16:36.623000+00:00 | 2017-06-03 18:23:44.517000+00:00 | arrays|vb.net|algorithm|performance|shuffle | ['https://arxiv.org/pdf/1508.03167.pdf'] | 1 |
39,258,253 | <p><a href="https://arxiv.org/abs/1502.03167" rel="nofollow">Batch normalisation</a> is now well accepted as a general learning facilitator and regularizer.</p>
<p>Here is a Tensorflow implementation of a batch normalized LSTM Cell: <a href="https://github.com/OlavHN/bnlstm/blob/master/lstm.py" rel="nofollow">https://github.com/OlavHN/bnlstm/blob/master/lstm.py</a></p>
<p>This implementation Explained in the article here : <a href="http://olavnymoen.com/2016/07/07/rnn-batch-normalization" rel="nofollow">Batch normalized LSTM for Tensorflow</a> </p>
<p>It is applying the principles from the paper: <a href="https://arxiv.org/abs/1603.09025" rel="nofollow">Recurrent Batch Normalization (arXiv)</a></p> | 2016-08-31 20:13:12.780000+00:00 | 2016-08-31 22:11:39.627000+00:00 | 2016-08-31 22:11:39.627000+00:00 | null | 38,166,206 | <p>I have a classifier with ~1100 features and 60k samples of training data. I create an RNN with 1100 LSTMcells, and it classifies all my training data correctly, and then underperforms on the test data.</p>
<p>If I had a very large feed-forward NN I think it would behave similarly, and one would reduce the size of the hidden layer(s), add regularization, dropout, etc. to reduce overfitting.</p>
<p>How would I do the same for the RNN/LSTM? (I added dropout but don't see a way to add regularization or, especially, to control the LSTM state size, which seems to default to the input size and is probably too large.)</p>
<p>I see that the input_size parameter is now deprecated and unused.</p>
<p><a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.rnn_cell.LSTMCell.md" rel="nofollow">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/functions_and_classes/shard5/tf.nn.rnn_cell.LSTMCell.md</a></p>
<p>I see references in that doc to </p>
<pre><code>{#LSTMCell.init}
{#LSTMCell.output_size}
{#LSTMCell.state_size}
</code></pre>
<p>But how does one use them? The simple tutorial examples just use the defaults, which result in overfitting.</p>
<p>If there is some other way to discover and tune hyperparameters I'm not seeing it.</p> | 2016-07-03 02:27:04.053000+00:00 | 2016-08-31 22:11:39.627000+00:00 | 2016-07-10 18:01:51.523000+00:00 | machine-learning|tensorflow|recurrent-neural-network | ['https://arxiv.org/abs/1502.03167', 'https://github.com/OlavHN/bnlstm/blob/master/lstm.py', 'http://olavnymoen.com/2016/07/07/rnn-batch-normalization', 'https://arxiv.org/abs/1603.09025'] | 4 |
41,733,267 | <p>In a training mode where word-vectors and doctag-vectors are interchangeably used during training, for the same surrounding-words prediction-task, they tend to be meaningfully comparable. (Your mode, DBOW with interleaved skip-gram word-training, fits this and is the mode used by the paper '<a href="https://arxiv.org/abs/1507.07998" rel="nofollow noreferrer">Document Embedding with Paragraph Vectors</a>'.)</p>
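<p>A minimal gensim sketch of that interleaved mode (gensim 3.x API; the two-document corpus and the query word are made up for illustration):</p>
<pre><code>from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [TaggedDocument(words=text.split(), tags=[label])
        for label, text in [('doc1', 'budget planning management meeting'),
                            ('doc2', 'kernel memory scheduler interrupt')]]

model = Doc2Vec(docs, dm=0, dbow_words=1, vector_size=200, min_count=1, epochs=40)

# word vectors and doctag vectors live in the same space, so this comparison is meaningful:
print(model.docvecs.most_similar(positive=[model.wv['management']], topn=2))
</code></pre>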
<p>Your second question is abstract and speculative; I think you'd have to test those ideas yourself. The Word2Vec/Doc2Vec processes train the vectors to be good at certain mechanistic word-prediction tasks, subject to the constraints of the model and tradeoffs with other vectors' quality. That the resulting spatial arrangement happens to be then useful for other purposes โ ranked/absolute similarity, similarity along certain conceptual lines, classification, etc. โ is then just an observed, pragmatic benefit. It's a 'trick that works', and might yield insights, but many of the ways models change in response to different parameter choices or corpus characteristics haven't been theoretically or experimentally worked-out. </p> | 2017-01-19 03:35:24.007000+00:00 | 2017-01-19 03:35:24.007000+00:00 | null | null | 40,472,070 | <p>I am trying to understand relation between word2vec and doc2vec vectors in Gensim's implementation. In my application, I am tagging multiple documents with same label (topic), I am training a doc2vec model on my corpus using dbow_words=1 in order to train word vectors as well. I have been able to obtain similarities between word and document vectors in this fashion which does make a lot of sense
For example, getting document labels similar to a word:
<code>doc2vec_model.docvecs.most_similar(positive=[doc2vec_model["management"]], topn=50)</code></p>
<p>My question however is about the theoretical interpretation of computing similarity between word2vec and doc2vec vectors. Would it be safe to assume that when trained on the same corpus with the same dimensionality (d = 200), word vectors and document vectors can always be compared to find similar words for a document label or similar document labels for a word? Any suggestions/ideas are most welcome.</p>
<p>Question 2: My other question is about the impact of high/low frequency of a word in the final word2vec model. If wordA and wordB have similar contexts in a particular doc label (set) of documents but wordA has a much higher frequency than wordB, would wordB have a higher similarity score with the corresponding doc label or not? I am trying to train multiple word2vec models by sampling the corpus in a temporal fashion and want to know if the hypothesis holds that, as words get more and more frequent, assuming the context stays relatively similar, the similarity score with a document label would also increase. Am I wrong to make this assumption? Any suggestions/ideas are very welcome.</p>
<p>Thanks,
Manish</p> | 2016-11-07 18:30:20.660000+00:00 | 2017-01-19 03:35:24.007000+00:00 | null | similarity|gensim|word2vec|temporal|doc2vec | ['https://arxiv.org/abs/1507.07998'] | 1 |
65,921,292 | <p>To add to your question: how about pixel-perfect accuracy of the segmentation boundary!</p>
<p>Your intuition regarding down-sampling via max-pooling is correct. Normal CNNs have that limit. However, there have been some improvements recently to overcome it.</p>
<p>The breakthrough to this problem came in 2015-6 in the form of <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">U-net</a> and atrous/dilated convolution introduced in <a href="https://arxiv.org/abs/1706.05587" rel="nofollow noreferrer">DeepLab</a>.</p>
<p>Dilated convolutions or atrous convolutions, previously described for wavelet analysis without signal decimation, expand the window size without increasing the number of weights by inserting zero-values into convolution kernels. Dilated convolutions have been shown to decrease blurring in semantic segmentation maps, and are purported to work at least in part by extracting long-range information without the need for pooling.</p>
<p>Using U-Net architectures is another method that seeks to retain high spatial frequency information by directly adding skip connections between early and late layers. In other words, down-sampling followed by up-sampling, with the skip connections carrying fine detail across.</p>
<p>In TensorFlow, atrous convolutions are implemented with the function:</p>
<pre><code>tf.nn.atrous_conv2d
</code></pre>
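<p>A minimal usage sketch (the shapes are illustrative; <code>rate=2</code> inserts one zero between kernel taps, enlarging the receptive field without extra weights):</p>
<pre><code>import tensorflow as tf

value = tf.random.normal([1, 64, 64, 3])    # NHWC input
filters = tf.random.normal([3, 3, 3, 16])   # height, width, in_channels, out_channels
out = tf.nn.atrous_conv2d(value, filters, rate=2, padding='SAME')
print(out.shape)  # (1, 64, 64, 16)
</code></pre>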
<p>There are many more methods and this is an ongoing research area.</p> | 2021-01-27 14:35:22.520000+00:00 | 2021-01-27 14:35:22.520000+00:00 | null | null | 65,920,778 | <p>Each layer in a CNN reduces the size of the input via convolution and max-pooling operations. Convolution is translation equivariant, but max-pooling is translation invariant. Correct me if this is wrong : each time max-pooling applied, the precise location of a feature is reduced. So the feature maps of the final conv layer in a very deep CNN will have a large receptive field (w.r.t the original image), but the location of this feature (in the original image) is not discernible from looking at this feature map alone.</p>
<p>If this is true, how can the accuracy of bounding boxes when we do localisation be so good with a deep CNN? I understand how classification works, but making accurate bounding box predictions is confusing me.</p>
<p>Perhaps a toy example will clarify my confusion;</p>
<p>Say we have a dataset of images with dimension <code>256x256x1</code>, and we want to predict whether a cat is present, and if so, where it is, so our target is something like <code>[sigmoid_cat_present, cat_location]</code>.</p>
<p>Our vanilla CNN (let's assume something like VGG) will take in the image and transform it to something like <code>16x16x256</code> in the last convolutional layer. Each pixel in this final <code>16x16</code> feature map can be influenced by a much larger region in the original image. So if we determine a cat is present, how can the <code>[cat_location]</code> be refined to value more granular than this effective receptive field?</p> | 2021-01-27 14:06:01.983000+00:00 | 2021-01-27 19:32:53.117000+00:00 | 2021-01-27 19:32:53.117000+00:00 | machine-learning|deep-learning|neural-network|computer-vision|conv-neural-network | ['https://arxiv.org/abs/1505.04597', 'https://arxiv.org/abs/1706.05587'] | 2 |
41,297,269 | <p>I suppose you can use <a href="https://github.com/AlfredXiangWu/face_verification_experiment" rel="nofollow noreferrer">this model</a>, described in <a href="https://arxiv.org/abs/1511.02683" rel="nofollow noreferrer"><em>Xiang Wu, Ran He, Zhenan Sun, Tieniu Tan</em> <strong>A Light CNN for Deep Face Representation with Noisy Labels</strong> (arXiv 2015)</a> as a a strating point for your experiments.</p>
<p>As for the Siamese network, what you are trying to learn is a mapping from a face image into some high dimensional vector space, in which distances between points reflect (dis)similarity between faces.<br>
To do so, you only need <em>one</em> network that gets a face as an input and produce a high-dim vector as an output.<br>
However, to train this <em>single</em> network using the Siamese approach, you are going to duplicate it: creating two instances of the <em>same</em> net (you need to explicitly link the weights of the two copies). During training you are going to provide pairs of faces to the nets: one to each copy, then the single loss layer on top of the two copies can compare the high-dimensional vectors representing the two faces and compute a loss according to a "same/not same" label associated with this pair.<br>
Hence, you only need the duplication for the training. In test time (<code>'deploy'</code>) you are going to have a <em>single</em> net providing you with a semantically meaningful high dimensional representation of faces.</p>
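<p>A minimal PyTorch sketch of that pair-wise training idea with a contrastive loss (the margin, the sizes, and the linear stand-in for the embedding net are illustrative assumptions; note there is only one set of weights, applied twice):</p>
<pre><code>import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, same, margin=1.0):
    """same = 1 for pairs of the same person, 0 otherwise."""
    d = F.pairwise_distance(emb_a, emb_b)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

net = torch.nn.Linear(128, 32)  # stand-in for the shared face-embedding network
x1, x2 = torch.randn(8, 128), torch.randn(8, 128)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(net(x1), net(x2), labels)  # one net, two forward passes
loss.backward()
</code></pre>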
<p>For a more advanced Siamese architecture and loss see <a href="https://stackoverflow.com/q/33330779/1714410">this thread</a>. </p>
<hr>
<p>On the other hand, you might want to consider the approach described in <a href="https://arxiv.org/abs/1605.07270" rel="nofollow noreferrer"><em>Oren Tadmor, Yonatan Wexler, Tal Rosenwein, Shai Shalev-Shwartz, Amnon Shashua</em> <strong>Learning a Metric Embedding for Face Recognition using the Multibatch Method</strong> (arXiv 2016)</a>. This approach is more efficient and easier to implement than pair-wise losses over image pairs. </p>
62,973,550 | <p>You might get some use out of this thread: <a href="https://stackoverflow.com/questions/59996859/how-to-use-pytorch-onecyclelr-in-a-training-loop-and-optimizer-scheduler-intera">How to use Pytorch OneCycleLR in a training loop (and optimizer/scheduler interactions)?</a></p>
<p>But to address your points:</p>
<ol>
<li><p>Does the max_lr parameter have to be the same as the optimizer's lr parameter? <strong>No, this is the max or highest value -- a hyperparameter that you will experiment with. Notice the use of max_lr in the paper: <a href="https://arxiv.org/pdf/1708.07120.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1708.07120.pdf</a></strong></p>
</li>
<li><p>Can this scheduler be used with the Adam optimizer? How is the momentum calculated then? <strong>Yes.</strong></p>
</li>
<li><p>Let's say I trained my model for some number of epochs at a stretch; now I want to train for some more epochs. Would I have to reset the scheduler? <strong>Depends: are you loading the model from a saved checkpoint or not? Check PyTorch's tutorials: <a href="https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py" rel="nofollow noreferrer">https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py</a></strong></p>
</li>
</ol> | 2020-07-18 20:38:15.170000+00:00 | 2020-07-18 20:38:15.170000+00:00 | null | null | 62,917,353 | <p>I wanted to use <code>torch.optim.lr_scheduler.OneCycleLR()</code> while training. Can some kindly explain to me how to use it?
What I got from the documentation was that it should be called after each train_batch.</p>
<p>My confusions are as follows:</p>
<ul>
<li><p>Does the max_lr parameter have to be the same as the optimizer's lr parameter?</p>
</li>
<li><p>Can this scheduler be used with the Adam optimizer? How is the momentum calculated then?</p>
</li>
<li><p>Let's say I trained my model for some number of epochs at a stretch; now I want to train for some more epochs. Would I have to reset the scheduler?</p>
</li>
</ul>
<p>Can anybody provide me a sort of a toy example/training loop that implements this scheduler?</p>
<p>I am kind of new to deep learning & PyTorch so my question might be somewhat silly.</p> | 2020-07-15 14:33:13.697000+00:00 | 2020-07-18 20:38:15.170000+00:00 | 2020-07-15 16:09:19.503000+00:00 | python-3.x|deep-learning|pytorch|torchvision | ['https://stackoverflow.com/questions/59996859/how-to-use-pytorch-onecyclelr-in-a-training-loop-and-optimizer-scheduler-intera', 'https://arxiv.org/pdf/1708.07120.pdf', 'https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py'] | 3 |
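<p>A toy training loop along the lines the question above asks for (the model, data, and hyperparameters are illustrative; the key point is that the scheduler steps once per batch, not once per epoch):</p>
<pre><code>import torch

model = torch.nn.Linear(10, 2)
loader = [(torch.randn(16, 10), torch.randint(0, 2, (16,))) for _ in range(100)]
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.01, steps_per_epoch=len(loader), epochs=5)

for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()  # once per batch
</code></pre>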
63,569,291 | <p>As pointed out by @papirrin, the answer given by @Prune is a bit misleading. In CNNs (or Fully Convolutional Networks, which is where Deconvolution was first proposed), the deconvolution is not exactly the reverse of convolution. More precisely, the deconvolution in CNNs only reverses the shape, but not the content. The name deconvolution is misleading because deconvolution is already defined mathematically; hence, in the following, we will use transposed convolution to mean the "deconvolution in CNNs".</p>
<p>To understand the transposed convolution, you need to express the filters of the convolution operation as a matrix. The convolution operation can then be defined as <code>Y=WX</code>. In the transposed convolution, we basically transpose that matrix, and the output is computed as <code>Y=W^TX</code>. For some examples, you can refer to <a href="https://tinynet.autoai.org/en/latest/induction/convolution.html" rel="nofollow noreferrer">https://tinynet.autoai.org/en/latest/induction/convolution.html</a> and <a href="https://tinynet.autoai.org/en/latest/induction/convolution-transpose.html" rel="nofollow noreferrer">https://tinynet.autoai.org/en/latest/induction/convolution-transpose.html</a>.</p>
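<p>A tiny numpy illustration of the shape-reversing view (1-D, valid convolution, made-up kernel [1, 2, 3] and input):</p>
<pre><code>import numpy as np

# valid 1-D convolution of a length-4 input with a length-3 kernel, as a matrix:
W = np.array([[1., 2., 3., 0.],
              [0., 1., 2., 3.]])  # 2 x 4: maps a length-4 x to a length-2 y

x = np.array([1., 0., 2., 1.])
y = W @ x          # convolution, shape (2,)
x_up = W.T @ y     # transposed convolution, back to shape (4,)
print(y.shape, x_up.shape)  # (2,) (4,): the shape is reversed, the content is not
</code></pre>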
<p>As for how to get the convolution matrix in transposed convolution, it depends on how you are going to use it. For image segmentation, it is learned during back propagation. In some visualizations of intermediate feature maps (for example, the ECCV14 paper: <a href="https://arxiv.org/abs/1311.2901" rel="nofollow noreferrer">https://arxiv.org/abs/1311.2901</a>), it is directly derived from the convolution operation. In summary, both ways are fine.</p>
<p>As for how to compute the gradient, it is exactly the same as in convolution. You can also interpret the transposed convolution operation as basically swapping the forward and backward passes of a convolution operation.</p> | 2020-08-24 22:04:04.407000+00:00 | 2020-08-24 22:04:04.407000+00:00 | null | null | 38,268,102 | <p>What is meant by deconvolution or backwards convolution in convolutional neural nets?</p>
<p>I understand convolution: if we consider a 3x3 window W and a kernel k of the same size, the result of the convolution W*k will be one value. Here k is a matrix with 3x3 elements.</p>
<p>In my understanding, deconvolution tries to upsample feature maps to get a larger map. Does it use the same convolution matrix that is used to get the feature maps? If not, how are the gradients for backpropagation calculated? A detailed explanation would be very useful.</p> | 2016-07-08 13:37:32.470000+00:00 | 2020-08-24 22:04:04.407000+00:00 | 2016-07-08 15:28:10.387000+00:00 | deep-learning|caffe|conv-neural-network|deconvolution | ['https://tinynet.autoai.org/en/latest/induction/convolution.html', 'https://tinynet.autoai.org/en/latest/induction/convolution-transpose.html', 'https://arxiv.org/abs/1311.2901'] | 3
51,201,216 | <p>I am aware of a computer vision area called "style transfer" that uses deep neural networks. There is <a href="https://arxiv.org/pdf/1508.06576.pdf" rel="nofollow noreferrer">this famous paper</a> on the topic.</p>
<p>A few GitHub repos:</p>
<ol>
<li><a href="https://github.com/lengstrom/fast-style-transfer" rel="nofollow noreferrer">https://github.com/lengstrom/fast-style-transfer</a></li>
<li><a href="https://github.com/fzliu/style-transfer" rel="nofollow noreferrer">https://github.com/fzliu/style-transfer</a></li>
<li><a href="https://github.com/anishathalye/neural-style" rel="nofollow noreferrer">https://github.com/anishathalye/neural-style</a></li>
</ol> | 2018-07-05 23:36:11.460000+00:00 | 2018-07-05 23:36:11.460000+00:00 | null | null | 51,200,471 | <p>I've found a really amazing wallpaper and later found out that it was a modified version of the original picture (which is an album cover).</p>
<p>Is anyone aware of the program or script being used in order to produce such kind of picture given any input? </p>
<p>Original picture:</p>
<p><a href="https://i.stack.imgur.com/oe1qT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oe1qT.png" alt="enter image description here"></a></p>
<p>Processed picture:</p>
<p><a href="https://i.stack.imgur.com/uUijs.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uUijs.jpg" alt="enter image description here"></a></p>
<p>Zoom to the processed picture to see the style:</p>
<p><a href="https://i.stack.imgur.com/NrOyN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NrOyN.png" alt="enter image description here"></a></p>
 | 2018-07-05 22:02:59.920000+00:00 | 2021-12-03 21:12:35.623000+00:00 | null | image|image-processing | ['https://arxiv.org/pdf/1508.06576.pdf', 'https://github.com/lengstrom/fast-style-transfer', 'https://github.com/fzliu/style-transfer', 'https://github.com/anishathalye/neural-style'] | 4
45,926,391 | <p>Modify the superclass's builder to use an F-bound (aka the Curiously Recurring Template Pattern).</p>
<pre><code>public interface SuperClassBuilderInterface<SELF extends SuperClassBuilderInterface<SELF>> {
SELF withS1(String s1);
// etc.
Superclass build();
}
</code></pre>
<p>Then you have:</p>
<pre><code>class SuperClassBuilder<SELF extends SuperClassBuilder<SELF>> implements SuperClassBuilderInterface<SELF>
interface ABuilderInterface<SELF extends ABuilderInterface<SELF>> extends SuperClassBuilderInterface<SELF>
class ABuilder extends SuperClassBuilder<ABuilder> implements ABuilderInterface<ABuilder>
</code></pre>
<p>Note that the implementation of <code>SuperClassBuilder</code> must contain unchecked casts of the form <code>return (SELF)this;</code>. The type system is theoretically powerful enough to not need this, but the resulting encoding would probably be very ugly (see <a href="https://arxiv.org/pdf/1605.05274.pdf" rel="nofollow noreferrer">this</a>) and it's likely not worth it.</p>
<p>EDIT: This is what @shmosel meant</p> | 2017-08-28 19:50:35.583000+00:00 | 2017-08-28 20:07:31.813000+00:00 | 2017-08-28 20:07:31.813000+00:00 | null | 45,925,631 | <p>I want to implement a builder pattern with static inner classes for lets say classes A with fields (a1, a2, a3), B with fields (b1, b2) and C with fields (c1), whereas all share fields (s1, s2) from super class SuperClass:</p>
<pre><code>public class A extends SuperClass {
private final String a1;
...
private A(ABuilder builder) {
super(builder);
this.a1 = builder.a1;
...
}
public static class ABuilder extends SuperClassBuilder implements ABuilderInterface {
private String a1;
...
@Override
public ABuilder withA1(String a1) {
this.a1 = a1;
return this;
}
...
@Override
public SuperClass build() {
return new A(this);
}
}
}
</code></pre>
<p>Accordingly, for B and C the builders differ only in that they have their own fields and implement their own interfaces (BBuilderInterface and CBuilderInterface), and these interfaces only define which methods are to be implemented:</p>
<pre><code>public interface ABuilderInterface extends SuperClassBuilderInterface {
ABuilderInterface withA1(String a1);
...
}
...<interfaces for B and C>
public interface SuperClassBuilderInterface {
SuperClassBuilderInterface withS1(String s1);
...
SuperClass build();
}
// Usage of the builders:
public SuperClass foo() {
return new A.ABuilder()
.withA1(...) // returns ABuilderInterface
...
.withS1(...) // returns SuperClassBuilderInterface
...
.build();
}
public abstract class SuperClass {
private final String s1;
...
protected SuperClass(SuperClassBuilder builder) {
this.s1 = builder.s1;
...
}
protected static abstract class SuperClassBuilder implements SuperClassBuilderInterface {
private String s1;
...
@Override
public SuperClassBuilder withS1(String s1) {
this.s1 = s1;
return this;
}
...
@Override
public abstract SuperClass build();
}
}
</code></pre>
<p>Now you can spot the limitation: when I use the builder I have to pay attention to call the with... methods related to the child class first, then chain the ones for the superclass. That is not a big deal, but I am still not sure whether it is good practice.
On the other hand, I could add the with... methods of the child classes to the superclass interface all together, and then the limitation is gone, but then I have an interface with mixed with... methods of different child classes.</p>
<p>Which one would you prefer/suggest?</p> | 2017-08-28 18:55:48.817000+00:00 | 2018-06-28 10:12:02.343000+00:00 | 2017-08-28 19:47:54.400000+00:00 | java|inheritance|design-patterns|builder | ['https://arxiv.org/pdf/1605.05274.pdf'] | 1 |
50,780,869 | <p>Both current answers are wrong in that they do not give you "weight decay as in cuda-convnet" but instead L2-regularization, which is different.</p>
<p>When using pure SGD (without momentum) as an optimizer, weight decay is the same thing as adding an L2-regularization term to the loss. <strong>When using any other optimizer, this is not true.</strong></p>
<p>Weight decay (don't know how to TeX here, so excuse my pseudo-notation):</p>
<pre><code>w[t+1] = w[t] - learning_rate * dw - weight_decay * w
</code></pre>
<p>L2-regularization:</p>
<pre><code>loss = actual_loss + lambda * 1/2 sum(||w||_2^2 for w in network_params)
</code></pre>
<p>Computing the gradient of the extra term in L2-regularization gives <code>lambda * w</code> and thus inserting it into the SGD update equation</p>
<pre><code>dloss_dw = dactual_loss_dw + lambda * w
w[t+1] = w[t] - learning_rate * dloss_dw
</code></pre>
<p>gives the same as weight decay, but mixes <code>lambda</code> with the <code>learning_rate</code>. Any other optimizer, even SGD with momentum, gives a different update rule for weight decay than for L2-regularization! See the paper <a href="https://arxiv.org/abs/1711.05101">Fixing weight decay in Adam</a> for more details. (Edit: AFAIK, <a href="http://www.cs.toronto.edu/~hinton/absps/parle.pdf" rel="nofollow noreferrer">this 1987 Hinton paper</a> introduced "weight decay", literally as "each time the weights are updated, their magnitude is also decremented by 0.4%" at page 10)</p>
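<p>To see this concretely, here is a tiny NumPy check of the pure-SGD case (toy numbers, only illustrating the algebra above):</p>
<pre><code>import numpy as np

w = np.array([1.0, -2.0])              # weights
dactual_loss_dw = np.array([0.1, 0.2]) # gradient of the actual loss
learning_rate, lam = 0.1, 0.01

# SGD on loss + L2 term: the gradient picks up lam * w
w_l2 = w - learning_rate * (dactual_loss_dw + lam * w)

# weight decay with decay factor learning_rate * lam
w_wd = w - learning_rate * dactual_loss_dw - (learning_rate * lam) * w

assert np.allclose(w_l2, w_wd)  # identical only for plain SGD
</code></pre>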
<p>That being said, there doesn't seem to be support for "proper" weight decay in TensorFlow yet. There are a few issues discussing it, specifically because of above paper.</p>
<p>One possible way to implement it is by writing an op that does the decay step manually after every optimizer step. A different way, which is what I'm currently doing, is using an additional SGD optimizer just for the weight decay, and "attaching" it to your <code>train_op</code>. Both of these are just crude work-arounds, though. My current code:</p>
<pre><code># In the network definition:
with arg_scope([layers.conv2d, layers.dense],
weights_regularizer=layers.l2_regularizer(weight_decay)):
# define the network.
loss = # compute the actual loss of your problem.
train_op = optimizer.minimize(loss, global_step=global_step)
if args.weight_decay not in (None, 0):
with tf.control_dependencies([train_op]):
sgd = tf.train.GradientDescentOptimizer(learning_rate=1.0)
train_op = sgd.minimize(tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)))
</code></pre>
<p>This somewhat makes use of TensorFlow's provided bookkeeping. Note that the <code>arg_scope</code> takes care of appending an L2-regularization term for every layer to the <code>REGULARIZATION_LOSSES</code> graph-key, which I then all sum up and optimize using SGD which, as shown above, corresponds to actual weight-decay.</p>
<p>Hope that helps, and if anyone gets a nicer code snippet for this, or TensorFlow implements it better (i.e. in the optimizers), please share.</p>
<p><strong>Edit:</strong> see also <a href="https://github.com/tensorflow/tensorflow/pull/17438" rel="nofollow noreferrer">this PR</a> which just got merged into TF.</p> | 2018-06-10 05:37:44.840000+00:00 | 2018-06-15 10:24:02.077000+00:00 | 2018-06-15 10:24:02.077000+00:00 | null | 36,570,904 | <p>In CUDA ConvNet, we can write something like this (<a href="https://code.google.com/p/cuda-convnet/wiki/LayerParams" rel="noreferrer">source</a>) for each layer:</p>
<pre><code>[conv32]
epsW=0.001
epsB=0.002
momW=0.9
momB=0.9
wc=0
</code></pre>
<p>where <code>wc=0</code> refers to the L2 weight decay.</p>
<p>How can the same be achieved in TensorFlow?</p> | 2016-04-12 10:42:46.410000+00:00 | 2018-06-15 10:24:02.077000+00:00 | null | tensorflow | ['/https://arxiv.org/abs/1711.05101', 'http://www.cs.toronto.edu/~hinton/absps/parle.pdf', 'https://github.com/tensorflow/tensorflow/pull/17438'] | 3 |
21,406,281 | <p>The <code>res_var</code> attribute of the <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.odr.Output.html" rel="noreferrer"><code>Output</code></a> is the so-called reduced Chi-square value for the fit, a popular choice of goodness-of-fit statistic. <a href="http://arxiv.org/abs/1012.3754" rel="noreferrer">It is somewhat problematic</a> for non-linear fitting, though. You can look at the residuals directly (<code>out.delta</code> for the <code>X</code> residuals and <code>out.eps</code> for the <code>Y</code> residuals). Implementing a cross-validation or bootstrap method for determining goodness-of-fit, as suggested in the linked paper, is left as an exercise for the reader.</p> | 2014-01-28 13:02:39.757000+00:00 | 2014-01-28 13:02:39.757000+00:00 | null | null | 21,395,328 | <p>I am fitting data with weights using scipy.odr but I don't know how to obtain a measure of goodness-of-fit or an R squared. Does anyone have suggestions for how to obtain this measure using the output stored by the function?</p> | 2014-01-28 01:45:20.627000+00:00 | 2021-07-05 18:09:34.750000+00:00 | null | scipy|regression|orthogonal | ['http://docs.scipy.org/doc/scipy/reference/generated/scipy.odr.Output.html', 'http://arxiv.org/abs/1012.3754'] | 2 |
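<p>For illustration, a minimal scipy.odr sketch (toy linear model and made-up data) showing where <code>res_var</code>, <code>delta</code> and <code>eps</code> live on the <code>Output</code> object:</p>
<pre><code>import numpy as np
from scipy import odr

def f(beta, x):
    return beta[0] * x + beta[1]

x = np.linspace(0., 10., 50)
y = 2. * x + 1. + np.random.normal(scale=0.5, size=x.size)

data = odr.RealData(x, y, sx=0.1, sy=0.5)  # weights via the x/y uncertainties
out = odr.ODR(data, odr.Model(f), beta0=[1., 0.]).run()

print(out.res_var)    # reduced chi-square (goodness-of-fit)
print(out.delta[:5])  # X residuals
print(out.eps[:5])    # Y residuals
</code></pre>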
65,966,222 | <p>There is a converse, an <a href="https://en.wikipedia.org/wiki/Implicant" rel="nofollow noreferrer">implicant</a>. In that case if the implicant is true, the whole expression must be true.</p>
<p>From a search, it seems the one you want is an <strong>implicate</strong>. See e.g. <a href="http://www.cs.albany.edu/%7Eritries/KAIS-1819R1.pdf" rel="nofollow noreferrer">http://www.cs.albany.edu/~ritries/KAIS-1819R1.pdf</a> or <a href="https://arxiv.org/ftp/arxiv/papers/1401/1401.3475.pdf" rel="nofollow noreferrer">https://arxiv.org/ftp/arxiv/papers/1401/1401.3475.pdf</a>. Since you mention "variable/term", an implicate must be a literal (a variable or a negation of a variable) or a disjunction of literals.</p>
<p>But it seems to be even less known than implicant and easily confused with the verb.</p> | 2021-01-30 08:54:23.137000+00:00 | 2021-01-30 08:54:23.137000+00:00 | null | null | 65,962,667 | <p>Take the following example:</p>
<pre><code>A AND (B OR C)
</code></pre>
<p>Obviously <em>A</em> must be true for the expression to be true.</p>
<p>Another example:</p>
<pre><code>(A AND (B OR C)) OR (D AND E AND A)
</code></pre>
<p>Again, <em>A</em> has to be true, but appears multiple times, in two different legs of the expression.</p>
<p>Is there a word for a variable/term that <em>must</em> be true regardless of how deeply nested it is for the expression as a whole to be true? Something like a dominant node in graph theory.</p> | 2021-01-29 22:39:18.163000+00:00 | 2021-01-30 08:54:23.137000+00:00 | null | math|logic | ['https://en.wikipedia.org/wiki/Implicant', 'http://www.cs.albany.edu/%7Eritries/KAIS-1819R1.pdf', 'https://arxiv.org/ftp/arxiv/papers/1401/1401.3475.pdf'] | 3 |
58,915,217 | <p>I guess you work from face images. To perform data augmentation on these images, you can:</p>
<ul>
<li>rotate the image a little,</li>
<li>modify tilt and slant (a little)</li>
<li>crop (a little) </li>
<li>flip left-right</li>
<li>add noise (salt and pepper, Gaussian...)</li>
<li>change luminosity</li>
</ul>
<p>And of course, <strong>mix</strong> two or more of the above processes. See <a href="https://github.com/aleju/imgaug" rel="nofollow noreferrer">imgaug</a> for example.</p>
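<p>A minimal imgaug sketch combining a few of the above (stand-in image batch; the particular augmenters and ranges are just one reasonable choice):</p>
<pre><code>import numpy as np
import imgaug.augmenters as iaa

seq = iaa.Sequential([
    iaa.Fliplr(0.5),                                 # left-right flip
    iaa.Affine(rotate=(-10, 10), shear=(-5, 5)),     # small rotation / tilt
    iaa.Crop(percent=(0, 0.05)),                     # slight crop
    iaa.AdditiveGaussianNoise(scale=(0, 0.03 * 255)),
    iaa.Multiply((0.8, 1.2)),                        # luminosity change
])

images = np.random.randint(0, 255, (4, 128, 128, 3), dtype=np.uint8)
augmented = seq.augment_images(images)
</code></pre>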
<p>With more work, you can try style transfer on your images or modify them with GANs. See <a href="https://arxiv.org/abs/1904.11685" rel="nofollow noreferrer">this paper</a>.</p> | 2019-11-18 13:03:55.087000+00:00 | 2019-11-18 13:03:55.087000+00:00 | null | null | 58,914,830 | <p>I'm working on an age classification problem. When I created classes based on age group, the different age groups had different numbers of images. One has only around 1100 pictures and another has around 4500 pictures. I know 1100 pictures are not enough for a class. How do I create more data for the class that has less data? Is there a way to make good data artifacts?</p> | 2019-11-18 12:42:44.433000+00:00 | 2019-11-18 13:03:55.087000+00:00 | null | machine-learning|deep-learning|classification|conv-neural-network | ['https://github.com/aleju/imgaug', 'https://arxiv.org/abs/1904.11685'] | 2
42,882,289 | <p>First of all, there are very fundamental differences between libraries like TensorFlow and NumPy. In TensorFlow you basically define a computation graph in a symbolic manner and nothing is computed unless you call the function <code>run</code> from class <code>Session</code>. It is very important to be careful about calling the <code>run</code> function only once (or only if required) so that you avoid redundant computation.</p>
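<p>A tiny TensorFlow 1.x sketch of that define-then-run style (toy computation, for contrast with NumPy's immediate evaluation):</p>
<pre><code>import numpy as np
import tensorflow as tf

a = tf.placeholder(tf.float32, shape=(None, 3))
b = tf.reduce_sum(tf.square(a))  # only graph nodes so far, nothing computed

with tf.Session() as sess:
    result = sess.run(b, feed_dict={a: np.ones((2, 3))})  # actual execution
print(result)  # 6.0
</code></pre>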
<p>Speaking of performance, libraries like TensorFlow and Theano are good at pruning and optimizing the whole computation graph (before running) and using very optimized kernels to run the operations in the graph on available devices, including GPU. GPU is very fast and useful in doing matrix operations, but NumPy can not take advantage of it. NumPy runs the computation mostly using C++ based code, but unlike TensorFlow it does not apply a graph optimization to the whole computation graph. In fact, there is no such a thing as graph in NumPy. TensorFlow takes the whole computation to a different runtime space and then returns the concrete results after actual execution.</p>
<p>In case you are curious about what exactly a kernel is, <a href="https://arxiv.org/pdf/1603.04467.pdf" rel="nofollow noreferrer">the original TensorFlow paper</a> defines it as: "A kernel is a particular implementation of an operation that can be run on a particular type of device (e.g., CPU or GPU)."</p>
<p>There are fairly simple, but good benchmarks <a href="https://simplyml.com/linear-algebra-shootout-numpy-vs-theano-vs-tensorflow-2/" rel="nofollow noreferrer">at this blog post</a> which show by numbers how much and when TensorFlow is better than NumPy.</p>
<p>To understand how TensorFlow actually works, <a href="https://stackoverflow.com/questions/34401714/is-tensorflow-lazy">this post</a> may be useful too.</p>
<p>If your problem is not a big one which needs to deal with large scale matrices and lots of matrix operations, then I would say NumPy is easier to use and you don't need to be worried about performance. </p> | 2017-03-19 02:35:14.057000+00:00 | 2017-09-03 05:54:18.273000+00:00 | 2017-09-03 05:54:18.273000+00:00 | null | 42,880,157 | <p>I was reading over <a href="https://www.tensorflow.org/tutorials/pdes" rel="nofollow noreferrer">this tensorflow tutorial</a> which specifies how to run a simulation of water ripples using tensorflow. My understanding is that tensorflow variables and placeholders are used in place of generic python variables, but is there any specific (i.e. performance) advantage to using tensorflow over just numpy? </p> | 2017-03-18 21:35:44.207000+00:00 | 2017-09-03 05:54:18.273000+00:00 | null | python|tensorflow | ['https://arxiv.org/pdf/1603.04467.pdf', 'https://simplyml.com/linear-algebra-shootout-numpy-vs-theano-vs-tensorflow-2/', 'https://stackoverflow.com/questions/34401714/is-tensorflow-lazy'] | 3 |
23,485,536 | <p>Can you go through this PDF <a href="http://arxiv.org/pdf/1203.2081v2.pdf" rel="nofollow">link</a></p>
<p>This explains the difference between MapReduce and BSP (Apache Hama offers a Bulk Synchronous Parallel computing engine).</p> | 2014-05-06 03:34:48.487000+00:00 | 2014-05-06 03:34:48.487000+00:00 | null | null | 23,450,018 | <p>Hi, I am finding it difficult to compare MapReduce with Hama. I understand that Hama uses the bulk synchronous parallel model and that the worker nodes can communicate with one another, whereas in Apache Hadoop the worker nodes only communicate with the namenode, correct? If so, I don't understand the benefits Hama would have over standard MapReduce in Hadoop. Thanks!</p> | 2014-05-03 21:26:31.843000+00:00 | 2014-06-03 05:48:23.647000+00:00 | null | apache|hadoop|mapreduce|hama|bulk-synchronous-parallel | ['http://arxiv.org/pdf/1203.2081v2.pdf'] | 1
62,955,287 | <blockquote>
<p>Why does where appear in the middle of the lambda function definition?</p>
</blockquote>
<p>Quoting <a href="https://agda.readthedocs.io/en/v2.6.1/language/lambda-abstraction.html" rel="nofollow noreferrer">the docs</a>:</p>
<blockquote>
<p>Anonymous pattern matching functions can be defined using one of the
two following syntaxes:</p>
<p><code>\ { p11 .. p1n -> e1 ; … ; pm1 .. pmn -> em }</code></p>
<p><code>\ where p11 .. p1n -> e1 … pm1 .. pmn -> em</code></p>
</blockquote>
<p>So <code>λ where</code> is an anonymous pattern matching function. <code>force</code> is the field of <a href="https://github.com/agda/agda-stdlib/blob/14577d5f187329c8eef5997b5bc2d30af1955aad/src/Codata/Thunk.agda#L17" rel="nofollow noreferrer"><code>Thunk</code></a> and <code>.force</code> is a <a href="https://agda.readthedocs.io/en/v2.6.1/language/copatterns.html" rel="nofollow noreferrer">copattern</a> in postfix notation (originally I said nonsense here, but thanks to @<strong>Cactus</strong> it's now fixed, see <a href="https://stackoverflow.com/a/63013330/3237465">his answer</a>).</p>
<blockquote>
<p>Also, is there a place where I can find documentation to use "Codata" and proofs with it? Thanks!</p>
</blockquote>
<p>Check out these papers</p>
<ol>
<li><a href="https://arxiv.org/pdf/1406.2059" rel="nofollow noreferrer">Normalization by Evaluation in the Delay Monad
A Case Study for Coinduction via Copatterns and Sized Types</a></li>
<li><a href="http://www.cse.chalmers.se/%7Eabela/jlamp17.pdf" rel="nofollow noreferrer">Equational Reasoning about Formal Languages in Coalgebraic Style</a></li>
<li><a href="https://cs.ru.nl/%7Enweide/AgdaSizedTypes.pdf" rel="nofollow noreferrer">Guarded Recursion in Agda via Sized Types</a></li>
</ol> | 2020-07-17 13:44:17.767000+00:00 | 2020-07-21 13:10:16.783000+00:00 | 2020-07-21 13:10:16.783000+00:00 | null | 62,943,643 | <p>I've been playing around with the idea of writing programs that run on Streams and properties with them, but I feel that I am stuck even with the simplest of things. When I look at the definition of <code>repeat</code> in <code>Codata/Streams</code> in the standard library, I find a construction that I haven't seen anywhere in Agda: <code>λ where .force →</code>.</p>
<p>Here is an excerpt of a Stream defined with this weird feature:</p>
<pre><code>repeat : ∀ {i} → A → Stream A i
repeat a = a ∷ λ where .force → repeat a
</code></pre>
<p>Why does <code>where</code> appear in the middle of the lambda function definition? And what is the purpose of <code>.force</code> if it is never used?</p>
<p>I might be asking something that is in the documentation, but I can't figure out how to search for it.</p>
<p>Also, is there a place where I can find documentation to use "Codata" and proofs with it? Thanks!</p> | 2020-07-16 21:20:18.107000+00:00 | 2020-07-21 14:06:53.760000+00:00 | null | stream|agda | ['https://agda.readthedocs.io/en/v2.6.1/language/lambda-abstraction.html', 'https://github.com/agda/agda-stdlib/blob/14577d5f187329c8eef5997b5bc2d30af1955aad/src/Codata/Thunk.agda#L17', 'https://agda.readthedocs.io/en/v2.6.1/language/copatterns.html', 'https://stackoverflow.com/a/63013330/3237465', 'https://arxiv.org/pdf/1406.2059', 'http://www.cse.chalmers.se/%7Eabela/jlamp17.pdf', 'https://cs.ru.nl/%7Enweide/AgdaSizedTypes.pdf'] | 7 |
65,048,141 | <p>Let me suggest a simple algorithm, where you don't need to know the probabilities that each element in A is chosen (set B in your question). I assume that each element in A has a known order (for example, each element is a number).</p>
<p>There are two parts to the algorithm. The first is an algorithm to generate unbiased random bits. We'll call it "RandomBit()" (Morina et al. 2019).</p>
<ol>
<li>Draw one independent element from A, call it X, and then draw another independent element from A, call it Y.</li>
<li>If X is less than Y, return 0. If Y is less than X, return 1. If neither is the case, go to step 1.</li>
</ol>
<p>This works because independent pairs of independent draws from A are statistically indifferent (Montes Gutiérrez 2014, De Schuymer et al. 2003); see the appendix to my <a href="https://peteroupc.github.io/randextract.html" rel="nofollow noreferrer">Note on Randomness Extraction</a> for details.</p>
<p>Next is an algorithm to generate a uniform random number in [0, 1]. All you have to do is "juxtapose enough random binary digits" (von Neumann 1951): Add RandomBit()/2 + RandomBit()/4 + RandomBit()/8 + ....</p>
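<p>A minimal Python sketch of both parts (the <code>draw</code> function standing in for your non-uniform generator is a made-up example):</p>
<pre><code>import random

def random_bit(draw):
    # steps 1-2 above: compare two independent draws; retry on ties
    while True:
        x, y = draw(), draw()
        if x < y:
            return 0
        if y < x:
            return 1

def uniform01(draw, bits=53):
    # juxtapose unbiased bits: b1/2 + b2/4 + b3/8 + ...
    return sum(random_bit(draw) / 2.0 ** (i + 1) for i in range(bits))

draw = lambda: random.choice([1, 1, 1, 2, 3])  # some non-uniform source
print(uniform01(draw))
</code></pre>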
<p>If your non-uniform distribution is the normal distribution, see the following question on <em>Cross Validated</em> for further tricks: <a href="https://stats.stackexchange.com/questions/117689">https://stats.stackexchange.com/questions/117689</a></p>
<p>REFERENCES:</p>
<ul>
<li>Von Neumann, J., "Various techniques used in connection with random digits", J. Res. Nat. Bur. Stand. Appl. Math. Series 3, 36-38 (1951).</li>
<li>Morina, G., Łatuszyński, K., et al., "<a href="https://arxiv.org/abs/1912.09229" rel="nofollow noreferrer">From the Bernoulli Factory to a Dice Enterprise via Perfect Sampling of Markov Chains</a>", arXiv:1912.09229 [math.PR], 2019.</li>
<li>Montes Gutiérrez, I., "Comparison of alternatives under uncertainty and imprecision", doctoral thesis, Universidad de Oviedo, 2014.</li>
<li>De Schuymer, Bart, Hans De Meyer, and Bernard De Baets. "A fuzzy approach to stochastic dominance of random variables", in International Fuzzy Systems Association World Congress 2003.</li>
</ul> | 2020-11-28 09:29:53.267000+00:00 | 2020-11-28 10:55:45.920000+00:00 | 2020-11-28 10:55:45.920000+00:00 | null | 61,926,610 | <p>Let's say I have a function that is non-uniform and generates elements in the set A. I have a set B that has the probability of generation of each element in A. Is there any way I can make the non-uniform pseudorandom function behave like a uniform pseudorandom function?</p> | 2020-05-21 03:19:35.483000+00:00 | 2021-05-26 12:50:04.587000+00:00 | null | algorithm|math|optimization|random | ['https://peteroupc.github.io/randextract.html', 'https://stats.stackexchange.com/questions/117689', 'https://arxiv.org/abs/1912.09229'] | 3
58,672,341 | <p>You can use <code>U-Net</code> or <code>SegNet</code> for image segmentation. In fact, you add skip (residual-style) connections to your CNN to get this result:</p>
<p><a href="https://i.stack.imgur.com/7kOeB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7kOeB.png" alt="Image Segmentation"></a></p>
<p>About <strong>U-Net</strong>:</p>
<p>Arxiv: <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">U-Net: Convolutional Networks for Biomedical Image Segmentation</a></p>
<p><a href="https://i.stack.imgur.com/EwXH5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EwXH5.png" alt="U-Net Architecture"></a></p>
<p><strong>Seg-Net</strong>:</p>
<p>Arxiv: <a href="https://arxiv.org/pdf/1511.00561.pdf" rel="nofollow noreferrer">SegNet: A Deep Convolutional
Encoder-Decoder Architecture for Image
Segmentation</a></p>
<p><a href="https://i.stack.imgur.com/ydVfU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ydVfU.png" alt="SegNet Architecture"></a></p>
<p><strong>Here are simple code examples</strong> (for <code>keras==1.1.0</code>):</p>
<p><strong>U-Net:</strong></p>
<pre><code># Assumed imports for this Keras 1.x snippet
import numpy as np
from keras.models import Sequential
from keras.layers import (Activation, Convolution2D, Lambda, MaxPooling2D,
                          Merge, Reshape, UpSampling2D)
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD
from keras import backend as K

shape=60
batch_size = 30
nb_classes = 10
img_rows, img_cols = shape, shape
nb_filters = 32
pool_size = (2, 2)
kernel_size = (3, 3)
input_shape=(shape,shape,1)
reg=0.001
learning_rate = 0.013
decay_rate = 5e-5
momentum = 0.9
sgd = SGD(lr=learning_rate,momentum=momentum, decay=decay_rate, nesterov=True)
shape2 = 54  # assumed: spatial size of the final output implied by the layers below
recog0 = Sequential()
recog0.add(Convolution2D(20, 3,3,
border_mode='valid',
input_shape=input_shape))
recog0.add(BatchNormalization(mode=2))
recog=recog0
recog.add(Activation('relu'))
recog.add(MaxPooling2D(pool_size=(2,2)))
recog.add(UpSampling2D(size=(2, 2)))
recog.add(Convolution2D(20, 3, 3,init='glorot_uniform'))
recog.add(BatchNormalization(mode=2))
recog.add(Activation('relu'))
for i in range(0,2):
print(i,recog0.layers[i].name)
recog_res=recog0
part=1
recog0.layers[part].name
get_0_layer_output = K.function([recog0.layers[0].input, K.learning_phase()],[recog0.layers[part].output])
get_0_layer_output([x_train, 0])[0][0]
pred=[np.argmax(get_0_layer_output([x_train, 0])[0][i]) for i in range(0,len(x_train))]
loss=x_train-pred
loss=loss.astype('float32')
recog_res.add(Lambda(lambda x: x,input_shape=(56,56,20),output_shape=(56,56,20)))
recog2=Sequential()
recog2.add(Merge([recog,recog_res],mode='ave'))
recog2.add(Activation('relu'))
recog2.add(Convolution2D(20, 3, 3,init='glorot_uniform'))
recog2.add(BatchNormalization(mode=2))
recog2.add(Activation('relu'))
recog2.add(Convolution2D(1, 1, 1,init='glorot_uniform'))
recog2.add(Reshape((shape2,shape2,1)))
recog2.add(Activation('relu'))
recog2.compile(loss='mean_squared_error', optimizer=sgd,metrics = ['mae'])
recog2.summary()
x_train3=x_train2.reshape((1,shape2,shape2,1))
recog2.fit(x_train,x_train3,
nb_epoch=25,
batch_size=30,verbose=1)
</code></pre>
<p><strong>SegNet:</strong></p>
<pre><code># Assumed imports for this Keras 1.x snippet
import numpy as np
from keras.models import Sequential
from keras.layers import (Activation, Convolution2D, Lambda, MaxPooling2D,
                          Merge, Reshape, UpSampling2D)
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD
from keras import backend as K

shape=60
shape2 = 54  # assumed: spatial size of the final output implied by the layers below
batch_size = 30
nb_classes = 10
img_rows, img_cols = shape, shape
nb_filters = 32
pool_size = (2, 2)
kernel_size = (3, 3)
input_shape=(shape,shape,1)
reg=0.001
learning_rate = 0.012
decay_rate = 5e-5
momentum = 0.9
sgd = SGD(lr=learning_rate,momentum=momentum, decay=decay_rate, nesterov=True)
recog0 = Sequential()
recog0.add(Convolution2D(20, 4,4,
border_mode='valid',
input_shape=input_shape))
recog0.add(BatchNormalization(mode=2))
recog0.add(MaxPooling2D(pool_size=(2,2)))
recog=recog0
recog.add(Activation('relu'))
recog.add(MaxPooling2D(pool_size=(2,2)))
recog.add(UpSampling2D(size=(2, 2)))
recog.add(Convolution2D(20, 1, 1,init='glorot_uniform'))
recog.add(BatchNormalization(mode=2))
recog.add(Activation('relu'))
for i in range(0,8):
print(i,recog0.layers[i].name)
recog_res=recog0
part=8
recog0.layers[part].name
get_0_layer_output = K.function([recog0.layers[0].input, K.learning_phase()],[recog0.layers[part].output])
get_0_layer_output([x_train, 0])[0][0]
pred=[np.argmax(get_0_layer_output([x_train, 0])[0][i]) for i in range(0,len(x_train))]
loss=x_train-pred
loss=loss.astype('float32')
recog_res.add(Lambda(lambda x: x-np.mean(loss),input_shape=(28,28,20),output_shape=(28,28,20)))
recog2=Sequential()
recog2.add(Merge([recog,recog_res],mode='sum'))
recog2.add(UpSampling2D(size=(2, 2)))
recog2.add(Convolution2D(1, 3, 3,init='glorot_uniform'))
recog2.add(BatchNormalization(mode=2))
recog2.add(Reshape((shape2*shape2,)))
recog2.add(Reshape((shape2,shape2,1)))
recog2.add(Activation('relu'))
recog2.compile(loss='mean_squared_error', optimizer=sgd,metrics = ['mae'])
recog2.summary()
x_train3=x_train2.reshape((1,shape2,shape2,1))
recog2.fit(x_train,x_train3,
nb_epoch=400,
batch_size=30,verbose=1)
</code></pre>
<p>Then add a threshold for the colors of segmentation.</p> | 2019-11-02 15:08:07.410000+00:00 | 2019-11-02 19:13:08.280000+00:00 | 2019-11-02 19:13:08.280000+00:00 | null | 58,504,305 | <p>Suppose I have one or multiple tiles consisting of a single pattern (e.g. materials like: wood, concrete, gravel...) that I would like to train my classifier on, and then I'll use the trained classifier to determine to which class each pixel in another image belongs.</p>
<p>Below are examples of two tiles I would like to train the classifier on:</p>
<p><a href="https://i.stack.imgur.com/8l97k.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/8l97k.jpg" alt="concrete"></a>
<a href="https://i.stack.imgur.com/fS0sh.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/fS0sh.jpg" alt="wood"></a></p>
<p>And let's say I want to segment the image below to identify the pixels belonging to the door and those belonging to the wall. It's just an example, I know this image isn't made of exactly the same patterns as the tiles above:</p>
<p><a href="https://i.stack.imgur.com/eas1R.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/eas1R.jpg" alt="door"></a></p>
<p>For this specific problem, is it necessary to use convolutional neural networks? Or is there a way to achieve my goal with a shallow neural network or any other classifier, combined with texture features for example?</p>
<p>I've already implemented a classifier with Scikit-learn which works on tile pixels individually (see code below where <code>training_data</code> is a vector of singletons), but I want instead to train the classifier on texture patterns.</p>
<pre class="lang-py prettyprint-override"><code># train classifier
classifier = SGDClassifier()
classifier.fit(training_data, training_target)
# classify given image
test_data = image_gray.flatten().reshape((-1, 1))
predictions = classifier.predict(test_data)
image_classified = predictions.reshape(image_gray.shape)
</code></pre>
<p>I was reading <a href="https://medium.com/@arthur_ouaknine/review-of-deep-learning-algorithms-for-image-semantic-segmentation-509a600f7b57" rel="noreferrer">this review</a> of recent deep learning methods used for image segmentation and the results seem accurate, but since I've never used any CNN before I feel intimidated by it.</p> | 2019-10-22 12:28:12.173000+00:00 | 2019-11-03 20:09:16.937000+00:00 | 2019-10-22 16:18:13.103000+00:00 | python|machine-learning|image-processing|textures | ['https://i.stack.imgur.com/7kOeB.png', 'https://arxiv.org/abs/1505.04597', 'https://i.stack.imgur.com/EwXH5.png', 'https://arxiv.org/pdf/1511.00561.pdf', 'https://i.stack.imgur.com/ydVfU.png'] | 5 |
58,684,187 | <p>Convolutional Neural Networks (CNNs) are high performance tools for image recognition (including semantic segmentation) and have been shown to be <a href="https://arxiv.org/abs/1811.12231" rel="nofollow noreferrer">very sensitive to texture</a>. The field of computer vision has been around way before the current wave of interest in deep learning, however, and there are various other tools that are still relevant - often with smaller requirements for computational resources and/or training data.</p>
<blockquote>
<p>For this specific problem, is it necessary to use convolutional neural networks? </p>
</blockquote>
<p>It very much depends on what your metrics for success are. There are other tools that do not involve the use of CNNs - whether they will give you a satisfactory level of detection accuracy can only be determined by practical testing. </p>
<blockquote>
<p>Or is there a way to achieve my goal with a shallow neural network or any other classifier, combined with texture features for example?</p>
</blockquote>
<p>A shallow neural network will have some detection capability, although (unlike CNNs) it does not exhibit <a href="https://stats.stackexchange.com/questions/208936/what-is-translation-invariance-in-computer-vision-and-convolutional-neural-netwo">translational invariance</a> and so is sensitive to small displacements of the target. Such a network is likely to have more success if used to classify small patches of the image; classifying an image patch within a sliding window is not that unlike how a CNN works, of course. It is also possible to <a href="https://aul12.me/machinelearning/2019/06/10/cnn-mlp-1.html" rel="nofollow noreferrer">approximate a CNN using an equivalent multi-layer perceptron (MLP)</a> - that would be another approach, if your definition of 'shallow' permits.</p>
<p><em>Two approaches that do not require neural networks:</em></p>
<p><strong>Histogram of Oriented Gradients</strong> The HOG descriptor extracts image features using histograms of gradient orientations computed over local cells of the image. This produces a feature vector that can be classified - such as with a Support Vector Machine (SVM) or shallow neural network (MLP), for example. This would be a viable approach to classifying image patches without using CNNs. The <code>scikit-image</code> package has a <a href="https://scikit-image.org/docs/dev/auto_examples/features_detection/plot_hog.html" rel="nofollow noreferrer">HOG function</a>, and there is a full worked example of classification of HOG features <a href="https://gurus.pyimagesearch.com/lesson-sample-histogram-of-oriented-gradients-and-car-logo-recognition/" rel="nofollow noreferrer">here</a>. From the documentation:</p>
<pre><code>from skimage.feature import hog
from skimage import data, exposure
image = data.astronaut()
fd, hog_image = hog(image, orientations=8, pixels_per_cell=(16, 16),
cells_per_block=(1, 1), visualize=True, multichannel=True)
</code></pre>
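<p>A sketch of that patch-classification idea on top of HOG features (random stand-in patches and labels; in practice you would cut labelled patches from your tiles):</p>
<pre><code>import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

patches = np.random.rand(20, 64, 64)   # grayscale texture patches
labels = np.random.randint(0, 2, 20)   # e.g. 0 = wood, 1 = concrete

features = [hog(p, orientations=8, pixels_per_cell=(16, 16),
                cells_per_block=(1, 1)) for p in patches]
clf = LinearSVC().fit(features, labels)
print(clf.predict(features[:5]))
</code></pre>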
<p><strong>Felzenszwalb's efficient graph-based image segmentation</strong>
There are a bunch of segmentation algorithms in the <code>scikit-image.segmentation</code> toolbox. Felzenszwalb's is one of them, and (broadly speaking) it performs a clustering of image regions based on edges. <a href="https://scikit-image.org/docs/dev/api/skimage.segmentation.html#skimage.segmentation.felzenszwalb" rel="nofollow noreferrer">More info here</a>. From the module documentation:</p>
<pre><code>from skimage.segmentation import felzenszwalb
from skimage.data import coffee
img = coffee()
segments = felzenszwalb(img, scale=3.0, sigma=0.95, min_size=5)
</code></pre>
<p>Hope that helps.</p> | 2019-11-03 20:09:16.937000+00:00 | 2019-11-03 20:09:16.937000+00:00 | null | null | 58,504,305 | <p>Suppose I have one or multiple tiles consisting of a single pattern (e.g. materials like: wood, concrete, gravel...) that I would like to train my classifier on, and then I'll use the trained classifier to determine to which class each pixel in another image belongs.</p>
<p>Below are examples of two tiles I would like to train the classifier on:</p>
<p><a href="https://i.stack.imgur.com/8l97k.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/8l97k.jpg" alt="concrete"></a>
<a href="https://i.stack.imgur.com/fS0sh.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/fS0sh.jpg" alt="wood"></a></p>
<p>And let's say I want to segment the image below to identify the pixels belonging to the door and those belonging to the wall. It's just an example, I know this image isn't made of exactly the same patterns as the tiles above:</p>
<p><a href="https://i.stack.imgur.com/eas1R.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/eas1R.jpg" alt="door"></a></p>
<p>For this specific problem, is it necessary to use convolutional neural networks? Or is there a way to achieve my goal with a shallow neural network or any other classifier, combined with texture features for example?</p>
<p>I've already implemented a classifier with Scikit-learn which works on tile pixels individually (see code below where <code>training_data</code> is a vector of singletons), but I want instead to train the classifier on texture patterns.</p>
<pre class="lang-py prettyprint-override"><code># train classifier
classifier = SGDClassifier()
classifier.fit(training_data, training_target)
# classify given image
test_data = image_gray.flatten().reshape((-1, 1))
predictions = classifier.predict(test_data)
image_classified = predictions.reshape(image_gray.shape)
</code></pre>
<p>I was reading <a href="https://medium.com/@arthur_ouaknine/review-of-deep-learning-algorithms-for-image-semantic-segmentation-509a600f7b57" rel="noreferrer">this review</a> of recent deep learning methods used for image segmentation and the results seem accurate, but since I've never used any CNN before I feel intimidated by it.</p> | 2019-10-22 12:28:12.173000+00:00 | 2019-11-03 20:09:16.937000+00:00 | 2019-10-22 16:18:13.103000+00:00 | python|machine-learning|image-processing|textures | ['https://arxiv.org/abs/1811.12231', 'https://stats.stackexchange.com/questions/208936/what-is-translation-invariance-in-computer-vision-and-convolutional-neural-netwo', 'https://aul12.me/machinelearning/2019/06/10/cnn-mlp-1.html', 'https://scikit-image.org/docs/dev/auto_examples/features_detection/plot_hog.html', 'https://gurus.pyimagesearch.com/lesson-sample-histogram-of-oriented-gradients-and-car-logo-recognition/', 'https://scikit-image.org/docs/dev/api/skimage.segmentation.html#skimage.segmentation.felzenszwalb'] | 6 |
53,547,262 | <p>@Josh Payne's answer is correct, but I'll expand on it for those who want to use the .mat file with an emphasis on typical data splits.</p>
<p>The data itself has already been split up into a training and test set. Here's how I accessed the data:</p>
<pre><code> from scipy import io as sio
mat = sio.loadmat('emnist-letters.mat')
data = mat['dataset']
X_train = data['train'][0,0]['images'][0,0]
y_train = data['train'][0,0]['labels'][0,0]
X_test = data['test'][0,0]['images'][0,0]
y_test = data['test'][0,0]['labels'][0,0]
</code></pre>
<p>There is an additional field 'writers' (e.g. <code>data['train'][0,0]['writers'][0,0]</code>) that distinguishes the original sample writer. Finally, there is another field <code>data['mapping']</code>, but I'm not sure what it is mapping the digits to.</p>
<p>In addition, in Section II D, the <a href="https://arxiv.org/abs/1702.05373v1" rel="nofollow noreferrer">EMNIST paper</a> states that "the last portion of the training set, equal in size to the testing set, is set aside as a validation set". Strangely, the .mat file training/testing size does not match the number listed in Table II, but it does match the size in Fig. 2.</p>
<pre><code> val_start = X_train.shape[0] - X_test.shape[0]
X_val = X_train[val_start:X_train.shape[0],:]
y_val = y_train[val_start:X_train.shape[0]]
X_train = X_train[0:val_start,:]
y_train = y_train[0:val_start]
</code></pre>
<p>If you don't want a validation set, it is fine to leave these samples in the training set.</p>
<p>Also, if you would like to reshape the data into 2D, 28x28-sized images instead of a 1D 784-element array, then to get the correct image orientation you'll need to do a numpy reshape using Fortran ordering (Matlab uses column-major ordering, just like Fortran; <a href="https://en.wikipedia.org/wiki/Row-_and_column-major_order" rel="nofollow noreferrer">reference</a>). e.g. -</p>
<pre><code> X_train = X_train.reshape( (X_train.shape[0], 28, 28), order='F')
</code></pre> | 2018-11-29 20:43:12.330000+00:00 | 2020-06-17 09:47:51.273000+00:00 | 2020-06-17 09:47:51.273000+00:00 | null | 51,125,969 | <p>I have been trying to find a way to load the EMNIST-letters dataset but without much success. I have found interesting stuff in the structure and can't wrap my head around what is happening. Here is what I mean:</p>
<p>I downloaded the .mat format <a href="https://www.nist.gov/itl/iad/image-group/emnist-dataset" rel="noreferrer">from here</a></p>
<p>I can load the data using </p>
<pre><code>import scipy.io
mat = scipy.io.loadmat('letter_data.mat') # renamed for convenience
</code></pre>
<p>It is a dictionary with the following keys:</p>
<pre><code>dict_keys(['__header__', '__version__', '__globals__', 'dataset'])
</code></pre>
<p>The only key of interest is dataset, which I haven't been able to gather data from. Printing its shape gives this:</p>
<pre><code>>>>print(mat['dataset'].shape)
(1, 1)
</code></pre>
<p>I dug deeper and deeper to find a shape that looks somewhat like a real dataset and came across this:</p>
<pre><code>>>>print(mat['dataset'][0][0][0][0][0][0].shape)
(124800, 784)
</code></pre>
<p>which is exactly what I wanted, but I can't find the labels nor the test data. I tried many things but can't seem to understand the structure of this dataset.</p>
<p>If someone could tell me what is going on with this I would appreciate it</p> | 2018-07-01 18:35:45.917000+00:00 | 2021-01-27 20:47:19.823000+00:00 | 2021-01-27 20:47:19.823000+00:00 | python|python-3.x|numpy|scipy|mnist | ['https://arxiv.org/abs/1702.05373v1', 'https://en.wikipedia.org/wiki/Row-_and_column-major_order'] | 2 |
11,413,883 | <p>Oleg Kiselyov and Ralf Lรคmmel's <a href="https://arxiv.org/pdf/cs/0509027v1.pdf" rel="nofollow noreferrer">"Haskell's overlooked object system"</a> proposes a library for Haskell that implements an object system using Haskell's existing features, including type classes.</p>
<p>An excerpt from the "introduction" section of the paper (emphasis mine):</p>
<blockquote>
<p>The interest in this topic is not at all restricted to Haskell researchers and practitioners since there is a fundamental and unsettled question โ a question that is addressed in the present paper:</p>
<p> <strong>What is the relation between type-class-bounded and subtype polymorphism?</strong></p>
<p>In this research context, we speci๏ฌcally (and emphatically) restrict ourselves to the
existing Haskell language (Haskell 98 and common extensions where necessary),
i.e., no new Haskell extensions are to be proposed. <strong>As we will substantiate, this
restriction is adequate, as it allows us to deliver a meaningful and momentous answer to the aforementioned question.</strong></p>
</blockquote> | 2012-07-10 13:01:46.623000+00:00 | 2017-01-30 21:11:03.827000+00:00 | 2017-01-30 21:11:03.827000+00:00 | null | 2,707,171 | <p>In this <a href="http://research.microsoft.com/en-us/um/people/simonpj/papers/haskell-retrospective/ECOOP-July09.pdf" rel="noreferrer">PDF presentation</a> on Haskell Type Classes, slide #54 has this question:</p>
<blockquote>
<p><strong>Open Question</strong>:</p>
<p>In a language with generics and
constrained polymorphism, do you need
subtyping too?</p>
</blockquote>
<p>My questions are:</p>
<ol>
<li><p>How do generics and constrained polymorphism make subtyping unnecessary?</p></li>
<li><p>If generics and constrained polymorphism make subtyping unnecessary, why does Scala have subtyping?</p></li>
</ol> | 2010-04-25 04:41:19.943000+00:00 | 2017-01-30 21:11:03.827000+00:00 | 2010-04-25 05:49:24.070000+00:00 | oop|programming-languages|scala|haskell|functional-programming | ['https://arxiv.org/pdf/cs/0509027v1.pdf'] | 1 |
47,919,268 | <p>I'd suggest a <a href="https://arxiv.org/pdf/1505.04597.pdf" rel="nofollow noreferrer">U-Net</a> (see figure 1). In the first half of a U-Net, the spatial resolution gets reduced as the number of channels increases (like VGG, as you mentioned). In the second half, the opposite happens (the number of channels is reduced while the resolution increases). "Skip" connections between different layers allow the network to efficiently produce high-resolution output.</p>
<p>You should be able to find an appropriate Keras implementation (maybe <a href="https://github.com/zhixuhao/unet" rel="nofollow noreferrer">this one</a>).</p> | 2017-12-21 06:37:07.593000+00:00 | 2017-12-21 06:37:07.593000+00:00 | null | null | 39,685,349 | <p>I'm trying to design a Convolutional Net to estimate the Depth of images using Keras.</p>
<p>I have RGB Input images with the shape of 3x120x160 and have the Grayscale Output Depth Maps with the shape of 1x120x160.</p>
<p>I tried using a VGG-like architecture where the depth of each layer grows, but at the end, when I want to design the final layers, I get stuck: using a Dense layer is too expensive, and I tried using Upsampling, which proved inefficient.</p>
<p>I want to use DeConvolution2D but I can't get it to work. The only architecture I end up with is something like this:</p>
<pre><code> model = Sequential()
model.add(Convolution2D(64, 5, 5, activation='relu', input_shape=(3, 120, 160)))
model.add(Convolution2D(64, 5, 5, activation='relu'))
model.add(MaxPooling2D())
model.add(Dropout(0.5))
model.add(Convolution2D(128, 3, 3, activation='relu'))
model.add(Convolution2D(128, 3, 3, activation='relu'))
model.add(MaxPooling2D())
model.add(Dropout(0.5))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(Dropout(0.5))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(Dropout(0.5))
model.add(ZeroPadding2D())
model.add(Deconvolution2D(512, 3, 3, (None, 512, 41, 61), subsample=(2, 2), activation='relu'))
model.add(Deconvolution2D(512, 3, 3, (None, 512, 123, 183), subsample=(3, 3), activation='relu'))
model.add(cropping.Cropping2D(cropping=((1, 2), (11, 12))))
model.add(Convolution2D(1, 1, 1, activation='sigmoid', border_mode='same'))
</code></pre>
<p>The Model summary is like this :</p>
<pre><code>Layer (type) Output Shape Param # Connected to
====================================================================================================
convolution2d_1 (Convolution2D) (None, 64, 116, 156) 4864 convolution2d_input_1[0][0]
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D) (None, 64, 112, 152) 102464 convolution2d_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D) (None, 64, 56, 76) 0 convolution2d_2[0][0]
____________________________________________________________________________________________________
dropout_1 (Dropout) (None, 64, 56, 76) 0 maxpooling2d_1[0][0]
____________________________________________________________________________________________________
convolution2d_3 (Convolution2D) (None, 128, 54, 74) 73856 dropout_1[0][0]
____________________________________________________________________________________________________
convolution2d_4 (Convolution2D) (None, 128, 52, 72) 147584 convolution2d_3[0][0]
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D) (None, 128, 26, 36) 0 convolution2d_4[0][0]
____________________________________________________________________________________________________
dropout_2 (Dropout) (None, 128, 26, 36) 0 maxpooling2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_5 (Convolution2D) (None, 256, 24, 34) 295168 dropout_2[0][0]
____________________________________________________________________________________________________
convolution2d_6 (Convolution2D) (None, 256, 22, 32) 590080 convolution2d_5[0][0]
____________________________________________________________________________________________________
dropout_3 (Dropout) (None, 256, 22, 32) 0 convolution2d_6[0][0]
____________________________________________________________________________________________________
convolution2d_7 (Convolution2D) (None, 512, 20, 30) 1180160 dropout_3[0][0]
____________________________________________________________________________________________________
convolution2d_8 (Convolution2D) (None, 512, 18, 28) 2359808 convolution2d_7[0][0]
____________________________________________________________________________________________________
dropout_4 (Dropout) (None, 512, 18, 28) 0 convolution2d_8[0][0]
____________________________________________________________________________________________________
zeropadding2d_1 (ZeroPadding2D) (None, 512, 20, 30) 0 dropout_4[0][0]
____________________________________________________________________________________________________
deconvolution2d_1 (Deconvolution2(None, 512, 41, 61) 2359808 zeropadding2d_1[0][0]
____________________________________________________________________________________________________
deconvolution2d_2 (Deconvolution2(None, 512, 123, 183) 2359808 deconvolution2d_1[0][0]
____________________________________________________________________________________________________
cropping2d_1 (Cropping2D) (None, 512, 120, 160) 0 deconvolution2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_9 (Convolution2D) (None, 1, 120, 160) 513 cropping2d_1[0][0]
====================================================================================================
Total params: 9474113
</code></pre>
<p>I couldn't reduce the size of the Deconvolution2D layers from 512, as doing so results in shape-related errors, and it seems I have to add as many Deconvolution2D layers as the number of filters in the previous layer.
I also had to add a final Convolution2D layer to be able to run the network.</p>
<p>The above architecture learns, but really slowly and (I think) inefficiently. I'm sure I'm doing something wrong and the design shouldn't be like this. Can you help me design a better network?</p>
<p>I also tried to make a network like the one mentioned in <a href="https://github.com/johannah/mono-depth" rel="nofollow noreferrer">this repository</a>, but it seems Keras doesn't work the way this Lasagne example does. I'd really appreciate it if someone could show me how to design something like this network in Keras. Its architecture is like this:</p>
<p><a href="https://i.stack.imgur.com/1vMfn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1vMfn.png" alt="enter image description here"></a></p>
<p>Thanks</p> | 2016-09-25 09:30:37.883000+00:00 | 2021-09-24 12:49:59.077000+00:00 | 2016-09-25 09:37:52.487000+00:00 | machine-learning|neural-network|conv-neural-network|keras|lasagne | ['https://arxiv.org/pdf/1505.04597.pdf', 'https://github.com/zhixuhao/unet'] | 2 |
69,001,986 | <p>There are in fact many ways to deal with missing time series values.</p>
<p>You already tried the traditional way, imputing data with mean values. But the drawback of this method is the bias introduced by imputing so many identical values into the data.</p>
<p>You can try a genetic algorithm (GA), support vector regression (SVR), autoregressive (AR) and moving average (MA) models for time series imputation and modeling. To overcome the bias problem caused by the traditional method (mean imputation), these methods are used to forecast and/or impute the time series.</p>
<p><em>(Consider that you have a multivariate timeseries)</em></p>
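<p>As a baseline, a minimal pandas sketch of simple alternatives to a single global mean (toy hourly series, not one of the model-based methods above):</p>
<pre><code>import numpy as np
import pandas as pd

idx = pd.date_range("2018-01-01", periods=72, freq="H")
s = pd.Series(np.sin(np.arange(72) / 5.0), index=idx)
s.iloc[30:40] = np.nan                       # a gap to fill

filled_time = s.interpolate(method="time")   # time-aware interpolation
filled_hour = s.fillna(s.groupby(s.index.hour).transform("mean"))
</code></pre>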
<p>Here are some resources you can use:</p>
<p><a href="https://arxiv.org/abs/2011.11347" rel="nofollow noreferrer">A Survey on Deep Learning Approaches</a></p>
<p><a href="https://stackoverflow.com/questions/49308530/missing-values-in-time-series-in-python">time.series.missing-values-in-time-series-in-python</a></p>
<p><a href="https://www.analyticsvidhya.com/blog/2021/06/power-of-interpolation-in-python-to-fill-missing-values/" rel="nofollow noreferrer">Interpolation in Python to fill Missing Values</a></p> | 2021-08-31 16:05:20.697000+00:00 | 2021-08-31 17:07:28.637000+00:00 | 2021-08-31 17:07:28.637000+00:00 | null | 68,963,674 | <p>I have a dataset that holds weather data for each month from 1st day to 20th of month and for each hour of the day throw a year and the last 10 days(with it's hours) of each month are removed.</p>
<p>The <strong>weather</strong> data are :
(temperature - humidity - wind_speed - visibility - dew_temperature - solar_radiation - rainfall -snowfall)</p>
<p>I want to upsample the dataset as time series to fill the missing data of the days but i face many issue due too the changes of climate.</p>
<p>Here it what is tried so far</p>
<pre><code>def get_hour_month_mean(data,date,hour,max_id):
return { 'ID':max_id,
'temperature':data['temperature'].mean(),
'humidity':data['humidity'].mean(),
'date':date,
'hour':hour,
'wind_speed':data['wind_speed'].mean(),
'visibility':data['visibility'].mean(),
'dew_temperature':data['dew_temperature'].mean(),
'solar_radiation':data['solar_radiation'].mean(),
'rainfall':data['rainfall'].mean(),
'count':data['count'].mean() if str(date.date()) not in seoul_not_func else 0,
'snowfall':data['snowfall'].mean(),
'season':data['season'].mode()[0],
'is_holiday':'No Holiday' if str(date.date()) not in seoul_p_holidays_17_18 else 'Holiday' ,
'functional_day':'Yes' if str(date.date()) not in seoul_not_func else 'No' ,
}
def upsample_data_with_missing_dates(data):
data_range = pd.date_range(
start="2017-12-20", end="2018-11-30", freq='D')
    missing_range=data_range.difference(data['date'])  # note: was df['date']; df is not defined in this function
hour_range=range(0,24)
max_id=data['ID'].max()
data_copy=data.copy()
for date in missing_range:
for hour in hour_range:
max_id+=1
            year=date.year  # note: was data_copy.year, which is a column rather than a per-date scalar
month=date.month
if date.month==11:
year-=1
month=12
else:
month+=1
month_mask=((data_copy['year'] == year) &
(data_copy['month'] == month) &
(data_copy['hour'] == hour) &(data_copy['day'].isin([1,2])))
data_filter=data_copy[month_mask]
dict_row=get_hour_month_mean(data_filter,date,hour,max_id)
data = data.append(dict_row, ignore_index=True)
return data
</code></pre>
<p>any ideas what is the best way to get the values of the missing days if i have the previous 20 days and the next 20 days ?</p> | 2021-08-28 10:56:30.400000+00:00 | 2021-08-31 17:07:28.637000+00:00 | null | python|pandas|machine-learning|time-series|data-science | ['https://arxiv.org/abs/2011.11347', 'https://stackoverflow.com/questions/49308530/missing-values-in-time-series-in-python', 'https://www.analyticsvidhya.com/blog/2021/06/power-of-interpolation-in-python-to-fill-missing-values/'] | 3 |
8,394,553 | <p>I understand that you want to compute a rolling hash function to hash every n-gram (where n is what you call the "chunk size"). Rolling hashing is sometimes called "recursive hashing". There is a wikipedia entry on the topic:</p>
<p><a href="http://en.wikipedia.org/wiki/Rolling_hash" rel="nofollow">http://en.wikipedia.org/wiki/Rolling_hash</a></p>
<p>A common algorithm to solve this problem is Karp-Rabin. Here is some pseudo-code which you should be able to easily implement in C#:</p>
<pre><code>B ← 37
s ← empty First-In-First-Out (FIFO) structure (e.g., a linked-list)
x ← 0 (L-bit integer)
z ← 0 (L-bit integer)
for each character c do
  append c to s
  x ← (B x − B^n z + c) mod 2^L
  yield x
  if length(s) = n then
    remove oldest character y from s
    z ← y
  end if
end for
</code></pre>
<p>Note that because B^n is a constant, the main loop only does two multiplications, one subtraction and one addition. The "mod 2^L" operation can be done very fast (use a mask, or unsigned integers with L=32 or L=64, for example).</p>
<p>Specifically, your C# code might look like this, where n is the "chunk" size (just set B = 37 and Btothen = 37 raised to the power n):</p>
<pre><code>r[0] = a[0];
for (int i = 1; i < N; i++) {
    // sketch: a[i-n] is treated as 0 until the first full window
    r[i] = a[i] + B * r[i-1] - (i >= n ? Btothen * a[i-n] : 0);
}
</code></pre>
<p>Karp-Rabin is not ideal however. I wrote a paper where better solutions are discussed:</p>
<p>Daniel Lemire and Owen Kaser, Recursive n-gram hashing is pairwise independent, at best, Computer Speech & Language 24 (4), pages 698-710, 2010.
<a href="http://arxiv.org/abs/0705.4676" rel="nofollow">http://arxiv.org/abs/0705.4676</a></p>
<p>I also published the source code (Java and C++, alas no C# but it should not be hard to go from Java to C#):</p>
<p><a href="https://github.com/lemire/rollinghashjava" rel="nofollow">https://github.com/lemire/rollinghashjava</a></p>
<p><a href="https://github.com/lemire/rollinghashcpp" rel="nofollow">https://github.com/lemire/rollinghashcpp</a></p> | 2011-12-06 02:14:15.820000+00:00 | 2011-12-06 02:39:37.597000+00:00 | 2011-12-06 02:39:37.597000+00:00 | null | 8,393,579 | <p>Let's say I have the array</p>
<pre><code>1,2,3,4,5,6,7,8,9,10,11,12
</code></pre>
<p>if my chunk size = 4 </p>
<p>then I want to be able to have a method that will output an array of ints int[] a = </p>
<pre><code>a[0] = 1
a[1] = 3
a[2] = 6
a[3] = 10
a[4] = 14
a[5] = 18
a[6] = 22
a[7] = 26
a[8] = 30
a[9] = 34
a[10] = 38
a[11] = 42
</code></pre>
<p>note that <code>a[n] = a[n] + a[n-1] + a[n-2] + a[n-3]</code>, because the chunk size is 4 and thus I sum the last 4 items</p>
<p>I need to have the method <code>without</code> a nested loop </p>
<pre><code> for(int i=0; i<12; i++)
{
for(int k = i; k>=0 ;k--)
{
            // do summation
counter++;
if(counter==4)
break;
}
}
</code></pre>
<p>For example, I don't want to have something like that, in order to make the code more efficient.</p>
<p>Also, the chunk size may change, so I cannot do:</p>
<p><code>a[3] = a[0] + a[1] + a[2] + a[3]</code></p>
<h2>edit</h2>
<p>The reason why I asked this question is that I need to implement a rolling checksum for my data structures class. I basically open a file for reading, which gives me a byte array, and then I perform a hash function on parts of the file. Let's say the file is 100 bytes: I split it in chunks of 10 bytes and perform a hash function on each chunk, so I get 10 hashes. Then I need to compare those hashes with those of a second file that is similar. Let's say the second file has the same 100 bytes but with an additional 5, so it contains a total of 105 bytes. Because those extra bytes may have been in the middle of the file, if I perform the same algorithm that I did on the first file it is not going to work. I hope I explained myself correctly. And because some files are large, it is not efficient to have a nested loop in my algorithm.</p>
<p>Also, the real rolling hash functions are very complex. Most of them are in C++, and I have a hard time understanding them. That's why I want to create my own very simple hashing function, just to demonstrate how a rolling checksum works... </p>
<h2>Edit 2</h2>
<pre><code> int chunckSize = 4;
    int[] a = new int[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 }; // the bytes of the file (9 was missing from the original listing)
int[] b = new int[a.Length]; // array where we will place the checksums
int[] sum = new int[a.Length]; // array needed to avoid nested loop
for (int i = 0; i < a.Length; i++)
{
int temp = 0;
if (i == 0)
{
temp = 1;
}
sum[i] += a[i] + sum[i-1+temp];
if (i < chunckSize)
{
b[i] = sum[i];
}
else
{
b[i] = sum[i] - sum[i - chunckSize];
}
}
</code></pre>
<p>The problem with this algorithm is that with large files the sum will at some point be larger than <code>int.MaxValue</code>, so it is not going to work.</p>
<p>But at least now it is more efficient. Getting rid of that nested loop helped a lot!</p>
<h2>edit 3</h2>
<p>Based on edit two, I have worked this out. It does not work with large files, and the checksum algorithm is very bad, but at least I think it demonstrates the rolling hashing that I am trying to explain...</p>
<pre><code> Part1(@"A:\fileA.txt");
Part2(@"A:\fileB.txt", null);
</code></pre>
<p>.....</p>
<pre><code> // split the file in chunks and return the checksums of the chunks
private static UInt64[] Part1(string file)
{
UInt64[] hashes = new UInt64[(int)Math.Pow(2, 20)];
var stream = File.OpenRead(file);
        int chunckSize = (int)Math.Pow(2, 22); // 2^10 => kilobyte, 2^20 => megabyte, 2^30 => gigabyte, etc.
byte[] buffer = new byte[chunckSize];
        int bytesRead; // how many bytes were read
int counter = 0; // counter
while ( // while bytesRead > 0
(bytesRead =
(stream.Read(buffer, 0, buffer.Length)) // returns the number of bytes read or 0 if no bytes read
) > 0)
{
hashes[counter] = 0;
for (int i = 0; i < bytesRead; i++)
{
hashes[counter] = hashes[counter] + buffer[i]; // simple algorithm not realistic to perform check sum of file
}
counter++;
}// end while loop
return hashes;
}
    // split the file in chunks, rolling it. In reality this file will be on a different computer.
private static void Part2(string file, UInt64[] hash)
{
UInt64[] hashes = new UInt64[(int)Math.Pow(2, 20)];
var stream = File.OpenRead(file);
        int chunckSize = (int)Math.Pow(2, 22); // chunks must be as big as in the previous method
byte[] buffer = new byte[chunckSize];
        int bytesRead; // how many bytes were read
int counter = 0; // counter
UInt64[] sum = new UInt64[(int)Math.Pow(2, 20)];
while ( // while bytesRead > 0
(bytesRead =
(stream.Read(buffer, 0, buffer.Length)) // returns the number of bytes read or 0 if no bytes read
) > 0)
{
for (int i = 0; i < bytesRead; i++)
{
int temp = 0;
if (counter == 0)
temp = 1;
sum[counter] += (UInt64)buffer[i] + sum[counter - 1 + temp];
if (counter < chunckSize)
{
hashes[counter] = (UInt64)sum[counter];
}else
{
hashes[counter] = (UInt64)sum[counter] - (UInt64)sum[counter - chunckSize];
}
counter++;
}
}// end while loop
        // missing: compare the hash arrays
}
</code></pre> | 2011-12-05 23:51:56.440000+00:00 | 2011-12-06 10:07:58.470000+00:00 | 2011-12-06 01:20:08.490000+00:00 | c#|algorithm | ['http://en.wikipedia.org/wiki/Rolling_hash', 'http://arxiv.org/abs/0705.4676', 'https://github.com/lemire/rollinghashjava', 'https://github.com/lemire/rollinghashcpp'] | 4 |
30,671,162 | <p>You can do this by combining <a href="http://cran.r-project.org/web/packages/dplyr/index.html" rel="nofollow">dplyr</a>, <a href="http://cran.r-project.org/web/packages/tidyr/index.html" rel="nofollow">tidyr</a> and my <a href="https://github.com/dgrtwo/broom" rel="nofollow">broom</a> package (you can install them with <code>install.packages</code>). First you need to gather all the numeric columns into a single column:</p>
<pre><code>library(dplyr)
library(tidyr)
tidied <- myDat %>%
gather(column, value, -X, -Recipe, -Step, -Stage, -Prod)
</code></pre>
<p>To understand what this does, you can read up on <a href="http://blog.rstudio.org/2014/07/22/introducing-tidyr/" rel="nofollow">tidyr's gather operation</a>. (This assumes that all columns besides X, Recipe, Step, Stage, and Prod are numeric and therefore should be predicted in your regression. If that's not the case, you need to remove them beforehand. You'll need to produce a reproducible example of the problem if you need a more customized solution).</p>
<p>Then perform each regression, while grouping by the column and the four grouping variables.</p>
<pre><code>library(broom)
regressions <- tidied %>%
group_by(column, Recipe, Step, Stage, Prod) %>%
do(mod = lm(value ~ X))
glances <- regressions %>% glance(mod)
</code></pre>
<p>The resulting <code>glances</code> data frame will have one row for each combination of column, Recipe, Step, Stage, and Prod, along with an <code>r.squared</code> column containing the R-squared from each model. (It will also contain <code>adj.r.squared</code>, along with other columns such as F-test p-value: see <a href="https://github.com/dgrtwo/broom#tidying-functions" rel="nofollow">here</a> for more). Running <code>coefs <- regressions %>% tidy(mod)</code> will probably also be useful for you, as it will get the coefficient estimates and p-values from each regression.</p>
<p>A similar use case is described in the <a href="http://cran.r-project.org/web/packages/broom/vignettes/broom_and_dplyr.html" rel="nofollow">"broom and dplyr" vignette</a>, and in Section 3.1 of <a href="http://arxiv.org/abs/1412.3565" rel="nofollow">the broom manuscript</a>.</p> | 2015-06-05 16:13:26.890000+00:00 | 2015-06-05 16:13:26.890000+00:00 | null | null | 30,623,230 | <p>I am working on a regression script.
I have a data.frame with roughly 130 columns, and I need to do a regression of one column (let's call it the X column) against all the other ~100 numeric columns. </p>
<p>Before the regression is calculated, I need to group the data by 4 factors: <code>myDat$Recipe</code>, <code>myDat$Step</code>, <code>myDat$Stage</code>, and <code>myDat$Prod</code> while still keeping the other ~100 columns and row data attached for the regression. Then I need to do a regression of each column ~ X column and print out the R^2 value with the column name. This is what I've tried so far but it is getting overly complicated and I know there's got to be a better way.</p>
<pre><code> rm(list=ls())
myDat <- read.csv(file="C:/Users/Documents/myDat.csv", header=TRUE, sep=",")
for(j in myDat$Recipe)
{
myDatj <- subset(myDat, myDat$Recipe == j)
for(k in myDatj$Step)
{
myDatk <- subset(myDatj, myDatj$Step == k)
for(i in myDatk$Stage)
{
myDati <- subset(myDatk, myDatk$Stage == i)
for(m in myDati$Prod)
{
myDatm <- subset(myDati, myDati$Prod == m)
if(is.numeric(myDatm[3,i]))
{
fit <- lm(myDatk[,i] ~ X, data=myDatm)
rsq <- summary(fit)$r.squared
{
writeLines(paste(rsq,i,"\n"))
}
}
}
}
}
}
</code></pre> | 2015-06-03 14:38:29.283000+00:00 | 2015-06-12 16:54:37.340000+00:00 | 2015-06-12 16:54:37.340000+00:00 | r|sorting|statistics|dataframe|regression | ['http://cran.r-project.org/web/packages/dplyr/index.html', 'http://cran.r-project.org/web/packages/tidyr/index.html', 'https://github.com/dgrtwo/broom', 'http://blog.rstudio.org/2014/07/22/introducing-tidyr/', 'https://github.com/dgrtwo/broom#tidying-functions', 'http://cran.r-project.org/web/packages/broom/vignettes/broom_and_dplyr.html', 'http://arxiv.org/abs/1412.3565'] | 7 |
62,390,744 | <p>First, the ReLU function is not a cure-all activation function. Specifically, it still suffers from the exploding gradient problem, since it is unbounded in the positive domain. This implies that the problem would still exist in deeper LSTM networks. Most LSTM networks become very deep, so they have a decent chance of running into the exploding gradient problem. RNNs also have exploding gradients when using the same weight matrix at each time step. There are methods, such as gradient clipping, that help reduce this problem in RNNs. However, ReLU functions themselves do not solve the exploding gradient problem.</p>
<p>The ReLU function does help reduce the vanishing gradient problem, but it doesn't solve the vanishing gradient completely. Methods, such as <a href="https://arxiv.org/abs/1603.09025" rel="nofollow noreferrer">batch normalization</a>, can help reduce the vanishing gradient problem even further.</p>
<p>Now, to answer your question about using a ReLU function in place of a tanh function: as far as I know, there shouldn't be much of a difference between the ReLU and tanh activation functions on their own for this particular gate. Neither of them completely solves the vanishing/exploding gradient problems in LSTM networks. For more information about how LSTMs reduce the vanishing and exploding gradient problem, please refer to this <a href="https://r2rt.com/written-memories-understanding-deriving-and-extending-the-lstm.html#dealing-with-vanishing-and-exploding-gradients" rel="nofollow noreferrer">post</a>.</p> | 2020-06-15 14:44:59.647000+00:00 | 2020-07-08 16:12:11.023000+00:00 | 2020-07-08 16:12:11.023000+00:00 | null | 62,382,224 | <p>I was looking at the blog, and the author used 'relu' instead of 'tanh'. Why?
<a href="https://towardsdatascience.com/step-by-step-understanding-lstm-autoencoder-layers-ffab055b6352" rel="nofollow noreferrer">https://towardsdatascience.com/step-by-step-understanding-lstm-autoencoder-layers-ffab055b6352</a></p>
<pre><code>lstm_autoencoder = Sequential()
# Encoder
lstm_autoencoder.add(LSTM(timesteps, activation='relu', input_shape=(timesteps, n_features),
return_sequences=True))
lstm_autoencoder.add(LSTM(16, activation='relu', return_sequences=True))
lstm_autoencoder.add(LSTM(1, activation='relu'))
lstm_autoencoder.add(RepeatVector(timesteps))
# Decoder
lstm_autoencoder.add(LSTM(timesteps, activation='relu', return_sequences=True))
lstm_autoencoder.add(LSTM(16, activation='relu', return_sequences=True))
lstm_autoencoder.add(TimeDistributed(Dense(n_features)))
</code></pre> | 2020-06-15 06:00:27.670000+00:00 | 2020-07-08 16:12:11.023000+00:00 | null | deep-learning|lstm|autoencoder | ['https://arxiv.org/abs/1603.09025', 'https://r2rt.com/written-memories-understanding-deriving-and-extending-the-lstm.html#dealing-with-vanishing-and-exploding-gradients'] | 2 |
69,264,913 | <p>For variables like "item_description", which are in essence text variables, check <a href="https://www.arxiv-vanity.com/papers/1806.00979/" rel="nofollow noreferrer">this paper</a> and the corresponding <a href="https://pypi.org/project/dirty-cat/" rel="nofollow noreferrer">Python package</a>.</p>
<p>Or simply search online for "dirty categorical variables"; if in doubt, note that the article and package are from Gaël Varoquaux, one of the main developers of scikit-learn.</p> | 2021-09-21 07:23:18.537000+00:00 | 2021-09-21 07:23:18.537000+00:00 | null | null | 61,585,507 | <p>I'm stuck with a dataset that contains some categorical features with high cardinality,
like 'item_description' ...
I read about a trick called hashing, but its main idea is still blurry to me. I also read about a library called 'Feature engine', but I didn't really find anything there that might solve my issue.
Any suggestions please? </p> | 2020-05-04 05:05:44.900000+00:00 | 2021-09-21 07:23:18.537000+00:00 | 2020-05-04 08:18:17.580000+00:00 | python|machine-learning | ['https://www.arxiv-vanity.com/papers/1806.00979/', 'https://pypi.org/project/dirty-cat/'] | 2 |
50,077,194 | <p>A neural network can be seen as a function approximation tool. The quality of the approximation is defined by its error, i.e. how far the prediction is from the underlying ground truth. If we leave the practitioner approach (trial & error) aside, there are two theories through which we can investigate the effect of the number of nodes (aka width) on the network's quality; one is theory of computation, and the other is algebraic topology. Neither has yet provided results immediately translatable to "if you add another node then this happens", but both have some very nice insights to offer. I am not sure if this is the kind of answer you are expecting, but I will try to very briefly walk you through the major points of what the latter field offers in terms of explanations.</p>
<p><strong>Algebraic Topology / Control Theory</strong></p>
<ol>
<li>A "shallow" network (i.e. a single dense layer) can approximate with arbitrary low error any continuous function under the assumption of <strong>no constraints</strong> on number of nodes. What this says is that your network can learn (almost) perfectly whatever you throw at it, no matter how complex it is, provided you can let it use as many nodes as it wants (potentially countably infinite). Practically, even though we know that there exists a shallow network that approximates with error <em>ฮต</em>โ 0 a continuous function <em>f</em>, we do not know what that network is or how to estimate its parameters. Generally, the more complex <em>f</em> is and the lower we want <em>ฮต</em> to be, the more nodes we would need, up to the point where training becomes unfeasible due to the curse of dimensionality. In very applied terms, this means that the wider your layer, the richer your representation, the more accurate your prediction. As a side effect you will also have more parameters that need to be trained, thus more data requirements, and measures against overfitting will become necessary.</li>
<li>A high-rank tensor, such as the ones usually used as objective functions for neural networks, can be decomposed into a series of potentially lower rank tensors. This effectively reduces degrees of freedom and makes numeric representation easier with far fewer parameters. However, determining the rank of the decomposition (the number of summands) is NP-Hard, as is determining the coefficients themselves. The <strong>shallow network</strong> corresponds to the canonical decomposition, so due to it being NP-Hard, no claims can be made regarding the number of nodes necessary to construct a perfect approximation. What we do know, however, is that <strong>recurrent networks</strong> correspond to another sort of decomposition, the Tensor-Train decomposition, which is far more memory efficient and stable; therefore we know that <em>a shallow network would need exponentially more width to mimic a recurrent network of the same width</em>. Similarly, we know that a <strong>convolutional network</strong> corresponds to the Hierarchical-Tucker decomposition, which is also more efficient than the shallow network; therefore a <em>conv layer can compute in polynomial size what would require super-polynomial size for a shallow layer</em>. </li>
</ol>
<p>Refs: </p>
<ol>
<li><a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.441.7873&rep=rep1&type=pdf" rel="nofollow noreferrer">Approximation by Superpositions of a Sigmoidal Function</a></li>
<li><a href="http://proceedings.mlr.press/v48/cohenb16.pdf" rel="nofollow noreferrer">Convolutional Rectifier Networks as Generalized Tensor Decomposition</a></li>
<li><a href="https://arxiv.org/abs/1711.00811" rel="nofollow noreferrer">Expressive Power of Recurrent Neural Networks</a></li>
</ol>
<p><strong>TL;DR</strong>: We don't know much about how much width is necessary for some approximation, but we can compare width-efficiency between different network types. We know a shallow network (fully-connected layer) can approximate anything if we let it grow with no constraints. We also know that an exponential increase in its size is equivalent to a linear increase in a recurrent layer size, and that a super-polynomial increase in its size is equivalent to a polynomial increase in a convolutional layer. So if you're adding width, it better be on an RNN cell :)</p>
<p>The computational theory perspective follows a different route; that is, translating various network types to computation theoretic machines and inspecting their Turing degree. There are claims made about the number of nodes necessary to simulate a Turing machine using shallow nets, and how various networks relate to one-another in terms of size complexity, but I'm not sure if this is anywhere close to what you're asking so I'll skip this part.</p>
<p>I did not go into the comparison between width and depth efficiency either, as this is not something you're asking, but there are many more experimental results on that topic (and many SO answers far better than I could ever write myself). </p> | 2018-04-28 13:43:48.963000+00:00 | 2018-05-01 06:20:50.533000+00:00 | 2018-05-01 06:20:50.533000+00:00 | null | 49,900,772 | <p>Other than just trial and error, what impact does varying the number of nodes in a deep learning model achieve?</p>
<p>How I interpret it is this: each learned representation of a layer is a dense vector if the number of nodes is low and, conversely, a sparse vector if the number of nodes is high. How does this lead to higher or lower training accuracy?</p> | 2018-04-18 13:18:38.880000+00:00 | 2018-05-01 09:13:12.137000+00:00 | 2018-05-01 09:13:12.137000+00:00 | matrix|vector|neural-network|deep-learning|sparse-matrix | ['http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.441.7873&rep=rep1&type=pdf', 'http://proceedings.mlr.press/v48/cohenb16.pdf', 'https://arxiv.org/abs/1711.00811'] | 3 |
70,459,989 | <p>This is an additional answer for those looking for an SSD architecture. The Tensorflow Object Detection API 1 includes the <code>vgg</code> architecture in its <code>slim</code> folder, so we can import it directly with <code>from nets import vgg</code>. I have only tried this with the SSD architecture. I follow the ssd-mobilenet config to build the respective feature maps from the two layers 'fc7' and 'conv4_3', as stated in the <a href="https://arxiv.org/pdf/1512.02325.pdf" rel="nofollow noreferrer">paper</a>. Then, save your new <code>SSD-VGG16_feature_extractor.py</code> inside the <code>models</code> folder.</p>
<p><strong>Notice:</strong> For a correct match with the <code>vgg</code> in the <a href="https://arxiv.org/pdf/1512.02325.pdf" rel="nofollow noreferrer">paper</a>, you should change 4096 to 1024 and the kernel size <code>[7,7]</code> to <code>[3,3]</code>, so that the feature depths come out right: <a href="https://github.com/tensorflow/models/blob/master/research/slim/nets/vgg.py#L206-L209" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/slim/nets/vgg.py#L206-L209</a></p>
<pre><code>import tensorflow.compat.v1 as tf
import tf_slim as slim
from object_detection.meta_architectures import ssd_meta_arch
from object_detection.models import feature_map_generators
from object_detection.utils import ops
from object_detection.utils import shape_utils
from nets import vgg
class SSDVgg16FeatureExtractor(ssd_meta_arch.SSDFeatureExtractor):
"""SSD Feature Extractor using Vgg16 features."""
def __init__(self,
is_training,
depth_multiplier,
min_depth,
pad_to_multiple,
conv_hyperparams_fn,
reuse_weights=None,
use_explicit_padding=False,
use_depthwise=False,
override_base_feature_extractor_hyperparams=False):
"""Vgg16 Feature Extractor for SSD Models.
Args:
is_training: whether the network is in training mode.
depth_multiplier: float depth multiplier for feature extractor.
min_depth: minimum feature extractor depth.
pad_to_multiple: the nearest multiple to zero pad the input height and
width dimensions to.
conv_hyperparams_fn: A function to construct tf slim arg_scope for conv2d
and separable_conv2d ops in the layers that are added on top of the
base feature extractor.
reuse_weights: Whether to reuse variables. Default is None.
use_explicit_padding: Whether to use explicit padding when extracting
features. Default is False.
use_depthwise: Whether to use depthwise convolutions. Default is False.
num_layers: Number of SSD layers.
override_base_feature_extractor_hyperparams: Whether to override
hyperparameters of the base feature extractor with the one from
`conv_hyperparams_fn`.
Raises:
ValueError: If `override_base_feature_extractor_hyperparams` is False.
"""
super(SSDVgg16FeatureExtractor, self).__init__(
is_training=is_training,
depth_multiplier=depth_multiplier,
min_depth=min_depth,
pad_to_multiple=pad_to_multiple,
conv_hyperparams_fn=conv_hyperparams_fn,
reuse_weights=reuse_weights,
use_explicit_padding=use_explicit_padding,
use_depthwise=use_depthwise,
override_base_feature_extractor_hyperparams=
override_base_feature_extractor_hyperparams)
if not self._override_base_feature_extractor_hyperparams:
raise ValueError('SSD Vgg16 feature extractor always uses'
'scope returned by `conv_hyperparams_fn` for both the '
'base feature extractor and the additional layers '
'added since there is no arg_scope defined for the base '
'feature extractor.')
def preprocess(self, resized_inputs):
"""SSD preprocessing.
Maps pixel values to the range [-1, 1].
Args:
resized_inputs: a [batch, height, width, channels] float tensor
representing a batch of images.
Returns:
preprocessed_inputs: a [batch, height, width, channels] float tensor
representing a batch of images.
"""
return (2.0 / 255.0) * resized_inputs - 1.0
def extract_features(self, preprocessed_inputs):
"""Extract features from preprocessed inputs.
Args:
preprocessed_inputs: a [batch, height, width, channels] float tensor
representing a batch of images.
Returns:
feature_maps: a list of tensors where the ith tensor has shape
[batch, height_i, width_i, depth_i]
"""
preprocessed_inputs = shape_utils.check_min_image_dim(
33, preprocessed_inputs)
feature_map_layout = {
'from_layer': ['FeatureExtractor/vgg_16/conv4/conv4_3', 'FeatureExtractor/vgg_16/fc7', '', '', '', ''],
'layer_depth': [-1, -1, 512, 256, 256, 128],
'use_explicit_padding': self._use_explicit_padding,
'use_depthwise': self._use_depthwise,
}
with slim.arg_scope(self._conv_hyperparams_fn()):
with slim.arg_scope(vgg.vgg_arg_scope()):
_, image_features = vgg.vgg_16(
ops.pad_to_multiple(preprocessed_inputs, self._pad_to_multiple),
num_classes=0)
print("Available output head: ")
print([k for k,v in image_features.items()])
with slim.arg_scope(self._conv_hyperparams_fn()):
feature_maps = feature_map_generators.multi_resolution_feature_maps(
feature_map_layout=feature_map_layout,
depth_multiplier=self._depth_multiplier,
min_depth=self._min_depth,
insert_1x1_conv=True,
image_features=image_features)
return list(feature_maps.values())
</code></pre>
<p>Then, you just need to add the entry <code>'ssd_vgg16': SSDVgg16FeatureExtractor</code> to the <code>SSD_FEATURE_EXTRACTOR_CLASS_MAP</code> dict in <code>builder/model_builder.py</code> to complete the model.</p>
<p>I have tested it, and it works like a charm:</p>
<pre><code>INFO:tensorflow:global_step/sec: 0.195851
I1223 18:19:21.963316 139974845604416 basic_session_run_hooks.py:692] global_step/sec: 0.195851
INFO:tensorflow:loss = 3.674446, step = 700 (510.592 sec)
I1223 18:19:21.964789 139974845604416 basic_session_run_hooks.py:260] loss = 3.674446, step = 700 (510.592 sec)
</code></pre> | 2021-12-23 09:20:02.220000+00:00 | 2021-12-28 06:49:06.863000+00:00 | 2021-12-28 06:49:06.863000+00:00 | null | 50,972,169 | <p>I am in the process of creating a custom VGG model as a feature extractor of a Faster RCNN model in the Tensorflow object detection API. As mentioned in the document <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/defining_your_own_model.md" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/defining_your_own_model.md</a>, the feature extractor code consists of <code>extract_proposal_features</code> and <code>extract_classifier_features</code>. I am using the TF slim code for creating the convolution layers (since the Tensorflow team uses it). As a reference, please find the model structure of VGG 16 returned by TF slim:</p>
<pre><code> ([('vgg_16/conv1/conv1_1',
<tf.Tensor 'vgg_16/vgg_16/conv1/conv1_1/Relu:0' shape=(?, 224, 224, 64) dtype=float32>),
('vgg_16/conv1/conv1_2',
<tf.Tensor 'vgg_16/vgg_16/conv1/conv1_2/Relu:0' shape=(?, 224, 224, 64) dtype=float32>),
('vgg_16/vgg_16/pool1',
<tf.Tensor 'vgg_16/vgg_16/pool1/MaxPool:0' shape=(?, 112, 112, 64) dtype=float32>),
('vgg_16/conv2/conv2_1',
<tf.Tensor 'vgg_16/vgg_16/conv2/conv2_1/Relu:0' shape=(?, 112, 112, 128) dtype=float32>),
('vgg_16/conv2/conv2_2',
<tf.Tensor 'vgg_16/vgg_16/conv2/conv2_2/Relu:0' shape=(?, 112, 112, 128) dtype=float32>),
('vgg_16/vgg_16/pool2',
<tf.Tensor 'vgg_16/vgg_16/pool2/MaxPool:0' shape=(?, 56, 56, 128) dtype=float32>),
('vgg_16/conv3/conv3_1',
<tf.Tensor 'vgg_16/vgg_16/conv3/conv3_1/Relu:0' shape=(?, 56, 56, 256) dtype=float32>),
('vgg_16/conv3/conv3_2',
<tf.Tensor 'vgg_16/vgg_16/conv3/conv3_2/Relu:0' shape=(?, 56, 56, 256) dtype=float32>),
('vgg_16/conv3/conv3_3',
<tf.Tensor 'vgg_16/vgg_16/conv3/conv3_3/Relu:0' shape=(?, 56, 56, 256) dtype=float32>),
('vgg_16/vgg_16/pool3',
<tf.Tensor 'vgg_16/vgg_16/pool3/MaxPool:0' shape=(?, 28, 28, 256) dtype=float32>),
('vgg_16/conv4/conv4_1',
<tf.Tensor 'vgg_16/vgg_16/conv4/conv4_1/Relu:0' shape=(?, 28, 28, 512) dtype=float32>),
('vgg_16/conv4/conv4_2',
<tf.Tensor 'vgg_16/vgg_16/conv4/conv4_2/Relu:0' shape=(?, 28, 28, 512) dtype=float32>),
('vgg_16/conv4/conv4_3',
<tf.Tensor 'vgg_16/vgg_16/conv4/conv4_3/Relu:0' shape=(?, 28, 28, 512) dtype=float32>),
('vgg_16/vgg_16/pool4',
<tf.Tensor 'vgg_16/vgg_16/pool4/MaxPool:0' shape=(?, 14, 14, 512) dtype=float32>),
('vgg_16/conv5/conv5_1',
<tf.Tensor 'vgg_16/vgg_16/conv5/conv5_1/Relu:0' shape=(?, 14, 14, 512) dtype=float32>),
('vgg_16/conv5/conv5_2',
<tf.Tensor 'vgg_16/vgg_16/conv5/conv5_2/Relu:0' shape=(?, 14, 14, 512) dtype=float32>),
('vgg_16/conv5/conv5_3',
<tf.Tensor 'vgg_16/vgg_16/conv5/conv5_3/Relu:0' shape=(?, 14, 14, 512) dtype=float32>),
('vgg_16/vgg_16/pool5',
<tf.Tensor 'vgg_16/vgg_16/pool5/MaxPool:0' shape=(?, 7, 7, 512) dtype=float32>),
('vgg_16/fc6',
<tf.Tensor 'vgg_16/vgg_16/fc6/Relu:0' shape=(?, 1, 1, 4096) dtype=float32>),
('vgg_16/fc7',
<tf.Tensor 'vgg_16/vgg_16/fc7/Relu:0' shape=(?, 1, 1, 4096) dtype=float32>)])
</code></pre>
<p>My question is: which convolution layers need to be included and returned in the <code>extract_proposal_features</code> method, and which need to be included and returned in <code>extract_classifier_features</code>? Please let me know.</p> | 2018-06-21 15:22:27.560000+00:00 | 2021-12-28 06:49:06.863000+00:00 | 2018-08-15 06:36:31.183000+00:00 | python|tensorflow|tensorflow-slim|vgg-net | ['https://arxiv.org/pdf/1512.02325.pdf', 'https://arxiv.org/pdf/1512.02325.pdf', 'https://github.com/tensorflow/models/blob/master/research/slim/nets/vgg.py#L206-L209'] | 3 |
35,456,758 | <p>There's a suggestion at <a href="https://math.stackexchange.com/questions/897564/radon-transformation">math.SE</a> which might help. Then there's a rather complicated-looking research paper <a href="http://arxiv.org/pdf/1209.1215v2.pdf" rel="nofollow noreferrer">"Sharp endpoint estimates for the X-ray transform and the Radon
transform in finite fields"</a>, which appears just to show certain bounds on estimation accuracy.</p>
<p>From skimming other papers, it appears that it's a nontrivial problem. I suspect it may be simpler (if less accurate) to use some adaptation of a <a href="https://en.wikipedia.org/wiki/Sobel_operator" rel="nofollow noreferrer">Sobel operation</a> to identify high-gradient points along the discovered line, and claim those as endpoints. </p> | 2016-02-17 12:28:41.460000+00:00 | 2016-02-17 12:28:41.460000+00:00 | 2017-04-13 12:19:15.777000+00:00 | null | 35,412,573 | <p>I'm trying to detect lines in a grayscale image. For that purpose, I'm using the Radon transform in MATLAB. An example of my m-file is below. I can detect multiple lines using this code. I also draw the lines using the shift and rotation properties of lines. However, I couldn't figure out how to get the start and end points of the detected lines after getting the <i>rho</i> and <i>theta</i> values. </p>
<p>It is easy for the Hough transform, since there is a function called <i>houghlines()</i> that returns the list of lines for the given peaks. Is there any similar function that I can use for the Radon transform? </p>
<pre><code> % Radon transform line detection algorithm
clear all; close all;
% Determine the path of the input image
str_inputimg = '3_lines.png' ;
% Read input image
I = imread(str_inputimg) ;
% If the input image is RGB or indexed color, convert it to grayscale
img_colortype = getfield(imfinfo(str_inputimg), 'ColorType') ;
switch img_colortype
case 'truecolor'
I = rgb2gray(I) ;
case 'indexedcolor'
I = ind2gray(I) ;
end
figure;
subplot(2,2,1) ;
imshow(I) ;
title('Original Image') ;
% Convert image to black white
%BW = edge(I,'Sobel');
BW=im2bw(I,0.25) ;
subplot(2,2,2) ;
imshow(BW);
title('BW Image') ;
% Radon transform
% Angle projections
theta = [0:179]' ;
[R, rho] = radon(BW, theta) ;
subplot(2,2,3) ;
imshow(R, [], 'XData', theta, 'YData', rho, 'InitialMagnification', 'fit');
xlabel('\theta'), ylabel('\rho');
axis on, axis normal, hold on;
% Detect the peaks of transform output
% Threshold value for peak detection
threshold_val = ceil(0.3*max(R(:))) ;
% Maximum nof peaks to identify in the image
max_nofpeaks = 5 ;
max_indexes = find(R(:)>threshold_val) ;
max_values = R(max_indexes) ;
[sorted_max, temp_indexes] = sort(max_values, 'descend') ;
sorted_indexes = max_indexes(temp_indexes) ;
% Get the first highest peaks for the sorted array
if (length(sorted_max) <= max_nofpeaks)
peak_values = sorted_max(1:end) ;
peak_indexes = sorted_indexes(1:end) ;
else
peak_values = sorted_max(1:max_nofpeaks) ;
peak_indexes = sorted_indexes(1:max_nofpeaks) ;
end
[y, x] = ind2sub(size(R), peak_indexes ) ;
peaks = [rho(y) theta(x)] ;
plot(peaks(:,2), peaks(:,1), 's', 'color','white');
title('Radon Transform & Peaks') ;
% Detected lines on the image
subplot(2,2,4), imshow(I), title('Detected lines'), hold on
x_center = floor(size(I, 2)/2) ;
y_center = floor(size(I, 1)/2) ;
for p=1:length(peaks)
x_1 = [-x_center, x_center] ;
y_1 = [0, 0] ;
% Shift at first
x_1_shifted = x_1 ;
y_1_shifted = [y_1(1)-peaks(p,1), y_1(2)-peaks(p,1)] ;
% Rotate
peaks(p,2) = 90 - peaks(p,2) ;
t=peaks(p,2)*pi/180;
rotation_mat = [ cos(t) -sin(t) ; sin(t) cos(t) ] ;
x_y_rotated = rotation_mat*[x_1_shifted; y_1_shifted] ;
x_rotated = x_y_rotated(1,:) ;
y_rotated = x_y_rotated(2,:) ;
plot( x_rotated+x_center, y_rotated+y_center, 'b', 'linewidth', 2 );
end
hold off;
</code></pre> | 2016-02-15 15:00:07.513000+00:00 | 2022-03-23 18:04:57.660000+00:00 | null | matlab|line|transform | ['https://math.stackexchange.com/questions/897564/radon-transformation', 'http://arxiv.org/pdf/1209.1215v2.pdf', 'https://en.wikipedia.org/wiki/Sobel_operator'] | 3 |
61,199,603 | <p>There are a few reasonable vantage points from which to tackle this problem. My strategy here, while perhaps a little unpolished, does the job just fine, while illustrating the key ideas without too many technical complications.</p>
<p>This answer has two parts. The first part, which can be read independently if the reader is short of time, presents the chosen perspective and the main conclusion. The second part expands on that by providing detailed justification. At its very end, there is a concise list of things allowed and forbidden by the <code>Traversable</code> laws.</p>
<p>The answer grew a little long, so here is a list of section headings for skipping around with Ctrl+F:</p>
<ul>
<li><p>Part one</p>
<ul>
<li>Shape and contents</li>
<li>Duplicating effects</li>
<li>The free applicative presentation</li>
</ul>
</li>
<li><p>Part two</p>
<ul>
<li>Fillable and Traversable, up close</li>
<li>Duplicating effects: once more, with feeling</li>
<li>foldMapDefault and the other naturality law</li>
<li>Executive summary: dos and don'ts of Traversable</li>
</ul>
</li>
</ul>
<p>One might, in fact, object that this answer is too long for this format. In my defense, I note that the parent question is addressed in the sections about duplicating effects, and everything else either justifies the direct answer or is relevant in context.</p>
<h2>Shape and contents</h2>
<p>Ultimately, it all comes down to what I like to call the shape-and-contents decomposition. In the simplest possible terms, it means <code>Traversable</code> can be encoded through a class like this:</p>
<pre><code>class (Functor t, Foldable t) => Fillable t where
fill :: t () -> [a] -> t a
</code></pre>
<p><code>fill</code> takes a <code>t</code> functorial shape, which we represent here using a <code>t ()</code> value, and fills it with contents drawn from an <code>[a]</code> list. We can rely on <code>Functor</code> and <code>Foldable</code> to give us a conversion in the other direction:</p>
<pre><code>detach :: (Functor t, Foldable t) => t a -> (t (), [a])
detach = (() <$) &&& toList
</code></pre>
<p>With <code>fill</code> and <code>detach</code>, we can then implement <code>sequenceA</code> in terms of the concrete <code>sequenceA</code> for lists: detach, sequence the list, and then fill:</p>
<pre><code>sequenceFill :: (Fillable t, Applicative f) => t (f a) -> f (t a)
sequenceFill = uncurry (fmap . fill) . second (sequenceList) . detach
-- The usual sequenceA for lists.
sequenceList :: Applicative f => [f a] -> f [a]
sequenceList = foldr (liftA2 (:)) (pure [])
</code></pre>
<p>It is also possible, if a little awkward, to define <code>fill</code> in terms of <code>Traversable</code>:</p>
<pre><code>-- Partial, handle with care.
fillTr :: Traversable t => t () -> [a] -> t a
fillTr = evalState . traverse (const pop)
where
pop = state (\(a : as) -> (a, as))
</code></pre>
<p>(For prior art on this approach, see, for instance, <a href="https://stackoverflow.com/a/21083521">this answer</a>.)</p>
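<p>To make the round trip concrete, here is what the two halves do on a couple of tiny structures (a hypothetical GHCi session, assuming the definitions above are in scope):</p>
<pre><code>GHCi> detach (Just 'x')
(Just (),"x")
GHCi> detach [1, 2, 3]
([(),(),()],[1,2,3])
GHCi> fillTr (Just ()) "y"
Just 'y'
GHCi> fillTr [(), (), ()] "abc"
"abc"
</code></pre>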
<p>In terms of <code>Fillable</code>, the <code>Traversable</code> laws say that <code>fill</code> and <code>detach</code> <em>almost</em> witness the two directions of an isomorphism (a small lawful instance is sketched right after the two laws, for contrast):</p>
<ol>
<li><p><code>fill</code> must be a left inverse of <code>detach</code>:</p>
<pre><code> uncurry fill . detach = id
</code></pre>
<p>This amounts to the identity law of <code>Traversable</code>.</p>
</li>
<li><p><code>detach</code> must behave as a left inverse of <code>fill</code> as long as <code>fill</code> is only supplied lists and shapes with compatible sizes (otherwise the situation is hopeless):</p>
<pre><code> -- Precondition: length (toList sh) = length as
detach . uncurry fill $ (sh, as) = (sh, as)
</code></pre>
<p>This property corresponds to the composition law. On its own, it is, in fact, stronger than the composition law. If we assume the identity law, however, it becomes materially equivalent to the composition law. That being so, it is fine to take these properties as an alternate presentation of the <code>Traversable</code> laws, except perhaps if you want to study the composition law in isolation. (There will be a more detailed explanation of this near-equivalence in the second part of the answer, after we look more closely at the composition law.)</p>
</li>
</ol>
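<p>Before moving on, here is a minimal sketch of what a lawful instance looks like, using a hypothetical fixed-shape functor <code>Two</code> (not part of the original question):</p>
<pre><code>data Two a = Two a a
    deriving (Show, Functor)
instance Foldable Two where
    foldMap f (Two x y) = f x <> f y
instance Fillable Two where
    fill _ (x : y : _) = Two x y
-- Law 1:  uncurry fill (detach (Two 1 2))   =  Two 1 2
-- Law 2:  detach (fill (Two () ()) [1, 2])  =  (Two () (), [1, 2])
</code></pre>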
<h2>Duplicating effects</h2>
<p>What has all of that to do with your question? Suppose we want to define a traversal that duplicates effects without changing the traversable shape (as changing it would be a flagrant violation of the identity law). Now, assuming that our <code>sequenceA</code> is actually <code>sequenceFill</code>, as defined above, which options do we have? Since <code>sequenceFill</code> piggybacks on <code>sequenceList</code>, which is known to visit each element exactly once, our only hope is to rely on a companion <code>Foldable</code> instance such that <code>toList</code>, and therefore <code>detach</code>, generates a list with duplicated elements. Can we make the <code>Fillable</code> laws hold in such a scenario?</p>
<ul>
<li><p>The first law is not a big problem. In principle, we can always define <code>fill</code> so that it undoes the duplication, discarding the extra copies of elements introduced by <code>detach</code>.</p>
</li>
<li><p>If we have a deduplicating <code>fill</code>, however, the second law is a lost cause. By parametricity, <code>fill</code> can't distinguish between a list with duplicates introduced by <code>detach</code> from any other list we might feed it, and so <code>detach . uncurry fill</code> will always replace some elements with duplicates of other ones.</p>
</li>
</ul>
<p>That being so, a <code>traverseFill</code> that duplicates effects can only arise from an unlawful <code>Fillable</code>. Since the <code>Fillable</code> laws are equivalent to the <code>Traversable</code> ones, we conclude that a lawful <code>Traversable</code> cannot duplicate effects.</p>
<p>(The effect duplication scenario discussed above, by the way, applies to your <code>Bar</code> type: it fails the second <code>Fillable</code> law, and therefore it also fails the <code>Traversable</code> composition law, as your counterexample shows.)</p>
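<p>For a quick, concrete feel of that failure with <code>Bar</code>, before the full counterexample in part two (the deduplicating <code>fill</code> below is one hypothetical choice):</p>
<pre><code>-- detach duplicates the lone element of a Bar:
--     detach (Bar 'x')  =  (Bar (), "xx")
-- For the identity law to hold, fill must undo the duplication, say by
-- keeping only the second element:
--     fill sh (_ : y : _) = Bar y
-- But fill cannot tell a detach-made list from any other one, so the
-- fill-then-detach direction breaks even when the precondition holds:
--     detach (fill (Bar ()) "xy")  =  (Bar (), "yy")   -- and not (Bar (), "xy")
</code></pre>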
<p>A paper which I really like that covers this question and matters adjacent to it is Bird et al., <a href="https://www.cs.ox.ac.uk/jeremy.gibbons/publications/uitbaf.pdf" rel="nofollow noreferrer"><em>Understanding Idiomatic Traversals Backwards and Forwards</em></a> (2013). Though it might not look like that at first, its approach is closely related to what I have shown here. In particular, its "representation theorem" is essentially the same as the <code>detach</code>/<code>fill</code> relationship explored here, the main difference being that the definitions in the paper are tighter, obviating the need to fuss about what <code>fill</code> is supposed to do when given a list of the wrong length.</p>
<h2>The free applicative presentation</h2>
<p>Though I won't attempt to present the full argument of the Bird et al. paper, in the context of this answer it is worth noting how its proof of the aforementioned representation theorem relies on a formulation of the free applicative functor. We can twist that idea a little to obtain an additional presentation of <code>Traversable</code> in terms of <a href="http://hackage.haskell.org/package/free-5.1.3/docs/Control-Applicative-Free.html" rel="nofollow noreferrer"><code>Ap</code> from <em>free</em>'s <code>Control.Applicative.Free</code></a>:</p>
<pre><code>-- Adapted from Control.Applicative.Free.
data Ap f a where
Pure :: a -> Ap f a
Ap :: f a -> Ap f (a -> b) -> Ap f b
-- The Functor instance (needed by the one below) mirrors the library's.
instance Functor (Ap f) where
    fmap g (Pure a) = Pure (g a)
    fmap g (Ap x y) = Ap x (fmap (g .) y)
instance Applicative (Ap f) where
    pure = Pure
    Pure f <*> y = fmap f y
    Ap x y <*> z = Ap x (flip <$> y <*> z)
liftAp :: f a -> Ap f a
liftAp x = Ap x (Pure id)
retractAp :: Applicative f => Ap f a -> f a
retractAp (Pure a) = pure a
retractAp (Ap x y) = x <**> retractAp y
</code></pre>
<pre><code>class (Functor t, Foldable t) => Batchable t where
toAp :: t (f a) -> Ap f (t a)
sequenceBatch :: (Batchable t, Applicative f) => t (f a) -> f (t a)
sequenceBatch = retractAp . toAp
toApTr :: Traversable t => t (f a) -> Ap f (t a)
toApTr = traverse liftAp
</code></pre>
<p>I'm pretty much sure the following are appropriate laws, though it might be worth double-checking:</p>
<pre><code>retractAp . toAp . fmap Identity . runIdentity = id
toAp . fmap Identity . runIdentity . retractAp = id
</code></pre>
<p>Though this looks far removed from the humble <code>detach</code> and <code>fill</code> combination we started with, it ultimately is just a more precise encoding of the same idea. An <code>Ap f (t a)</code> value is either a single <code>t a</code> structure wrapped in <code>Pure</code>, or a sequence of zero or more <code>f a</code> values (the <code>Ap</code> constructor) capped by a function of the appropriate arity which takes as many <code>a</code>s as there are <code>f a</code>s and produces a <code>t a</code>. In terms of our initial stab at the shape-and-contents decomposition (a small worked example follows the list), we have:</p>
<ul>
<li><p>The <code>f a</code>s in the <code>Ap</code> values correspond to the list of contents;</p>
</li>
<li><p>The function (if there is one) encodes which shape to use when reassembling the traversable structure, as well as how it should be filled. The shape-list mismatch problem is neatly avoided at type level, it being statically guaranteed that the function will have the right arity;</p>
</li>
<li><p>As for the effects, <code>retractAp</code> performs the role of combining them in the obvious way, much like <code>sequenceList</code> did in <code>sequenceFill</code>.</p>
</li>
</ul>
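<p>For a feel of the encoding, here is what <code>toApTr</code> produces on a few tiny structures (worked out by hand from the definitions above, so worth double-checking):</p>
<pre><code>-- toApTr Nothing    =  Pure Nothing
-- toApTr (Just fa)  =  Ap fa (Pure Just)
-- toApTr [fa, fb]   =  Ap fa (Ap fb (Pure (\b a -> [a, b])))
--     two effects, capped by a binary function that refills the list
</code></pre>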
<p>(End of part one.)</p>
<h2>Fillable and Traversable, up close</h2>
<p>As promised, part two will start with proving that <code>Fillable</code> really is a presentation of <code>Traversable</code>. In what follows, I will use tweaked versions of the definitions which are easier to manipulate with pen and paper:</p>
<pre><code>-- Making the tuple shuffling implicit. It would have been fine to use
-- the derived Foldable and Traversable. I will refrain from that here
-- only for the sake of explicitness.
newtype Decomp t a = Decomp { getDecomp :: (t (), [a]) }
deriving Functor
deriving instance (Show a, Show (t ())) => Show (Decomp t a)
detach' :: (Functor t, Foldable t) => t a -> Decomp t a
detach' = Decomp . detach
fill' :: Fillable t => Decomp t a -> t a
fill' = uncurry fill . getDecomp
-- Sequence the list, then shift the shape into the applicative layer.
-- Also a lawful sequenceA (amounts to Compose ((,) (t ())) []).
sequenceList' :: Applicative f => Decomp t (f a) -> f (Decomp t a)
sequenceList'
    = fmap Decomp . uncurry (fmap . (,)) . second sequenceList . getDecomp
instance Traversable (Decomp t) where
sequenceA = sequenceList'
instance Foldable (Decomp t) where
foldMap = foldMapDefault
sequenceFill' :: (Fillable t, Applicative f) => t (f a) -> f (t a)
sequenceFill' = fmap fill' . sequenceList' . detach'
</code></pre>
<p>(By the way, the cleaner definitions above provide a good occasion to note that, if we were to leave the confines of writing actual Haskell, it wouldn't take much to move the shape carried all along the way in <code>sequenceFill'</code> to type level, in effect partitioning the traversable functor according to the possible shapes. As far as I understand it, that would get us well on the way towards the standard dependently typed treatment of containers. I won't delve further into that here; if you feel like exploring, I heartily recommend <a href="https://stackoverflow.com/search?q=user%3A828361+containers">the answers on the topic by Conor McBride (pigworker)</a>.)</p>
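<p>(For the curious, here is a bare-bones glimpse of what that type-level version might look like while still in GHC Haskell; <code>Shape</code> is a hypothetical shape type indexed by its number of element positions:)</p>
<pre><code>{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
data Nat = Z | S Nat

-- Contents indexed by how many elements there are:
data Vec (n :: Nat) a where
    VNil  :: Vec 'Z a
    VCons :: a -> Vec n a -> Vec ('S n) a

-- With shapes also indexed by their number of positions, the
-- fill-then-detach precondition becomes a static guarantee:
--     detach :: t a -> (Shape t n, Vec n a)
--     fill   :: Shape t n -> Vec n a -> t a
</code></pre>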
<h3>Identity</h3>
<p>We can begin by dealing with the identity law, which is a more straightforward matter:</p>
<pre><code>-- Abbreviations:
I = Identity
uI = runIdentity
C = Compose
uC = getCompose
</code></pre>
<pre><code>-- Goal: Given the identity law...
sequenceFill' @_ @I . fmap I = I
-- ... obtain detach-then-fill:
fill' . detach' = id
sequenceFill' @_ @I . fmap I = I
uI . fmap fill' . sequenceList' @I . detach' . fmap I = id
-- sequenceList' is lawful (identity law):
uI . fmap fill' . I . fmap uI . detach' . fmap I = id
uI . fmap fill' . I . detach' . fmap uI . fmap I = id
uI . fmap fill' . I . detach' = id
uI . I . fill' . detach' = id
fill' . detach' = id -- Goal.
</code></pre>
<p>Since all steps in the derivation above are reversible, we can conclude the detach-then-fill direction of the isomorphism is equivalent to the identity law.</p>
<h3>Composition</h3>
<p>As for the composition law, we might start by using the same strategy:</p>
<pre><code>-- Goal: Given the composition law...
sequenceFill' @_ @(C _ _) . fmap C = C . fmap sequenceFill' . sequenceFill'
-- ... obtain fill-then-detach...
detach' . fill' = id
-- ... within the domain specified by its precondition.
sequenceFill' @_ @(C _ _) . fmap C = C . fmap sequenceFill' . sequenceFill'
fmap fill' . sequenceList' @(C _ _) . detach' . fmap C
= C . fmap (fmap fill' . sequenceList' . detach')
. fmap fill' . sequenceList' . detach'
-- LHS
fmap fill' . sequenceList' @(C _ _) . detach' . fmap C
fmap fill' . sequenceList' @(C _ _) . fmap C . detach'
-- sequenceList' is lawful (composition law)
fmap fill' . C . fmap sequenceList' . sequenceList' . detach'
C . fmap (fmap fill') . fmap sequenceList' . sequenceList' . detach'
C . fmap (fmap fill' . sequenceList') . sequenceList' . detach'
-- RHS
C . fmap (fmap fill' . sequenceList' . detach')
. fmap fill' . sequenceList' . detach'
C . fmap (fmap fill' . sequenceList') . fmap (detach' . fill')
. sequenceList' . detach'
-- LHS = RHS
C . fmap (fmap fill' . sequenceList') . sequenceList' . detach'
= C . fmap (fmap fill' . sequenceList') . fmap (detach' . fill')
. sequenceList' . detach'
-- C is injective:
fmap (fmap fill' . sequenceList') . sequenceList' . detach'
= fmap (fmap fill' . sequenceList') . fmap (detach' . fill')
. sequenceList' . detach' -- On hold.
</code></pre>
<p>At this point, it appears we are stuck with a property weaker than the <code>detach' . fill' = id</code> we expected to uncover. On the upside, there are a few nice things about it:</p>
<ul>
<li><p>All steps in the derivation above are reversible, so the property is equivalent to the composition law.</p>
</li>
<li><p>The <code>sequenceList' . detach'</code> and <code>fmap (fmap fill' . sequenceList')</code> extra terms that pad both sides of the equation make it so that every <code>fill'</code> is preceded by a <code>detach'</code>, and every <code>detach'</code> is followed by a <code>fill'</code>. That means the precondition of the fill-then-detach law automatically holds.</p>
</li>
<li><p>The fill-then-detach law is strictly stronger than this property. That being so, if <code>detach' . fill' = id</code> (within the bounds of the precondition, etc.) then this property, and therefore the composition law, also holds.</p>
</li>
</ul>
<p>I will get back to these observations in a little while in order to justify my earlier claim that <code>detach' . fill' = id</code> can be regarded as a <code>Traversable</code> law.</p>
<h3>Idempotency</h3>
<p>A short break, before we carry on with our regular schedule. There is a little piece of trivia we can uncover by specialising both applicative functors in the composition law to <code>Identity</code>. Continuing from where we stopped:</p>
<pre><code>fmap (fmap fill' . sequenceList') . sequenceList' . detach'
= fmap (fmap fill' . sequenceList') . fmap (detach' . fill')
. sequenceList' . detach'
-- In particular:
fmap (fmap fill' . sequenceList' @I) . sequenceList' @I . detach'
= fmap (fmap fill' . sequenceList' @I) . fmap (detach' . fill')
. sequenceList' @I . detach'
-- sequenceList' is lawful (identity):
fmap (fmap fill' . I . fmap uI) . I . fmap uI . detach'
= fmap (fmap fill' . I . fmap uI) . fmap (detach' . fill') . I
. fmap uI . detach'
-- shift the I leftwards, and the uI rightwards, on both sides:
I . I . fill' . detach' . fmap uI . fmap uI
= I . I . fill' . detach' . fill' . detach' . fmap uI . fmap uI
-- I is injective, and fmap uI is surjective:
fill' . detach' = fill' . detach' . fill' . detach'
</code></pre>
<p>We end up with an idempotency property for <code>fill' . detach'</code>, and, indirectly, also for <code>sequenceA</code>. Though such a property is unsurprising as far as <code>Traversable</code> is concerned, as it follows immediately from the identity law, it is rather interesting that it also follows from the composition law on its own. (On a related note, I sometimes wonder if we could get any mileage out of a <code>Semitraversable</code> class of sorts, which would only have the composition law.)</p>
<h2>Duplicating effects: once more, with feeling</h2>
<p>Now it is a good time to revisit your original question: exactly why duplication of effects causes trouble with the laws? The <code>Fillable</code> presentation helps to clarify the connection. Let's have another look at both sides of the composition law, in the form we have just given it:</p>
<pre><code> fmap (fmap fill' . sequenceList')
. sequenceList' . detach' -- LHS
fmap (fmap fill' . sequenceList')
. fmap (detach' . fill')
. sequenceList' . detach' -- RHS
</code></pre>
<p>Let's assume the identity law holds. In that case, the only possible source of duplicated effects in <code>sequenceFill'</code> are elements being duplicated by <code>detach'</code> (as <code>sequenceList'</code> doesn't duplicate, and <code>fill'</code> can't duplicate because of the identity law).</p>
<p>Now, if <code>detach'</code> introduces duplicates at certain positions, <code>fill'</code> must remove them so that the identity law holds. Thanks to parametricity, however, elements at those positions will be always removed, even if the relevant elements aren't actually duplicated because the list wasn't created by <code>detach'</code>. To put it in another way, there is a precondition for <code>fill'</code> being an innocuous removal of duplicates, namely, that it must be given lists that might have been produced by <code>detach'</code>. In the composition law, it might happen, depending on what the applicative effect is, that the first <code>sequenceList'</code> produces lists that fall outside of this precondition. In that case, the <code>fmap fill'</code> that follows it on the right hand side will eliminate inner effects (keep in mind the first <code>sequenceList'</code> only deals with the outer applicative layer) that weren't actually duplicated, the difference will be duly detected by the second <code>sequenceList' . detach'</code>, which acts on the inner effect layer, and we'll end up with a law violation.</p>
<p>In fact, we can affirm something stronger: if <code>sequenceFill'</code> duplicates effects, it is <em>always</em> possible to violate the law in the manner described above. All we need for such a claim is a good enough counterexample:</p>
<pre><code>advance :: State (Const (Sum Natural) x) (Const (Sum Natural) x)
advance = get <* modify (+1)
</code></pre>
<p>The trick is that if you sequence a list that only contains copies of <code>advance</code>, the list you'll be given back is guaranteed not to have any duplicated <code>Const (Sum Natural)</code> effects:</p>
<pre><code>GHCi> flip evalState 0 $ sequenceA (replicate 3 advance)
[Const (Sum {getSum = 0}),Const (Sum {getSum = 1}),Const (Sum {getSum = 2})]
</code></pre>
<p>That being so, if such a list reaches a <code>sequenceFill'</code> implementation that duplicates effects, the <code>fmap fill'</code> in it will invariably discard non-duplicates:</p>
<pre><code>data Bar a = Bar a
deriving (Show, Functor)
instance Foldable Bar where
foldMap f (Bar x) = f x <> f x
-- This corresponds to your Traversable instance.
instance Fillable Bar where
    fill _ [_, y] = Bar y  -- deduplicate by keeping the second copy
</code></pre>
<pre><code>GHCi> flip evalState 0 <$> (advance <$ Bar ())
Bar (Const (Sum {getSum = 0}))
GHCi> flip evalState 0 <$> detach' (advance <$ Bar ())
Decomp {getDecomp = (Bar (),[Const (Sum {getSum = 0}),Const (Sum {getSum = 0})])}
GHCi> flip evalState 0 $ (sequenceList' . detach') (advance <$ Bar ())
Decomp {getDecomp = (Bar (),[Const (Sum {getSum = 0}),Const (Sum {getSum = 1})])}
GHCi> flip evalState 0 $ (fmap fill' . sequenceList' . detach') (advance <$ Bar ())
Bar (Const (Sum {getSum = 1}))
</code></pre>
<p>A violation is now inevitable:</p>
<pre><code>GHCi> lhs = fmap (fmap fill' . sequenceList') . sequenceList' . detach'
GHCi> rhs = fmap (fmap fill' . sequenceList') . fmap (detach' . fill') . sequenceList' . detach'
GHCi> flip evalState 0 $ lhs (advance <$ Bar ())
Const (Sum {getSum = 1})
GHCi> flip evalState 0 $ rhs (advance <$ Bar ())
Const (Sum {getSum = 2})
</code></pre>
<p>(<code>advance</code>, as you might have noted, is very similar to the counterexample in <a href="https://stackoverflow.com/a/61195551">your answer</a>, only tweaked so that it can be used with arbitrary traversable-like structures.)</p>
<p>This suffices to show that duplication of effects is incompatible with the composition law.</p>
<h3>Simplifying the composition law</h3>
<p>At this point, there is a convenient way to justify why we can use the simplified fill-then-detach property...</p>
<pre><code>-- Precondition: length (toList sh) = length as
detach' . fill' $ (sh, as) = (sh, as)
</code></pre>
<p>... in lieu of the bulkier composition law we have been dealing with in the last few sections. Again, assume the identity law holds. In that case, we can classify the possible implementations of <code>detach'</code> in two cases:</p>
<ol>
<li><p><code>detach'</code> never duplicates elements. As a consequence, <code>detach'</code> is, within the limits of the fill-then-detach precondition, surjective (for instance, if the traversable functor is a vector of length six, <code>detach'</code> can generate all possible lists of length six, though it won't generate lists with other lengths). If a function that has a left inverse is surjective, though, its left inverse is also a right inverse. Therefore, <code>detach' . fill' = id</code> within the bounds of the precondition, and the composition law holds.</p>
<p>(The "within the limits of the fill-then-detach precondition" bit might feel like handwaving, but I believe it can be made rigorous by using dependent types to partition the traversable functor type according to the shapes, in the way I alluded at the beginning of the second part.)</p>
</li>
<li><p><code>detach'</code> duplicates elements. In that case, though, the ensuing duplication of effects means the composition law won't hold, as we have just shown, and neither will the stronger <code>detach' . fill' = id</code> property.</p>
</li>
</ol>
<p>That being so, the <code>Traversable</code> composition law and the <code>Fillable</code> fill-then-detach law always agree as long as the identity law holds; the difference between them can only show up in implementations that violate the identity law. Therefore, if taken together, the <code>Fillable</code> laws as stated in the first part of the answer are equivalent to the <code>Traversable</code> ones.</p>
<h2>foldMapDefault and the other naturality law</h2>
<p>A beautiful feature of the <code>Fillable</code> presentation is how it makes it explicit that the only free choice we have in defining a lawful <code>sequenceA</code> is that of the order in which the effects will be sequenced. Once a certain order is chosen by picking a <code>Foldable</code> implementation, which determines <code>toList</code> and <code>detach'</code>, <code>sequenceList'</code> must follow that order upon sequencing the effects. Furthermore, since <code>fill'</code> is (within the bounds of the fill-then-detach precondition) a full inverse of <code>detach'</code>, it is uniquely determined.</p>
<p>The class hierarchy we have in the base libraries is not arranged in quite the same way as <code>Fillable</code>: the real <code>sequenceA</code> is a self-sufficient method of <code>Traversable</code> which, unlike <code>sequenceFill'</code>, does not rely on <code>Foldable</code> for its implementation. Rather, the connection between <code>Foldable</code> and <code>Traversable</code> is straightened out by a superclass coherence law:</p>
<pre><code>-- Given:
foldMapDefault :: (Traversable t, Monoid m) => (a -> m) -> t a -> m
foldMapDefault f = getConst . traverse (Const . f)
foldMapDefault = foldMap
</code></pre>
<p>(There is an analogous property for <code>Functor</code> and <code>fmapDefault</code>, but parametricity means it follows from the identity law.)</p>
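<p>(For reference, that default is the <code>Identity</code> counterpart of <code>foldMapDefault</code>:)</p>
<pre><code>fmapDefault :: Traversable t => (a -> b) -> t a -> t b
fmapDefault f = runIdentity . traverse (Identity . f)
</code></pre>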
<p>In terms of <code>toList</code> and <code>sequenceA</code>, this law becomes:</p>
<pre><code>toList = getConst . sequenceA . fmap (Const . (:[]))
</code></pre>
<p>If we use <code>sequenceA = sequenceFill'</code> to bring us back to the <code>Fillable</code> presentation...</p>
<pre><code>getConst . fmap fill' . sequenceList' . detach' . fmap (Const . (:[]))
getConst . fmap fill' . sequenceList' . fmap (Const . (:[])) . detach'
-- fmap @(Const _) doesn't do anything:
getConst . sequenceList' . fmap (Const . (:[])) . detach'
-- sequenceList' is lawful (foldMapDefault law):
toList @(Detach _) . detach'
snd . getDecomp . detach'
toList
</code></pre>
<p>... we conclude that the <code>foldMapDefault</code> law holds automatically.</p>
<h3>Bird et al.'s "'naturality' in the datatype"</h3>
<p>After the identity and composition laws, the third best known law of <code>Traversable</code> is naturality in the applicative functor, often referred to simply as the naturality law:</p>
<pre><code>-- Precondition: h is an applicative homomorphism, that is:
-- h (pure a) = pure a
-- h (u <*> v) = h u <*> h v
h . sequenceA = sequenceA . fmap h
</code></pre>
<p>While useful, as well as significant theory-wise (it reflects an alternative view of <code>sequenceA</code> as a natural transformation in the category of applicative functors and applicative homomorphisms, discussed for instance in Jaskelioff and Rypacek, <a href="https://arxiv.org/pdf/1202.2919.pdf" rel="nofollow noreferrer"><em>An Investigation of the Laws of Traversals</em></a>), the naturality law follows from a free theorem for <code>sequenceA</code> (in the vein of Voigtländer, <a href="https://www.cs.tufts.edu/%7Enr/cs257/archive/janis-voigtlaender/free-thm-classes.pdf" rel="nofollow noreferrer"><em>Free Theorems Involving Constructor Classes</em></a>), and so there isn't much to say about it in the context of this answer.</p>
<p>The Bird et al. paper mentioned in the first part discusses, in section 6, a different naturality property, which the authors call "'naturality' in the datatype". Unlike the better known naturality law, it is a naturality property for the traversable functor itself:</p>
<pre><code>-- Precondition: r preserves toList, that is
-- toList . r = toList
fmap r . sequenceA = sequenceA . r
</code></pre>
<p>(Bird et al. don't use <code>Foldable</code> explicitly, rather stating the property in terms of <code>contents = getConst . traverse (Const . (:[]))</code>. Assuming the <code>foldMapDefault</code> coherence law holds, there is no difference.)</p>
<p>The <code>Fillable</code> perspective suits this naturality property very nicely. We can begin by noting we can lift a natural transformation on some functor <code>t</code> to work on <code>Decomp t</code> as well:</p>
<pre><code>-- Decomp as a higher-order functor.
hmapDecomp :: (forall x. t x -> u x) -> Decomp t a -> Decomp u a
hmapDecomp r (Decomp (sh, as)) = Decomp (r sh, as)
</code></pre>
<p>If <code>r</code> preserves <code>toList</code> (or, we might even say, if it is a foldable homomorphism), it follows that it also preserves <code>detach'</code>, and vice-versa:</p>
<pre><code>-- Equivalent to toList . r = toList
hmapDecomp r . detach' = detach' . r
</code></pre>
<p>(<code>hmapDecomp</code> doesn't affect the list of contents, and, being a natural transformation, <code>r</code> commutes with the <code>(() <$)</code> half of <code>detach'</code>.)</p>
<p>If we further assume the <code>Fillable</code> laws, we can use the fact that <code>fill'</code> and <code>detach'</code> are inverses (within the bounds of the precondition of the fill-then-detach law) to shift <code>r</code> from <code>detach'</code> to <code>fill'</code>:</p>
<pre><code>hmapDecomp r . detach' = detach' . r
hmapDecomp r . detach' . fill' = detach' . r . fill'
hmapDecomp r = detach' . r . fill'
fill' . hmapDecomp r = fill' . detach' . r . fill'
fill' . hmapDecomp r = r . fill'
</code></pre>
<p>That is, applying <code>r</code> to the shape and then filling it is the same as filling and then applying <code>r</code> to the filled shape.</p>
<p>At this point, we can work our way back to <code>sequenceFill'</code>:</p>
<pre><code>fill' . hmapDecomp r = r . fill'
fmap (fill' . hmapDecomp r) = fmap (r . fill')
fmap (fill' . hmapDecomp r) . sequenceList' . detach'
= fmap (r . fill') . sequenceList' . detach'
-- LHS
fmap (fill' . hmapDecomp r) . sequenceList' . detach'
-- sequenceList' only changes the list, and hmapDecomp r only the shape.
fmap fill' . sequenceList' . hmapDecomp r . detach'
-- r is a foldable homomorphism.
fmap fill' . sequenceList' . detach' . r
sequenceFill' . r
-- RHS
fmap (r . fill') . sequenceList' . detach'
fmap r . sequenceFill'
-- LHS = RHS
fmap r . sequenceFill' = sequenceFill' . r
</code></pre>
<p>We have thus obtained the naturality in the traversable functor property, as might have been expected given the equivalence between the <code>Fillable</code> and <code>Traversable</code> laws. Still, we did learn something in the process. Bird et al. were justified in being cautious with the word "naturality" when talking of this property, as the restriction to <code>toList</code>-preserving natural transformations seems extraneous in the context of the standard class hierarchy. From the <code>Fillable</code> perspective, though, <code>fill'</code> is determined by our choice of <code>Foldable</code> instance, and so the property is about as sharp as any other naturality property for a constructor class. That being so, I believe we can drop the scare quotes around "naturality".</p>
<h2>Executive summary: dos and don'ts of Traversable</h2>
<p>We are now in a position to make a fairly complete list of the consequences of the <code>Traversable</code> laws. Though there is no real difference, I will speak here in terms of <code>traverse</code>, as using it instead of <code>sequenceA</code> makes it a little clearer what is meant by "elements", in contrast with "effects".</p>
<p>A lawful <code>traverse</code> <strong>must not</strong>:</p>
<ul>
<li><p><strong>Change the traversable shape</strong> in any way, due to the identity law.</p>
<ul>
<li>If the change is idempotent, the identity law will still be violated, but the composition law might hold.</li>
</ul>
</li>
<li><p><strong>Drop or duplicate elements</strong>, due to the identity law.</p>
<ul>
<li>In particular, that isn't allowed even if the shape is left unchanged by overwriting some of the elements with others.</li>
</ul>
</li>
<li><p><strong>Reorder elements</strong> in the traversable structure, due to the identity law.</p>
</li>
<li><p><strong>Duplicate effects</strong>, even if there is no duplication of elements, due to the composition law.</p>
</li>
</ul>
<p>A lawful <code>traverse</code> <strong>may</strong>:</p>
<ul>
<li><strong>Reorder effects</strong>, that is, sequence effects in a different order than that of elements in the original traversable structure.
<ul>
<li>The order of effects can even depend on the individual shape of the structure.</li>
</ul>
</li>
</ul>
<p>A lawful <code>traverse</code> <strong>must</strong>:</p>
<ul>
<li><strong>Sequence effects in the order given by <code>toList</code></strong> from the <code>Foldable</code> instance for the type, due to the <code>foldMapDefault</code> law.</li>
</ul>
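<p>To make the last two points concrete, here is a minimal sketch (my own example, not taken from the sources above) of an instance that lawfully sequences effects in reverse order, precisely because its <code>Foldable</code> instance agrees with that order:</p>
<pre><code>data Two a = Two a a

instance Functor Two where
    fmap f (Two x y) = Two (f x) (f y)

-- Effects are sequenced right-to-left...
instance Traversable Two where
    traverse f (Two x y) = flip Two <$> f y <*> f x

-- ... which is fine, because toList follows the same order:
instance Foldable Two where
    foldMap f (Two x y) = f y <> f x
</code></pre>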
<p>A lawful <code>traverse</code> <strong>will</strong>:</p>
<ul>
<li><p><strong>Preserve applicative homomorphisms</strong>, that is, natural transformations that preserve <code>pure</code> and <code>(<*>)</code>, due to the naturality law, which holds freely.</p>
</li>
<li><p><strong>Preserve foldable homomorphisms</strong>, that is, natural transformations that preserve <code>toList</code>/<code>foldMap</code>, due to the naturality-in-the-traversable law, which follows from the identity and composition laws.</p>
</li>
</ul> | 2020-04-14 02:03:10.123000+00:00 | 2021-03-12 01:37:13.823000+00:00 | 2021-03-12 01:37:13.823000+00:00 | null | 61,195,550 | <p>I remember reading somewhere that a type like this one can't be <code>Traversable</code>:</p>
<pre><code>data Bar a = Bar a deriving(Show)
instance Functor Bar where
fmap f (Bar x) = Bar (f x)
instance Foldable Bar where
foldMap f (Bar x) = f x <> f x
</code></pre>
<p>The bit of the explanation I remember is that for <code>foldMap = foldMapDefault</code> to hold, the <code>Traversable</code> instance would have to visit its elements more than once, which a lawful instance can't do. However, I don't remember why a lawful instance can't do that. Consider this one:</p>
<pre><code>instance Traversable Bar where
sequenceA (Bar x) = Bar <$ x <*> x
</code></pre>
<p>It looks fine at first glance. What's unlawful about doing that?</p> | 2020-04-13 19:46:32.337000+00:00 | 2021-03-12 01:37:13.823000+00:00 | null | haskell|traversable | ['https://stackoverflow.com/a/21083521', 'https://www.cs.ox.ac.uk/jeremy.gibbons/publications/uitbaf.pdf', 'http://hackage.haskell.org/package/free-5.1.3/docs/Control-Applicative-Free.html', 'https://stackoverflow.com/search?q=user%3A828361+containers', 'https://stackoverflow.com/a/61195551', 'https://arxiv.org/pdf/1202.2919.pdf', 'https://www.cs.tufts.edu/%7Enr/cs257/archive/janis-voigtlaender/free-thm-classes.pdf'] | 7 |
56,039,605 | <p>First of all, you have to change the computation of the gradient through a ReLU, i.e. <a href="https://i.stack.imgur.com/tfu46.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tfu46.png" alt="Guided BackProp Formula"></a></p>
<p>Here is a graphical example from the <a href="https://arxiv.org/abs/1412.6806" rel="nofollow noreferrer">paper</a>: <a href="https://i.stack.imgur.com/L2i4T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L2i4T.png" alt="Graphical example"></a></p>
<p>This formula can be implemented with the following code:</p>
<pre><code>@tf.RegisterGradient("GuidedRelu")
def _GuidedReluGrad(op, grad):
gate_f = tf.cast(op.outputs[0] > 0, "float32") #for f^l > 0
gate_R = tf.cast(grad > 0, "float32") #for R^l+1 > 0
return gate_f * gate_R * grad
</code></pre>
<p>Now you have to override the original TF implementation of ReLU with:</p>
<pre><code>with tf.compat.v1.get_default_graph().gradient_override_map({'Relu': 'GuidedRelu'}):
#put here the code for computing the gradient
</code></pre>
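<p>If you are on TF 2.x and the graph-mode override above gives you trouble, an alternative sketch (my own, using the documented <code>tf.custom_gradient</code> decorator; the function name is just an example) is to define the guided ReLU yourself and use it in place of the stock activation:</p>
<pre><code>import tensorflow as tf

@tf.custom_gradient
def guided_relu(x):
    def grad(dy):
        # let the gradient through only where both the forward activation
        # (f^l > 0) and the incoming gradient (R^l+1 > 0) are positive
        return tf.cast(x > 0, "float32") * tf.cast(dy > 0, "float32") * dy
    return tf.nn.relu(x), grad
</code></pre>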
<p>After computing the gradient, you can visualize the result.
However, one last remark: you compute a visualization for a single class. This means you take the activation of a chosen neuron and set the activations of all the other neurons to zero for the input of Guided BackProp.</p> | 2019-05-08 11:20:44.700000+00:00 | 2019-07-09 18:40:41.337000+00:00 | 2019-07-09 18:40:41.337000+00:00 | null | 55,924,331 | <p>I am starting with <code>Tensorflow 2.0</code> and trying to implement Guided BackProp to display Saliency Map. I started by computing the loss between <code>y_pred</code> and <code>y_true</code> of an image, then finding the gradients of all layers due to this loss.</p>
<pre><code>with tf.GradientTape() as tape:
logits = model(tf.cast(image_batch_val, dtype=tf.float32))
print('`logits` has type {0}'.format(type(logits)))
xentropy = tf.nn.softmax_cross_entropy_with_logits(labels=tf.cast(tf.one_hot(1-label_batch_val, depth=2), dtype=tf.int32), logits=logits)
reduced = tf.reduce_mean(xentropy)
grads = tape.gradient(reduced, model.trainable_variables)
</code></pre>
<p>However, I don't know what to do with gradients in order to obtain the Guided Propagation.</p>
<p>This is my model. I created it using Keras layers:</p>
<pre><code>image_input = Input((input_size, input_size, 3))
conv_0 = Conv2D(32, (3, 3), padding='SAME')(image_input)
conv_0_bn = BatchNormalization()(conv_0)
conv_0_act = Activation('relu')(conv_0_bn)
conv_0_pool = MaxPool2D((2, 2))(conv_0_act)
conv_1 = Conv2D(64, (3, 3), padding='SAME')(conv_0_pool)
conv_1_bn = BatchNormalization()(conv_1)
conv_1_act = Activation('relu')(conv_1_bn)
conv_1_pool = MaxPool2D((2, 2))(conv_1_act)
conv_2 = Conv2D(64, (3, 3), padding='SAME')(conv_1_pool)
conv_2_bn = BatchNormalization()(conv_2)
conv_2_act = Activation('relu')(conv_2_bn)
conv_2_pool = MaxPool2D((2, 2))(conv_2_act)
conv_3 = Conv2D(128, (3, 3), padding='SAME')(conv_2_pool)
conv_3_bn = BatchNormalization()(conv_3)
conv_3_act = Activation('relu')(conv_3_bn)
conv_4 = Conv2D(128, (3, 3), padding='SAME')(conv_3_act)
conv_4_bn = BatchNormalization()(conv_4)
conv_4_act = Activation('relu')(conv_4_bn)
conv_4_pool = MaxPool2D((2, 2))(conv_4_act)
conv_5 = Conv2D(128, (3, 3), padding='SAME')(conv_4_pool)
conv_5_bn = BatchNormalization()(conv_5)
conv_5_act = Activation('relu')(conv_5_bn)
conv_6 = Conv2D(128, (3, 3), padding='SAME')(conv_5_act)
conv_6_bn = BatchNormalization()(conv_6)
conv_6_act = Activation('relu')(conv_6_bn)
flat = Flatten()(conv_6_act)
fc_0 = Dense(64, activation='relu')(flat)
fc_0_bn = BatchNormalization()(fc_0)
fc_1 = Dense(32, activation='relu')(fc_0_bn)
fc_1_drop = Dropout(0.5)(fc_1)
output = Dense(2, activation='softmax')(fc_1_drop)
model = models.Model(inputs=image_input, outputs=output)
</code></pre>
<p>I am glad to provide more code if needed.</p> | 2019-04-30 15:33:48.960000+00:00 | 2020-07-30 08:21:08.260000+00:00 | 2019-05-14 12:08:31.697000+00:00 | python|tensorflow|keras|backpropagation|tensorflow2.0 | ['https://i.stack.imgur.com/tfu46.png', 'https://arxiv.org/abs/1412.6806', 'https://i.stack.imgur.com/L2i4T.png'] | 3 |
16,128,597 | <p>You can't find the LCP of two suffixes by simply taking the minimum of the LCPs of adjacent pairs between their positions in the text; that rule applies to adjacent entries between their positions (ranks) in the sorted suffix array.</p>
<p>We can calculate the LCP of any two suffixes (i, j) with the help of the following:</p>
<pre><code>LCP(suffix at rank i, suffix at rank j) = Height[RMQ(i + 1, j)]
</code></pre>
<p>Also note that we need <code>i < j</code>: the range in <code>RMQ(i + 1, j)</code> is only valid when <code>i < j</code>, so if the ranks arrive in the wrong order, swap them first (the code in Step 3 below does exactly that).
RMQ stands for <strong>Range Minimum Query</strong>.</p>
<p>See page 3 of this <a href="http://arxiv.org/pdf/1012.4263.pdf" rel="nofollow">paper</a> for the details.</p>
<p><strong>Details:</strong></p>
<p><strong>Step 1:</strong>
First, calculate the LCP of adjacent (consecutive) suffix pairs.</p>
<p><code>n</code> = length of the string.</p>
<p><code>suffixArray[]</code> is the suffix array.</p>
<pre><code>void calculateadjacentsuffixes(int n)
{
for (int i=0; i<n; ++i) Rank[suffixArray[i]] = i;
Height[0] = 0;
for (int i=0, h=0; i<n; ++i)
{
if (Rank[i] > 0)
{
int j = suffixArray[Rank[i]-1];
while (i + h < n && j + h < n && str[i+h] == str[j+h])
{
h++;
}
Height[Rank[i]] = h;
if (h > 0) h--;
}
}
}
</code></pre>
<p><strong>Note: Height[i] = LCP of (suffix at rank i-1, suffix at rank i), i.e. the Height array contains the LCPs of adjacent suffixes in the suffix array.</strong></p>
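<p>As a quick worked illustration (my own example, not part of the original code), take the string <code>"abca"</code> from the question, with 0-based positions:</p>
<pre><code>str = "abca"

rank  suffixArray  suffix   Height
 0        3        "a"        0
 1        0        "abca"     1   (LCP of "a" and "abca")
 2        1        "bca"      0
 3        2        "ca"       0

LCP of suffix 0 ("abca") and suffix 3 ("a"):
their ranks are Rank[0] = 1 and Rank[3] = 0, so
LCP = Height[RMQ(0 + 1, 1)] = Height[1] = 1
</code></pre>
<p>Note that the minimum is taken over adjacent entries between the two <em>ranks</em>, not between the two text positions; that is what resolves the apparent contradiction in the question.</p>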
<p><strong>Step 2:</strong></p>
<p>Calculate the LCP of any two suffixes i, j using the RMQ concept.
The RMQ pre-compute function:</p>
<pre><code>void preprocesses(int N)
{
int i, j;
//initialize M for the intervals with length 1
for (i = 0; i < N; i++)
M[i][0] = i;
//compute values from smaller to bigger intervals
for (j = 1; 1 << j <= N; j++)
{
for (i = 0; i + (1 << j) - 1 < N; i++)
{
if (Height[M[i][j - 1]] < Height[M[i + (1 << (j - 1))][j - 1]])
{
M[i][j] = M[i][j - 1];
}
else
{
M[i][j] = M[i + (1 << (j - 1))][j - 1];
}
}
}
}
</code></pre>
<p><strong>Step 3:</strong> Calculate the LCP between any two suffixes i, j:</p>
<pre><code>int LCP(int i,int j)
{
    /* Make sure we always have i < j:
       e.g. a call LCP(5,4) is converted to LCP(4,5) */
    if(i > j)
        swap(i, j);
    /* normalization done */
if(i==j)
{
return (Length_of_str-suffixArray[i]);
}
else
{
return Height[RMQ(i+1,j)];
//LCP(suffix i,suffix j)=LCPadj[RMQ(i + 1; j)]
//LCPadj=LCP of adjacent suffix =Height.
}
}
</code></pre>
<p>Where RMQ function is:</p>
<pre><code>int RMQ(int i, int j)
{
    int k = log((double)(j - i + 1)) / log((double)2);
    int vv = j - (1 << k) + 1;
    if(Height[M[i][k]] <= Height[M[vv][k]])
        return M[i][k];
    else
        return M[vv][k];
}
</code></pre>
<p>Refer <strong><a href="http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=lowestCommonAncestor" rel="nofollow">Topcoder tutorials</a></strong> for RMQ.</p>
<p>You can check the complete implementation in C++ at my <strong><a href="http://riteshkumarnitw.webs.com/mycoderepository.htm" rel="nofollow">blog</a></strong>.</p> | 2013-04-21 06:34:18.207000+00:00 | 2014-08-03 06:03:42.717000+00:00 | 2014-08-03 06:03:42.717000+00:00 | null | 16,128,400 | <p>I was going through suffix array and its use to compute longest common prefix of two suffixes.</p>
<p>The source says: </p>
<blockquote>
<p>"The lcp between two suffixes is the minimum of the lcp's of all pairs of adjacent suffixes between them on the array"</p>
</blockquote>
<p>i.e. <code>lcp(x,y)=min{ lcp(x,x+1),lcp(x+1,x+2),.....,lcp(y-1,y) }</code>
where x and y are the two indices of the string at which the two suffixes start.</p>
<p>I am not convinced with the statement as in example of string <code>"abca"</code>.</p>
<p><code>lcp(1,4)=1</code> (considering 1 based indexing)</p>
<p>but if I apply the above equation then </p>
<p><code>lcp(1,4)=min{lcp(1,2),lcp(2,3),lcp(3,4)}</code></p>
<p>and I think <code>lcp(1,2)=0</code>.</p>
<p>so the answer must be <code>0</code> according to the equation.</p>
<p>Am I getting it wrong somewhere?</p>
59,177,460 | <p>Yes, the standard way of computing a split for classification trees is the decrease in the Gini index. An alternative is to use entropy-based methods, but the results are similar and the formula has logarithms in it, so it is usually slower.</p>
<p>Splitting on decrease in accuracy is usually not implemented in packages (it is not in R's randomForest and ranger, nor in scikit-learn on Python), as it does not respect some basic properties of a loss function and gives outright bad results.
You can find some details in <a href="https://arxiv.org/pdf/1407.7502.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1407.7502.pdf</a>, around pages 42-45.</p> | 2019-12-04 13:43:29.183000+00:00 | 2019-12-04 13:43:29.183000+00:00 | null | null | 59,172,854 | <p>I am running a random forest in R with the package randomForest. </p>
<p>I have two questions:</p>
<ol>
<li><p>Is it correct that when using this package the <em>default</em> criterion is Mean Decrease in Gini? </p></li>
<li><p>I plot the variable importance with <code>varImpPlot</code> and obtain two measures of importance: Mean Decrease Accuracy and Mean Decrease Gini; how can I use the former for actually splitting the nodes? </p></li>
</ol> | 2019-12-04 09:36:56.343000+00:00 | 2020-01-01 16:05:02.103000+00:00 | 2019-12-04 11:14:37.037000+00:00 | r|random-forest | ['https://arxiv.org/pdf/1407.7502.pdf'] | 1 |
61,496,799 | <p>This question is very old; interestingly, as old as the solution I'm about to present, but it hasn't been mentioned here yet.</p>
<p>It's <a href="https://github.com/dsw/proquint" rel="noreferrer">Proquint</a>. Similar to Bubble Babble, but the differences make the results easier to read, in my opinion.</p>
<p>Here's how it works, from <a href="https://arxiv.org/html/0901.4016" rel="noreferrer">their documentation</a>:</p>
<blockquote>
<p>In sum, we propose encoding a 16-bit string as a proquint [PRO-nouncable QUINT-uplet] of alternating consonants and vowels as follows.</p>
<p>Four-bits as a consonant:</p>
<pre><code>0 1 2 3 4 5 6 7 8 9 A B C D E F
b d f g h j k l m n p r s t v z
</code></pre>
<p>Two-bits as a vowel:</p>
<pre><code>0 1 2 3
a i o u
</code></pre>
<p>Whole 16-bit word, where "con" = consonant, "vo" = vowel:</p>
<pre><code> 0 1 2 3 4 5 6 7 8 9 A B C D E F
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|con |vo |con |vo |con |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
</code></pre>
<p>Separate proquints using dashes, which can go un-pronounced or be pronounced "eh". The suggested
optional magic number prefix to a sequence of proquints is "0q-".</p>
<p>Here are some IP dotted-quads and their corresponding proquints.</p>
<pre><code>127.0.0.1 lusab-babad
63.84.220.193 gutih-tugad
63.118.7.35 gutuk-bisog
140.98.193.141 mudof-sakat
64.255.6.200 haguz-biram
128.30.52.45 mabiv-gibot
147.67.119.2 natag-lisaf
212.58.253.68 tibup-zujah
216.35.68.215 tobog-higil
216.68.232.21 todah-vobij
198.81.129.136 sinid-makam
12.110.110.204 budov-kuras
</code></pre>
</blockquote> | 2020-04-29 07:52:13.933000+00:00 | 2020-04-29 07:52:13.933000+00:00 | null | null | 1,648,206 | <p>I am using UUIDs, but they are not particularly nice to read, write and communicate. So I would like to encode them. I could use base64, or base32, but they would not be easy anyway: base64 has capitalized letters and symbols. Base32 is a bit better, but you can still obtain clumsy stuff.</p>
<p>I was wondering if there's a nice and clean way to encode a number into palatable phonemes, so to achieve better readability and hopefully a bit of compression.</p> | 2009-10-30 05:50:43.877000+00:00 | 2021-10-15 03:53:20.417000+00:00 | 2013-12-30 21:33:27.640000+00:00 | encoding|readable|phoneme | ['https://github.com/dsw/proquint', 'https://arxiv.org/html/0901.4016'] | 2 |
33,877,178 | <p>To map a continuous range (obtained for example as the output of a pseudo-random number generator, or alternatively the logistic map) to a small set of discrete values, you would need to split the continuous range into regions, and assign an output value to each of those regions. The limits of those regions would determine the corresponding threshold values to use.</p>
<p>For example, in the binary case you start off with a continuous range of values in <code>[0,1]</code> which you split into two regions: <code>[0,0.5]</code> and <code>(0.5,1]</code>. Each of those regions is assigned an output symbol, namely <code>-1</code> and <code>+1</code>. As you have noted, the boundary of the regions being set to the midpoint of your <code>[0,1]</code> input range gives you a threshold of 0.5. This could be implemented as:</p>
<pre><code>if (x > 0.5)
symbol = +1;
else
symbol = -1;
end
</code></pre>
<p>As a more compact implementation, the formula <code>2*(x>0.5)-1</code> takes advantage of the fact that in Matlab a <em>true</em> condition (from the <code>x>0.5</code> expression) has a value of 1, whereas <em>false</em> has a value of 0.</p>
<p>For 3 discrete output values, you'd similarly split your <code>[0,1]</code> input range into 3 regions: <code>[0,1/3]</code>, <code>(1/3,2/3]</code> and <code>(2/3,1]</code>. The corresponding thresholds thus being 1/3 and 2/3.</p>
<p>Finally for 8 discrete output values, you would similarly split your <code>[0,1]</code> input range into 8 regions: <code>[0,1/8]</code>, <code>(1/8,2/8]</code>, <code>(2/8,3/8]</code>, <code>(3/8,4/8]</code>, <code>(4/8,5/8]</code>, <code>(5/8,6/8]</code>, <code>(6/8,7/8]</code> and <code>(7/8,1]</code>. The corresponding thresholds thus being 1/8, 2/8, 3/8, 4/8, 5/8, 6/8 and 7/8, as illustrated in the following diagram:</p>
<pre><code>thresholding function input: |-----|-----|-----|-----|-----|-----|-----|-----|
0 | | | | | | | 1
thresholds: 1/8 2/8 3/8 4/8 5/8 6/8 7/8
| | | | | | | |
v v v v v v v v
generated symbol: -7 -5 -3 -1 +1 +3 +5 +7
</code></pre>
<p>This then gives the following symbol mapping implementation:</p>
<pre><code>if (x < 1/8)
symbol = -7;
elseif (x < 2/8)
symbol = -5;
elseif (x < 3/8)
symbol = -3;
elseif (x < 4/8)
symbol = -1;
elseif (x < 5/8)
symbol = +1;
elseif (x < 6/8)
symbol = +3;
elseif (x < 7/8)
symbol = +5;
else
symbol = +7;
end
</code></pre>
<p>As a more compact implementation, you could similarly use the <code>floor</code> function to obtain discrete levels:</p>
<pre><code>% x : some value in the [0,1] range
% s : a symbol in the {-7,-5,-3,-1,+1,+3,+5,+7} set
function s = threshold(x)
% Note on implementation:
% 8*x turns the input range from [0,1] to [0,8]
% floor(8*x) then turns that into values {0,1,2,3,4,5,6,7}
% then a linear transform (2*() - 7) is applied to map
% 0 -> -7, 1 -> -5, 2 -> -3, ..., 7 -> 7
% min/max finally applied just as a safety to make sure we don't overflow due
% to roundoff errors (if any).
s = min(7, max(-7, 2*floor(8*x) - 7));
end
</code></pre>
<p>Now if you want to generate complex symbols with 8 levels for the real part and 8 levels for the imaginary part, you'd simply combine them just like in the binary case. Mainly you'd generate a first value which gives you the real part, then a second value for the imaginary part:</p>
<pre><code>x_real = rand(); % random input 0 <= x_real <= 1
x_imag = rand(); % another one
s = threshold(x_real) + sqrt(-1)*threshold(x_imag);
</code></pre>
<hr>
<p><strong>Addressing some points raised by a previous revision of the question:</strong></p>
<p>One thing to note is that <code>x[n+1] = 4*x[n](1-x[n])</code> maps values in <code>[0,1]</code> to the same range of values. This makes it possible to iteratively apply the mapping to obtain additional values, and correspondingly generate a binary sequence with the threshold application (<code>x > 0.5</code>). The function <code>f(x)</code> you provided (in an earlier edit of the question) on the other hand, maps values within a range with discontinuities (roughly covering <code>[-7.5,7.5]</code> depending on <code>p</code>) to <code>[0,1]</code>. In other words you would need to either modify <code>f(x)</code> or otherwise map its output back to the input domain of <code>f(x)</code>. It would probably be easier to consider a general uniform pseudo-random number generator over the <code>[-8,+8]</code> range as input to the threshold function:</p>
<pre><code>% x : some value in the [-8,8] range
% s : a symbol in the {-7,-5,-3,-1,+1,+3,+5,+7} set
function s = threshold_8PAM(x)
s = min(7, max(-7, 2*round(x/2 + 0.5) - 1));
end
</code></pre>
<p>To get the final 64-QAM symbols you would combine two 8-PAM symbols in quadrature (i.e. <code>x64qam = xQ + sqrt(-1)*xI</code>, where <code>xQ</code> and <code>xI</code> have both been generated with the above procedure).</p>
<p>That said, if the goal is to implement a digital communication system using 64-QAM symbols with additional chaotic modulation, you'd ultimately want to take into account the source of input data to transmit rather than randomly generating both the chaotic modulation and the source data in one shot. That is, even if for performance evaluation you wind up generating the source data randomly, it is still a good idea to generate it independently of the chaotic modulation.</p>
<p>Addressing those concerns, the paper <a href="http://arxiv.org/abs/1302.3800" rel="nofollow noreferrer"><em>An Enhanced Spectral Efficiency Chaos-Based Symbolic Dynamics Transceiver Design</em></a> suggests a different approach based on the inverse map you provided, which can be implemented as:</p>
<pre><code>function x = inverse_mapping(x,SymbIndex,p)
if (SymbIndex==0)
x = ((1-p)*x-14)/2;
elseif (SymbIndex==1)
x = ((1-p)*x-10)/2;
elseif (SymbIndex==2)
x = ((1-p)*x-6)/2;
elseif (SymbIndex==3)
x = ((1-p)*x-2)/2;
elseif (SymbIndex==4)
x = ((1-p)*x+2)/2;
elseif (SymbIndex==5)
x = ((1-p)*x+6)/2;
elseif (SymbIndex==6)
x = ((1-p)*x+10)/2;
elseif (SymbIndex==7)
x = ((1-p)*x+14)/2;
end
end
</code></pre>
<p>As you may notice, the function takes a symbol index (3 bits, which you'd get from the input source data) and the current state of the modulated output (which you may seed with any value within the convergence range of <code>inverse_mapping</code>) as two independent input streams. Note that you can compute the bounds of the convergence range of <code>inverse_mapping</code> by finding the limits of repeated application of the mapping using input symbol index <code>s=0</code>, and <code>s=7</code> (using for example a seed of <code>x=0</code>). This should converge to <code>[-14/(1+p), 14/(1+p)]</code>.</p>
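<p>A quick numerical sanity check of that bound (a small sketch using the <code>inverse_mapping</code> function above):</p>
<pre><code>p = 0.8;
x = 0; % any seed inside the convergence range
for k = 1:100
    x = inverse_mapping(x, 7, p); % keep pushing towards the upper limit
end
% x is now very close to 14/(1+p) = xmax
disp([x, 14/(1+p)]);
</code></pre>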
<p>The chaotic modulation described in <a href="http://arxiv.org/abs/1302.3800" rel="nofollow noreferrer">the above referenced paper</a> can then be achieved with (setting the control parameter <code>p=0.8</code> as an example):</p>
<pre><code>% Simulation parameters
Nsymb = 10000;
p = 0.8;
M = 64;
% Source data generation
SymbolIndexQ = randi([0 sqrt(M)-1],Nsymb,1);
SymbolIndexI = randi([0 sqrt(M)-1],Nsymb,1);
% Modulation
xmax = 14/(1+p); % found by iterative application of inverse_mapping
xQ = xmax*(2*rand(1)-1); % seed initial state
xI = xmax*(2*rand(1)-1); % seed initial state
x = zeros(Nsymb,1);
for i=1:Nsymb
xQ = inverse_mapping(xQ, SymbolIndexQ(i), p);
xI = inverse_mapping(xI, SymbolIndexI(i), p);
x(i) = xQ + sqrt(-1)*xI;
end
% x holds the modulated symbols
plot(real(x), imag(x), '.');
% if you also need the unmodulated symbols you can get them from
% SymbolIndexQ and SymbolIndexI
s = (2*SymbolIndexQ-7) + sqrt(-1)*(2*SymbolIndexI-7);
</code></pre>
<p>which should produce the corresponding constellation diagram:</p>
<p><a href="https://i.stack.imgur.com/TQ0Sf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TQ0Sf.png" alt="64QAM with chaotic modulation, p=0.8"></a></p>
<p>or with <code>p=1</code> (which is essentially unmodulated):</p>
<p><a href="https://i.stack.imgur.com/OFmId.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OFmId.png" alt="64QAM p=1.0"></a></p> | 2015-11-23 17:39:24.417000+00:00 | 2015-11-25 13:46:15.767000+00:00 | 2015-11-25 13:46:15.767000+00:00 | null | 33,664,902 | <p>An example : Consider the unimodal <a href="https://en.wikipedia.org/wiki/Logistic_map" rel="nofollow">logistic map</a> : <code>x[n+1] = 4*x[n](1-x[n])</code>. The map can be used to generate +1/-1 symbols using the technique </p>
<p>I want to extend the above concept using the map <code>f(x)</code> for 3 levels, where each level corresponds to a symbol, but I am unsure how to do that.</p> | 2015-11-12 05:18:07.987000+00:00 | 2016-06-29 02:46:20.877000+00:00 | 2016-06-29 02:46:20.877000+00:00 | matlab|matrix|encoding | ['http://arxiv.org/abs/1302.3800', 'http://arxiv.org/abs/1302.3800', 'https://i.stack.imgur.com/TQ0Sf.png', 'https://i.stack.imgur.com/OFmId.png'] | 4
35,852,153 | <p><strong>What is exactly attention?</strong></p>
<p>To be able to understand this question, we need to dive a little into certain problems which attention seeks to solve. I think one of the seminal papers on <strong>hard attention</strong> is <a href="http://arxiv.org/pdf/1406.6247v1.pdf">Recurrent Models of Visual Attention</a> and I would encourage the reader to go through that paper, even if it doesn't seem fully comprehensible at first.</p>
<p>To answer the question of what exactly attention is, I'll pose a different question which I believe is easier to answer: <strong>Why attention?</strong> The paper I have linked seeks to answer that question succinctly, and I'll reproduce a part of the reasoning here.</p>
<p>Imagine you were blindfolded and taken to a surprise birthday party and you just opened your eyes. What would you see?
<a href="https://i.stack.imgur.com/l7Mc6.jpg"><img src="https://i.stack.imgur.com/l7Mc6.jpg" alt="Birthday Party!"></a></p>
<p>Now, when we say you see the picture, that's a shorter version of the following more technically correct <em>sequence of actions</em>, which is, to move your eyes around over time and gather information about the scene. You don't see <em>every pixel</em> of the image at once. You <em>attend</em> to certain aspects of the picture one time-step at a time and aggregate the information. Even in such a cluttered picture for example, you would recognize your uncle Bill and cousin Sam :). Why is that? Because you <em>attend to certain salient aspects</em> of the current image.</p>
<p>That is exactly the kind of power we want to give to our neural network models. Why? Think of this as some sort of regularization. (This portion of the answer references the paper.) Your usual convolutional network model does have the ability to recognize cluttered images, but how do we find the exact set of weights which are "good"? That is a difficult task. By providing the network with a new architecture-level feature which allows it to <em>attend</em> to different parts of the image sequentially and aggregate information over time, we make that job easier, because now the network can simply learn to ignore the clutter (or so is the hope).</p>
<p>I hope this answers the question <strong>What is hard attention?</strong> Now onto the nature of its <strong>differentiability</strong>. Well, remember how we conveniently <em>picked</em> the correct spots to look at, while looking at the birthday picture? How did we do that? This process involves making <em>choices</em> which are difficult to represent in terms of a <em>differentiable function of the input (image)</em>. For example: based on what you've looked at already and the image, decide where to look next. You could have a neural network which outputs the answer here, but we do not know the correct answer! In fact, there is no correct answer. How then are we to <em>train</em> the network parameters? Neural network training depends critically on a differentiable loss function of the inputs. Examples of such loss functions include the log-likelihood loss function, the squared loss function, etc. But in this case, we do not have a correct answer of where to look next. How then can we define a loss? This is where a field of machine learning called <strong>reinforcement learning</strong> (RL) comes in. RL allows you to perform gradient-based optimization in the space of policies by using methods such as the REINFORCE method and actor-critic algorithms.</p>
<p><strong>What is soft attention?</strong></p>
<p>This part of the answer borrows from a paper which goes by the name <a href="http://arxiv.org/pdf/1506.03340v3.pdf">teaching machines to read and comprehend</a>.
A major problem with RL methods such as the REINFORCE method is that they have a high variance (in terms of the gradient of the expected reward computed) which scales linearly with the number of hidden units in your network. That's not a good thing, especially if you're going to build a large network. Hence, people try to look for <strong>differentiable</strong> models of attention. All this means is that the attention term, and as a result the loss function, are a differentiable function of the inputs and hence all gradients exist. Hence we can use our standard backprop algorithm along with one of the usual loss functions for training our network. So what is soft attention?</p>
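<p>Before answering that in words, here is a concrete (and deliberately simplified) numpy sketch of such a differentiable attention term; every operation in it is differentiable, which is the whole point:</p>
<pre><code>import numpy as np

def soft_attention(query, tokens):
    """query: (d,), tokens: (n, d) -> attention-weighted summary, (d,)."""
    scores = tokens @ query                  # dot-product relevance, shape (n,)
    weights = np.exp(scores - scores.max())  # numerically stable softmax...
    weights /= weights.sum()                 # ...giving a distribution over tokens
    return weights @ tokens                  # weighted sum of token vectors
</code></pre>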
<p>In the context of text, it refers to the ability of the model to choose to associate <strong>more importance</strong> with certain words in the document vis-a-vis other tokens. If you're reading a document and have to answer a question based on it, concentrating on certain tokens in the document might help you answer the question better, than to just read each token as if it were equally important. That is the basic idea behind soft attention in text. The reason why it is a <strong>differentiable model</strong> is because you decide how much attention to pay to each token based purely on the particular token and the query in hand. You could for example represent the tokens of the document and the query in the same vector space and look at dot product/cosine similarity as a measure of how much attention should you pay to that particular token, given that query. Note that the cosine distance operation is completely differentiable with respect to its inputs and hence the overall model ends up being differentiable.
Note that the exact model used by the paper differs and this argument is just for demonstration's sake, although other models do use a dot product based attention-score.</p> | 2016-03-07 19:21:09.037000+00:00 | 2016-03-07 19:21:09.037000+00:00 | null | null | 35,549,588 | <p>In this blog post, <a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/" rel="noreferrer">The Unreasonable Effectiveness of Recurrent Neural Networks</a>, Andrej Karpathy mentions future directions for neural networks based machine learning:</p>
<blockquote>
<p>The concept of attention is the most interesting recent architectural innovation in neural networks. [...] soft attention scheme for memory addressing is convenient because it keeps the model fully-differentiable, but unfortunately one sacrifices efficiency because everything that can be attended to is attended to (but softly). Think of this as declaring a pointer in C that doesn't point to a specific address but instead defines an entire distribution over all addresses in the entire memory, and dereferencing the pointer returns a weighted sum of the pointed content (that would be an expensive operation!). This has motivated multiple authors to swap soft attention models for hard attention where one samples a particular chunk of memory to attend to (e.g. a read/write action for some memory cell instead of reading/writing from all cells to some degree). This model is significantly more philosophically appealing, scalable and efficient, but unfortunately it is also non-differentiable.</p>
</blockquote>
<p>I think I understood the pointer metaphor, but what is exactly attention and why is the hard one not differentiable?</p>
<p>I found an explanation about attention <a href="https://www.quora.com/What-is-exactly-the-attention-mechanism-introduced-to-RNN-recurrent-neural-network-It-would-be-nice-if-you-could-make-it-easy-to-understand" rel="noreferrer">here</a>, but still confused about the soft/hard part.</p> | 2016-02-22 09:08:36.267000+00:00 | 2017-11-09 17:17:44.183000+00:00 | 2016-02-22 10:07:25.750000+00:00 | machine-learning|neural-network|recurrent-neural-network | ['http://arxiv.org/pdf/1406.6247v1.pdf', 'https://i.stack.imgur.com/l7Mc6.jpg', 'http://arxiv.org/pdf/1506.03340v3.pdf'] | 3 |
54,472,395 | <p>The <code>FastDiceRoller</code> algorithm described in <a href="https://arxiv.org/pdf/1304.1916v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1304.1916v1.pdf</a> gives you a uniformly distributed result (exactly uniform, assuming the input bits are fair).</p>
<p>Here's an implementation ripped from that paper using your function:</p>
<pre><code>function fastDiceRoller(max_number){
  let v = 1 // number of equally likely states represented so far
  let c = 0 // uniformly random value in [0, v)
  while(true){
    v = 2*v
    c = 2*c + get_zero_or_one() // append one random bit
    if(v >= max_number){
      if(c < max_number) { return c }
      // reject, but recycle the leftover randomness
      v = v - max_number
      c = c - max_number
    }
  }
}
</code></pre>
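<p>For reference, the frequencies below can be reproduced with a loop along these lines (assuming the <code>get_zero_or_one</code> from your question is in scope):</p>
<pre><code>const counts = new Map()
for (let i = 0; i < 1000000; i++) {
  const r = fastDiceRoller(10)
  counts.set(r, (counts.get(r) || 0) + 1)
}
console.log(counts)
</code></pre>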
<p>Here be some frequencies from a million runs:</p>
<pre><code>Map {
0 => 100327,
1 => 99930,
2 => 100389,
3 => 99824,
4 => 100116,
5 => 99999,
6 => 99700,
7 => 99957,
8 => 99980,
9 => 99778 }
</code></pre>
<p>Hope that helps.</p>
<p>Essentially, it uses your function to build a uniformly random binary number and, whenever that number could exceed <code>max_number</code>, it either returns it (if it is below <code>max_number</code>) or recycles the leftover randomness instead of discarding it.</p> | 2019-02-01 03:18:10.620000+00:00 | 2019-02-02 22:18:43.707000+00:00 | 2019-02-02 22:18:43.707000+00:00 | null | 54,471,746 | <p>I asked about this before, but I still cannot solve it.</p>
<p>I have a function that returns 0 or 1 at random:</p>
<pre><code>const get_zero_or_one = () => {
return Math.floor(Math.random() * (2))
}
</code></pre>
<p>I want to make a random number function using the above function, but without calling Math.random anywhere else (only inside get_zero_or_one above).</p>
<pre><code>const RandomResult = (max_number) => {
let answer = 0;
for (let i = 0; i < max_number -1; i++) {
answer += get_zero_or_one()
}
return answer
}
</code></pre>
<p>This is the code I wrote. Can I change it to something better?</p>
<p>Thanks.</p> | 2019-02-01 01:40:18.570000+00:00 | 2019-02-02 22:18:43.707000+00:00 | null | javascript | ['https://arxiv.org/pdf/1304.1916v1.pdf'] | 1 |
22,149,285 | <p>You asked many questions, here is the beginning of the answers:</p>
<p><strong>Q1:</strong> Yes. For example, take BitTorrent's very successful 10 million+ node network. Aside from the bootstrapping process, the protocol is entirely decentralized and asynchronous. See <a href="http://www.diva-portal.org/smash/get/diva2:436670/FULLTEXT01.pdf" rel="nofollow noreferrer">here</a> for more info.</p>
<p><strong>Q2:</strong> Yes! Go to www.whatismyip.com on your mobile telephone, and you will see your assigned IP. However, your connection is likely to be heavily filtered (e.g. incoming traffic on port 80 is likely to be blocked).</p>
<p><strong>Q3:</strong> It has elements of P2P and clever tricks to get around NAT issues - see <a href="http://arxiv.org/ftp/cs/papers/0412/0412017.pdf" rel="nofollow noreferrer">here</a> for more info.</p>
<p><strong>Q4:</strong> I don't know.</p> | 2014-03-03 14:27:49.697000+00:00 | 2017-05-13 12:13:44.023000+00:00 | 2017-05-13 12:13:44.023000+00:00 | null | 18,859,732 | <p>My knowledge about network programming is limited, so, all the comments are more than welcome. Essentially my question boils down to the following question: </p>
<p>Q1. Is there really such a thing as <em>decentralized asynchronous cross-platform peer-to-peer communication</em>? </p>
<p>Let me explain myself.</p>
<ul>
<li><p>If we have two http servers running on computers with actual IP addresses, then clearly the answer is yes, assuming one writes a protocol for the interaction. </p></li>
<li><p>To go one step further, if one of them (or both) is (are) behind a router, then, with port forwarding the communication can still be established. However, here the problems start because if someone wants to run such a server on the background, say in a mobile phone, the app that is relying on this server really works when one is at home (we can not really expect to request port forwarding everywhere we go). </p></li>
<li><p>But even beyond that, </p></li>
<li><p>Q2. Do mobile phones obtain an actual IP address from telecommunication companies when someone is not using Wi-Fi? </p></li>
<li>If this is true, then clearly one can have cross-platform asynchronous peer-to-peer communication at the expense of not using wi-fi by running an http server on a smartphone. (I understand that this is not convenient, but it is certainly doable.)</li>
</ul>
<p>Concluding, the two (perhaps there are more) relevant questions that I can think of are:</p>
<ul>
<li>Q3. <a href="http://wyliemac.blogspot.co.uk/2007/03/how-skype-works.html" rel="noreferrer">How does Skype really work?</a></li>
<li>Q4. How does Viber really work?</li>
</ul>
<p>Based on the answer for Skype, it says: <em>If one of the callee or both of them do not have a public IP, then they send voice traffic to another online Skype node over UDP or TCP.</em>
So, it appears that there is no direct communication in Skype, because they have to use a man-in-the-middle for such a scenario.</p>
<p>Regarding Viber, I could not find a good-thorough answer to this particular question. Do people talk to each other through a Viber centralized server, or, do they establish a direct connection? Of course if they do establish a direct connection, then I really want to know how they manage such a thing since a mobile phone may or may not have a physical address. How is a Viber message routed to my cell phone from a friend of mine even when Viber is not running and I am behind a router?</p>
<p>I guess the answer to Viber is really push notifications, but as far as I can understand, all the variations of push notifications rely on open connections, and then the servers of the applications send the notifications to the clients through such connection(s). So, this approach <em>gives us the feeling that it is</em> asynchronous, but essentially <em>it is not</em>. We are cheating, in the sense that there is a constantly open connection to a server, and moreover, as far as I can understand, the application server has to push the notification through that server. Schematically:</p>
<p>A > Central App Server > Central Server w/ open connection to my cellphone > me</p>
<p>So, this seems to be once again a centralized approach.</p>
<p>Honestly, the only approach that I can think of that is both decentralized and asynchronous (on mobile phones as well) is to run an http server on every platform/device, but this comes at the expense of not using Wi-Fi and assuming that a telecommunication company really assigns a physical IP address to every mobile phone (which I do not know if it is true, do you?).</p>
<p>What about WASTE, darknets, F2Fs, etc? Do they offer advantages in the sense of a more direct asynchronous communication between some interested parties? Are there real-world applications (also including mobile phones) using such approaches for communication.</p>
<p>Really, this is not the actual problem that I would like to work on, but I would like to know what the state of the art is so that I can figure out how I can proceed from there. So, all comments are really more than welcome. If you have references for the state of the art I would like to know about them as well, but a brief description would also be nice.</p>
<p>I appreciate all your time and effort in advance.</p> | 2013-09-17 20:55:27.563000+00:00 | 2017-05-13 12:13:44.023000+00:00 | 2013-09-17 21:29:39.097000+00:00 | asynchronous|push-notification|p2p|httpserver|communication-protocol | ['http://www.diva-portal.org/smash/get/diva2:436670/FULLTEXT01.pdf', 'http://arxiv.org/ftp/cs/papers/0412/0412017.pdf'] | 2 |
15,590,576 | <p>It is no wonder that each of the separate networks yields better performance on the corresponding training set it has been trained on. But these prediction error values are misleading, because it is an <em>ill-posed</em> problem to minimize the error on a training set. Your ultimate goal is to maximize the generalization performance of your model, so it performs well on new data it has not seen during training. Imagine a network which just memorizes each of the characters and thus functions more like a hashtable. Such a network would yield 0 errors on the training data but would perform badly on other data.</p>
<p>One way to measure generalization performance is to extract a fraction (e.g. 10%) of your available data and to use it as a <em>test set</em>. You do not use this test set during training, only for measurement.</p>
<p>Further, you should check the topology of your network. How many hidden layers and how many neurons per hidden layer do you use? Make sure your topology is large enough so it can tackle the complexity of your problem. </p>
<p>Also have a look at other techniques to improve the generalization performance of your network, like <em>L1 regularization</em> (subtracting a small fixed amount from the absolute value of your weights after each training step), <em>L2 regularization</em> (subtracting a small percentage of your weights after each training step) or <a href="http://arxiv.org/pdf/1207.0580.pdf" rel="nofollow">Dropout</a> (randomly turning off hidden units during training and halving the weight vector as soon as training is finished). Further, you should consider more efficient training algorithms like <em>RPROP-</em> or <em>RMSProp</em> rather than plain backpropagation (see <a href="https://www.coursera.org/course/neuralnets" rel="nofollow">Geoffrey Hinton's coursera course on neural networks</a>). You should also consider the MNIST dataset containing written numbers 0-9 for testing your setup (you should easily achieve less than 300 misclassifications on the test set). </p>
<p>To answer your original question on how to omit certain output neurons, you could create your own layer module. Have a look at the SoftmaxLayer, but before applying the softmax activation function, set all output neurons that belong to the classes you want to omit to 0. You need to manipulate the <code>outbuf</code> variable in <code>_forwardImplementation</code>. If you want to use this during training, make sure to set the error signal to zero for those classes before backpropagating the error to the previous layer (by manipulating <code>_backwardImplementation</code>). This can be useful e.g. if you have incomplete data and do not want to throw away each sample containing just one NaN value. But in your case you actually do not need this.</p>
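<p>A rough sketch of what such a module could look like (hedged: the class name and the <code>mask</code> attribute are my own invention for illustration; only the <code>_forwardImplementation</code>/<code>_backwardImplementation</code> hooks come from PyBrain):</p>
<pre><code>import numpy as np
from pybrain.structure.modules.neuronlayer import NeuronLayer

class MaskedSoftmaxLayer(NeuronLayer):
    def __init__(self, dim, mask, name=None):
        NeuronLayer.__init__(self, dim, name)
        self.mask = np.asarray(mask, dtype=float)  # 1 = keep class, 0 = omit

    def _forwardImplementation(self, inbuf, outbuf):
        e = np.exp(inbuf - inbuf.max()) * self.mask  # omitted classes get probability 0
        outbuf[:] = e / e.sum()

    def _backwardImplementation(self, outerr, inerr, outbuf, inbuf):
        inerr[:] = outerr * self.mask  # no error signal for omitted classes
</code></pre>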
<p>There are two different documents - one has only upper case letters and numbers and the second has lower case letters, numbers as well as upper case letters. </p>
<p>Do I have to create two different networks ? Is there any way to disable the upper case nodes when the first document is being processed ? If more document (images of documents) are integrated to the project later, there will be other combinations too. Creating new networks for them all seems tedious. </p>
<p>Thanks in advance </p>
<p>PS: Does anyone know any really (really) good tutorials on pyBrain ? I'm a beginner and the documentation only addresses really simple examples. </p> | 2013-03-21 07:07:15.207000+00:00 | 2013-03-23 18:36:11.850000+00:00 | null | python|neural-network|ocr|pybrain | ['http://arxiv.org/pdf/1207.0580.pdf', 'https://www.coursera.org/course/neuralnets'] | 2 |
49,953,258 | <p>The most common approach is use labeled data in order to train a supervised machine learning algorithm. If you want to follow it, check this tutorial <a href="http://nlpforhackers.io/training-pos-tagger/" rel="nofollow noreferrer">train your own POS tagger</a>, then, you will need a POS tagset and a corpus for create a POS tagger in supervised fashion.</p>
<p>In the other hand you can try some unsupervised methods. I found this semi-supervised method for Sinhala precisely <a href="https://arxiv.org/ftp/arxiv/papers/1407/1407.2989.pdf" rel="nofollow noreferrer">HIDDEN MARKOV MODEL BASED PART OF SPEECH TAGGER FOR SINHALA LANGUAGE </a>. Consider semi-supervised learning is a variation of unsupervised learning, hence dispite you do not need make big efforts to tag an entire corpus, some labels are needed. Finally, there are some completely unsupervised alternatives you can adapt to Sinhala. </p>
<p>Good luck!</p> | 2018-04-21 07:01:00.317000+00:00 | 2018-04-21 07:01:00.317000+00:00 | null | null | 49,952,762 | <p>I'm kind of new to NLP and I'm trying to build a POS tagger for Sinhala language. Are there any specific steps to follow to build the system?</p> | 2018-04-21 05:45:36.037000+00:00 | 2018-04-21 07:01:00.317000+00:00 | null | python|nlp|nltk | ['http://nlpforhackers.io/training-pos-tagger/', 'https://arxiv.org/ftp/arxiv/papers/1407/1407.2989.pdf'] | 2 |
63,298,183 | <p>I have been working on character detection + recognition problem for industrial application. From my experience using only deep CNN and dense layer to predict character class is not the best approach to solve this problem. There are good research papers for scene text recognition problem, one common approach to design character recognition problem is to have ---</p>
<ol>
<li>any deep CNN model like VGG, ResNet or EfficientNet to extract the image feature.</li>
<li>Then add some RNN layers on top of CNN backbone to get character sequence from the extracted features. This would be a great plus if you want to predict variable length of character.</li>
<li>After getting character sequence from RNN layers, the next step is to decode this character sequence. For this you can either use <a href="https://www.cs.toronto.edu/%7Egraves/icml_2006.pdf" rel="nofollow noreferrer">CTC based method</a> or <a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">attention mechanism</a>. Both of these methods have their own pros and cons. CTC based methods are fast but performance is bit poor, on the other hand, attention based models give good results but they are very slow. So the selection of the method depends on your requirement.</li>
</ol>
<p>Below image from very famous text recognition paper <a href="https://arxiv.org/pdf/1507.05717v1.pdf" rel="nofollow noreferrer">CRNN</a> gives general idea about above steps.
[<img src="https://i.stack.imgur.com/oXNzR.png" alt="Steps for text recognition from very famous research paper: CRNN ][2]" /></p>
<p>For training the model, @Hemerson has given good suggestions. Try to build and train this type of model with multiple stages and I am sure you will get better results:)</p>
<p>Best regards!</p> | 2020-08-07 08:33:49.120000+00:00 | 2020-08-07 08:33:49.120000+00:00 | null | null | 62,930,951 | <p>I have a dataset of 180k images for which I try to recognize the characters on the images (License plate recognition). All of these license plates contain seven characters and 35 characters are possible, so the output vector y is of shape <code>(7, 35)</code>. I therefore onehot-encoded every license plate label.</p>
<p>I applied the bottom of the EfficicentNet-B0 model (<a href="https://keras.io/api/applications/efficientnet/#efficientnetb0-function" rel="nofollow noreferrer">https://keras.io/api/applications/efficientnet/#efficientnetb0-function</a>) together with a customized top, which is divided in 7 branches (because of seven characters per license plate). I used the weights of the imagenet and freezed the bottom layers of <code>efnB0_model</code>:</p>
<pre><code>def create_model(input_shape = (224, 224, 3)):
input_img = Input(shape=input_shape)
model = efnB0_model (input_img)
model = GlobalAveragePooling2D(name='avg_pool')(model)
model = Dropout(0.2)(model)
backbone = model
branches = []
for i in range(7):
branches.append(backbone)
branches[i] = Dense(360, name="branch_"+str(i)+"_Dense_16000")(branches[i])
branches[i] = BatchNormalization()(branches[i])
branches[i] = Activation("relu") (branches[i])
branches[i] = Dropout(0.2)(branches[i])
branches[i] = Dense(35, activation = "softmax", name="branch_"+str(i)+"_output")(branches[i])
output = Concatenate(axis=1)(branches)
output = Reshape((7, 35))(output)
model = Model(input_img, output)
return model
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
</code></pre>
<p>For training and validating the model I only use 10,000 training images and 3,000 validation images, due to the big size of my model and the huge amount of data, which would make my training very, very slow.</p>
<p>I use this DataGenerator to feed batches to my model:</p>
<pre><code>class DataGenerator(Sequence):
def __init__(self, x_set, y_set, batch_size):
self.x, self.y = x_set, y_set
self.batch_size = batch_size
def __len__(self):
return math.ceil(len(self.x) / self.batch_size)
def __getitem__(self, idx):
batch_x = self.x[idx*self.batch_size : (idx + 1)*self.batch_size]
batch_x = np.array([resize(imread(file_name), (224, 224)) for file_name in batch_x])
batch_x = batch_x * 1./255
batch_y = self.y[idx*self.batch_size : (idx + 1)*self.batch_size]
batch_y = np.array(batch_y)
return batch_x, batch_y
</code></pre>
<p>I fit the model using this code:</p>
<pre><code>model.fit_generator(generator=training_generator,
validation_data=validation_generator,
steps_per_epoch = num_train_samples // 32,
validation_steps = num_val_samples // 32,
epochs = 10, workers=6, use_multiprocessing=True)
</code></pre>
<p>Now, after several epochs of training, I observed big differences between training accuracy and validation accuracy. I think one reason for that is the small size of the data. Which other factors influence this overfitting in my model? Do you think there is something completely wrong with my code/model? Do you think the model is too big and complex as well, or is it maybe due to the preprocessing of the data?</p>
<p>Note: I already experimented with Data Augmentation and tried the model without Transfer Learning. That leads to poor results on training AND validation data. So, is there anything what I could do additionally?</p>
<p><a href="https://i.stack.imgur.com/bF7QG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bF7QG.png" alt="accuracy" /></a></p> | 2020-07-16 08:48:49.140000+00:00 | 2020-08-07 08:33:49.120000+00:00 | 2020-07-24 13:10:17.443000+00:00 | python|tensorflow|keras|conv-neural-network | ['https://www.cs.toronto.edu/%7Egraves/icml_2006.pdf', 'https://arxiv.org/abs/1706.03762', 'https://arxiv.org/pdf/1507.05717v1.pdf'] | 3 |
62,177,796 | <p>I must say the PyTorch implementations are a bit confusing, as they contain too many mask parameters. But I can shed light on the two mask parameters that you are referring to. Both <code>src_mask</code> and <code>src_key_padding_mask</code> are used in the <code>MultiheadAttention</code> mechanism. According to the documentation of <a href="https://pytorch.org/docs/master/generated/torch.nn.MultiheadAttention.html#torch.nn.MultiheadAttention" rel="noreferrer">MultiheadAttention</a>:</p>
<blockquote>
<p>key_padding_mask – if provided, specified padding elements in the key will be ignored by the attention.</p>
<p>attn_mask – 2D or 3D mask that prevents attention to certain positions.</p>
</blockquote>
<p>As you know from the paper, <a href="https://arxiv.org/abs/1706.03762" rel="noreferrer">Attention is all you need</a>, MultiheadAttention is used in both Encoder and Decoder. However, in Decoder, there are two types of MultiheadAttention. One is called <code>Masked MultiheadAttention</code> and another one is the regular <code>MultiheadAttention</code>. To accommodate both these techniques, PyTorch uses the above mentioned two parameters in their MultiheadAttention implementation.</p>
<p>So, long story short-</p>
<ul>
<li><code>attn_mask</code> and <code>key_padding_mask</code> is used in Encoder's <code>MultiheadAttention</code> and Decoder's <code>Masked MultiheadAttention</code>.</li>
<li><code>memory_mask </code> is used in Decoder's <code>MultiheadAttention</code> mechanism as pointed out <a href="https://github.com/pytorch/pytorch/blob/ec5d579929b2c56418aacaec0874b92937d095a4/torch/nn/modules/transformer.py#L124-L127" rel="noreferrer">here</a>.</li>
</ul>
<p>Looking into the implementation of <a href="https://github.com/pytorch/pytorch/blob/11f1014c05b902d3eef0fe01a7c432f818c2bdfe/torch/nn/functional.py#L3854" rel="noreferrer">MultiheadAttention</a> might help you.</p>
<p>As you can see from <a href="https://github.com/pytorch/pytorch/blob/11f1014c05b902d3eef0fe01a7c432f818c2bdfe/torch/nn/functional.py#L4110" rel="noreferrer">here</a> and <a href="https://github.com/pytorch/pytorch/blob/11f1014c05b902d3eef0fe01a7c432f818c2bdfe/torch/nn/functional.py#L4117" rel="noreferrer">here</a>, first <code>src_mask</code> is used to block specific positions from attending and then <code>key_padding_mask</code> is used to block attending to pad tokens.</p>
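<p>For a quick illustration, here is a minimal sketch (all sizes and values are assumptions, not taken from the question) showing how the two masks are passed to an encoder:</p>
<pre><code>import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=16, nhead=4)
enc = nn.TransformerEncoder(layer, num_layers=2)

seq = torch.rand(5, 2, 16)  # (S, N, E): 5 tokens, batch of 2, embedding 16

# src_mask, shape (S, S): additive float mask, -inf blocks a position
# (here: a causal mask that prevents attending to "future" tokens)
src_mask = torch.triu(torch.full((5, 5), float("-inf")), diagonal=1)

# src_key_padding_mask, shape (N, S): True marks pad tokens to be ignored
src_key_padding_mask = torch.tensor([[False, False, False, True, True],
                                     [False, False, False, False, False]])

out = enc(seq, mask=src_mask, src_key_padding_mask=src_key_padding_mask)
print(out.shape)  # torch.Size([5, 2, 16])
</code></pre>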
<p><strong>Note.</strong> Answer updated based on @michael-jungo's comment.</p> | 2020-06-03 16:26:02.107000+00:00 | 2020-06-03 20:17:32.180000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 62,170,439 | <p>I am having a difficult time understanding transformers. Everything is getting clearer bit by bit, but one thing that makes me scratch my head is
what the difference is between <code>src_mask</code> and <code>src_key_padding_mask</code>, which are passed as arguments to the forward function of both the encoder layer and the decoder layer.</p>
<p><a href="https://pytorch.org/docs/master/_modules/torch/nn/modules/transformer.html#Transformer" rel="noreferrer">https://pytorch.org/docs/master/_modules/torch/nn/modules/transformer.html#Transformer</a></p> | 2020-06-03 10:18:39.683000+00:00 | 2021-07-21 14:59:09.107000+00:00 | null | pytorch|transformer-model | ['https://pytorch.org/docs/master/generated/torch.nn.MultiheadAttention.html#torch.nn.MultiheadAttention', 'https://arxiv.org/abs/1706.03762', 'https://github.com/pytorch/pytorch/blob/ec5d579929b2c56418aacaec0874b92937d095a4/torch/nn/modules/transformer.py#L124-L127', 'https://github.com/pytorch/pytorch/blob/11f1014c05b902d3eef0fe01a7c432f818c2bdfe/torch/nn/functional.py#L3854', 'https://github.com/pytorch/pytorch/blob/11f1014c05b902d3eef0fe01a7c432f818c2bdfe/torch/nn/functional.py#L4110', 'https://github.com/pytorch/pytorch/blob/11f1014c05b902d3eef0fe01a7c432f818c2bdfe/torch/nn/functional.py#L4117'] | 6 |
62,743,728 | <p>TextRank implementations tend to be lightweight and can run fast even with limited memory resources, while the transformer models such as <a href="https://arxiv.org/abs/1810.04805" rel="nofollow noreferrer">BERT</a> tend to be rather large and require lots of memory. While the <a href="https://www.tinyml.org/summit/" rel="nofollow noreferrer">TinyML</a> community has outstanding work on techniques to make DL models run within limited resources, there may be a resource advantage for some use cases.</p>
<p>Some of the TextRank implementations can be "directed" by adding semantic relations, which one can consider as a priori structure to enrich the graph used -- or in some cases means of incorporating <em>human-in-the-loop</em> approaches. Those can provide advantages over supervised learning models which have been trained purely on data. Even so, there are similar efforts for DL in general (e.g., variations on the theme of <em>transfer learning</em>) from which transformers may benefit.</p>
<p>Another potential benefit is that TextRank approaches tend to be more <em>transparent</em>, while transformer models can be challenging in terms of <em>explainability</em>. There are tools that help greatly, but this concern becomes important in the context of <em>model bias and fairness</em>, <em>data ethics</em>, <em>regulatory compliance</em>, and so on.</p>
<p>Based on personal experience, while I'm the lead committer for one of the popular TextRank <a href="https://github.com/DerwenAI/pytextrank" rel="nofollow noreferrer">open source implementations</a>, I only use its <em>extractive summarization</em> features for use cases where a "cheap and fast" solution is needed. Otherwise I'd recommend considering more sophisticated approaches to summarization. For example, I recommend keeping watch on the ongoing research by the author of TextRank, <a href="https://twitter.com/radamihalcea" rel="nofollow noreferrer">Rada Mihalcea</a>, and her graduate students at U Michigan.</p>
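<p>To make the "cheap and fast" point concrete, here is a rough sketch of the TextRank idea for extractive summarization (illustrative only; this is not the pytextrank implementation, and the helper name is made up):</p>
<pre><code>import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def textrank_summary(sentences, k=3):
    # build a sentence-similarity graph and rank its nodes with PageRank
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    scores = nx.pagerank(nx.from_numpy_array(sim))
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]  # keep document order
</code></pre>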
<p>In terms of comparing <strong>"Which text summarization methods work better?"</strong> I'd point toward work on <em>abstractive summarization</em>, particularly recent work by <a href="https://arxiv.org/pdf/1904.08455.pdf" rel="nofollow noreferrer">John Bohannon, et al.</a>, at <a href="https://primer.ai/" rel="nofollow noreferrer">Primer</a>. For excellent examples, check the <a href="https://covid19primer.com/dailybriefing" rel="nofollow noreferrer">"Daily Briefings"</a> of CV19 research which their team generates using natural language understanding, knowledge graph, abstractive summarization, etc. Amy Heineike discusses their approach in <a href="https://thedataexchange.media/machines-for-unlocking-the-deluge-of-covid-19-papers-articles-and-conversations/" rel="nofollow noreferrer">"Machines for unlocking the deluge of COVID-19 papers, articles, and conversations"</a>.</p> | 2020-07-05 16:57:20.820000+00:00 | 2020-07-05 16:57:20.820000+00:00 | null | null | 62,731,497 | <p>What are the advantages of using text rank algorithm for summarization over BERT summarization?
Even though both can be used as extractive summarization methods, is there any particular advantage to TextRank?</p> | 2020-07-04 16:15:14.123000+00:00 | 2021-11-27 00:28:26.257000+00:00 | 2021-11-27 00:28:26.257000+00:00 | python|machine-learning|nlp|bert-language-model | ['https://arxiv.org/abs/1810.04805', 'https://www.tinyml.org/summit/', 'https://github.com/DerwenAI/pytextrank', 'https://twitter.com/radamihalcea', 'https://arxiv.org/pdf/1904.08455.pdf', 'https://primer.ai/', 'https://covid19primer.com/dailybriefing', 'https://thedataexchange.media/machines-for-unlocking-the-deluge-of-covid-19-papers-articles-and-conversations/'] | 8
36,221,266 | <p>This problem appears to be the well-known <a href="https://en.wikipedia.org/wiki/Nurse_scheduling_problem" rel="nofollow">nurse scheduling</a> problem, which is known to be <a href="https://en.wikipedia.org/wiki/NP-hardness" rel="nofollow">NP-hard</a>. Basically, your employees and dates correspond to nurses and shifts, respectively. Your hard constraints are the days of availability of the employees, and the soft constraints are the quality of their work, as you mentioned with picking <em>the best 10</em> in your example.</p>
<p>Unfortunately, I don't think you can come up with an optimal solution off the top of your head. Depending on the sizes of the sets of employees and dates, finding such a solution may be quite tricky, and from your definition of the problem so far it is hard to tell which nurse-scheduling method would best fit your requirements. A quick greedy heuristic, sketched below, can still produce a reasonable (if not optimal) schedule.</p>
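<p>Here is a set-cover-style greedy sketch under assumed data structures (a heuristic, not an optimal solver, since the problem is NP-hard):</p>
<pre><code>def schedule(availability, capacity):
    # availability: dict mapping date -> set of employees available that date
    avail = {d: set(e) for d, e in availability.items()}
    unassigned = set().union(*avail.values())
    plan = {}
    while unassigned and avail:
        # pick the date covering the most still-unassigned employees
        date = max(avail, key=lambda d: len(avail[d] & unassigned))
        chosen = sorted(avail.pop(date) & unassigned)[:capacity]
        if not chosen:
            break  # remaining employees cannot be placed on any open date
        plan[date] = chosen
        unassigned -= set(chosen)
    return plan
</code></pre>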
<p>Nevertheless, here are a few papers you can look into and see how they fit the specific requirements of your instance of the problem.</p>
<p><a href="http://link.springer.com/article/10.1007%2Fs10732-007-9066-7" rel="nofollow">A grasp-knapsack hybrid for a nurse-scheduling problem</a></p>
<p><a href="http://dl.acm.org/citation.cfm?id=2782372" rel="nofollow">A two-phase adaptive variable neighborhood approach for nurse rostering</a></p>
<p><a href="http://arxiv.org/abs/0803.2969?context=cs.CE" rel="nofollow">An Indirect Genetic Algorithm for a Nurse Scheduling Problem</a></p> | 2016-03-25 13:51:20.937000+00:00 | 2016-03-25 14:03:33.033000+00:00 | 2016-03-25 14:03:33.033000+00:00 | null | 36,218,584 | <p>This is my initial condition:</p>
<pre><code>I have a set of employees E1, E2, E3, ...
I have a set of dates for an activity D1, D2, D3, ...
For every employee, I know on which dates he is available to perform the activity
Every employee should perform the activity only once
</code></pre>
<p>I need to find the best configuration that will permit every employee to perform the activity, minimizing the number of dates used, with a maximum number of employees per date. So for example if on a particular date I can have 20 employees, I need to use only the best 10 of them, moving the other 10 to different dates.</p>
<p>I think the solution could be some algorithm related to bipartite graphs, but I can't find a good approach to solve it.</p>
<p>Do you have any idea how to solve it, or whether the problem fits some already known algorithm?</p>
<p>Thanks a lot,
Marco</p> | 2016-03-25 10:54:00.217000+00:00 | 2016-03-25 14:03:33.033000+00:00 | null | php|algorithm|math|bipartite | ['https://en.wikipedia.org/wiki/Nurse_scheduling_problem', 'https://en.wikipedia.org/wiki/NP-hardness', 'http://link.springer.com/article/10.1007%2Fs10732-007-9066-7', 'http://dl.acm.org/citation.cfm?id=2782372', 'http://arxiv.org/abs/0803.2969?context=cs.CE'] | 5 |
15,799,056 | <p>Scikit-learn includes quite a few methods for feature ranking, among them:</p>
<ul>
<li>Univariate feature selection (<a href="http://scikit-learn.org/stable/auto_examples/feature_selection/plot_feature_selection.html" rel="noreferrer">http://scikit-learn.org/stable/auto_examples/feature_selection/plot_feature_selection.html</a>)</li>
<li>Recursive feature elimination (<a href="http://scikit-learn.org/stable/auto_examples/feature_selection/plot_rfe_digits.html" rel="noreferrer">http://scikit-learn.org/stable/auto_examples/feature_selection/plot_rfe_digits.html</a>)</li>
<li>Randomized Logistic Regression/stability selection (<a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RandomizedLogisticRegression.html" rel="noreferrer">http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RandomizedLogisticRegression.html</a>)</li>
</ul>
<p>(see more at <a href="http://scikit-learn.org/stable/modules/feature_selection.html" rel="noreferrer">http://scikit-learn.org/stable/modules/feature_selection.html</a>)</p>
<p>Among those, I definitely recommend giving Randomized Logistic Regression a shot. In my experience, it consistently outperforms other methods and is very stable.
Paper on this: <a href="http://arxiv.org/pdf/0809.2932v2.pdf" rel="noreferrer">http://arxiv.org/pdf/0809.2932v2.pdf</a></p>
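<p>As a starting point, here is a small sketch with assumed data shapes (matching the 63-channel by 116-time-point trials from the question):</p>
<pre><code>import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.linear_model import LogisticRegression

X = np.random.randn(100, 63 * 116)   # 100 trials, flattened channel x time
y = np.random.randint(0, 2, 100)     # two trial types

# univariate selection: one F-score per channel/time-point feature
scores = SelectKBest(f_classif, k=50).fit(X, y).scores_
top = np.argsort(scores)[::-1][:10]
print([(i // 116, i % 116) for i in top])  # (channel, time point) pairs

# recursive feature elimination driven by the classifier itself
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=50,
          step=500).fit(X, y)        # a large step keeps elimination fast
</code></pre>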
<p><strong>Edit:</strong>
I have written a series of blog posts on different feature selection methods and their pros and cons, which are probably useful for answering this question in more detail:</p>
<ul>
<li><a href="http://blog.datadive.net/selecting-good-features-part-i-univariate-selection/" rel="noreferrer">http://blog.datadive.net/selecting-good-features-part-i-univariate-selection/</a></li>
<li><a href="http://blog.datadive.net/selecting-good-features-part-ii-linear-models-and-regularization/" rel="noreferrer">http://blog.datadive.net/selecting-good-features-part-ii-linear-models-and-regularization/</a></li>
<li><a href="http://blog.datadive.net/selecting-good-features-part-iii-random-forests/" rel="noreferrer">http://blog.datadive.net/selecting-good-features-part-iii-random-forests/</a></li>
<li><a href="http://blog.datadive.net/selecting-good-features-part-iv-stability-selection-rfe-and-everything-side-by-side/" rel="noreferrer">http://blog.datadive.net/selecting-good-features-part-iv-stability-selection-rfe-and-everything-side-by-side/</a></li>
</ul> | 2013-04-03 22:05:38.840000+00:00 | 2016-11-23 11:26:09.080000+00:00 | 2016-11-23 11:26:09.080000+00:00 | null | 15,796,247 | <p>I'm trying to classify some EEG data using a logistic regression model (this seems to give the best classification of my data). The data I have is from a multichannel EEG setup so in essence I have a matrix of 63 x 116 x 50 (that is channels x time points x number of trials (there are two trial types of 50), I have reshaped this to a long vector, one for each trial.</p>
<p>What I would like to do, after the classification, is to see which features were the most useful in classifying the trials. How can I do that, and is it possible to test the significance of these features? E.g., to say that the classification was driven mainly by N features, and that these are features x to z.</p>
<p>So is this possible or am I asking the wrong question?</p>
<p>any comments or paper references are much appreciated. </p> | 2013-04-03 19:26:40.243000+00:00 | 2016-11-23 11:26:09.080000+00:00 | null | scikit-learn|feature-selection | ['http://scikit-learn.org/stable/auto_examples/feature_selection/plot_feature_selection.html', 'http://scikit-learn.org/stable/auto_examples/feature_selection/plot_rfe_digits.html', 'http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RandomizedLogisticRegression.html', 'http://scikit-learn.org/stable/modules/feature_selection.html', 'http://arxiv.org/pdf/0809.2932v2.pdf', 'http://blog.datadive.net/selecting-good-features-part-i-univariate-selection/', 'http://blog.datadive.net/selecting-good-features-part-ii-linear-models-and-regularization/', 'http://blog.datadive.net/selecting-good-features-part-iii-random-forests/', 'http://blog.datadive.net/selecting-good-features-part-iv-stability-selection-rfe-and-everything-side-by-side/'] | 9 |
56,252,824 | <p>I studied this in my research. </p>
<p>The first step is to calibrate the 2 sensors to know their extrinsics. There are a few open-source packages you can play with, which I listed below.</p>
<p>The second step is to fuse the data. The simple way is just to apply the calibration transform and publish it via tf. The complicated way is to deploy pipelines such as depth-image-to-LIDAR alignment and depth-map variance estimation and fusion. You can choose the easier way, e.g. a landmark-based EKF estimation, or you can follow CMU Ji Zhang's visual-LIDAR-inertial fusion work for direct 3D-feature-to-LIDAR alignment. The choice is yours.</p>
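<p>As a bare-bones sketch of the simple way (R and t are placeholders; use the rotation and translation from your calibration), bringing the LiDAR points into the camera frame before merging could look like:</p>
<pre><code>import numpy as np

R = np.eye(3)                  # calibrated rotation (placeholder)
t = np.array([0.1, 0.0, 0.2])  # calibrated translation in meters (placeholder)

lidar_pts = np.random.rand(1000, 3)   # N x 3 points from the 2D LiDAR
lidar_in_cam = lidar_pts @ R.T + t    # express LiDAR points in camera frame

stereo_pts = np.random.rand(5000, 3)  # stereo point cloud (camera frame)
fused = np.vstack([stereo_pts, lidar_in_cam])  # variable lengths are fine
</code></pre>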
<p>(1)
<a href="https://i.stack.imgur.com/mGCk1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mGCk1.png" alt="enter image description here"></a>
<a href="http://wiki.ros.org/velo2cam_calibration" rel="nofollow noreferrer">http://wiki.ros.org/velo2cam_calibration</a></p>
<p>Guindel, C., Beltrán, J., Martín, D. and García, F. (2017). Automatic Extrinsic Calibration for Lidar-Stereo Vehicle Sensor Setups. IEEE International Conference on Intelligent Transportation Systems (ITSC), 674–679.</p>
<p>Pros: a pretty accurate and easy-to-use package. Cons: you have to make a rigid calibration board.</p>
<p>(2) <a href="https://github.com/ankitdhall/lidar_camera_calibration" rel="nofollow noreferrer">https://github.com/ankitdhall/lidar_camera_calibration</a>
<a href="https://i.stack.imgur.com/UoAax.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UoAax.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/LQup0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LQup0.png" alt="enter image description here"></a></p>
<p>LiDAR-Camera Calibration using 3D-3D Point correspondences, arXiv 2017</p>
<p>Pros: easy to use, easy to make the hardware. Cons: may not be as accurate.</p>
<p>There were a couple of others I listed in my thesis; I'll go back, check, and update here if I remember.</p> | 2019-05-22 08:49:55.123000+00:00 | 2019-05-22 08:55:37.677000+00:00 | 2019-05-22 08:55:37.677000+00:00 | null | 56,186,116 | <p>I have two separate point clouds (<code>type= sensor_msgs/PointCloud2</code>) from two different sensors, a <strong><em>3D stereo camera</em></strong> and a <strong><em>2D LiDAR</em></strong>. I wanted to know how I can fuse these two point clouds, given that the stereo point cloud is 3D with fixed length while the 2D LiDAR point cloud has variable length.</p>
<p>If someone has worked on this, please help me; your help will be highly appreciated.
Thanks</p> | 2019-05-17 12:09:04.623000+00:00 | 2019-05-22 08:55:37.677000+00:00 | null | point-clouds|stereo-3d|fusion|lidar|stereotype | ['https://i.stack.imgur.com/mGCk1.png', 'http://wiki.ros.org/velo2cam_calibration', 'https://github.com/ankitdhall/lidar_camera_calibration', 'https://i.stack.imgur.com/UoAax.png', 'https://i.stack.imgur.com/LQup0.png'] | 5 |
47,602,722 | <p>I assume that you want to simulate a gas of particles with a potential that has attractive and hard-core repulsive parts.</p>
<p>Molecular dynamics simulation of many-particle systems is a well-developed area of science. The hard part of such a simulation is to sample particles initially in such a way that they don't overlap. This is solved in the classical paper by <a href="http://bayes.wustl.edu/Manual/EquationOfState.pdf" rel="nofollow noreferrer">Metropolis et al</a>, but you can neglect this part anyway. There are also many textbooks and papers explaining how to perform molecular dynamics simulations with hard spheres, like <a href="https://arxiv.org/pdf/1211.6718.pdf" rel="nofollow noreferrer">this</a> or <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0ahUKEwis99Oo7OnXAhVVzGMKHbC6DCMQFggvMAE&url=https%3A%2F%2Fdspace.library.uu.nl%2Fbitstream%2Fhandle%2F1874%2F8984%2Ffrenkel_89_molecular_hard_particles.pdf%3Fsequence%3D1&usg=AOvVaw30eGrhRL6OgtoDLHSb1Qn_" rel="nofollow noreferrer">this</a> or the <a href="http://aip.scitation.org/doi/pdf/10.1063/1.1730376" rel="nofollow noreferrer">1959 paper, where they first used Molecular Dynamics</a>.</p>
<p>If your project is to show nice spheres that collide like in Windows screensaver, just make your hard-core potential softer, decrease your time step and it will work like a charm. If you aim at scientific rigour, read those papers and don't forget to control energy conservation. Soft repulsion can be added, for example, by replacing</p>
<pre><code>double S_tr = 1 /r_ij_square;
</code></pre>
<p>by</p>
<pre><code>double S_tr = 1.0 / r_ij_square - 1.0 / (r_ij_square * r_ij_square);
</code></pre>
<p>The code formatting, structure, and variable naming leave much to be desired. I would recommend trying the <a href="https://clang.llvm.org/docs/ClangFormat.html" rel="nofollow noreferrer">clang-format</a> tool. Structure-wise, there is a lot of duplication of the periodic boundary conditions.</p>
<p>With the repulsive potential you may get a system that experiences a phase transition, an exciting object to study. Good luck! </p>
<p>UPDATE:</p>
<p>Note that the following piece of code is bugged:</p>
<pre><code>double rho, v0, tr, xr, l0, eta, dt, x[N], y[N];
double L = pow(N / rho , 0.5), l0_two = l0 * l0;
// ...
l0 = 1;
</code></pre>
<p>You first compute l0_two = l0 * l0 (never start a variable name with l, since it's really easy to confuse with 1; also, sqrt(x) should be preferred over pow(x, 0.5)) and then assign l0 = 1. Therefore l0_two is computed from an uninitialized variable and probably ends up as 0. This may cause part of your problem.</p>
<p>After fixing that bug you can go for two solutions. The first would be the soft repulsive potential described above if particles are close enough. The second is to do exactly as in the <a href="https://arxiv.org/pdf/1211.6718.pdf" rel="nofollow noreferrer">paper with event-based MD cited above</a>.</p> | 2017-12-01 23:04:06.857000+00:00 | 2017-12-03 20:36:31.320000+00:00 | 2017-12-03 20:36:31.320000+00:00 | null | 47,596,931 | <p>I have simulated a 2D system of particles which attract each other. Strength of attraction depends on distance between particles. The boundary condition and interactions are periodic. Because of attraction, particles go to each other and gather in a circle.</p>
<p>I want to add hard-sphere repulsion so that, whenever two or more particles gather in the same position, they separate along the line linking their centers until they no longer overlap. How can I do this?</p>
<p>Adding hard spheres when there are attractive interactions is harder than the usual case, as there can be situations in which 4 or more particles are in the same position.</p>
<p>This is my code:</p>
<pre><code>#include <iostream>
#include <math.h>
#include <vector>
#include <array>
#include <list>
#include <random>
#include <functional>
#include <fstream>
#include <string>
#include <sstream>
#include <algorithm>
#include <chrono>
#include <set>
using namespace std;
std::minstd_rand gen(std::random_device{}());
std::uniform_real_distribution<double> unirnd(0, 1);
double PBCtwo(double pos, double L)
{
if(pos > L / 2.0)
return pos-L;
else if (pos < -L /2.0)
return L + pos;
else
return pos;
}
// main function
int main()
{ long c = 0, c_prod, c_display; // c_prod and c_display were used below but never declared
int N=4000;
double rho, v0, tr,xr,l0, eta,dt, x[N],y[N],L=pow(N / rho , 0.5),l0_two = l0 * l0;
rho = 2;
v0 =300;eta = 1;dt = 0.0001;l0 = 1; c_prod = 500;c_display = 100;tr = -0.4;
// write initial configuration to the file
ofstream configFile;
configFile.open ("Initial Configuration.txt");
configFile << to_string(N) << "\n";
configFile << to_string(L) << "\n";
for (int i = 0; i < N; i++)
{ x[i] = unirnd(gen) * L;
y[i] = unirnd(gen) * L;
configFile << to_string(x[i]) << "\t" << to_string(y[i]) << "\n";
}
configFile.close();
while (c < c_prod)
{
double dx[N], dy[N];
c++;
for(int i = 0; i < N; i++)
{
dx[i] = 0;
dy[i] = 0;
double S_try = 0.0, S_trx = 0.0;
for(int j = 0; j < N; j++)
{
if (j==i) continue;
double delta_x = x[i]-x[j],
delta_y = y[i]-y[j];
double r_x_ij = PBCtwo(delta_x,L),
r_y_ij = PBCtwo(delta_y,L),
r_ij_square = r_x_ij * r_x_ij + r_y_ij * r_y_ij;
if (r_ij_square > l0_two)
{
double r_ij = sqrt(r_ij_square);
r_x_ij/= r_ij;
r_y_ij/= r_ij;
double S_tr = 1 /r_ij_square;
S_trx += r_x_ij * S_tr;
S_try += r_y_ij * S_tr;
}
}
dx[i] += tr * S_trx;
dy[i] += tr * S_try;
}
for(int i = 0; i < N; i++)
{
x[i]+= dt * dx[i];
y[i]+= dt * dy[i];
if (x[i] > L){
x[i]-= L;}
else if( x[i] < 0) {
x[i]+= L;}
if (y[i] > L){
y[i]-= L;}
else if( y[i] < 0){
y[i]+= L;}
}
}
ofstream finalConfigFile;
finalConfigFile.open ("Final Configuration.txt");
finalConfigFile << to_string(N) << "\n";
finalConfigFile << to_string(L) << "\n";
for (int i = 0; i < N; i++)
{
finalConfigFile << to_string(x[i]) << "\t" << to_string(y[i]) <<"\n";
}
finalConfigFile.close();
return 0;
}
</code></pre> | 2017-12-01 15:49:42.440000+00:00 | 2017-12-03 20:36:31.320000+00:00 | 2017-12-01 18:12:08.650000+00:00 | c++|c++11|simulation | ['http://bayes.wustl.edu/Manual/EquationOfState.pdf', 'https://arxiv.org/pdf/1211.6718.pdf', 'https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0ahUKEwis99Oo7OnXAhVVzGMKHbC6DCMQFggvMAE&url=https%3A%2F%2Fdspace.library.uu.nl%2Fbitstream%2Fhandle%2F1874%2F8984%2Ffrenkel_89_molecular_hard_particles.pdf%3Fsequence%3D1&usg=AOvVaw30eGrhRL6OgtoDLHSb1Qn_', 'http://aip.scitation.org/doi/pdf/10.1063/1.1730376', 'https://clang.llvm.org/docs/ClangFormat.html', 'https://arxiv.org/pdf/1211.6718.pdf'] | 6 |
23,518,798 | <p>Here is a late answer clarifying some concepts in relation to the question:</p>
<h2>Just return value / maximum</h2>
<p>In floating-point, division by zero is not a fatal error like integer division by zero is.
Since you know that <code>value</code> is between <code>0.0</code> and <code>maximum</code>, the only division by zero that can occur is <code>0.0 / 0.0</code>, which is defined as producing <code>NaN</code>. The floating-point value <code>NaN</code> is a perfectly acceptable value for function <code>obtainRatio</code> to return, and is in fact a much better exceptional value to return than <code>0.0</code>, as your proposed version is returning.</p>
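<p>Plain Python floats raise <code>ZeroDivisionError</code> instead, but as a quick illustration, NumPy follows the same IEEE 754 rules as C here:</p>
<pre><code>import numpy as np

with np.errstate(invalid="ignore"):           # silence the RuntimeWarning
    print(np.float32(0.0) / np.float32(0.0))  # nan
print(np.float32(1.5) <= np.float32(1.5))     # True: comparisons are exact
</code></pre>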
<h2>Superstitions about floating-point are only superstitions</h2>
<p><strong>There is nothing approximate about the definition of <code><=</code> between floats</strong>. <code>a <= b</code> does <strong>not</strong> sometimes evaluate to true when <code>a</code> is just a little above <code>b</code>. If <code>a</code> and <code>b</code> are two finite <code>float</code> variables, <code>a <= b</code> evaluate to true exactly when the rational represented by <code>a</code> is less than or equal to the rational represented by <code>b</code>. The only little glitch one may perceive is actually not a glitch but a strict interpretation of the rule above: <code>+0.0 <= -0.0</code> evaluates to true, because โthe rational represented by <code>+0.0</code>โ and โthe rational represented by <code>-0.0</code>โ are both 0.</p>
<p>Similarly, <strong>there is nothing approximate about <code>==</code> between floats</strong>: two finite <code>float</code> variables <code>a</code> and <code>b</code> make <code>a == b</code> true if and only if the rational represented by <code>a</code> and the rational represented by <code>b</code> are the same.</p>
<p>Within a <code>if (f != 0.0)</code> condition, the value of <code>f</code> cannot be a representation of zero, and thus a division by <code>f</code> cannot be a division by zero. The division can still overflow. In the particular case of <code>value / maximum</code>, there cannot be an overflow because your function requires <code>0 โค value โค maximum</code>. And we don't need to wonder whether <code>โค</code> in the precondition means the relation between rationals or the relation between floats, since the two are essentially the same.</p>
<h2>This said</h2>
<p>C99 allows extra precision for floating-point expressions, which has been in the past <strong>wrongly interpreted by compiler makers as a license to make floating-point behavior erratic</strong> (to the point that the program <code>if (m != 0.) { if (m == 0.) printf("oh"); }</code> could be expected to print โohโ in some circumstances).</p>
<p>In reality, a C99 compiler that offers IEEE 754 floating-point and defines <code>FLT_EVAL_METHOD</code> to a nonnegative value cannot change the value of <code>m</code> after it has been tested. The variable <code>m</code> was set to a value representable as float when it was last assigned, and that value either is a representation of 0 or it isn't. Only operations and constants can have excess precision (See the C99 standard, 5.2.4.2.2:8).</p>
<p>In the case of GCC, recent versions do what is proper with <code>-fexcess-precision=standard</code>, implied by <code>-std=c99</code>.</p>
<h2>Further reading</h2>
<ul>
<li><p>David Monniaux's <a href="http://arxiv.org/abs/cs/0701192" rel="noreferrer">description</a> of the sad state of floating-point in C a few years ago (first version published in 2007). David's report does not try to interpret the C99 standard but describes the reality of floating-point computation in C as it was then, with real examples. The situation has much improved since, thanks to improved standard-compliance in compilers that care and thanks to the SSE2 instruction set that renders the entire issue moot.</p></li>
<li><p>The <a href="http://gcc.gnu.org/ml/gcc-patches/2008-11/msg00105.html" rel="noreferrer">2008 mailing list post</a> by Joseph S. Myers describing the then current GCC situation with floats in GCC (bad), how he interpreted the standard (good) and how he was implementing his interpretation in GCC (GOOD).</p></li>
</ul> | 2014-05-07 13:10:44.237000+00:00 | 2014-05-07 14:31:45.317000+00:00 | 2014-05-07 14:31:45.317000+00:00 | null | 23,505,212 | <pre><code>// value will always be in the range of [0.0 - maximum]
float obtainRatio(float value, float maximum){
if(maximum != 0.f){
return value / maximum;
}else{
return 0.f;
}
}
</code></pre>
<p>The range of <code>maximum</code> can be anything, including negative numbers. The range of <code>value</code> can also be anything, though the function is only required to make "sense" when the input is in the range of <code>[0.0 - maximum]</code>. The output should always be in the range of <code>[0.0 - 1.0]</code></p>
<p>I have two questions that I'm wondering about, with this:</p>
<ul>
<li>Is this equality comparison enough to ensure the function never divides by zero?</li>
<li>If maximum is a degenerate value (extremely small or extremely large), is there a chance the function will return a result outside of [0.0 - 1.0] (assuming value is in the right range)?</li>
</ul> | 2014-05-06 21:55:31.003000+00:00 | 2014-05-07 14:31:45.317000+00:00 | null | c++|c|floating-point | ['http://arxiv.org/abs/cs/0701192', 'http://gcc.gnu.org/ml/gcc-patches/2008-11/msg00105.html'] | 2 |
31,136,718 | <p>You'll need n(2n-1) steps (worst case). Here is an intuitive explanation:</p>
<p>Suppose there are 4 sorted groups of size (n/2) each. Let's call them A,B,C,D.
Also suppose that each of these groups is sorted, that the initial input vector is DCBA and that the final sorted vector should be ABCD.</p>
<p>In each single operation we can change the order of 2 groups (e.g. change BA to AB).</p>
<p>sorting DCBA requires the following steps:</p>
<p>DCBA --> CDAB (2 steps) --> CADB (1 step) --> ACBD (2 steps) --> ABCD (1 step)</p>
<p>Total steps: 6 = 4*3/2</p>
<p>Now suppose that you need to sort FEDCBA:</p>
<p>FEDCBA --> EFCDAB (3 steps) --> ECFADB (2 steps) --> CEAFBD (3 steps) --> CAEBFD (2 steps) --> ACBEDF (3 steps) --> ABCDEF (2 steps)</p>
<p>Total steps: 15 = 6*5/2</p>
<p>And so on....</p>
<p>To sort x blocks of size (n/2) each you'll need x(x-1)/2 steps (each step sorts n consecutive elements).</p>
<p>n² elements form 2n blocks of size (n/2), so you'll need (2n)(2n-1)/2 = n(2n-1) steps.</p>
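<p>Here is a short Python sketch of this block bubble sort, with F simulated by the built-in sort (a minimal illustration, not an optimized implementation):</p>
<pre><code>import random

def F(a, i, n):
    a[i:i + n] = sorted(a[i:i + n])   # the black-box n-sorter

def sort_with_F(a, n):
    half = n // 2                     # block size
    x = len(a) // half                # number of blocks (2n for n^2 elements)
    calls = 0
    for p in range(x - 1):            # bubble-sort sweeps over blocks
        for b in range(x - 1 - p):
            F(a, b * half, n)         # jointly sort blocks b and b+1
            calls += 1
    return calls

n = 4
a = random.sample(range(100), n * n)
print(sort_with_F(a, n), a == sorted(a))   # 28 = n(2n-1), True
</code></pre>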
<hr>
<p><strong>Edit:</strong></p>
<p><strong>What if a single n-sorter (F) can sort arbitrary elements (not necessarily consecutive)?</strong></p>
<p>This turns out to be a research-level problem related to <a href="https://en.wikipedia.org/wiki/Sorting_network" rel="nofollow">sorting networks</a>. See also <a href="https://mitpress.mit.edu/sites/default/files/Chapter%2027.pdf" rel="nofollow">here</a>.</p>
<p>Take a look at this <a href="http://arxiv.org/pdf/1407.0961v1.pdf" rel="nofollow">recent paper by Shi, Yan, and Wagh</a>:</p>
<blockquote>
<p>In this work, we propose an n-way merging algorithm, which generalizes
the odd-even merge by using n-sorters as basic building blocks, where
n (≥ 2) is prime. Based on this merging algorithm, we also propose a
sorting algorithm. For N = n^p input values, p + ⌈n/2⌉ × p(p−1)/2
stages are needed. The complexity of the sorting network is evaluated
by the total number of n-sorters. The closed form expression for the
number of sorters is also derived.</p>
</blockquote> | 2015-06-30 11:13:33.780000+00:00 | 2015-06-30 20:10:44.783000+00:00 | 2015-06-30 20:10:44.783000+00:00 | null | 31,134,625 | <p>There is a function F() that can sort any n numbers; now there are n^2 numbers to sort. How many calls to F() do you need, at least? (You can only call F().)
I thought of a method like bubble sort, with about O(n^2) calls. Is there a better way?</p> | 2015-06-30 09:35:10.967000+00:00 | 2015-06-30 20:10:44.783000+00:00 | null | algorithm|sorting | ['https://en.wikipedia.org/wiki/Sorting_network', 'https://mitpress.mit.edu/sites/default/files/Chapter%2027.pdf', 'http://arxiv.org/pdf/1407.0961v1.pdf'] | 3
11,677,185 | <p>I was wondering the same thing a while ago and created a prototypical implementation of such a thunk-duplication function. You can read about the result in my preprint “<a href="http://arxiv.org/abs/1207.2017" rel="noreferrer">dup – Explicit un-sharing in Haskell</a>” and see the code at <a href="http://darcs.nomeata.de/ghc-dup" rel="noreferrer">http://darcs.nomeata.de/ghc-dup</a>. Unfortunately, the paper was accepted neither for the Haskell Symposium nor for the Haskell Implementors Workshop this year.</p>
<p>To my knowledge, there is no real-world-ready solution to the problem; only fragile workarounds such as the unit-parameter trick, which might break due to one compiler optimization or another.</p> | 2012-07-26 20:05:08.353000+00:00 | 2012-07-26 20:22:55.487000+00:00 | 2012-07-26 20:22:55.487000+00:00 | null | 11,675,807 | <p>One of my struggles with lazy evaluation in Haskell is the difficulty of reasoning about memory usage. I think the ability to duplicate a thunk would make this much easier for me. Here's an example.</p>
<p>Let's create a really big list:</p>
<pre><code>let xs = [1..10000000]
</code></pre>
<p>Now, let's create a bad function:</p>
<pre><code>bad = do
print $ foldl1' (+) xs
print $ length xs
</code></pre>
<p>With no optimizations, this eats up a few dozen MB of RAM. The garbage collector can't deallocate xs during the fold because it will be needed for calculating the length later.</p>
<p>Is it possible to reimplement this function something like this:</p>
<pre><code>good = do
(xs1,xs2) <- copyThunk xs
print $ foldl1' (+) xs1
print $ length xs2
</code></pre>
<p>Now, xs1 and xs2 would represent the same value, but also be independent of each other in memory so the garbage collector can deallocate during the fold preventing memory wasting. (I think this would slightly increase the computational cost though?)</p>
<p>Obviously in this trivial example, refactoring the code could easily solve this problem, but it seems like it's not always obvious how to refactor, and sometimes refactoring would greatly reduce code clarity.</p> | 2012-07-26 18:34:12.347000+00:00 | 2013-06-04 20:11:10.457000+00:00 | null | haskell|lazy-evaluation | ['http://arxiv.org/abs/1207.2017', 'http://darcs.nomeata.de/ghc-dup'] | 2
17,749,722 | <p>As others say, the secure RNG can have limited throughput. To mitigate this
you can either stretch that secure randomness by seeding a CPRNG, or you can
try to optimise your usage of the bitstream.</p>
<p>To shuffle a pack of cards, for example, you need only 226 bits, but a naive
algorithm (calling <code>nextInt(n)</code> for each card) will likely use 1600 or 3200
bits, wasting 85% of your entropy and making you seven times more susceptible
to delays.</p>
<p>For this situation I think the <a href="http://mathforum.org/library/drmath/view/65653.html" rel="nofollow">Doctor Jacques
method</a> would be appropriate.</p>
<p>To go with that, here's some performance analysis against progressively more
costly entropy sources (also contains code):</p>
<p><a href="http://arxiv.org/pdf/1012.4290.pdf" rel="nofollow">Bit recycling for scaling random number generators</a></p>
<p>I would lean towards efficient usage rather than stretching, because I think it
would be a lot easier to prove the fairness of an efficient consumer of a
trustworthy entropy stream, than to prove the fairness of any drawing method
with a well-seeded PRNG.</p>
<hr>
<p><strong>EDIT<sup>2</sup>:</strong>
I don't really know Java, but I put this together:</p>
<pre><code>public class MySecureRandom extends java.security.SecureRandom {
    // Invariant: r is uniformly distributed on [0, m).
    private long m = 1;
    private long r = 0;

    @Override
    public final int nextInt(int n) {
        while (true) {
            if (m < 0x80000000L) {
                // Fewer than 31 bits of entropy left: top up with 32 fresh bits.
                m <<= 32;
                r <<= 32;
                r += (long)next(32) - Integer.MIN_VALUE; // map to [0, 2^32)
            }
            long q = m / n;
            if (r < n * q) {
                // Accepted region [0, n*q): emit r mod n and recycle the
                // quotient as leftover entropy instead of discarding it.
                int x = (int)(r % n);
                m = q;
                r /= n;
                return x;
            }
            // Rejected tail: drop the accepted region and retry with the rest.
            m -= n * q;
            r -= n * q;
        }
    }
}
</code></pre>
<p>This does away with the greedy default uniform [0,n-1] generator and replaces it with a modified Doctor Jacques version. Timing a card-shuffle range of values shows almost a 6x speed-up over the <code>SecureRandom.nextInt(n)</code>.</p>
<p>My previous version of this code (only 2x speed-up) assumed that <code>SecureRandom.next(b)</code> was efficient, but it turns out that call was discarding entropy and dragging the whole loop down. This version manages its own chunking.</p> | 2013-07-19 15:20:40.437000+00:00 | 2013-07-20 14:53:52.440000+00:00 | 2013-07-20 14:53:52.440000+00:00 | null | 17,746,768 | <p>Java provides a cryptographically secure random number generator in the class <code>java.security.SecureRandom</code>.</p>
<p>Is it possible to use this number generator if I consider things like seeding and cyclic re-instantiation of the RNG? Or can I use the number generator 'as it is'?</p>
<p>Has anyone experience with this generator?</p>
<p><strong>EDIT</strong>: the requirements are:</p>
<p>a) Be statistically independent</p>
<p>b) Be fairly distributed (within statistically expected bounds) over their range</p>
<p>c) Pass various recognized statistical tests</p>
<p>d) Be cryptographically strong.</p> | 2013-07-19 13:02:34.267000+00:00 | 2015-01-02 09:48:02.213000+00:00 | 2013-07-19 13:22:40.147000+00:00 | java|algorithm|random|prng | ['http://mathforum.org/library/drmath/view/65653.html', 'http://arxiv.org/pdf/1012.4290.pdf'] | 2 |
69,129,741 | <p>In my answer I refer to the original paper <a href="https://arxiv.org/pdf/1706.03762.pdf" rel="nofollow noreferrer">Attention Is All You Need</a> by Vaswani et al.</p>
<ol>
<li>The input is transformed into a matrix. For this purpose, a word-embedding layer is used, which can be thought of as a lookup table.</li>
<li>The encoder creates a representation matrix in one shot.
This is then the input for the decoder. So the matrices <strong>V</strong> and <strong>K</strong> used in the decoder are this representation of the input.</li>
<li>During training the decoder already receives the whole target sequence in advance. To avoid cheating, the words still to be predicted are masked so that the decoder cannot "cheat".</li>
</ol> | 2021-09-10 08:53:57.110000+00:00 | 2021-09-10 08:53:57.110000+00:00 | null | null | 65,579,177 | <p>I am trying to understand the transformer model. Please consider my below example and help me to understand the concept.</p>
<p>Example: English-to-French conversion</p>
<p>My questions:</p>
<ol>
<li><p>Is the input word embedding an English-French pretrained embedding?</p>
</li>
<li><p>In which step of the decoder does the prediction of a French word happen?</p>
</li>
<li><p>Is the output embedding in the decoder just the decoder's output predicted so far? If so, why should I mask the next word, since it is unknown to me and has not yet been passed as output?</p>
</li>
</ol>
<p>Please clarify this doubt for me.</p>
<p>I also referred to these links:</p>
<ul>
<li><a href="https://datascience.stackexchange.com/questions/81727/what-would-be-the-target-input-for-transformer-decoder-during-test-phase">https://datascience.stackexchange.com/questions/81727/what-would-be-the-target-input-for-transformer-decoder-during-test-phase</a></li>
<li><a href="https://datascience.stackexchange.com/questions/51785/what-is-the-first-input-to-the-decoder-in-a-transformer-model">https://datascience.stackexchange.com/questions/51785/what-is-the-first-input-to-the-decoder-in-a-transformer-model</a></li>
</ul> | 2021-01-05 12:43:50.783000+00:00 | 2021-09-10 08:53:57.110000+00:00 | 2021-01-11 19:32:32.097000+00:00 | nlp|sequence|transformer-model|attention-model | ['https://arxiv.org/pdf/1706.03762.pdf'] | 1 |
42,172,069 | <p>If your graph is connected, a way to construct the array xy to pass to gplot is as v(:,[2 3]) where v is the matrix of eigenvectors of the Laplacian matrix, ordered from smallest eigenvalues to largest. So we can do it this way:</p>
<pre><code>L=diag(sum(A))-A;   % graph Laplacian
[v,~]=eig(L);       % eigenvectors, ordered by ascending eigenvalue
xy=v(:,[2 3]);      % use the 2nd and 3rd eigenvectors as coordinates
gplot(A,xy)
</code></pre>
<p>or this way:</p>
<pre><code>L=diag(sum(A))-A;       % graph Laplacian
[v,~]=eigs(L,3,'SM');   % the 3 smallest-magnitude eigenpairs
xy=v(:,[2 1]);          % reorder: eigs returns them in the opposite order
gplot(A,xy)
</code></pre>
<p>The second one should be more efficient, especially if A is big.</p>
<p>This will create a nice plot under normal circumstances. This is not guaranteed to work; in particular, it is not guaranteed to assign different nodes different coordinates. But usually it works pretty nicely.</p>
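<p>For reference, here is a rough Python/NetworkX equivalent (not MATLAB, but <code>spectral_layout</code> uses the same Laplacian-eigenvector idea):</p>
<pre><code>import numpy as np
import networkx as nx
import matplotlib.pyplot as plt

A = np.array([[1,1,0,0,1,0],
              [1,0,1,0,1,0],
              [0,1,0,1,0,0],
              [0,0,1,0,1,1],
              [1,1,0,1,0,0],
              [0,0,0,1,0,0]])   # adjacency matrix from the question below

G = nx.from_numpy_array(A)
pos = nx.spectral_layout(G)    # coordinates from Laplacian eigenvectors
nx.draw(G, pos, with_labels=True)
plt.show()
</code></pre>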
<p>Some theory behind this can be found at <a href="https://arxiv.org/pdf/1311.2492.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1311.2492.pdf</a></p> | 2017-02-11 04:00:20.440000+00:00 | 2017-02-11 04:13:09.473000+00:00 | 2017-02-11 04:13:09.473000+00:00 | null | 27,339,909 | <p>I want to create a plot showing connections between nodes from an adjacency matrix like the one below.</p>
<p><img src="https://i.stack.imgur.com/WyV7Q.png" alt="enter image description here"></p>
<p><a href="http://www.mathworks.com/help/matlab/ref/gplot.html" rel="nofollow noreferrer">gplot</a> seems like the best tool for this. However, in order to use it, I need to pass the coordinate of each node. The problem is that I don't know where the coordinates should be, I was hoping the function would be capable of figuring out a good layout for me.</p>
<p>For example here's my output using the following arbitrary coordinates:</p>
<pre><code> A = [1 1 0 0 1 0;
1 0 1 0 1 0;
0 1 0 1 0 0;
0 0 1 0 1 1;
1 1 0 1 0 0;
0 0 0 1 0 0];
crd = [0 1;
1 1;
2 1;
0 2;
1 2;
2 2];
gplot (A, crd, "o-");
</code></pre>
<p><img src="https://i.stack.imgur.com/JKtD3.png" alt="enter image description here"></p>
<p>Which is hard to read, but if I play around with the coordinates a bit and change them to the following it becomes much more readable.</p>
<pre><code> crd = [0.5 0;
0 1;
0 2;
1 2;
1 1;
1.5 2.5];
</code></pre>
<p><img src="https://i.stack.imgur.com/VRrwp.png" alt="enter image description here"></p>
<p>I don't expect perfectly optimized coordinates or anything, but how can I tell MATLAB to automatically figure out a set of coordinates for me that looks okay, using some sort of <a href="http://cs.brown.edu/~rt/gdhandbook/chapters/force-directed.pdf" rel="nofollow noreferrer">algorithm</a>, so I can graph something that looks like the top picture?</p>
<p>Thanks in advance.</p> | 2014-12-07 05:26:54.540000+00:00 | 2017-02-11 04:13:09.473000+00:00 | 2016-10-25 13:53:40.807000+00:00 | matlab|matrix|octave|graph-theory|adjacency-matrix | ['https://arxiv.org/pdf/1311.2492.pdf'] | 1 |
65,682,293 | <p>As an alternative approach, one may consider <a href="https://arxiv.org/abs/0904.1413" rel="nofollow noreferrer">Engel's algorithm for absorbing Markov chains</a> to compute absorption probabilities. This requires no matrix inversion or linear-system solution, and hence there is no need for rational-number arithmetic.</p> | 2021-01-12 10:40:35.007000+00:00 | 2021-01-12 10:40:35.007000+00:00 | null | null | 40,433,526 | <p>I'm trying to figure out this problem. Hopefully someone can tell me how to complete this. I consulted the following pages, but I was unable to write code in java/python that produces the correct output and passes all test cases. I'd appreciate any and all help.</p>
<p><a href="https://stackoverflow.com/questions/25282408/markov-chain-probability-calculation-python">Markov chain probability calculation - Python</a></p>
<p><a href="https://stackoverflow.com/questions/12011803/calculating-markov-chain-probabilities-with-values-too-large-to-exponentiate">Calculating Markov chain probabilities with values too large to exponentiate</a></p>
<p>Write a function answer(m) that takes an array of arrays of nonnegative ints representing how many times that state has gone to the next state, and returns an array of ints for each terminal state giving the exact probabilities of each terminal state, represented as the numerator for each state, then the denominator for all of them at the end, in simplest form. The matrix is at most 10 by 10. It is guaranteed that no matter which state the ore is in, there is a path from that state to a terminal state. That is, the processing will always eventually end in a stable state. The ore starts in state 0. The denominator will fit within a signed 32-bit integer during the calculation, as long as the fraction is simplified regularly.
For example, consider the matrix m:</p>
<pre><code>[
[0,1,0,0,0,1], # s0, the initial state, goes to s1 and s5 with equal probability
[4,0,0,3,2,0], # s1 can become s0, s3, or s4, but with different probabilities
[0,0,0,0,0,0], # s2 is terminal, and unreachable (never observed in practice)
[0,0,0,0,0,0], # s3 is terminal
[0,0,0,0,0,0], # s4 is terminal
[0,0,0,0,0,0], # s5 is terminal
]
So, we can consider different paths to terminal states, such as:
s0 -> s1 -> s3
s0 -> s1 -> s0 -> s1 -> s0 -> s1 -> s4
s0 -> s1 -> s0 -> s5
Tracing the probabilities of each, we find that
s2 has probability 0
s3 has probability 3/14
s4 has probability 1/7
s5 has probability 9/14
</code></pre> | 2016-11-05 00:25:16.550000+00:00 | 2021-01-12 10:40:35.007000+00:00 | 2017-05-23 10:31:02.950000+00:00 | java|python | ['https://arxiv.org/abs/0904.1413'] | 1 |
49,570,929 | <p>You can still use only one model for that kind of problem.</p>
<p>However, your model will have a 5-fold architecture as follows:</p>
<ol>
<li>A first "core" model taking in input your dog photos</li>
<li>A second model taking in input the output of the core model and predicting the age</li>
<li>A third model taking in input the output of the core model and predicting the gender</li>
<li>A fourth model taking in input the output of the core model and predicting the breed</li>
<li>A fifth model taking in input the output of the core model and predicting the coordinates...</li>
</ol>
<p>While theoretically feasible, this may be a bit cumbersome to put in place in order to have good results for all models.</p>
<p>That is more or less what google brain did <a href="https://arxiv.org/abs/1706.05137" rel="nofollow noreferrer">here</a> (except that their models are more diverse).</p>
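<p>As a concrete sketch of that 5-fold architecture in Keras (the tiny backbone and all layer sizes are assumptions for illustration):</p>
<pre><code>from tensorflow.keras import layers, Input, Model

inp = Input(shape=(224, 224, 3))
x = layers.Conv2D(32, 3, activation="relu")(inp)   # 1. "core" model (placeholder)
x = layers.GlobalAveragePooling2D()(x)

age    = layers.Dense(1, name="age")(x)                           # 2. regression
gender = layers.Dense(1, activation="sigmoid", name="gender")(x)  # 3. binary
breed  = layers.Dense(120, activation="softmax", name="breed")(x) # 4. e.g. 120 breeds
nose   = layers.Dense(2, activation="sigmoid", name="nose_xy")(x) # 5. (x, y) in [0, 1]

model = Model(inp, [age, gender, breed, nose])
model.compile(optimizer="adam",
              loss={"age": "mse", "gender": "binary_crossentropy",
                    "breed": "categorical_crossentropy", "nose_xy": "mse"})
</code></pre>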
<p>PS: For this kind of question, you should rather ask them on stats.stackexchange.com or datascience.stackexchange.com</p> | 2018-03-30 08:21:59.330000+00:00 | 2018-03-30 08:21:59.330000+00:00 | null | null | 49,569,472 | <p>Suppose I have a dataset of photos of dogs. There is only a single dog per photograph. What does the output vector for a single sample look like, and how do I train the network if I want it to:</p>
<ul>
<li><p>predict the age (in days) of the dog in the picture</p></li>
<li><p>classify the dog's gender</p></li>
<li><p>classify the dog's breed</p></li>
<li><p>predict (x,y) coordinates of dog's nose within the picture (where each coordinate is a value between 0-1, indicating percentage of the distance from top left corner of the input image)</p></li>
</ul> | 2018-03-30 06:25:02.007000+00:00 | 2018-03-30 08:21:59.330000+00:00 | null | tensorflow|machine-learning|deep-learning|classification|regression | ['https://arxiv.org/abs/1706.05137'] | 1 |
58,160,260 | <p>You can use <a href="https://docs.python.org/3/library/gzip.html#gzip.decompress" rel="nofollow noreferrer">gzip.decompress</a>:</p>
<pre><code>import tarfile, gzip
tar = tarfile.open("arXiv_src_9107_001a.tar")
for member in tar.getmembers():
    # skip directory entries (more robust than the original counter)
    if member.isdir():
        continue
f=tar.extractfile(member)
print(member)
content=f.read()
    expanded = gzip.decompress(content)  # returns the decompressed bytes
    # do whatever with expanded here; for text, use e.g.
    # expanded.decode("utf-8", errors="replace")
tar.close()
</code></pre> | 2019-09-30 00:33:34.240000+00:00 | 2019-09-30 00:33:34.240000+00:00 | null | null | 58,127,520 | <p>I'm trying to extract a compressed file from a tar archive using Python 3.6.5. </p>
<p>I'm trying to extract files from a tar archive that contains compressed gz files. I've followed the advice in <a href="https://stackoverflow.com/a/2018576/4021436">this</a> Stack Overflow answer:</p>
<pre><code>import tarfile,os
import sys
tar = tarfile.open("arXiv_src_9107_001a.tar")
n = 0
for member in tar.getmembers():
#Skip directory labeled at the top
if(n==0):
n=1
continue
f=tar.extractfile(member)
print(member)
content=f.read()
print("{} has {} newlines".format(member, content.count("\n")))
print("{} has {} spaces".format(member, content.count(" ")))
print("{} has {} characters".format(member, len(content)))
#sys.exit()
tar.close()
</code></pre>
<p>When I print out <code>vars(tar)</code> in <code>pdb</code></p>
<pre><code>(Pdb) p vars(tar)
{'mode': 'r', '_mode': 'rb', '_extfileobj': False, 'name': '/Users/user/Downloads/arXiv_src_9107_001a.tar', 'fileobj': <_io.BufferedReader name='arXiv_src_9107_001a.tar'>, 'errors': 'surrogateescape', 'pax_headers': {}, 'copybufsize': None, 'closed': False, 'members': [<TarInfo '9107' at 0x11004b048>, <TarInfo '9107/hep-lat9107001.gz' at 0x11004b110>, <TarInfo '9107/hep-lat9107002.gz' at 0x11004b1d8>, <TarInfo '9107/qc_01.gz' at 0x11004b2a0>, <TarInfo '9107/qc_02.gz' at 0x11004b368>, <TarInfo '9107/qi_01.gz' at 0x11004b430>, <TarInfo '9107/qs_01.gz' at 0x11004b4f8>, <TarInfo '9107/quant_only_01.gz' at 0x11004b5c0>], '_loaded': True, 'offset': 69120, 'inodes': {}, 'firstmember': None}
</code></pre>
<p>If I print out the <code>content</code> variable, I get a bytes object. E.g. : </p>
<pre><code>b'\x1f\x8b\x08\x08\xe5C\x12M\x00\x03hep-lat9107001\x00\xed}{w\xdbF\x92\xef\xfc\x1b|\x8a\xbe\xf72\x13i#R\x00\x08\xf0\x91\x8c\xf7\x1c?c\xcf\xc6\x8f\xb5\x9d\xc9\xeeZN\x06"!\tc\x92\xe0\x10\xa0d\x85W\xf9\xec\xf...
</code></pre>
<p><strong>Question</strong></p>
<p>In the case where a tar archive is composed of individually compressed files, how do I read/decompress those gz files into usable human language strings?</p> | 2019-09-27 03:38:28.050000+00:00 | 2019-09-30 00:33:34.240000+00:00 | null | python-3.x|gzip|tar | ['https://docs.python.org/3/library/gzip.html#gzip.decompress'] | 1 |
58,776,265 | <p>Here are some of the things that I would try next.
(I am also an amateur. Please correct me if I am wrong)</p>
<ol>
<li>Try to extract <a href="https://machinelearningmastery.com/what-are-word-embeddings/" rel="nofollow noreferrer">vector representations</a> from the text. Try out word2vec, GloVe, FastText, ELMo. Extract the vector representation and then feed it into the network. You could also create an <a href="https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/" rel="nofollow noreferrer">embedding layer</a> to help with that (see the sketch after this list). This <a href="http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/" rel="nofollow noreferrer">blog</a> has more information.</li>
<li>256 Recurrent units might be too much. I think that one should never start with a huge network. Start small. See if you are underfitting. If yes, then go larger.</li>
<li>Switch out the optimizer. I find that Adam tends to overfit. I had better success with rmsprop and Adadelta.</li>
<li>Perhaps, <a href="https://duckduckgo.com/?q=attention%20is%20all%20you%20need&t=osx&ia=web" rel="nofollow noreferrer">attention is all you need?</a> Transformers have recently made massive contributions to NLP. Perhaps you could try <a href="https://github.com/philipperemy/keras-attention-mechanism" rel="nofollow noreferrer">implementing simple soft attention mechanism</a> in your network. Here is a <a href="https://www.youtube.com/watch?v=SysgYptB198" rel="nofollow noreferrer">nice video series</a> if you are not already familiar. An <a href="https://distill.pub/2016/augmented-rnns/#attentional-interfaces" rel="nofollow noreferrer">interactive research paper</a> on it.</li>
<li>CNN's are also <a href="https://arxiv.org/abs/1408.5882" rel="nofollow noreferrer">pretty dope</a> in NLP applications. Although they intuitively don't make any sense for text data (to most people). Perhaps you could try leveraging them, stack it, etc. Play around. Here is a <a href="https://arxiv.org/pdf/1510.03820.pdf" rel="nofollow noreferrer">guide</a> on how to use it for sentence classification. I know, your domain is different. But I think the intuition carries over. :)</li>
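</ol>
<p>For point 1, here is a minimal sketch of an embedding-based setup for a character-level model like yours (all sizes are assumptions to tune; feed it the integer sequences directly, without the reshape-and-scale step):</p>
<pre><code>from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

vocab_length = 60        # assumed: use your real character-vocabulary size
sequence_length = 30

model = Sequential([
    Embedding(vocab_length, 32, input_length=sequence_length),
    LSTM(64, recurrent_dropout=0.3),           # start small (point 2)
    Dense(vocab_length, activation="softmax"),
])
model.compile(loss="categorical_crossentropy",
              optimizer="rmsprop",             # point 3
              metrics=["accuracy"])
</code></pre>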
| 2019-11-09 03:55:12.203000+00:00 | 2019-11-09 03:55:12.203000+00:00 | null | null | 58,764,687 | <p>I'm trying to do text prediction using recurrent neural networks (LSTM) with a dataset from books. No matter how much I change the layer sizes or other parameters, it always overfits.</p>
<p>I've tried changing the number of layers, the number of units in the LSTM layer, regularization, normalization, batch_size, shuffling the training/validation data, and switching to a bigger dataset. For now I am trying with a ~140 KB txt book. I have also tried 200 KB, 1 MB, and 5 MB.</p>
<p>Creating training/validation data:</p>
<pre><code># imports added for completeness (assumed: Keras plus scikit-learn utilities)
import numpy as np
from keras.utils import np_utils
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle

sequence_length = 30
x_data = []
y_data = []
for i in range(0, len(text) - sequence_length, 1):
x_sequence = text[i:i + sequence_length]
y_label = text[i + sequence_length]
x_data.append([char2idx[char] for char in x_sequence])
y_data.append(char2idx[y_label])
X = np.reshape(x_data, (len(x_data), sequence_length, 1))  # was: undefined data_length
X = X/float(vocab_length)
y = np_utils.to_categorical(y_data)
# Split into training and testing set, shuffle data
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.2, shuffle=False)
# Shuffle testing set
X_test, y_test = shuffle(X_test, y_test, random_state=0)
</code></pre>
<p>Creating model:</p>
<pre><code># imports added for completeness
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2]), return_sequences=True, recurrent_initializer='glorot_uniform', recurrent_dropout=0.3))
model.add(LSTM(256, return_sequences=True, recurrent_initializer='glorot_uniform', recurrent_dropout=0.3))
model.add(LSTM(256, recurrent_initializer='glorot_uniform', recurrent_dropout=0.3))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation='softmax'))
</code></pre>
<p><a href="https://i.stack.imgur.com/RC3ME.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RC3ME.png" alt="enter image description here"></a>
Compile model:</p>
<pre><code>model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
</code></pre>
<p>I get the following characteristics:
<a href="https://i.stack.imgur.com/rpk0V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rpk0V.png" alt="enter image description here"></a></p>
<p>I don't know what to do about this overfitting; I've been searching the internet and trying many things, but none of them seem to work.</p>
<p>How could I get better results? These predictions don't seem good right now.</p> | 2019-11-08 10:22:48.283000+00:00 | 2019-11-09 03:55:12.203000+00:00 | 2019-11-08 11:17:42.477000+00:00 | python|tensorflow|machine-learning|keras|neural-network | ['https://machinelearningmastery.com/what-are-word-embeddings/', 'https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/', 'http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/', 'https://duckduckgo.com/?q=attention%20is%20all%20you%20need&t=osx&ia=web', 'https://github.com/philipperemy/keras-attention-mechanism', 'https://www.youtube.com/watch?v=SysgYptB198', 'https://distill.pub/2016/augmented-rnns/#attentional-interfaces', 'https://arxiv.org/abs/1408.5882', 'https://arxiv.org/pdf/1510.03820.pdf'] | 9
62,368,492 | <p><a href="https://arxiv.org/pdf/1211.0906v2.pdf" rel="nofollow noreferrer">This paper</a> proposes a kernel for Hamming distances measured between categorical features. It's simply a matter of replacing the Euclidean distance in the standard exponential kernel with Hamming.</p>
<p><a href="https://i.stack.imgur.com/ES3Jh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ES3Jh.png" alt="enter image description here"></a></p>
<p>It is also possible to combine Euclidean and Hamming distances into one kernel, which would be good for datasets with a mix of continuous and discrete variables.</p>
<p><a href="https://i.stack.imgur.com/35fCk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/35fCk.png" alt="enter image description here"></a></p>
<p>The good news is that they also prove that this kernel is indeed positive definite (on page 14).</p> | 2020-06-14 04:57:11.653000+00:00 | 2020-06-14 04:57:11.653000+00:00 | null | null | 5,551,172 | <p>I have a standard {-1,+1} machine learning problem. The main difference is that the data points are binary strings, so their proximity is measured by Hamming distance.
Can SVM be applied in this case? Which SVM library is better suited for this task?</p> | 2011-04-05 11:37:13.110000+00:00 | 2020-06-14 04:57:11.653000+00:00 | null | machine-learning|svm | ['https://arxiv.org/pdf/1211.0906v2.pdf', 'https://i.stack.imgur.com/ES3Jh.png', 'https://i.stack.imgur.com/35fCk.png'] | 3
58,079,028 | <p>I have the same issue, but I don't really have time to write a library myself unfortunately. There are several potential options that I am considering: </p>
<ol>
<li><p>Stick with TF1.X until someone creates a library </p></li>
<li><p>Switch to using lightfm to continue using WALS </p></li>
<li><p>Switch to neural collaborative filtering using embedding layers with keras and a dot product layer. See this paper <a href="https://arxiv.org/abs/1708.05031" rel="nofollow noreferrer">https://arxiv.org/abs/1708.05031</a>, and this code implementation:</p></li>
</ol>
<pre><code>from tensorflow.keras.layers import Input, Embedding, Flatten, Dot, Dense
from tensorflow.keras.models import Model
#import tensorflow.distribute
def get_compiled_model(n_users, n_items, embedding_dims=20):
# Product embedding
prod_input = Input(shape=[1], name="Item-Input")
prod_embedding = Embedding(n_items+1, embedding_dims, name="Item-Embedding")(prod_input)
prod_vec = Flatten(name="Flatten-Product")(prod_embedding)
# User embedding
user_input = Input(shape=[1], name="User-Input")
user_embedding = Embedding(n_users+1, embedding_dims, name="User-Embedding")(user_input)
user_vec = Flatten(name="Flatten-Users")(user_embedding)
    # The output is the dot product of the two embeddings, i.e. the predicted interaction score
dot_product = Dot(name="Dot-Product", axes=1)([prod_vec, user_vec])
# compile - uncomment these two lines to make training distributed
# dist_strat = distribute.Strategy()
# with dist_strat.scope():
model = Model(inputs = [user_input, prod_input], outputs = dot_product)
model.compile(
optimizer='adam',
loss='mean_squared_error'
)
return model
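# Example usage (hypothetical data; names and shapes are illustrative only):
# model = get_compiled_model(n_users=1000, n_items=500)
# model.fit([user_ids, item_ids], ratings, epochs=5, batch_size=256)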
</code></pre> | 2019-09-24 11:13:09.620000+00:00 | 2019-09-24 11:13:09.620000+00:00 | null | null | 57,902,387 | <p>I am using WALS method in order to perform matrix factorization. Initially in tensorflow 1.13 I can import factorization_ops using</p>
<pre><code>from tensorflow.contrib.factorization.python.ops import factorization_ops
</code></pre>
<p>As described in the <a href="https://github.com/GoogleCloudPlatform/tensorflow-recommendation-wals/blob/master/wals_ml_engine/trainer/wals.py" rel="nofollow noreferrer">documentation</a> </p>
<p>Wals model can be called from factorization_ops by using</p>
<pre><code>factorization_ops.WALSModel
</code></pre>
<p>Using same command in tensorflow 2.0 giving me following error</p>
<p><strong>ModuleNotFoundError: No module named 'tensorflow.contrib.factorization</strong></p>
<p>Going through the <a href="https://github.com/tensorflow/tensorflow/issues/31350" rel="nofollow noreferrer">issue</a>, there appears to be no way to use WALSModel in TensorFlow 2.0+. </p>
<p>Also it has been mentioned <a href="https://github.com/tensorflow/tensorflow/releases" rel="nofollow noreferrer">here</a> in tensorflow release updates that tf.contrib has been deprecated, and functionality has been either migrated to the core TensorFlow API, to an ecosystem project such as tensorflow/addons or tensorflow/io, or removed entirely.</p>
<p>How can I use WALS model in tensorflow 2.0 (Currently I am using 2.0.0-rc0 on windows machine) ? Is WALSModel has been removed or I am missing out some information ?</p> | 2019-09-12 08:09:34.017000+00:00 | 2019-12-06 15:24:36.577000+00:00 | 2019-09-24 15:51:42.510000+00:00 | python|tensorflow|tensorflow2.0|matrix-factorization | ['https://arxiv.org/abs/1708.05031'] | 1 |
29,849,947 | <p>There is an algorithm with expected running time O(n^{3/2}).</p>
<p>If you generate a uniform random digraph with m vertices such that each vertex has k labelled out-arcs (a k-out digraph), then with high probability the largest SCC (strongly connected component) in this digraph is of size around c_k m, where c_k is a constant depending on k. Actually, there is about 1/\sqrt{m} probability that the size of this SCC is exactly c_k m (rounded to an integer). </p>
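<p>In code, the resulting rejection-sampling procedure (spelled out in the next paragraph) might look roughly like this; it assumes networkx is available, and <code>C_K</code> is only a placeholder for the true constant c_k:</p>
<pre><code>import random
import networkx as nx

C_K = 0.5  # placeholder; the real c_k depends on k (here k = 2)

def random_two_out_scc(n):
    m = max(n, int(round(n / C_K)))      # size of the candidate 2-out digraph
    while True:
        g = nx.MultiDiGraph()
        g.add_nodes_from(range(m))
        for v in range(m):               # each vertex gets 2 labelled out-arcs
            for label in (0, 1):
                g.add_edge(v, random.randrange(m), label=label)
        scc = max(nx.strongly_connected_components(g), key=len)
        if len(scc) == n:                # expect about sqrt(n) retries
            return g.subgraph(scc).copy()
</code></pre>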
<p>So you can generate a uniform random 2-out digraph of size n/c_k, and check the size of the largest SCC. If its size is not exactly n, just try again until success. The expected number of trials needed is \sqrt{n}. And generating each digraph should be done in O(n) time. So in total the algorithm has expected running time O(n^{3/2}). See <a href="http://arxiv.org/pdf/1504.06238v1.pdf" rel="nofollow">this paper</a> for more details.</p> | 2015-04-24 14:20:53.957000+00:00 | 2015-04-24 14:20:53.957000+00:00 | null | null | 5,525,643 | <p>The DFA must have the following four properties:</p>
<ul>
<li><p>The DFA has N nodes</p></li>
<li><p>Each node has 2 outgoing transitions.</p></li>
<li><p>Each node is reachable from every other node.</p></li>
<li><p>The DFA is chosen with perfectly uniform randomness from all possibilities</p></li>
</ul>
<p>This is what I have so far:</p>
<ol>
<li>Start with a collection of N nodes.</li>
<li>Choose a node that has not already been chosen.</li>
<li>Connect its output to 2 other randomly selected nodes</li>
<li>Label one transition 1 and the other transition 0.</li>
<li>Go to 2, unless all nodes have been chosen.</li>
<li>Determine if there is a node with no incoming connections.</li>
<li>If so, steal an incoming connection from a node with more than 1 incoming connection.</li>
<li>Go to 6, unless there are no nodes with no incoming connections</li>
</ol>
<p>However, this is algorithm is not correct. Consider the graph where node 1 has its two connections going to node 2 (and vice versa), while node 3 has its two connection going to node 4 (and vice versa). That is something like:</p>
<p>1 <==> 2</p>
<p>3 <==> 4</p>
<p>Where, by <==> I mean two outgoing connections both ways (so a total of 4 connections). This seems to form 2 cliques, which means that not every state is reachable from every other state.</p>
<p>Does anyone know how to complete the algorithm? Or, does anyone know another algorithm? I seem to vaguely recall that a binary tree can be used to construct this, but I am not sure about that.</p> | 2011-04-02 20:24:43.723000+00:00 | 2015-04-24 14:20:53.957000+00:00 | 2011-04-02 20:49:31.717000+00:00 | algorithm|random|finite-automata|dfa|state-machine | ['http://arxiv.org/pdf/1504.06238v1.pdf'] | 1 |
69,857,412 | <p>When the Facebook engineers have been asked similar questions in their Github repository issues, they've usually pointed to <a href="https://github.com/facebookresearch/fastText/issues/807#issuecomment-495552561" rel="nofollow noreferrer">one</a> or <a href="https://github.com/facebookresearch/fastText/issues/401#issuecomment-368935887" rel="nofollow noreferrer">the</a> other of two shell scripts in their public code (& especially the 'normalize_text' functions within).</p>
<p><a href="https://github.com/facebookresearch/fastText/blob/master/tests/fetch_test_data.sh#L20" rel="nofollow noreferrer">https://github.com/facebookresearch/fastText/blob/master/tests/fetch_test_data.sh#L20</a></p>
<pre><code>normalize_text() {
tr '[:upper:]' '[:lower:]' | sed -e 's/^/__label__/g' | \
sed -e "s/'/ ' /g" -e 's/"//g' -e 's/\./ \. /g' -e 's/<br \/>/ /g' \
-e 's/,/ , /g' -e 's/(/ ( /g' -e 's/)/ ) /g' -e 's/\!/ \! /g' \
-e 's/\?/ \? /g' -e 's/\;/ /g' -e 's/\:/ /g' | tr -s " " | myshuf
}
</code></pre>
<p><a href="https://github.com/facebookresearch/fastText/blob/master/get-wikimedia.sh#L12" rel="nofollow noreferrer">https://github.com/facebookresearch/fastText/blob/master/get-wikimedia.sh#L12</a></p>
<pre><code>normalize_text() {
sed -e "s/โ/'/g" -e "s/โฒ/'/g" -e "s/''/ /g" -e "s/'/ ' /g" -e "s/โ/\"/g" -e "s/โ/\"/g" \
-e 's/"/ " /g' -e 's/\./ \. /g' -e 's/<br \/>/ /g' -e 's/, / , /g' -e 's/(/ ( /g' -e 's/)/ ) /g' -e 's/\!/ \! /g' \
-e 's/\?/ \? /g' -e 's/\;/ /g' -e 's/\:/ /g' -e 's/-/ - /g' -e 's/=/ /g' -e 's/=/ /g' -e 's/*/ /g' -e 's/|/ /g' \
-e 's/«/ /g' | tr 0-9 " "
}
</code></pre>
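<p>For convenience, here is a rough (unofficial) Python approximation of the second script; it is not guaranteed to match the sed rules byte-for-byte, and it omits the shuffling and HTML-tag handling:</p>
<pre><code>import re

def normalize_text(s):
    # approximate port of get-wikimedia.sh's normalize_text (unofficial)
    s = s.replace("’", "'").replace("′", "'").replace("''", " ")
    s = s.replace("'", " ' ")
    s = s.replace("“", '"').replace("”", '"')
    for ch in '".()!?-':                # characters the script pads with spaces
        s = s.replace(ch, " " + ch + " ")
    s = s.replace(", ", " , ")
    s = re.sub(r"[;:=*|«]", " ", s)     # characters the script deletes
    s = re.sub(r"[0-9]", " ", s)        # tr 0-9 " "
    return re.sub(r"\s+", " ", s).strip()
</code></pre>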
<p>They've also referenced <a href="https://fasttext.cc/docs/en/crawl-vectors.html#tokenization" rel="nofollow noreferrer">this page's section on 'Tokenization'</a> (which names some libraries), and the <a href="https://arxiv.org/abs/1802.06893" rel="nofollow noreferrer">academic paper which describes the earlier work making individual language vectors</a>.</p>
<p>None of these are guaranteed to exactly match what was used to create their pretrained classification models, & it's a bit frustrating that each release of such models doesn't contain the <em>exact</em> code to reproduce. But, these sources seem to be as much detail as is available, without getting direct answers/help from the team that created them.</p> | 2021-11-05 17:53:52.093000+00:00 | 2021-11-05 17:53:52.093000+00:00 | null | null | 69,852,169 | <p>I want to use pretreained fastext model for language detection: <a href="https://fasttext.cc/docs/en/language-identification.html" rel="nofollow noreferrer">https://fasttext.cc/docs/en/language-identification.html</a> . Where can I find <strong>the exact Python code</strong> for text preprocessing used for training this specific model? I am not interested in general answers about how should we prepare text for using models - I ma looking for identical transformations as those used for training.</p> | 2021-11-05 10:56:50.507000+00:00 | 2021-11-05 17:53:52.093000+00:00 | 2021-11-05 11:51:14.690000+00:00 | nlp|text-processing|text-classification|fasttext | ['https://github.com/facebookresearch/fastText/issues/807#issuecomment-495552561', 'https://github.com/facebookresearch/fastText/issues/401#issuecomment-368935887', 'https://github.com/facebookresearch/fastText/blob/master/tests/fetch_test_data.sh#L20', 'https://github.com/facebookresearch/fastText/blob/master/get-wikimedia.sh#L12', 'https://fasttext.cc/docs/en/crawl-vectors.html#tokenization', 'https://arxiv.org/abs/1802.06893'] | 6 |
61,157,245 | <blockquote>
<p>We have a performance issue where an expression is first translated to a Boost.uBLAS vector and then evaluated. It makes a difference if the vector creation could be skipped and that the vector_expression is used directly. </p>
</blockquote>
<p>uBLAS provides expression templates (ET) with which temporary object creation is avoided. This is very beneficial if you have multiple elementwise operations in one expression. In such cases, ET has a positive impact on the runtime performance - although I have to say that it depends on the vector and matrix sizes. Have a look at <a href="https://arxiv.org/pdf/1104.1729.pdf" rel="nofollow noreferrer">this paper</a>.</p>
<blockquote>
<p>I couldn't find in the Boost.uBLAS documentation if this is allowed.</p>
</blockquote>
<p>It is allowed - otherwise the copy constructors would have been protected or private, see the class <a href="https://github.com/boostorg/ublas/blob/1e8689bb0b316340ef93c2e4c9bfd14ad626d391/include/boost/numeric/ublas/matrix_expression.hpp#L49" rel="nofollow noreferrer">matrix_expression</a> description.</p>
<blockquote>
<p>In fact the examples in the documentation are with the container
classes and not with expressions directly. </p>
</blockquote>
<p>True. We need to provide more examples.
However, the ublas <a href="https://www.boost.org/doc/libs/1_72_0/libs/numeric/ublas/doc/vector_expression.html#vector_operations" rel="nofollow noreferrer">documentation</a> shows that the binary vector operation creates an <code>expression_type</code> which holds a reference to the operation and operands. You can use such an instance.</p>
<blockquote>
<p>Boost.uBLAS uses expression templates which in theory should make the
case work. The norm_2 function accepts a vector_expression as argument
which could be a second clue.</p>
</blockquote>
<p>Yes, have a look at the generated assembler in <a href="https://godbolt.org/z/NcE6-8" rel="nofollow noreferrer">compiler explorer</a>. No memory is allocated inside the <code>for</code> loop. So your approach is valid.</p>
<p>You can also write</p>
<pre class="lang-cpp prettyprint-override"><code>// norm_2 uses a temporary expression instance again without memory allocation
d += ublas::norm_2(row1 - row2);
</code></pre> | 2020-04-11 13:03:15.340000+00:00 | 2020-04-13 10:42:23.470000+00:00 | 2020-04-13 10:42:23.470000+00:00 | null | 61,136,792 | <p>We have a performance issue where an expression is first translated to a Boost.uBLAS vector and then evaluated. It makes a difference if the vector creation could be skipped and that the vector_expression is used directly. I couldn't find in the Boost.uBLAS documentation if this is allowed. In fact the examples in the documentation are with the container classes and not with expressions directly. It only mentions that Boost.uBLAS uses expression templates which in theory should make the case work. The norm_2 function accepts a vector_expression as argument which could be a second clue.</p>
<p>A simplified case is like this where the norm between rows of a matrix is calculated:</p>
<pre><code>#include <boost/numeric/ublas/assignment.hpp>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/matrix_proxy.hpp>
#include <boost/numeric/ublas/vector.hpp>
int main()
{
namespace ublas = boost::numeric::ublas;
ublas::matrix<double> m{3, 4};
m <<= 0, 1, 2, 3,
3, 4, 5, 6,
6, 7, 8, 9;
double d = 0;
for (size_t n = 1; n != m.size1(); ++n)
{
const ublas::matrix_row<ublas::matrix<double>> row1{m, 0};
const ublas::matrix_row<ublas::matrix<double>> row2{m, n};
#if 1
const auto e = row1 - row2; // creates an expression
d += ublas::norm_2(e); // uses the expression
#else
const ublas::vector<double> v = row1 - row2; // creates a vector (performance issue)
d += ublas::norm_2(v);
#endif
}
return 0;
}
</code></pre>
<p>Does anyone know if this is allowed?</p> | 2020-04-10 08:41:31.823000+00:00 | 2020-04-13 10:42:23.470000+00:00 | 2020-04-10 11:15:07.453000+00:00 | c++|boost-ublas | ['https://arxiv.org/pdf/1104.1729.pdf', 'https://github.com/boostorg/ublas/blob/1e8689bb0b316340ef93c2e4c9bfd14ad626d391/include/boost/numeric/ublas/matrix_expression.hpp#L49', 'https://www.boost.org/doc/libs/1_72_0/libs/numeric/ublas/doc/vector_expression.html#vector_operations', 'https://godbolt.org/z/NcE6-8'] | 4 |
46,291,282 | <p>During training, the BatchNorm-layer tries to do two things:</p>
<ul>
<li>estimate the mean and variance of the entire training set (population statistics)</li>
<li>normalize the inputs' mean and variance, such that they behave like a Gaussian</li>
</ul>
<p>In the ideal case, one would use the population statistic of the entire dataset in the second point. However, these are unknown and change during training. There are also some other issues with this.</p>
<p>A <strong>work-around</strong> is doing the normalization of the input by</p>
<pre><code>gamma * (x - mean) / sigma + b
</code></pre>
<p>based on <strong>mini-batch</strong> statistics <code>mean</code>, <code>sigma</code>.</p>
<p>During training, the running average of mini-batch statistics is used to approximate the <strong>population</strong> statistics.</p>
<p>Now, the original BatchNorm formulation uses the approximated mean and variance of the <strong>entire</strong> dataset for normalization during inference. As the network is fixed, the approximation of the <code>mean</code> and <code>variance</code> should be pretty good. While it seems to make sense to now use the population statistic, it is a critical change: from mini-batch statistics to statistics of the entire training data.</p>
<p>This is critical when batches are not i.i.d. or very small during training. (But I also observed it for batches of size 32.)</p>
<p>The proposed BatchNorm implicitly assumes that both statistics are very similar. In particular, training on mini-batches of size 1, as in pix2pix or DualGAN, gives very bad information about the population statistics; in that case the two might contain totally different values.</p>
<p>In a deep network, the later layers then expect their inputs to be normalized batches (in the sense of mini-batch statistics). Note that they were trained on this particular kind of data, so using the entire-dataset statistics violates that assumption during inference.</p>
<p>How to solve this issue? Either also use mini-batch statistics during inference as in the implementations you mentioned. Or use <a href="https://arxiv.org/pdf/1702.03275.pdf" rel="noreferrer">BatchReNormalization</a> which introduces 2 additional terms to remove the difference between the mini-batch and population statistics
or simply use InstanceNormalization (for regression tasks), which is, in fact, the same as BatchNorm but treats each example in the batch individually and also does not use population statistics.</p>
<p>I also had this issue during my research, and I now use the InstanceNorm layer for regression tasks.</p>
<p>For example, when using <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/layers/batch_norm" rel="noreferrer">tf.contrib.layers.batch_norm</a> in tensorflow, we should set different value for <code>is_training</code> in different phase. </p>
<p>My <strong>question</strong> is: what if I still set <code>is_training=True</code> when testing? That is to say, what if I still use the training mode in the testing phase? </p>
<p>The reason why I come up with this question is that, the released code of both <a href="https://github.com/affinelayer/pix2pix-tensorflow/blob/master/pix2pix.py#L116" rel="noreferrer">Pix2Pix</a> and <a href="https://github.com/duxingren14/DualGAN/blob/master/ops.py#L8" rel="noreferrer">DualGAN</a> don't set <code>is_training=False</code> when testing. And it seems that if <code>is_training=False</code> is set when testing, the quality of generated images could be very bad.</p>
<p>Is there someone could please explain this? thanks.</p> | 2017-09-19 02:34:50.293000+00:00 | 2019-01-07 09:51:04.197000+00:00 | 2017-09-19 02:40:21.523000+00:00 | tensorflow|normalization | ['https://arxiv.org/pdf/1702.03275.pdf'] | 1 |
51,879,680 | <p>The Object Detection API losses are defined in: <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/core/losses.py" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/core/losses.py</a></p>
<p>In particular, the following loss classes have been implemented:</p>
<p><strong>Classification losses:</strong></p>
<ol>
<li>WeightedSigmoidClassificationLoss</li>
<li>SigmoidFocalClassificationLoss</li>
<li>WeightedSoftmaxClassificationLoss</li>
<li>WeightedSoftmaxClassificationAgainstLogitsLoss</li>
<li>BootstrappedSigmoidClassificationLoss</li>
</ol>
<p><strong>Localization losses:</strong></p>
<ol>
<li>WeightedL2LocalizationLoss</li>
<li>WeightedSmoothL1LocalizationLoss</li>
<li>WeightedIOULocalizationLoss</li>
</ol>
<p>The weight parameters are used to balance anchors (prior boxes) and are of size <code>[batch_size, num_anchors]</code>, in addition to hard negative mining. Alternatively, the <a href="https://arxiv.org/pdf/1708.02002.pdf" rel="nofollow noreferrer">focal loss</a> down-weights well-classified examples and focuses on the hard examples.</p>
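<p>For reference, the focal loss can be sketched as follows (my own numpy rendering of the paper's formula, not the API's code; <code>alpha</code> and <code>gamma</code> are its usual hyperparameters):</p>
<pre><code>import numpy as np

def sigmoid_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    p = 1.0 / (1.0 + np.exp(-logits))          # predicted probability
    p_t = np.where(targets == 1, p, 1.0 - p)   # probability of the true class
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
</code></pre>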
<p>The primary class imbalance is due to many more negative examples (bounding boxes without objects of interest) in comparison to very few positive examples (bounding boxes with object classes). That seems to be the reason why class imbalance within positive examples (i.e. unequal distribution of positive class labels) is not implemented as part of object detection losses.</p> | 2018-08-16 14:39:34.747000+00:00 | 2018-08-16 14:39:34.747000+00:00 | null | null | 51,862,997 | <p>I'm fine-tuning <a href="https://arxiv.org/pdf/1512.02325.pdf" rel="noreferrer">SSD</a> object detector using <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="noreferrer">TensorFlow object detection API</a> on <a href="https://storage.googleapis.com/openimages/web/index.html" rel="noreferrer">Open Images Dataset</a>. My training data contains imbalanced classes, e.g.</p>
<ol>
<li>top (5K images)</li>
<li>dress (50K images)</li>
<li>etc...</li>
</ol>
<p>I would like to add class weights to classification loss to improve performance. How do I do that? The following section of the config file seems relevant:</p>
<pre><code>loss {
classification_loss {
weighted_sigmoid {
}
}
localization_loss {
weighted_smooth_l1 {
}
}
...
classification_weight: 1.0
localization_weight: 1.0
}
</code></pre>
<p>How can I change the config file to add classification loss weights per class? If not through a config file, what's a recommended way of doing that? </p> | 2018-08-15 16:48:18.790000+00:00 | 2021-10-15 00:17:12.620000+00:00 | null | python|tensorflow|object-detection-api | ['https://github.com/tensorflow/models/blob/master/research/object_detection/core/losses.py', 'https://arxiv.org/pdf/1708.02002.pdf'] | 2 |
47,206,880 | <p>Object detection requires a lot of labeled data of the same class to generalize well, and in your setting it would be impossible to train a network with only a single instance.</p>
<p>I assume that in your case online object trackers can work; at least give it a try. There are some convolutional object trackers that work great, like <a href="https://arxiv.org/abs/1606.09549" rel="nofollow noreferrer">Siamese CNNs</a>. The code is open source on <a href="https://github.com/bertinetto/siamese-fc" rel="nofollow noreferrer">GitHub</a>, and you can watch <a href="https://youtu.be/jZoUalMMZ_0?t=5m45s" rel="nofollow noreferrer">this video</a> to see its performance.</p>
<p><strong>Online object tracking:</strong> Given the initialized state (e.g., position
and size) of a target object in a frame of a video, the goal
of tracking is to estimate the states of the target in the subsequent
frames.-<a href="https://www.cv-foundation.org/openaccess/content_cvpr_2013/papers/Wu_Online_Object_Tracking_2013_CVPR_paper.pdf" rel="nofollow noreferrer">source</a>-</p> | 2017-11-09 16:26:03.973000+00:00 | 2017-11-09 16:26:03.973000+00:00 | null | null | 47,174,676 | <p>The vision system is given a single training image (e.g. a piece of 2D artwork ) and it is asked whether the piece of artwork is present in the newly captured photos. The newly captured photos can contain a lot of any other object and when the artwork is presented, it must face up but may be occluded.</p>
<p>The pose space is x,y,rotation and scale. The artwork may be highly symmetric or not.</p>
<p>What is the latest state of the art for handling this kind of problem?</p>
<p>I have tried/considered the following options but there are some problems in all of them. If my argument is invalid please correct me.</p>
<ol>
<li><p>deep learning (RCNN/YOLO): a lot of labeled data is needed, which means a lot of human labor for each new piece of artwork.</p></li>
<li><p>traditional machine learning(SVM,Random forest): same as above</p></li>
<li><p>sift/surf/orb + ransac or voting: when the artwork is symmetric, the features matched are mostly incorrect. A lot of time is needed in the ransac/voting stage.</p></li>
<li><p>generalized hough transform: the state space is too large for the voting table. Pyramid can be applied but it is difficult to choose some universal thresholds for different kinds of artwork to proceed down the pyramid.</p></li>
<li><p>chamfer matching: the state space is too large. Too much time is needed in searching across the state space.</p></li>
</ol>
<hr> | 2017-11-08 08:19:50.440000+00:00 | 2018-02-26 06:54:47.853000+00:00 | 2018-02-26 06:54:47.853000+00:00 | computer-vision | ['https://arxiv.org/abs/1606.09549', 'https://github.com/bertinetto/siamese-fc', 'https://youtu.be/jZoUalMMZ_0?t=5m45s', 'https://www.cv-foundation.org/openaccess/content_cvpr_2013/papers/Wu_Online_Object_Tracking_2013_CVPR_paper.pdf'] | 4 |
44,862,653 | <ol>
<li><p>Encode tree structure: Think of a recurrent neural network: there you have one chain, which can be constructed with a for loop. But here you have a tree, so you need some kind of loop with branching. A recursive function call might work, with some Python overhead.
I suggest you build the neural network with a 'define by run' framework (like <a href="https://chainer.org/" rel="nofollow noreferrer">Chainer</a> or <a href="http://pytorch.org/" rel="nofollow noreferrer">PyTorch</a>) to reduce that overhead, because your tree may have to be rebuilt differently for each data sample, which requires rebuilding the computation graph.
Read <a href="https://arxiv.org/abs/1503.00075" rel="nofollow noreferrer">Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks</a>, with the original Torch7 implementation <a href="https://github.com/stanfordnlp/treelstm" rel="nofollow noreferrer">here</a> and a <a href="https://github.com/ttpro1995/TreeLSTMSentiment" rel="nofollow noreferrer">PyTorch implementation</a>; you may get some ideas from them.</p></li>
<li><p>For encoding a tag at a node, I think the easiest way would be to encode it the same way you encode a word.
For example, a node's data is [word vector][tag vector]. If the node is a leaf, you have a word but may not have a tag (you did not say whether leaf nodes carry tags), so the leaf representation is [word vector][zero vector]. For an inner node that has no word, it is [zero vector][tag vector]. Then inner nodes and leaves have data representations of the same dimension, and you may treat them equally (or not :3)</p></li>
</ol> | 2017-07-01 15:36:01.480000+00:00 | 2017-07-01 15:36:01.480000+00:00 | null | null | 26,022,866 | <p>I have a tree, specifically a parse tree with tags at the nodes and strings/words at the leaves. I want to pass this tree as input into a neural network all the while preserving its structure.</p>
<p>Current approach:
Assume we have some dictionary of words w1, w2, ..., wn.
Encode the words that appear in the parse tree as n-dimensional binary vectors, with a 1 showing up in the ith spot whenever the word in the parse tree is wi.</p>
<p>Now how about the tree structure? There are about 2^n possible parent tags for n words that appear at the leaves, so we can't set a max length of input words and then just brute-force enumerate all trees.</p>
<p>Right now all I can think of is to approximate the tree by choosing the direct parent of a leaf. This can be represented by a binary vector as well, with dimension equal to the number of different types of tags - on the order of ~100, I suppose.
My input is then two dimensional. The first is just the vector representation of a word and the second is the vector representation of its parent tag.</p>
<p>Except this will lose a lot of the structure in the sentence. Is there a standard/better way of solving this problem?</p> | 2014-09-24 17:22:51.507000+00:00 | 2018-01-15 14:20:34.097000+00:00 | null | machine-learning|nlp|neural-network|stanford-nlp|deep-learning | ['https://chainer.org/', 'http://pytorch.org/', 'https://arxiv.org/abs/1503.00075', 'https://github.com/stanfordnlp/treelstm', 'https://github.com/ttpro1995/TreeLSTMSentiment'] | 5 |
41,542,955 | <p>First of all, we do know that finding any longest common subsequence of two sequences with length n cannot be done in O(n<sup>2-ε</sup>) time unless the Strong Exponential Time Hypothesis fails, see:
<a href="https://arxiv.org/abs/1412.0348" rel="nofollow noreferrer">https://arxiv.org/abs/1412.0348</a></p>
<p>This pretty much implies that you cannot count the number of ways to align common subsequences to the input sequences in O(n<sup>2-ε</sup>) time.
On the other hand, it is possible to count the number of ways of such alignments in O(n<sup>2</sup>) time. It is also possible to count them in O(n<sup>2</sup>/log(n)) time with the so-called four-Russians speed-up.</p>
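<p>For concreteness, here is one way the O(n<sup>2</sup>) counting can be written down (a sketch; note it counts index-level alignments, not distinct subsequence strings, which matters for the distinction below):</p>
<pre><code>def count_lcs_alignments(a, b):
    n, m = len(a), len(b)
    L = [[0] * (m + 1) for _ in range(n + 1)]   # LCS lengths
    C = [[1] * (m + 1) for _ in range(n + 1)]   # alignment counts (empty alignment = 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
                C[i][j] = C[i - 1][j - 1]       # alignments that use this match
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
                C[i][j] = 0
            # alignments of the same length that skip a[i-1] or b[j-1]
            if L[i - 1][j] == L[i][j]:
                C[i][j] += C[i - 1][j]
            if L[i][j - 1] == L[i][j]:
                C[i][j] += C[i][j - 1]
            if L[i - 1][j - 1] == L[i][j]:
                C[i][j] -= C[i - 1][j - 1]      # subtract the double-counted ones
    return L[n][m], C[n][m]

# e.g. count_lcs_alignments("efgefg", "efegf") reports length 4;
# the alignment count can exceed the number of distinct LCS strings.
</code></pre>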
<p>Now the real question is whether you really intended to calculate this, or whether you want to find the number of <em>different</em> subsequences. I am afraid that the latter is a #P-complete counting problem. At least, we do know that counting the number of sequences of a given length that a regular grammar can generate is #P-complete:</p>
<blockquote>
<p>S. Kannan, Z. Sweedyk, and S. R. Mahaney. Counting
and random generation of strings in regular languages.
In ACM-SIAM Symposium on Discrete Algorithms
(SODA), pages 551–557, 1995</p>
</blockquote>
<p>This is a similar problem in the sense that counting the number of <em>ways</em> a regular grammar can generate sequences of a given length is a trivial dynamic programming algorithm. However, if you do not want to distinguish generations resulting in the same sequence, then the problem turns from easy to extremely hard. My natural conjecture is that this should be the case for sequence alignment problems, too (longest common subsequence, edit distance, shortest common superstring, etc.).</p>
<p>So if you would like to calculate the number of <em>different</em> subsequences of two sequences, then very likely your current algorithm is wrong and any algorithm cannot calculate it in polynomial time unless P = NP (and more...).</p> | 2017-01-09 07:28:04.793000+00:00 | 2017-01-09 07:36:57.087000+00:00 | 2017-01-09 07:36:57.087000+00:00 | null | 2,245,364 | <p>I'm trying to calculate the <strong>amount</strong> of longest possible subsequences that exist between two strings. </p>
<p>e.g.
String X = "efgefg";
String Y = "efegf";</p>
<p>output: The Number of longest common sequences is: 3
(i.e.: efeg, efef, efgf - this doesn't need to be calculated by the algorithm, just shown here for demonstration)</p>
<p>I've managed to do this in O(|X|*|Y|) using dynamic programming based on the general idea here: <a href="https://stackoverflow.com/questions/2207987/cheapest-path-algorithm">Cheapest path algorithm</a>. </p>
<p>Can anyone think of a way to do this calculation with better runtime efficiency?</p>
<p>--Edited in response to Jason's comment.</p> | 2010-02-11 15:13:55.117000+00:00 | 2017-07-03 16:56:37.123000+00:00 | 2017-05-23 11:46:05.743000+00:00 | algorithm|lcs | ['https://arxiv.org/abs/1412.0348'] | 1 |
57,533,803 | <p>TF Ranking provides a framework for you to implement your own algorithm. It helps you set up the components around the core of your model (input_fn, metrics and loss functions, etc.). But the scoring logic (aka scoring function) should be provided by the user.</p>
<p>You probably have already seen these, but just in case:</p>
<p><a href="https://arxiv.org/abs/1812.00073" rel="nofollow noreferrer">TF-Ranking: Scalable TensorFlow Library for Learning-to-Rank
</a></p>
<p>and</p>
<p><a href="https://arxiv.org/abs/1811.04415" rel="nofollow noreferrer">Learning Groupwise Multivariate Scoring Functions Using Deep Neural Networks</a></p> | 2019-08-17 06:23:01.103000+00:00 | 2019-08-17 06:23:01.103000+00:00 | null | null | 56,875,105 | <p>What is the algorithm behind tf-ranking? and can we use Lambdamart algorithm in tf-ranking .Can anyone suggest some good sources to learn these </p> | 2019-07-03 17:13:09.950000+00:00 | 2019-08-17 06:23:01.103000+00:00 | null | tensorflow|ranking|google-ranking | ['https://arxiv.org/abs/1812.00073', 'https://arxiv.org/abs/1811.04415'] | 2 |
8,613,711 | <p>Joule is a pure asynchronous message passing language: </p>
<p><a href="http://en.wikipedia.org/wiki/Joule_%28programming_language%29" rel="nofollow">http://en.wikipedia.org/wiki/Joule_%28programming_language%29</a></p>
<p><a href="http://www.erights.org/history/joule/MANUAL.BK5.pdf" rel="nofollow">http://www.erights.org/history/joule/MANUAL.BK5.pdf</a></p>
<p>ActorScript is a pure Actor message-passing language, but appears to only exist as a specification: </p>
<p><a href="http://arxiv.org/abs/1008.2748" rel="nofollow">http://arxiv.org/abs/1008.2748</a></p> | 2011-12-23 08:20:17.627000+00:00 | 2011-12-23 08:20:17.627000+00:00 | null | null | 8,547,272 | <p>Is there a programming language where you don't have to define actors yourself - every function is just ran as a separate actor (which can mean a separate thread if there are free cores available) by default?</p>
<p>For example it means that if I write something as simple as</p>
<pre><code>v = fA(x) + fB(y)
</code></pre>
<p>then fA and fB could be calculated simultaneously before the sum of their results was assigned to v.</p> | 2011-12-17 19:15:54.287000+00:00 | 2011-12-23 08:20:17.627000+00:00 | 2011-12-17 23:13:25.877000+00:00 | functional-programming|parallel-processing|actor | ['http://en.wikipedia.org/wiki/Joule_%28programming_language%29', 'http://www.erights.org/history/joule/MANUAL.BK5.pdf', 'http://arxiv.org/abs/1008.2748'] | 3 |
28,822,509 | <p>The other answers being good, I will try to provide another perspective and tackle the intuitive part of the question.</p>
<p><a href="http://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm" rel="nofollow">EM (Expectation-Maximization) algorithm</a> is a variant of a class of iterative algorithms using <a href="http://en.wikipedia.org/wiki/Duality_%28mathematics%29" rel="nofollow">duality</a></p>
<p>Excerpt (emphasis mine):</p>
<blockquote>
<p>In mathematics, a duality, generally speaking, translates concepts,
theorems or mathematical structures into other concepts, theorems or
structures, in a one-to-one fashion, often (but not always) by means
of an involution operation: if the dual of A is B, then the dual of B
is A. Such involutions <strong>sometimes have fixed points</strong>, so that the dual
of A is A itself</p>
</blockquote>
<p>Usually a <em>dual</em> B of an <em>object</em> A is related to A in some way that preserves some <em>symmetry or compatibility</em>. For example AB = <strong>const</strong></p>
<p>Examples of iterative algorithms, employing duality (in the previous sense) are:</p>
<ol>
<li><a href="http://en.wikipedia.org/wiki/Euclidean_algorithm" rel="nofollow">Euclidean algorithm for Greatest Common Divisor, and its variants</a></li>
<li><a href="http://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process" rel="nofollow">GramโSchmidt Vector Basis algorithm and variants</a></li>
<li><a href="http://en.wikipedia.org/wiki/Arithmetic%E2%80%93geometric_mean" rel="nofollow">Arithmetic Mean - Geometric Mean Inequality, and its variants</a></li>
<li><a href="http://arxiv.org/pdf/1105.1476.pdf" rel="nofollow">Expectation-Maximization algorithm and its variants</a> (see also <a href="http://www.researchgate.net/profile/Shun_Ichi_Amari/publication/222444851_Information_geometry_of_the_EM_and_em_algorithms_for_neural_networks/links/0f3175385938a75a6c000000.pdf" rel="nofollow">here for an information-geometric view</a>)</li>
<li>(.. other similar algorithms..)</li>
</ol>
<p>In a similar fashion, <a href="http://www.cs.toronto.edu/~fritz/absps/emk.pdf" rel="nofollow">the EM algorithm can also be seen as two dual maximization steps</a>:</p>
<blockquote>
<p>..[EM] is seen as maximizing a joint function of the parameters and of
the distribution over the unobserved variables.. The E-step maximizes
this function with respect to the distribution over the unobserved
variables; the M-step with respect to the parameters..</p>
</blockquote>
<p>In an iterative algorithm using duality there is the explicit (or implicit) assumption of an equilibrium (or fixed) point of convergence (for EM this is proved using Jensen's inequality)</p>
<p>So the outline of such algorithms is (a minimal numeric sketch follows the list):</p>
<ol>
<li><strong>E-like step:</strong> Find best solution <strong>x</strong> with respect to given <strong>y</strong> being held constant.</li>
<li><strong>M-like step (dual):</strong> Find best solution <strong>y</strong> with respect to <strong>x</strong> (as computed in previous step) being held constant.</li>
<li><strong>Criterion of Termination/Convergence step:</strong> Repeat steps 1, 2 with the updated values of <strong>x</strong>,<strong>y</strong> until convergence (or specified number of iterations is reached)</li>
</ol>
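<p>As that minimal numeric sketch, consider a toy one-dimensional mixture of two Gaussians (the data and initial values are arbitrary):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 700)])  # toy data

pi, mu, var = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])  # initial guesses
for _ in range(50):
    # E-like step: responsibilities, with the parameters held constant
    p0 = (1 - pi) * np.exp(-(x - mu[0]) ** 2 / (2 * var[0])) / np.sqrt(2 * np.pi * var[0])
    p1 = pi * np.exp(-(x - mu[1]) ** 2 / (2 * var[1])) / np.sqrt(2 * np.pi * var[1])
    r = p1 / (p0 + p1)
    # M-like step (dual): parameters, with the responsibilities held constant
    pi = r.mean()
    mu = np.array([((1 - r) * x).sum() / (1 - r).sum(), (r * x).sum() / r.sum()])
    var = np.array([((1 - r) * (x - mu[0]) ** 2).sum() / (1 - r).sum(),
                    (r * (x - mu[1]) ** 2).sum() / r.sum()])

print(pi, mu, var)  # converges to roughly 0.7, [-2, 2], [1, 1]
</code></pre>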
<p><strong>Note</strong> that when such an algorithm converges to a (global) optimum, it has found a configuration which is <strong>best in both senses</strong> (i.e. in both the <strong>x</strong> domain/parameters and the <strong>y</strong> domain/parameters). However, the algorithm may only find a <em>local</em> optimum and not the <em>global</em> optimum.</p>
<p>I would say this is the intuitive description of the outline of the algorithm.</p>
<p>For the statistical arguments and applications, other answers have given good explanations (check also references in this answer)</p> | 2015-03-03 01:22:51.580000+00:00 | 2015-03-03 21:20:44.257000+00:00 | 2015-03-03 21:20:44.257000+00:00 | null | 11,808,074 | <p>Expectation Maximization (EM) is a kind of probabilistic method to classify data. Please correct me if I am wrong if it is not a classifier. </p>
<p>What is an intuitive explanation of this EM technique? What is <code>expectation</code> here and what is being <code>maximized</code>?</p> | 2012-08-04 10:56:12.230000+00:00 | 2019-04-28 18:28:00.697000+00:00 | 2019-04-28 18:28:00.697000+00:00 | machine-learning|cluster-analysis|data-mining|mathematical-optimization|expectation-maximization | ['http://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm', 'http://en.wikipedia.org/wiki/Duality_%28mathematics%29', 'http://en.wikipedia.org/wiki/Euclidean_algorithm', 'http://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process', 'http://en.wikipedia.org/wiki/Arithmetic%E2%80%93geometric_mean', 'http://arxiv.org/pdf/1105.1476.pdf', 'http://www.researchgate.net/profile/Shun_Ichi_Amari/publication/222444851_Information_geometry_of_the_EM_and_em_algorithms_for_neural_networks/links/0f3175385938a75a6c000000.pdf', 'http://www.cs.toronto.edu/~fritz/absps/emk.pdf'] | 8 |
34,166,753 | <p>There are multiple possible approaches, from using a neural network to calculate posteriors and feeding those posteriors into a Hopfield network, as described in this paper:</p>
<p><a href="http://www.assta.org/sst/SST-92/cache/SST-92-NeuralNetworks-p14.pdf" rel="nofollow">http://www.assta.org/sst/SST-92/cache/SST-92-NeuralNetworks-p14.pdf</a></p>
<p>A second approach would be to convert the samples to bits directly; that would be more complex but also more interesting, as in the following modern research:</p>
<p>Multilingual Language Processing From Bytes
Dan Gillick, Cliff Brunk, Oriol Vinyals, Amarnag Subramanya
<a href="http://arxiv.org/abs/1512.00103" rel="nofollow">http://arxiv.org/abs/1512.00103</a></p>
<p>You should study the algorithms first and choose one yourself.</p> | 2015-12-08 21:54:21.687000+00:00 | 2015-12-08 21:54:21.687000+00:00 | null | null | 34,143,739 | <p>I want to write a sound recognition program. I have the algorithm, but I can't properly read the voice from the microphone. I have code from <a href="https://stackoverflow.com/">https://stackoverflow.com/</a>, which reads data from a wav file, but I don't know how to put the raw wav data into a binary vector or array. So basically I need a binary vector or array which contains the data bits (1s or 0s) and it has to be 2000-4000 bits long. How can I do that?</p>
<p>(I am using it with a Hopfield Neural Network)</p>
<pre><code> #include <iostream>
#include <string>
#include <fstream>
#include <cstdint>
using std::cin;
using std::cout;
using std::endl;
using std::fstream;
using std::string;
typedef struct WAV_HEADER
{
/* RIFF Chunk Descriptor */
uint8_t RIFF[4]; // RIFF Header Magic header
uint32_t ChunkSize; // RIFF Chunk Size
uint8_t WAVE[4]; // WAVE Header
/* "fmt" sub-chunk */
uint8_t fmt[4]; // FMT header
uint32_t Subchunk1Size; // Size of the fmt chunk
uint16_t AudioFormat; // Audio format 1=PCM,6=mulaw,7=alaw, 257=IBM Mu-Law, 258=IBM A-Law, 259=ADPCM
uint16_t NumOfChan; // Number of channels 1=Mono 2=Sterio
uint32_t SamplesPerSec; // Sampling Frequency in Hz
uint32_t bytesPerSec; // bytes per second
uint16_t blockAlign; // 2=16-bit mono, 4=16-bit stereo
uint16_t bitsPerSample; // Number of bits per sample
/* "data" sub-chunk */
uint8_t Subchunk2ID[4]; // "data" string
uint32_t Subchunk2Size; // Sampled data length
} wav_hdr;
// Function prototypes
int getFileSize(FILE* inFile);
int main(int argc, char* argv[])
{
wav_hdr wavHeader;
int headerSize = sizeof(wav_hdr), filelength = 0;
const char* filePath;
string input;
if (argc <= 1)
{
cout << "Input wave file name: ";
cin >> input;
cin.get();
filePath = input.c_str();
}
else
{
filePath = argv[1];
cout << "Input wave file name: " << filePath << endl;
}
    FILE* wavFile = fopen(filePath, "rb"); // binary mode, so bytes are not translated on Windows
if (wavFile == nullptr)
{
fprintf(stderr, "Unable to open wave file: %s\n", filePath);
return 1;
}
//Read the header
size_t bytesRead = fread(&wavHeader, 1, headerSize, wavFile);
cout << "Header Read " << bytesRead << " bytes." << endl;
if (bytesRead > 0)
{
//Read the data
uint16_t bytesPerSample = wavHeader.bitsPerSample / 8; //Number of bytes per sample
        uint64_t numSamples = wavHeader.Subchunk2Size / bytesPerSample; //How many samples are in the wav file? (use the data chunk size)
static const uint16_t BUFFER_SIZE = 4096;
int8_t* buffer = new int8_t[BUFFER_SIZE];
while ((bytesRead = fread(buffer, sizeof buffer[0], BUFFER_SIZE / (sizeof buffer[0]), wavFile)) > 0)
{
/** DO SOMETHING WITH THE WAVE DATA HERE **/
cout << "Read " << bytesRead << " bytes." << endl;
}
delete [] buffer;
buffer = nullptr;
filelength = getFileSize(wavFile);
cout << "File is :" << filelength << " bytes." << endl;
cout << "RIFF header :" << wavHeader.RIFF[0] << wavHeader.RIFF[1] << wavHeader.RIFF[2] << wavHeader.RIFF[3] << endl;
cout << "WAVE header :" << wavHeader.WAVE[0] << wavHeader.WAVE[1] << wavHeader.WAVE[2] << wavHeader.WAVE[3] << endl;
cout << "FMT :" << wavHeader.fmt[0] << wavHeader.fmt[1] << wavHeader.fmt[2] << wavHeader.fmt[3] << endl;
cout << "Data size :" << wavHeader.ChunkSize << endl;
// Display the sampling Rate from the header
cout << "Sampling Rate :" << wavHeader.SamplesPerSec << endl;
cout << "Number of bits used :" << wavHeader.bitsPerSample << endl;
cout << "Number of channels :" << wavHeader.NumOfChan << endl;
cout << "Number of bytes per second :" << wavHeader.bytesPerSec << endl;
cout << "Data length :" << wavHeader.Subchunk2Size << endl;
cout << "Audio Format :" << wavHeader.AudioFormat << endl;
// Audio format 1=PCM,6=mulaw,7=alaw, 257=IBM Mu-Law, 258=IBM A-Law, 259=ADPCM
cout << "Block align :" << wavHeader.blockAlign << endl;
cout << "Data string :" << wavHeader.Subchunk2ID[0] << wavHeader.Subchunk2ID[1] << wavHeader.Subchunk2ID[2] << wavHeader.Subchunk2ID[3] << endl;
}
fclose(wavFile);
return 0;
}
// find the file size
int getFileSize(FILE* inFile)
{
int fileSize = 0;
fseek(inFile, 0, SEEK_END);
fileSize = ftell(inFile);
fseek(inFile, 0, SEEK_SET);
return fileSize;
}
</code></pre> | 2015-12-07 21:40:06.270000+00:00 | 2015-12-08 22:15:25.243000+00:00 | 2017-05-23 11:44:37.783000+00:00 | c++|neural-network|speech-recognition|recurrent-neural-network | ['http://www.assta.org/sst/SST-92/cache/SST-92-NeuralNetworks-p14.pdf', 'http://arxiv.org/abs/1512.00103'] | 2 |
58,450,613 | <p>I have found an approximate solution in <a href="https://arxiv.org/pdf/1611.09813v5.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.09813v5.pdf</a></p> | 2019-10-18 12:07:18.553000+00:00 | 2019-10-18 12:07:18.553000+00:00 | null | null | 58,449,727 | <p>I only know my camera rotation <code>R</code>, but without the translational component <code>T</code>. Intrinsics are known, and I have a set of 3D points <code>P</code> with their 2D correspondences <code>Q</code>. Is there a method to compute <code>T</code> directly, and at best is differentiable in <code>P</code>.</p> | 2019-10-18 11:15:29.903000+00:00 | 2019-10-18 12:07:18.553000+00:00 | null | camera-calibration | ['https://arxiv.org/pdf/1611.09813v5.pdf'] | 1 |
57,807,008 | <p>There are a few things you need to address here:</p>
<ol>
<li><p>Is there a good reason not to just use larger batches? Are you trying to implement the <a href="https://arxiv.org/abs/1907.08610" rel="nofollow noreferrer">lookahead optimizer</a> or something?</p></li>
<li><p>You look like you're getting started with TensorFlow. Consider turning on eager execution with <code>tf.enable_eager_execution()</code>. TensorFlow 2.0 is coming soon, don't waste your time messing with <code>tf.Sessions</code>.</p></li>
<li><p>Variables are not differentiable. So accumulating the losses in a variable doesn't make any sense. </p></li>
<li><p>I would make a copy of all the model's variables, and accumulate new values there. Then, after <code>N</code> iterations assign those values back to the model. Something like:</p></li>
</ol>
<pre><code>model = tf.keras.Sequential(...)
vars = model.trainable_variables
weight_acc = [tf.Variable(var) for var in model.trainable_variables]
for n,(batch, label) in enumerate(dataset):
with tf.GradientTape() as tape:
pred = model(batch)
    loss = cal_loss(pred, label)  # use the prediction; cal_loss is your loss function
  grads = tape.gradient(loss, vars)  # GradientTape.gradient (singular)
  for g, a in zip(grads, weight_acc):
    a.assign_sub(learning_rate*g)  # gradient descent: subtract the scaled gradient
if n%25 == 0:
for a, v in zip(weight_acc, vars):
v.assign_add(lookahead_fraction*(a-v))
</code></pre> | 2019-09-05 13:49:14.727000+00:00 | 2019-09-05 13:49:14.727000+00:00 | null | null | 57,739,512 | <p>I am trying to achieve the following:</p>
<p>compute the losses in the previous 25 predictions and sum them before
computing the gradient. I have tried this:</p>
<pre><code>loss_summation=tf.Variable(0,dtype=tf.dtypes.float32,name="loss")
xentropy=tf.nn.sparse_softmax_cross_entropy_with_logits(labels=next_element[1],logits=logits2,name="xentropy")
loss=tf.math.reduce_sum(tf.reduce_mean(xentropy,name="loss"))
loss_summation=tf.assign(loss_summation,loss_summation+loss)
optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate)
gvs = optimizer.compute_gradients(loss_summation,[vars])
with tf.Session() as sess():
for i in range(25):
b=sess.run([loss_summation])
</code></pre>
<p>However <code>optimizer.compute_gradients()</code> complains that
<code>None values not supported</code>. How can I get around this?</p>
<p>I am actually trying to implement the following function (the feedforward of an LSTM) in TensorFlow, to predict the next word given the previous ones.</p>
<pre><code>def feedforward(self,x_s,hpre,targets,p_s):
fts,its,gts,css,ots,output,inputs=[],[],[],[],[],[],[]
losses=[]
hprev=hpre
hts=[hprev]
loss=0
losses=[]
previous_state=p_s
css.append(previous_state)
for x,y in zip(x_s,targets):
k=np.zeros((self.vocab_size,1))
k[x]=1
M_c=np.row_stack((hprev,k))
ft=self.sigmoid(np.dot(self.W1,M_c)+self.b1)
fts.append(ft)
it=self.sigmoid(np.dot(self.W2,M_c)+self.b2)
its.append(it)
gt=np.tanh(np.dot(self.W3,M_c)+self.b3)
gts.append(gt)
cs=(ft*previous_state)+(it*gt)
previous_state=cs
css.append(cs)
ot=self.sigmoid(np.dot(self.W4,M_c)+self.b4)
ots.append(ot)
ht=ot*np.tanh(cs)
hts.append(ht)
yt=self.softmax(np.dot(self.W5,ht)+self.b5)
hprev=ht
output.append(yt)
inputs.append(M_c)
loss+=-np.log(yt[y])
losses.append(loss)
return fts,its,gts,css,ots,output,hts,loss,hts[-1],css[-1],inputs
</code></pre>
<p><code>x_s</code> is a list of integers representing words. </p>
<pre><code>x_s=[0,1,2,3,4,5,6,7,8....,24]
</code></pre>
<p>targets is the list of expected integers, i.e. if x_s=0 then the next letter is 1</p>
<pre><code>targets=[1,2,3,4,5,6,7,8,9...,25]
</code></pre>
<p>The loss, which is a summation of the 25 losses, is what will be minimized.</p>
38,158,539 | <p>Ensembles are used to fight overfitting / improve generalization, or to fight specific weaknesses / exploit the strengths of different classifiers. They can be applied in any classification task.</p>
<p>I used ensembles in <a href="https://arxiv.org/abs/1707.09725" rel="nofollow noreferrer">my master's thesis</a>. The <a href="https://github.com/MartinThoma/msthesis-experiments" rel="nofollow noreferrer">code is on Github</a>.</p>
<h2>Example 1</h2>
<p>For example, think of a binary problem where you have to tell if a data point is of class A or B. This could be an image and you have to decide if there is a (A) a dog or (B) a cat on it. Now you have two classifiers (1) and (2) (e.g. two neural networks, but trained in different ways; or one SVM and a decision tree, or ...). They make the following errors:</p>
<pre><code>(1): Predicted
T | A B
R ------------
U A | 90% 10%
E B | 50% 50%
(2): Predicted
T | A B
R ------------
U A | 60% 40%
E B | 40% 60%
</code></pre>
<p>You could, for example, combine them to an ensemble by first using (1). If it predicts <code>B</code>, then you can use (2). Otherwise you stick with it.</p>
<p>Now, what would be the expected error (falsely assuming both are independent)?</p>
<p>If the true class is <code>A</code>, then we predict with 90% the true result. In 10% of the cases we predict <code>B</code> and use the second classifier. This one gets it right in 60% of the cases. This means if we have <code>A</code>, we predict <code>A</code> in <code>0.9 + 0.1*0.6 = 0.96 = 96%</code> of the cases.</p>
<p>If the true class is <code>B</code>, we predict in <code>50%</code> of the cases <code>B</code>. But we also need to get it right the second time, so only in <code>0.5*0.6 = 0.3 = 30%</code> of the cases we get it right there.</p>
<p>So in this simple example we made the situation better for one class, but worse for the other.</p>
<h2>Example 2</h2>
<p>Now, let's say we have 3 classifiers with</p>
<pre><code> Predicted
T | A B
R ------------
U A | 60% 40%
E B | 40% 60%
</code></pre>
<p>each, but the classifications are independent. What do you get when you make a majority vote?</p>
<p>If you have class A, the probability that at least two say it is class A is</p>
<pre><code> 0.6 * 0.6 * 0.6 + 0.6 * 0.6 * 0.4 + 0.6 * 0.4 * 0.6 + 0.4 * 0.6 * 0.6
= 1*0.6^3 + 3*(0.6^2 * 0.4^1)
 = (3 nCr 3) * 0.6^3 + (3 nCr 2) * (0.6^2 * 0.4^1)
= 0.648
</code></pre>
<p>The same goes for the other class. So we improved the classifier to</p>
<pre><code> Predicted
T | A B
R ------------
U A | 65% 35%
E B | 35% 65%
</code></pre>
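<p>A quick Monte-Carlo check of this figure (a sketch, assuming the three classifiers really are independent):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
correct = rng.random((100_000, 3)) < 0.6   # each vote is right with p = 0.6
majority = correct.sum(axis=1) >= 2        # majority vote of the three
print(majority.mean())                     # ~0.648, matching the computation above
</code></pre>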
<h2>Code</h2>
<p>See <a href="http://scikit-learn.org/stable/modules/ensemble.html" rel="nofollow noreferrer">sklearns page on Ensembles</a> for code.</p>
<p><strong>The most specific example of ensemble learning is the random forest.</strong></p>
71,957,540 | <p>The <code>AsObjects.simplify</code> function is internally applied to produce the default <code>TBranch.interpretation</code> that you're using if you don't override the <code>interpretation</code> when loading a TBranch as an array. The only reason you'd pass a custom <code>interpretation</code> is if the default is wrong; it's a back-door to fixing cases in which Uproot auto-detected the <code>interpretation</code> incorrectly.</p>
<p>If the default <code>TBranch.interpretation</code> is</p>
<pre class="lang-py prettyprint-override"><code>AsObjects(AsVector(True, AsVector(False, dtype('>f4')))
</code></pre>
<p>then it did <em>try</em> to simplify, i.e. replace the <code>AsObjects</code> with an <code>AsStridedObjects</code> or <code>AsJagged</code>, but couldn't. This must be a C++ <code>std::vector<std::vector<float>></code>, which has a variable number of bytes per object, so there aren't any simplified interpretations that will work. What's "simplified" about <code>AsStridedObjects</code> and <code>AsJagged</code> is that they have a fixed number of bytes per object and can therefore be interpreted in bulk, without a Python for loop over all the items in the TBasket.</p>
<p>Incidentally, we studied this exact case in <a href="https://arxiv.org/abs/2102.13516" rel="nofollow noreferrer">https://arxiv.org/abs/2102.13516</a>, and the AwkwardForth solution described in that paper will be adapted into Uproot this summer. Unfortunately, that doesn't help you right now.</p>
<p>The slow-fast pattern you're seeing is because each time you ask for an entry from a different TBasket, Uproot interprets the whole TBasket. If you were running sequentially, you'd see a pause at the beginning of each TBasket. The lazy array is caching interpreted data, so when your random-access comes back to a previously read TBasket, it should be fast again: by only looking at the first few requests, you're getting an impression that each request will be slow, but that's just because early requests are more likely to hit unread TBaskets than late requests.</p>
<p>If you're only looking into this because the process as a whole is too slow (i.e. just letting it run and fill up its cache isn't good enough), then consider reading the whole TBranch into an array and randomly access the array. If your random access is in a Python loop (as opposed to Numba), then there's also nothing to be gained and some performance to be lost by calling <code>__getitem__</code> on an Awkward Array as opposed to a NumPy array, so pass <code>library="np"</code>.</p>
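<p>Concretely, that might look like this (a sketch; the file and tree names are taken from your question):</p>
<pre><code>import uproot

tree = uproot.open("../Layers9_Xe_Phantom102_run1.root:dstree")
wf = tree["wf"].array(library="np")  # every TBasket is interpreted exactly once
print(wf[12345])                     # random access is now just an array lookup
</code></pre>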
<p>If you don't have enough memory to load the entire TBranch into an array (which could explain why you're using a lazy array), then you're in a difficult position, because the lazy array's caching would work against you: it would evict from cache the TBaskets that haven't been hit in a while, so even a long-running process would end up repeatedly reading/interpreting. This is a fundamental issue in random-access problems of data that are too large for memory: there isn't a good way to cache it because new requests keep pushing old results out of cache. (The same problem applies to disk access, web-cached data, databases, etc.)</p>
<p>Hopefully, the array fits into memory and you can random-access it in memory. Awkward Arrays have slower <code>__getitem__</code> than NumPy, but they're more compact in memory, so which one will work best for you depends on the details.</p>
<p>I hope these pointers help!</p> | 2022-04-21 16:02:57.540000+00:00 | 2022-04-21 16:02:57.540000+00:00 | null | null | 71,947,132 | <p>I am using Uproot to access a Root Tree in Python and I am noticing a significant slowdown when I try to access one particular branch: wf, which contains an array of jagged arrays</p>
<p><a href="https://i.stack.imgur.com/PSBo1.png" rel="nofollow noreferrer">Root Tree Branches</a></p>
<p>I am accessing the branches by using the Lazy/Awkward method and I am using the step_size option.</p>
<pre><code>LazyFileWF = uproot.lazy('../Layers9_Xe_Phantom102_run1.root:dstree;111', filter_name= "wf",step_size=100)
</code></pre>
<p>I experience a 6 to 10 second slow down when I want to access an entry in "LazyFileWF" but if I move on to the next consecutive entry, it only takes about 14 ms up until the end of the step_size. However my script needs to select entries randomly, not sequentially, which means every entry will take me about 8 seconds to access. I am able to access data from the other branches fairly quickly with the exception of this one and I wanted to find out why.</p>
<p>By using <code>uproot.open()</code> and then <code>.show()</code> I noticed that the interpretation of the branch was being labeled as <code>AsObjects(AsObjects(AsVector(True, AsVector(False, dtype('>f4'))))</code></p>
<p>I did some digging in the Documentation and found this:</p>
<p><a href="https://i.stack.imgur.com/ryAxK.png" rel="nofollow noreferrer">Uproot AsObjects Doc</a></p>
<p>It mentions I can use <code>simplify</code> to improve the slow deserialization.</p>
<p>So here's what I would like to know: based on the Root Tree I have, can I use <code>simplify</code> to reduce the 8-second slowdown when accessing my branch? And if so, how can I implement it? Is there a better way to read this branch?</p>
<p>I tried:</p>
<pre><code>a = uproot.AsObjects.simplify(LazyFileWF.wf)
a
</code></pre>
<p>but I got an error telling me</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_147439/260639244.py in <module>
5 LazyFileWF = uproot.lazy('../Layers9_Xe_Phantom102_run1.root:dstree;111', filter_name= "wf",step_size=100)
6 events.show(typename_width=35, interpretation_width= 60)
----> 7 a = uproot.AsObjects.simplify(LazyFileWF.wf)
8 a
~/anaconda3/envs/rapids-21.10/lib/python3.7/site-packages/uproot/interpretation/objects.py in simplify(self)
245 ``self``.
246 """
--> 247 if self._branch is not None:
248 try:
249 return self._model.strided_interpretation(
~/anaconda3/envs/rapids-21.10/lib/python3.7/site-packages/awkward/highlevel.py in __getattr__(self, where)
1129 raise AttributeError(
1130 "no field named {0}".format(repr(where))
-> 1131 + ak._util.exception_suffix(__file__)
1132 )
1133
AttributeError: no field named '_branch'
(https://github.com/scikit-hep/awkward-1.0/blob/1.7.0/src/awkward/highlevel.py#L1131)
</code></pre> | 2022-04-20 23:17:10.383000+00:00 | 2022-04-21 16:02:57.540000+00:00 | null | python|optimization|uproot | ['https://arxiv.org/abs/2102.13516'] | 1 |
44,969,996 | <p>It is not the intended use of word2vec. The word2vec algorithm internally tries to predict exact words, using surrounding words, as a roundabout way to learn useful vectors for those surrounding words. </p>
<p>But even so, it's not forming exact predictions during training. It's just looking at a single narrow training example โ context words and target word โ and performing a very simple comparison and internal nudge to make its conformance to that one example slightly better. Over time, that self-adjusts towards useful vectors โ even if the predictions remain of wildly-varying quality. </p>
<p>Most word2vec libraries don't offer a direct interface for showing ranked predictions, given context words. The Python gensim library, for the last few versions (as of current version 2.2.0 in July 2017), has offered a <code>predict_output_word()</code> method that roughly shows what the model would predict, given context-words, for some training modes. See:</p>
<p><a href="https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec.predict_output_word" rel="nofollow noreferrer">https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec.predict_output_word</a></p>
<p>However, considering your fill-in-the-blank query (also called a 'cloze deletion' in related education or machine-learning contexts):</p>
<pre><code>_____, who dominated chess for more than 15 years, will compete against nine top players in St Louis, Missouri
</code></pre>
<p>A vanilla word2vec model is unlikely to get that right. It has little sense of the relative importance of words (except when some words are more narrowly predictive of others). It has no sense of grammar/ordering, or of the compositional meaning of connected phrases (like 'dominated chess' as opposed to the separate words 'dominated' and 'chess'). Even though words describing the same sorts of things are usually near each other, it doesn't know the categories needed to determine that the blank must be a 'person' and a 'chess player', and the fuzzy similarities of word2vec don't guarantee that words of a class will necessarily all be nearer to each other than to other words.</p>
<p>There has been a bunch of work to train word/concept vectors (aka 'dense embeddings') to be better at helping at such question-answering tasks. A random example might be <a href="https://arxiv.org/abs/1609.08097" rel="nofollow noreferrer">"Creating Causal Embeddings for Question Answering with Minimal Supervision"</a> but queries like [word2vec question answering] or [embeddings for question answering] will find lots more. I don't know of easy out-of-the-box libraries for doing this, with or without a core of word2vec, though. </p> | 2017-07-07 11:46:02.900000+00:00 | 2017-07-07 11:46:02.900000+00:00 | null | null | 44,951,605 | <p>can word2vec be used for guessing words with just context?
Having trained the model with a large data set, e.g. Google News, how can I use word2vec to predict a similar word with only context, e.g. with input ", who dominated chess for more than 15 years, will compete against nine top players in St Louis, Missouri."? The output should be Kasparov or maybe Carlsen.</p>
<p>I'ven seen only the similarity apis but I can't make sense how to use them for this? is this not how word2vec was intented to use? </p> | 2017-07-06 14:21:13.430000+00:00 | 2017-07-07 11:46:02.900000+00:00 | null | word2vec | ['https://radimrehurek.com/gensim/models/word2vec.html#gensim.models.word2vec.Word2Vec.predict_output_word', 'https://arxiv.org/abs/1609.08097'] | 2 |
65,270,580 | <p>I assume you want to forecast 50 time steps from the 125 previous ones (as an example). I give you the most basic encoder-decoder structure for time series, but it can be improved (with <a href="https://arxiv.org/abs/1508.04025" rel="nofollow noreferrer">Luong Attention</a>, for instance).</p>
<pre><code>from tensorflow.keras import layers,models
input_timesteps=125
input_features=2
output_timesteps=50
output_features=2
units=100
#Input
encoder_inputs = layers.Input(shape=(input_timesteps,input_features))
#Encoder
encoder = layers.LSTM(units, return_state=True, return_sequences=False)
encoder_outputs, state_h, state_c = encoder(encoder_inputs) # because return_sequences=False => encoder_outputs=state_h
#Decoder
decoder = layers.RepeatVector(output_timesteps)(state_h)
decoder_lstm = layers.LSTM(units, return_sequences=True, return_state=False)
decoder = decoder_lstm(decoder, initial_state=[state_h, state_c])
#Output
out = layers.TimeDistributed(layers.Dense(output_features))(decoder)
model = models.Model(encoder_inputs, out)
</code></pre>
<p>So the core idea here is :</p>
<ol>
<li>Encode the time series into two states : <code>state_h</code> and <code>state_c</code>. Check <a href="https://colah.github.io/posts/2015-08-Understanding-LSTMs/" rel="nofollow noreferrer">this</a> to understand the work of LSTM cells.</li>
<li>Repeat <code>state_h</code> the number of time steps you want to forecast</li>
<li>Decode using an LSTM with initial states calculated by the encoder</li>
<li>Use a Dense layer to shape the number of needed features for each time step (a minimal compile-and-fit sketch follows this list)</li>
</ol>
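<p>As a quick smoke test of the model above (random tensors only, to check that the shapes flow through; this is an illustration, not part of the original architecture):</p>
<pre><code>import numpy as np

model.compile(optimizer="adam", loss="mse")

# dummy batch: 8 input sequences of 125 steps, 8 target sequences of 50 steps
x = np.random.rand(8, input_timesteps, input_features)
y = np.random.rand(8, output_timesteps, output_features)
model.fit(x, y, epochs=1, batch_size=4)
print(model.predict(x).shape)  # expected: (8, 50, 2)
</code></pre>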
<p>I advise you to test your architecture and visualize it with <code>model.summary()</code> and <code>tf.keras.utils.plot_model(model, show_shapes=True)</code>. It gives you good representations like, for the summary:</p>
<pre><code>Layer (type) Output Shape Param # Connected to
==================================================================================================
input_5 (InputLayer) [(None, 125, 2)] 0
__________________________________________________________________________________________________
lstm_8 (LSTM) [(None, 100), (None, 41200 input_5[0][0]
__________________________________________________________________________________________________
repeat_vector_4 (RepeatVector) (None, 50, 100) 0 lstm_8[0][1]
__________________________________________________________________________________________________
lstm_9 (LSTM) (None, 50, 100) 80400 repeat_vector_4[0][0]
lstm_8[0][1]
lstm_8[0][2]
__________________________________________________________________________________________________
time_distributed_4 (TimeDistrib (None, 50, 2) 202 lstm_9[0][0]
==================================================================================================
Total params: 121,802
Trainable params: 121,802
Non-trainable params: 0
__________________________________________________________________________________________________
</code></pre>
<p>and the model plotted :</p>
<p><a href="https://i.stack.imgur.com/IDpvr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IDpvr.png" alt="enter image description here" /></a></p> | 2020-12-12 22:42:57.727000+00:00 | 2020-12-12 22:42:57.727000+00:00 | null | null | 65,269,119 | <p>I need to use encoder-decoder structure to predict 2D trajectories. As almost all available tutorials are related to NLP -with sparse vectors-, I couldn't be sure about how to adapt the solutions to a continuous data.</p>
<p>In addition to my ignorance of sequence-to-sequence models, the <code>embedding</code> process for words confused me even more. I have a dataset that consists of 3,000,000 samples, each having <code>x-y</code> coordinates (-1, 1) with <code>125</code> observations, which means the shape of each sample is <code>(125, 2)</code>. I thought I could think of this as 125 words with 2-dimensional, already-embedded words, but the encoder and the decoder in this <a href="https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html" rel="nofollow noreferrer">Keras Tutorial</a> expect 3D arrays as <code>(num_pairs, max_english_sentence_length, num_english_characters)</code>.</p>
<p>I doubt I need to train each sample <code>(125, 2)</code> separately with this model, as the way Google's search bar does with only one word written.</p>
<p>As far as I understood, an encoder is a <code>many-to-one</code> type model and a decoder is a <code>one-to-many</code> type model. I need to get a memory state <code>c</code> and a hidden state <code>h</code> as vectors(?). Then I should use those vectors as input to the decoder and extract predictions in the shape of (x,y), as many as I determine as encoder output.</p>
<p>I'd be so thankful if someone could give an example of an encoder-decoder LSTM architecture over the shape of my dataset, especially in terms of dimensions required for encoder-decoder inputs and outputs, particulary on Keras model if possible.</p> | 2020-12-12 19:44:35.813000+00:00 | 2020-12-12 22:42:57.727000+00:00 | null | keras|lstm|encoder|decoder|encoder-decoder | ['https://arxiv.org/abs/1508.04025', 'https://colah.github.io/posts/2015-08-Understanding-LSTMs/', 'https://i.stack.imgur.com/IDpvr.png'] | 3 |
36,391,033 | <p>Yes, it is possible, assuming you have ground-truth data for training. As early as 2006, there were publications on this subject, but using Markov Random Fields. You can read it <a href="https://papers.nips.cc/paper/2921-learning-depth-from-single-monocular-images.pdf" rel="noreferrer">here</a>. More recently is was done with <a href="https://papers.nips.cc/paper/5539-depth-map-prediction-from-a-single-image-using-a-multi-scale-deep-network.pdf" rel="noreferrer">Convolutional Neural Networks</a> and <a href="http://arxiv.org/abs/1502.07411" rel="noreferrer">Deep Convolutional Neural Fields</a>. Those 3 examples estimate the depth of every single pixel on the images, so they need the correct measurement for each of them.</p>
<p>If you're using a planar range finder, you'll have the correct depth for various columns of your image, according to your laser's resolution. This may imply that you need to train your NN with single rows of pixels from your images instead of full images. For full scene depth extraction, people usually employ binocular cameras or something like Kinect (just for training, of course).</p> | 2016-04-03 20:52:02.580000+00:00 | 2016-04-03 20:52:02.580000+00:00 | null | null | 36,389,893 | <p>Is it feasible to use a neural network to estimate distance in a still image or video stream?</p>
<p>I have a laser ranger finder that provides video output as well as a distance measurement. However, the distance measurement requires shining a laser into the environment, which isn't always ideal or allowed. I'd like to have an option to switch this into "passive" mode where the image is fed to a neural network, which then provides a distance estimation without the need to activate the laser. The network would initially be trained on the image+distance pair from the ranger finder in active mode.</p>
<p>I'm no expert on neural networks, and although Google finds lots of uses for neural networks with image classification and pose estimation, I can't find any prior art for distance estimation. Does this seem practical, or am I wasting my time? Would a basic feed-forward network with one input per N pixels be enough or would I need a different architecture?</p> | 2016-04-03 19:09:14.817000+00:00 | 2016-04-03 20:52:02.580000+00:00 | null | algorithm|neural-network | ['https://papers.nips.cc/paper/2921-learning-depth-from-single-monocular-images.pdf', 'https://papers.nips.cc/paper/5539-depth-map-prediction-from-a-single-image-using-a-multi-scale-deep-network.pdf', 'http://arxiv.org/abs/1502.07411'] | 3 |
67,274,126 | <p>I ran into a similar problem. The issue with WGAN is that the weight clipping method really cripples the model's ability to learn. The learning can saturate very quickly. Weights are updated via backprop after every epoch, but then they are clipped. I would suggest experimenting with more extreme clipping values. Try [-1, 1] and [-0.0001, 0.0001]. You will surely see a change. An example of saturating:
<a href="https://i.stack.imgur.com/QwTms.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QwTms.png" alt="WGAN Critic loss for 100000 epochs" /></a></p>
<p>As you can see, the loss value went to 0.999975 in the first few hundred iterations and then didn't move at all for 100000 iterations. I tried experimenting with different clipping values; the loss values were different but the behavior was the same. When I tried [-0.005, 0.005], the loss saturated at around 1; for [-0.02, 0.02], around 0.8.</p>
<p>Your implementation looks correct, but sometimes in GANs there's only so much you can do. So I suggest you try WGAN with gradient penalty (WGAN-GP). It has a nice method of enforcing K-Lipschitz continuity by keeping the L2-norm of the critic's gradient with respect to the interpolated images close to 1 (check out the <a href="https://arxiv.org/abs/1704.00028" rel="nofollow noreferrer">paper</a>).
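<p>A minimal sketch of that penalty term (my own illustration, assuming a TF2/Keras critic and 4-D image batches; 10 is the weight used in the paper):</p>
<pre><code>import tensorflow as tf

def gradient_penalty(critic, real, fake, gp_weight=10.0):
    # random points on straight lines between real and fake samples
    alpha = tf.random.uniform([tf.shape(real)[0], 1, 1, 1], 0.0, 1.0)
    interp = real + alpha * (fake - real)
    with tf.GradientTape() as tape:
        tape.watch(interp)
        pred = critic(interp, training=True)
    grads = tape.gradient(pred, interp)
    # penalize deviation of the gradient norm from 1 (Lipschitz constraint)
    norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]) + 1e-12)
    return gp_weight * tf.reduce_mean((norm - 1.0) ** 2)
</code></pre>
<p>You add this term to the critic's loss instead of clipping the weights.</p>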
For evaluation in WGAN-GP, ideally you should see the critic's loss value start at some large negative number and then converge to 0.</p> | 2021-04-26 21:37:42.303000+00:00 | 2021-04-26 21:37:42.303000+00:00 | null | null | 61,809,378 | <p>I am trying to implement WGAN in Keras. I am using David Foster's Generative Deep Learning Book and <a href="https://github.com/eriklindernoren/Keras-GAN/blob/636ec0533df1d1ba2bfe4ec9ae7aa66bd7ee2177/wgan/wgan.py" rel="nofollow noreferrer">this code</a> as reference. I wrote down this simple code. However, whenever I start training the model, the accuracy is always 0 and the losses for Critic and Discriminator are ~0.</p>
<p>They are stuck at these numbers no matter how many epochs they train for. I tried various network configurations and different hyperparameters, but the results don't seem to change. Google did not help much either. I cannot pin down the source of this behavior.</p>
<p>This is the code I wrote.</p>
<pre><code>
from os.path import expanduser
import os
import struct as st
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.layers import Input, Dense, Reshape, Flatten, Dropout
from keras.layers import BatchNormalization, Activation, ZeroPadding2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import UpSampling2D, Conv2D
from keras.models import Sequential, Model
from keras.optimizers import RMSprop
import keras.backend as K
def wasserstein_loss(y_true, y_pred):
return K.mean(y_true * y_pred)
class WGAN:
def __init__(self):
# Data Params
self.genInput=100
self.imChannels=1
self.imShape = (28,28,1)
# Build Models
self.onBuildDiscriminator()
self.onBuildGenerator()
self.onBuildGAN()
pass
def onBuildGAN(self):
if self.mGenerator is None or self.mDiscriminator is None: raise Exception('Generator Or Descriminator Uninitialized.')
self.mDiscriminator.trainable=False
self.mGAN=Sequential()
self.mGAN.add(self.mGenerator)
self.mGAN.add(self.mDiscriminator)
ganOptimizer=RMSprop(lr=0.00005)
self.mGAN.compile(loss=wasserstein_loss, optimizer=ganOptimizer, metrics=['accuracy'])
print('GAN Model')
self.mGAN.summary()
pass
def onBuildGenerator(self):
self.mGenerator=Sequential()
self.mGenerator.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.genInput))
self.mGenerator.add(Reshape((7, 7, 128)))
self.mGenerator.add(UpSampling2D())
self.mGenerator.add(Conv2D(128, kernel_size=4, padding="same"))
self.mGenerator.add(BatchNormalization(momentum=0.8))
self.mGenerator.add(Activation("relu"))
self.mGenerator.add(UpSampling2D())
self.mGenerator.add(Conv2D(64, kernel_size=4, padding="same"))
self.mGenerator.add(BatchNormalization(momentum=0.8))
self.mGenerator.add(Activation("relu"))
self.mGenerator.add(Conv2D(self.imChannels, kernel_size=4, padding="same"))
self.mGenerator.add(Activation("tanh"))
print('Generator Model')
self.mGenerator.summary()
pass
def onBuildDiscriminator(self):
self.mDiscriminator = Sequential()
self.mDiscriminator.add(Conv2D(16, kernel_size=3, strides=2, input_shape=self.imShape, padding="same"))
self.mDiscriminator.add(LeakyReLU(alpha=0.2))
self.mDiscriminator.add(Dropout(0.25))
self.mDiscriminator.add(Conv2D(32, kernel_size=3, strides=2, padding="same"))
self.mDiscriminator.add(ZeroPadding2D(padding=((0,1),(0,1))))
self.mDiscriminator.add(BatchNormalization(momentum=0.8))
self.mDiscriminator.add(LeakyReLU(alpha=0.2))
self.mDiscriminator.add(Dropout(0.25))
self.mDiscriminator.add(Conv2D(64, kernel_size=3, strides=2, padding="same"))
self.mDiscriminator.add(BatchNormalization(momentum=0.8))
self.mDiscriminator.add(LeakyReLU(alpha=0.2))
self.mDiscriminator.add(Dropout(0.25))
self.mDiscriminator.add(Conv2D(128, kernel_size=3, strides=1, padding="same"))
self.mDiscriminator.add(BatchNormalization(momentum=0.8))
self.mDiscriminator.add(LeakyReLU(alpha=0.2))
self.mDiscriminator.add(Dropout(0.25))
self.mDiscriminator.add(Flatten())
self.mDiscriminator.add(Dense(1))
disOptimizer=RMSprop(lr=0.00005)
self.mDiscriminator.compile(loss=wasserstein_loss, optimizer=disOptimizer, metrics=['accuracy'])
print('Discriminator Model')
self.mDiscriminator.summary()
pass
def fit(self, trainData, nEpochs=1000, batchSize=64):
lblForReal = -np.ones((batchSize, 1))
lblForGene = np.ones((batchSize, 1))
for ep in range(1, nEpochs+1):
for __ in range(5):
# Get Valid Images
validImages = trainData[ np.random.randint(0, trainData.shape[0], batchSize) ]
# Get Generated Images
noiseForGene=np.random.normal(0, 1, size=(batchSize, self.genInput))
geneImages=self.mGenerator.predict(noiseForGene)
# Train Critic On Valid And Generated Images With Labels -1 And 1 Respectively
disValidLoss=self.mDiscriminator.train_on_batch(validImages, lblForReal)
disGeneLoss=self.mDiscriminator.train_on_batch(geneImages, lblForGene)
# Perform Critic Weight Clipping
for l in self.mDiscriminator.layers:
weights = l.get_weights()
weights = [np.clip(w, -0.01, 0.01) for w in weights]
l.set_weights(weights)
# Train Generator Using Combined Model
geneLoss=self.mGAN.train_on_batch(noiseForGene, lblForReal)
print(' Epoch', ep, 'Critic Valid Loss,Acc', disValidLoss, 'Critic Generated Loss,Acc', disGeneLoss, 'Generator Loss,Acc', geneLoss)
pass
pass
if __name__ == '__main__':
(trainData, __), (__, __) = mnist.load_data()
trainData = (trainData.astype(np.float32)/127.5) - 1
trainData = np.expand_dims(trainData, axis=3)
WGan = WGAN()
WGan.fit(trainData)
</code></pre>
<p>I get output very similar to the following for all configs that I try.</p>
<pre><code>
Epoch 1 Critic Valid Loss,Acc [-0.00016362152, 0.0] Critic Generated Loss,Acc [0.0003417502, 0.0] Generator Loss,Acc [-0.00016735379, 0.0]
Epoch 2 Critic Valid Loss,Acc [-0.0001719332, 0.0] Critic Generated Loss,Acc [0.0003365979, 0.0] Generator Loss,Acc [-0.00017250411, 0.0]
Epoch 3 Critic Valid Loss,Acc [-0.00017473527, 0.0] Critic Generated Loss,Acc [0.00032945914, 0.0] Generator Loss,Acc [-0.00017612436, 0.0]
Epoch 4 Critic Valid Loss,Acc [-0.00017181305, 0.0] Critic Generated Loss,Acc [0.0003266656, 0.0] Generator Loss,Acc [-0.00016987178, 0.0]
Epoch 5 Critic Valid Loss,Acc [-0.0001683443, 0.0] Critic Generated Loss,Acc [0.00032702673, 0.0] Generator Loss,Acc [-0.00016638976, 0.0]
Epoch 6 Critic Valid Loss,Acc [-0.00017005506, 0.0] Critic Generated Loss,Acc [0.00032805002, 0.0] Generator Loss,Acc [-0.00017040147, 0.0]
Epoch 7 Critic Valid Loss,Acc [-0.00017353195, 0.0] Critic Generated Loss,Acc [0.00033711304, 0.0] Generator Loss,Acc [-0.00017537423, 0.0]
Epoch 8 Critic Valid Loss,Acc [-0.00017059325, 0.0] Critic Generated Loss,Acc [0.0003263024, 0.0] Generator Loss,Acc [-0.00016974319, 0.0]
Epoch 9 Critic Valid Loss,Acc [-0.00017530039, 0.0] Critic Generated Loss,Acc [0.00032463064, 0.0] Generator Loss,Acc [-0.00017845634, 0.0]
Epoch 10 Critic Valid Loss,Acc [-0.00017530067, 0.0] Critic Generated Loss,Acc [0.00033131015, 0.0] Generator Loss,Acc [-0.00017526663, 0.0]
</code></pre> | 2020-05-14 23:53:37.330000+00:00 | 2021-04-26 21:37:42.303000+00:00 | 2020-05-19 17:09:22.850000+00:00 | tensorflow|keras|deep-learning|neural-network|generative-adversarial-network | ['https://i.stack.imgur.com/QwTms.png', 'https://arxiv.org/abs/1704.00028'] | 2 |
31,479,254 | <p>You are confusing two types of measures: internal and external criteria, as defined for clustering problems (see <a href="http://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html" rel="nofollow">this page</a>).</p>
<ul>
<li>Internal criterion: blindly assesses the quality of the detected community structure. This means you don't have any reference structure to which you could compare the estimated structure. Ex.: Modularity, Conductance...</li>
<li>External criterion: you compare the estimated community structure to a reference community structure (aka. ground truth, gold standard, etc.). Ex.: NMI, (A)RI, purity...</li>
</ul>
<p>There's not a 'best' measure: they are all different, and rely on a different notion of how the performance of a community detection algorithm should be quantified. A more relevant question would be: which measures are appropriate to your situation?</p>
<p>Indeed, the measures you list all require a partition of the node set. You mentioned your algorithm ignores certain nodes, so this could be a problem. A basic workaround would consist in considering that each ignored node constitutes its own community. Alternatively, certain measures defined for overlapping community structures are able to handle this case.</p>
<p>Another important point is the data you're using for testing your algorithm. Do you have the actual community structures for these data? If not, then you can't use external criteria at all.</p>
<p>Note that most external criteria consider that a community structure is just a partition (in the mathematical sense) of the node set. Consequently, they rely on the comparison of the reference and estimated <em>partitions</em>. This is due to the fact that they all originate from the field of cluster analysis. The problem with this is that they completely fail to take the network links into account. Yet, a community structure is not just a partition of the node set: the way links are distributed over this partition is very important. For this reason, you might want to assess your community structure in a more qualitative way, for instance by comparing the topological properties of the detected communities (see <a href="http://arxiv.org/abs/1206.4987" rel="nofollow">Orman'12</a>). You can alternatively change the existing measures to make them take links into account (see <a href="http://arxiv.org/abs/1303.5441" rel="nofollow">Labatut'13</a>). Not that I particularly want to cite myself, but the papers seem relevant.</p>
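<p>To make the external criteria concrete, here is a quick illustration in Python with scikit-learn (the question is R-focused, so treat this purely as a sketch; the igraph route is mentioned just below):</p>
<pre><code>from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

truth = [0, 0, 0, 1, 1, 2, 2]   # reference community of each node
found = [1, 1, 0, 0, 0, 2, 2]   # estimated community of each node

print(normalized_mutual_info_score(truth, found))  # NMI
print(adjusted_rand_score(truth, found))           # ARI
</code></pre>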
<p>Regarding the concrete processing of those measures, you might want to look at the documentation of the tool you're using to perform the community detection: some of them are bundled with performance measures. For instance, if you use igraph, there is a <a href="http://igraph.org/r/doc/compare.html" rel="nofollow">function just for that</a>.</p> | 2015-07-17 15:23:40.277000+00:00 | 2015-07-17 15:23:40.277000+00:00 | null | null | 28,952,104 | <p>I want to evaluate and compare the result of my community detection algorithm in R. My algorithm doesn't allow overlapping, and there are some nodes that are not treated. For example, for the Zachary Karate club, I have 1 node not treated.
I've found a lot of metrics (NMI, ARI, Modularity (Q), Purity, Rand Index...), and I don't know which ones are the best. Currently, I'm using the modularity, purity and the Rand Index. </p>
<p><strong>Are those chosen evaluation metrics sufficient?</strong> </p>
<p>For example, the Rand Index is RI(P,R) = (a+d)/(a+b+c+d), where P = {p1, p2, . . . , pk} is the output of a community detection algorithm applied to graph G =< V,E >, R = {r1, r2, . . . , rn} is the real community structure, and a, b, c and d are the numbers of pairs of nodes that are, respectively: in the same community according to both P and R; in the same community according to P but in different communities according to R; in different communities according to P but in the same community according to R; and in different communities according to both P and R. </p>
<p><strong>So if I deal with a large graph, how can I calculate those values? Where can I find R(the real community structure)?</strong></p> | 2015-03-09 21:29:49.350000+00:00 | 2015-07-17 15:23:40.277000+00:00 | 2015-03-10 11:41:44.990000+00:00 | detection|ranking|evaluation|modularity | ['http://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html', 'http://arxiv.org/abs/1206.4987', 'http://arxiv.org/abs/1303.5441', 'http://igraph.org/r/doc/compare.html'] | 4 |
58,609,627 | <p>It depends whether the method involves training a model on style data or not.</p>
<p>At least <a href="https://arxiv.org/abs/1508.06576" rel="nofollow noreferrer">one method</a> does not require that at all, instead training a network on a classification task and then inferring the style of an image during the style transfer. So you can use a model that has been pre-trained on images that you do not have, and then use it and your images to perform the style transfers.</p>
<p>There is some ready-to use code to do that : <a href="https://github.com/titu1994/Neural-Style-Transfer" rel="nofollow noreferrer">example</a></p> | 2019-10-29 14:36:57.187000+00:00 | 2019-10-31 10:13:37.563000+00:00 | 2019-10-31 10:13:37.563000+00:00 | null | 58,609,431 | <p>I have a set of around 60 fractals (e.g</p>
<p><a href="https://i.stack.imgur.com/87uAU.png)" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/87uAU.png)" alt="Fractal"></a></p>
<p>And a set of 60 snacks (e.g</p>
<p><a href="https://i.stack.imgur.com/J8lwW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J8lwW.png" alt="enter image description here"></a></p>
<p>And I want to apply the style of the fractal on the snack.</p>
<p>Is this possible?
Or must I take specifically images from an existing data set with a pre-trained images model?</p>
<p>Thanks</p> | 2019-10-29 14:26:58.783000+00:00 | 2019-10-31 10:13:37.563000+00:00 | null | python|neural-network|deep-learning|style-transfer | ['https://arxiv.org/abs/1508.06576', 'https://github.com/titu1994/Neural-Style-Transfer'] | 2 |
59,643,459 | <p>It is still not fully understood what multilingual BERT does and why it works. Recently there were two papers (the first <a href="https://arxiv.org/pdf/1906.01502.pdf" rel="nofollow noreferrer">from June</a>, the second <a href="https://arxiv.org/pdf/1911.03310.pdf" rel="nofollow noreferrer">from November</a>) that play around with this a little bit.</p>
<p>From the papers, it seems that the vectors tend to cluster according to languages (and even language families), so it is super easy to classify the language. This is the clustering that is shown <a href="https://arxiv.org/pdf/1911.03310.pdf" rel="nofollow noreferrer">in the paper</a>:</p>
<p><a href="https://i.stack.imgur.com/gHGJy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gHGJy.png" alt="enter image description here"></a></p>
<p>Because of that, you can subtract the mean of the language from the representation and end up with a somewhat cross-lingual vector that, as both papers show, can be used for cross-lingual sentence retrieval.</p>
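<p>As a small illustration of that centering trick (pure numpy; the random arrays are just placeholders standing in for real MBERT sentence vectors, e.g. mean-pooled subword outputs):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
eng_vecs = rng.normal(size=(100, 768))  # placeholder English sentence vectors
heb_vecs = rng.normal(size=(100, 768))  # placeholder Hebrew sentence vectors

# subtract each language's mean to remove the language-identity component
eng_c = eng_vecs - eng_vecs.mean(axis=0)
heb_c = heb_vecs - heb_vecs.mean(axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(eng_c[0], heb_c[0]))  # compare centered vectors across languages
</code></pre>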
<p>Also, it seems a thousand parallel sentences (e.g., in both languages) are enough to learn a projection between the languages. Note that they did not use the <code>[CLS]</code> vector, but they mean-pooled the vectors for individual subwords.</p> | 2020-01-08 10:03:09.140000+00:00 | 2020-01-08 10:03:09.140000+00:00 | null | null | 59,619,760 | <p>Playing around with BERT, I downloaded the Huggingface Multilingual Bert and entered three sentences, saving their sentence vectors (the embedding of <code>[CLS]</code>), then translated them via Google Translate, passed them through the model and saved their sentence vectors.</p>
<p>I then compared the results using cosine similarity.</p>
<p>I was surprised to see that each sentence vector was pretty far from the one generated from the sentence translated from it (0.15-0.27 cosine distance) while different sentences from the same language were quite close indeed (0.02-0.04 cosine distance).</p>
<p>So instead of having sentences of similar meaning (but different languages) grouped together (in 768 dimensional space ;) ), dissimilar sentences of the same language are closer. </p>
<p>To my understanding, the whole point of Multilingual BERT is inter-language transfer learning - for example, training a model (say, an FC net) on representations in one language and having that model be readily usable in other languages. </p>
<p>How can that work if sentences (of different languages) with the exact same meaning are mapped further apart than dissimilar sentences of the same language? </p>
<p>My code:</p>
<pre><code>import torch
import transformers
from transformers import AutoModel,AutoTokenizer
bert_name="bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(bert_name)
MBERT = AutoModel.from_pretrained(bert_name)
#Some silly sentences
eng1='A cat jumped from the trees and startled the tourists'
e=tokenizer.encode(eng1, add_special_tokens=True)
ans_eng1=MBERT(torch.tensor([e]))
eng2='A small snake whispered secrets to large cats'
t=tokenizer.tokenize(eng2)
e=tokenizer.encode(eng2, add_special_tokens=True)
ans_eng2=MBERT(torch.tensor([e]))
eng3='A tiger sprinted from the bushes and frightened the guests'
e=tokenizer.encode(eng3, add_special_tokens=True)
ans_eng3=MBERT(torch.tensor([e]))
# Translated to Hebrew with Google Translate
heb1='ืืชืื ืงืคืฅ ืืืขืฅ ืืืืืื ืืช ืืชืืืจืื'
e=tokenizer.encode(heb1, add_special_tokens=True)
ans_heb1=MBERT(torch.tensor([e]))
heb2='ื ืืฉ ืงืื ืืืฉ ืกืืืืช ืืืชืืืื ืืืืืื'
e=tokenizer.encode(heb2, add_special_tokens=True)
ans_heb2=MBERT(torch.tensor([e]))
heb3='ื ืืจ ืจืฅ ืืืฉืืืื ืืืคืืื ืืช ืืืืจืืื'
e=tokenizer.encode(heb3, add_special_tokens=True)
ans_heb3=MBERT(torch.tensor([e]))
from scipy import spatial
import numpy as np
# Compare Sentence Embeddings
result = spatial.distance.cosine(ans_eng1[1].data.numpy(), ans_heb1[1].data.numpy())
print ('Eng1-Heb1 - Translated sentences',result)
result = spatial.distance.cosine(ans_eng2[1].data.numpy(), ans_heb2[1].data.numpy())
print ('Eng2-Heb2 - Translated sentences',result)
result = spatial.distance.cosine(ans_eng3[1].data.numpy(), ans_heb3[1].data.numpy())
print ('Eng3-Heb3 - Translated sentences',result)
print ("\n---\n")
result = spatial.distance.cosine(ans_heb1[1].data.numpy(), ans_heb2[1].data.numpy())
print ('Heb1-Heb2 - Different sentences',result)
result = spatial.distance.cosine(ans_heb1[1].data.numpy(), ans_heb3[1].data.numpy())
print ('Heb1-Heb3 - Similar sentences',result)
print ("\n---\n")
result = spatial.distance.cosine(ans_eng1[1].data.numpy(), ans_eng2[1].data.numpy())
print ('Eng1-Eng2 - Different sentences',result)
result = spatial.distance.cosine(ans_eng1[1].data.numpy(), ans_eng3[1].data.numpy())
print ('Eng1-Eng3 - Similar sentences',result)
#Output:
"""
Eng1-Heb1 - Translated sentences 0.2074061632156372
Eng2-Heb2 - Translated sentences 0.15557605028152466
Eng3-Heb3 - Translated sentences 0.275478720664978
---
Heb1-Heb2 - Different sentences 0.044616520404815674
Heb1-Heb3 - Similar sentences 0.027982771396636963
---
Eng1-Eng2 - Different sentences 0.027982771396636963
Eng1-Eng3 - Similar sentences 0.024596810340881348
"""
</code></pre>
<p>P.S.</p>
<p>At least the Heb1 was closer to Heb3 than to Heb2.
This was also observed for the English equivalents, but less so. </p> | 2020-01-06 22:19:23.577000+00:00 | 2020-01-08 10:03:09.140000+00:00 | null | python|deep-learning|pytorch|multilingual|bert-language-model | ['https://arxiv.org/pdf/1906.01502.pdf', 'https://arxiv.org/pdf/1911.03310.pdf', 'https://arxiv.org/pdf/1911.03310.pdf', 'https://i.stack.imgur.com/gHGJy.png'] | 4 |
65,210,085 | <p>QuiteRSS is able to parse the information correctly.</p>
<p>Note also that instead of using the url <code>http://export.arxiv.org/rss/hep-ph</code> (for example), you can try to use <code>http://export.arxiv.org/api/query?search_query=(cat:hep-th)&sortBy=lastUpdatedDate&sortOrder=descending&max_results=200</code>, with the flags adjusted as you desire. Note that I haven't confirmed that the two feed outputs are identical (i.e. nothing falls through the cracks). The second option is just arXiv search results in an RSS-like format</p> | 2020-12-09 03:09:37.760000+00:00 | 2020-12-09 03:09:37.760000+00:00 | null | null | 36,637,572 | <p>Is there any RSS feed reader that is compatible with Arxiv rss feeds which have the annoyance of using html tags for authors? So I want a reader that does not display the author as <code><a href="http://arxiv.org/find/quant-ph/1.</code>.. but rather author's name, I do not really care about the link. I tried outlook, Rssowl, various plugins chrome extensions but either the extensions are clumsy or they cannot handle the html tags in author. I prefer a program, not some web site rss feed reader.</p> | 2016-04-15 03:05:23+00:00 | 2020-12-09 03:09:37.760000+00:00 | null | rss|rss-reader | [] | 0 |
53,893,903 | <p>Feeder (from F-Droid) seems to work with arXiv;
spaRSS fails</p> | 2018-12-22 07:41:12.833000+00:00 | 2018-12-22 07:41:12.833000+00:00 | null | null | 36,637,572 | <p>Is there any RSS feed reader that is compatible with Arxiv rss feeds which have the annoyance of using html tags for authors? So I want a reader that does not display the author as <code><a href="http://arxiv.org/find/quant-ph/1.</code>.. but rather author's name, I do not really care about the link. I tried outlook, Rssowl, various plugins chrome extensions but either the extensions are clumsy or they cannot handle the html tags in author. I prefer a program, not some web site rss feed reader.</p> | 2016-04-15 03:05:23+00:00 | 2020-12-09 03:09:37.760000+00:00 | null | rss|rss-reader | [] | 0 |
37,906,334 | <p>The Vienna RSS reader appears to display the author's name correctly. (Tested with Vienna 3.1.4 on the arxiv.org cs updates feed).</p>
<p><a href="https://github.com/ViennaRSS/vienna-rss" rel="nofollow">link to Vienna on github</a></p> | 2016-06-19 10:40:04.467000+00:00 | 2016-06-19 10:40:04.467000+00:00 | null | null | 36,637,572 | <p>Is there any RSS feed reader that is compatible with Arxiv rss feeds which have the annoyance of using html tags for authors? So I want a reader that does not display the author as <code><a href="http://arxiv.org/find/quant-ph/1.</code>.. but rather author's name, I do not really care about the link. I tried outlook, Rssowl, various plugins chrome extensions but either the extensions are clumsy or they cannot handle the html tags in author. I prefer a program, not some web site rss feed reader.</p> | 2016-04-15 03:05:23+00:00 | 2020-12-09 03:09:37.760000+00:00 | null | rss|rss-reader | ['https://github.com/ViennaRSS/vienna-rss'] | 1 |
32,014,239 | <p>A third party that is monitoring traffic may also be able to determine the page visited by examining your traffic and comparing it with the traffic another user has when visiting the site. For example, if there were only 2 pages on a site, one much larger than the other, then comparing the size of the data transfer would tell which page you visited. There are ways this could be hidden from the third party, but they're not normal server or browser behaviour. See for example this paper from SciRate, <a href="https://scirate.com/arxiv/1403.0297" rel="noreferrer">https://scirate.com/arxiv/1403.0297</a>.</p>
<p>In general other answers are correct, practically though this paper shows that pages visited (ie URL) can be determined quite effectively.</p> | 2015-08-14 16:03:26.780000+00:00 | 2015-08-14 16:03:26.780000+00:00 | null | null | 499,591 | <p>Are all URLs encrypted when using TLS/SSL (HTTPS) encryption? I would like to know because I want all URL data to be hidden when using TLS/SSL (HTTPS).</p>
<p>If TLS/SSL gives you total URL encryption then I don't have to worry about hiding confidential information from URLs.</p> | 2009-01-31 21:15:34.577000+00:00 | 2022-07-10 19:43:29.703000+00:00 | 2019-04-29 18:00:30.847000+00:00 | ssl|https|httprequest | ['https://scirate.com/arxiv/1403.0297'] | 1 |
46,219,566 | <p>This is very much expected. <strong>This problem is called over-fitting</strong>. This is when your model starts "memorizing" the training examples without actually learning anything useful for the test set. In fact, this is exactly why we use a test set in the first place: if we have a complex enough model, we can always fit the data perfectly, even if not meaningfully. The test set is what tells us what the model has actually learned. </p>
<p>It's also useful to use a <strong>validation set</strong>, which is like a test set, but you use it to find out when to stop training. When the validation error stops lowering, you stop training. <strong>Why not use the test set for this?</strong> The test set is there to tell you how well your model would do in the real world. If you start using information from the test set to choose things about your training process, then it's like you're cheating, and you will be punished by your test error no longer representing your real-world error.</p>
<p>Lastly, <strong>convolutional neural networks are notorious for their ability to over-fit</strong>. It has been shown that conv-nets can reach zero training error even if you shuffle the labels, and even on <em>random pixels</em>. That means there doesn't have to be a real pattern for the conv-net to fit it; it can simply memorize. This means that <strong>you have to regularize a conv-net</strong>. That is, you have to use things like <strong>Dropout</strong>, <strong>batch normalization</strong> and <strong>early stopping</strong>.</p>
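<p>For instance, a tiny sketch of early stopping plus Dropout with the Keras API (toy data and a toy model, not your architecture):</p>
<pre><code>import numpy as np
import tensorflow as tf

x = np.random.rand(200, 10)                 # toy inputs
y = np.random.randint(0, 2, size=(200, 1))  # toy labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dropout(0.5),           # regularization
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# stop when the validation error stops improving
stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                        restore_best_weights=True)
model.fit(x, y, validation_split=0.2, epochs=100, callbacks=[stop])
</code></pre>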
<p>I'll leave a few links if you want to read more:</p>
<p>Over-fitting, validation, early stopping
<a href="https://elitedatascience.com/overfitting-in-machine-learning" rel="nofollow noreferrer">https://elitedatascience.com/overfitting-in-machine-learning</a></p>
<p>Conv-nets fitting random labels:
<a href="https://arxiv.org/pdf/1611.03530.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.03530.pdf</a>
(this paper is a bit advanced, but it's interesting to skim through)
<p>P.S. to actually improve your test accuracy you will need to change your model or train with data augmentation. You might want to try transfer learning as well.</p> | 2017-09-14 12:45:03.723000+00:00 | 2017-09-14 13:20:22.417000+00:00 | 2017-09-14 13:20:22.417000+00:00 | null | 46,216,981 | <p>While training a convolutional neural network following <a href="https://arxiv.org/pdf/1412.6622.pdf" rel="nofollow noreferrer">this</a> article, the accuracy of the training set increases too much while the accuracy on the test set settles. </p>
<p>Below is an example with 6400 training examples, <strong>randomly chosen at each epoch</strong> (so some examples might be seen at the previous epochs, some might be new), and 6400 <strong>same test examples</strong>.</p>
<p>For a bigger data set (64000 or 100000 training examples), the increase in training accuracy is even more abrupt, going to 98 on the third epoch.</p>
<p>I also tried using <strong>the same 6400 training examples</strong> each epoch, just randomly shuffled. As expected, the result is worse.</p>
<pre><code>epoch 3 loss 0.54871 acc 79.01
learning rate 0.1
nr_test_examples 6400
TEST epoch 3 loss 0.60812 acc 68.48
nr_training_examples 6400
tb 91
epoch 4 loss 0.51283 acc 83.52
learning rate 0.1
nr_test_examples 6400
TEST epoch 4 loss 0.60494 acc 68.68
nr_training_examples 6400
tb 91
epoch 5 loss 0.47531 acc 86.91
learning rate 0.05
nr_test_examples 6400
TEST epoch 5 loss 0.59846 acc 68.98
nr_training_examples 6400
tb 91
epoch 6 loss 0.42325 acc 92.17
learning rate 0.05
nr_test_examples 6400
TEST epoch 6 loss 0.60667 acc 68.10
nr_training_examples 6400
tb 91
epoch 7 loss 0.38460 acc 95.84
learning rate 0.05
nr_test_examples 6400
TEST epoch 7 loss 0.59695 acc 69.92
nr_training_examples 6400
tb 91
epoch 8 loss 0.35238 acc 97.58
learning rate 0.05
nr_test_examples 6400
TEST epoch 8 loss 0.60952 acc 68.21
</code></pre>
<p>This is my model (I'm using RELU activation after each convolution):</p>
<pre><code>conv 5x5 (1, 64)
max-pooling 2x2
dropout
conv 3x3 (64, 128)
max-pooling 2x2
dropout
conv 3x3 (128, 256)
max-pooling 2x2
dropout
conv 3x3 (256, 128)
dropout
fully_connected(18*18*128, 128)
dropout
output(128, 128)
</code></pre>
<p><strong>What could be the cause?</strong></p>
<p>I'm using Momentum Optimizer with learning rate decay:</p>
<pre><code> batch = tf.Variable(0, trainable=False)
train_size = 6400
learning_rate = tf.train.exponential_decay(
0.1, # Base learning rate.
batch * batch_size, # Current index into the dataset.
train_size*5, # Decay step.
0.5, # Decay rate.
staircase=True)
# Use simple momentum for the optimization.
optimizer = tf.train.MomentumOptimizer(learning_rate,
0.9).minimize(cost, global_step=batch)
</code></pre> | 2017-09-14 10:44:24.417000+00:00 | 2017-09-14 13:22:27.960000+00:00 | 2017-09-14 13:22:27.960000+00:00 | machine-learning|tensorflow|conv-neural-network|training-data | ['https://elitedatascience.com/overfitting-in-machine-learning', 'https://arxiv.org/pdf/1611.03530.pdf'] | 2 |
35,999,074 | <p>Package author here :)</p>
<p>That is certainly a mistake on our part: the user <em>SHOULD</em> be able to set <code>maxIt</code> explicitly, and doing so solves the problem in your case. I'll try to release a new version soon with the fix. (<strong>Update</strong>: pse 0.4.5 submitted to CRAN. Should be online soon)</p>
<p>However, it is important to note that there is another problem that might happen in cases such as this. The problem is that it may be impossible to generate samples from a multivariate distribution with fixed parameters and correlation matrix. In some situations, all correlation matrices are admissible (for instance, if all your <code>q</code> distributions are <code>qnorm</code>, you can specify the correlation terms freely). However, more complicated distributions sometimes do not allow for a free specification of correlation terms. This is the case, for your data, in the <code>qweibull</code> and <code>qlogis</code> distributions. So it might happen that the program will not converge, no matter how large <code>maxIt</code> is set.</p>
<p>We wrote a small section about this problem here: <a href="http://arxiv.org/abs/1210.6278" rel="nofollow">http://arxiv.org/abs/1210.6278</a></p>
<p>A better (but maybe denser) mathematical background can be found here: <a href="http://arxiv.org/abs/1010.6118" rel="nofollow">http://arxiv.org/abs/1010.6118</a></p> | 2016-03-14 22:10:53.140000+00:00 | 2016-03-14 22:49:33.940000+00:00 | 2016-03-14 22:49:33.940000+00:00 | null | 35,998,285 | <p>I noticed that function <code>LHS()</code> in package <strong>pse</strong> provides the argument <code>opts = list()</code> which accepts two arguments: <code>COR</code> and <code>eps</code>, the former being a correlation matrix and the latter the tolerance for the deviation between the prescribed correlation matrix and the result. Both feed down into a function called <code>LHScorcorr()</code> </p>
<p>I can run the function successfully for a high <code>eps</code> (e.g. 0.5, 1); however, the default is 0.005. At low values of <code>eps</code> (< 0.1 or so) the function does not converge (see error message at bottom). </p>
<p>I would like to allow it to converge at lower values, and looking at <code>LHScorcorr.R</code> it does take a <code>maxIt</code> argument (maximum iterations). It is set to <code>maxIt = 2*sqrt(dim(vars)[1])</code>; however, unlike <code>eps</code>, it seemingly cannot be fed into <code>LHS()</code>. According to <code>?LHS</code>, it also doesn't take <code>...</code> arguments.</p>
<p>1) Am I missing something or is there really no way of feeding a maxIt into the function call? 2) Is that even the cause of the problem I am encountering? 3) What are the "acceptable" ranges of eps to feed?</p>
<p>Here is a subset 'subdf' of my real-data data frame (I've tested my code on it):
<a href="https://drive.google.com/file/d/0BwjVzXSG-JMJX2lZSG44TjNjb28/view?usp=sharing" rel="nofollow">https://drive.google.com/file/d/0BwjVzXSG-JMJX2lZSG44TjNjb28/view?usp=sharing</a></p>
<p>And here is the function I have (currently still poorly commented since it is a modified version of one which takes more options): <a href="https://github.com/ewiik/flux/blob/master/functions/gasExchangeSensitivity.R" rel="nofollow">https://github.com/ewiik/flux/blob/master/functions/gasExchangeSensitivity.R</a></p>
<p>This is what I run:</p>
<pre><code>datacorr <- cor(subdf, method = "spearman",use = "complete.obs")
factors <- c("Temperature", "Conductivity", "pH", "meanWindMS", "SalCalc", "TICumol",
"Pressure", "pco2atm")
distro <- c("qweibull", "qnorm", "qlogis", "qnorm", "qweibull", "qnorm", "qnorm", "qunif")
props <- list( list(shape=4.02, scale=18.65), list(mean=1013, sd=499),
list(location=8.84,scale=0.31), list(mean=4.98, sd=0.83),
list(shape=2.13, scale=0.68), list(mean=3821, sd=1068),
list(mean=94.6, sd=0.17), list(min=356.9, max=402.2))
latincorr <- LHS(gasExchangeSens, factors = factors, N = 200, q = distro, q.arg = props,
nboot = 200, opts = list(COR = datacorr, eps = 0.1))
</code></pre>
<p><code>latincorr <-</code> at the above eps setting gives:</p>
<pre><code>In internal.LHScorcorr(vars, COR, l = l, eps = eps, it = it + 1, :
LHScorcorr: correlation does not converge after maximum iterations
</code></pre> | 2016-03-14 21:19:46.417000+00:00 | 2016-03-14 22:49:33.940000+00:00 | 2016-03-14 21:37:41.973000+00:00 | r|function|statistics|arguments | ['http://arxiv.org/abs/1210.6278', 'http://arxiv.org/abs/1010.6118'] | 2 |
18,034,239 | <p><strong>The problem</strong><br>
If the problem is a real task of scheduling meetings, then there are some mistakes in the way the question is posed.<br>
It's because the number of workers, and even the number of available tables and seats, is not a fundamental physical constant:</p>
<ul>
<li>someone may be fired and can't participate in the next meeting;</li>
<li>HR hired 10 more workers for a new project and all of them must participate in the next meeting;</li>
<li>Next week a renovation of the dining room starts, and only 20 tables will be available for the next month.</li>
</ul>
<p>So the problem sounds like this: "We need to schedule meetings for the next 5-10 working days in such a way that as many persons as possible meet persons they haven't talked to before, and as few persons as possible talk with the same person twice or more".</p>
<p>Therefore the problem isn't about generating a full set of permutations. The problem is about optimal planning for the next N meetings. </p>
<p><strong>Theory</strong><br>
The problem can be classified as a generic <a href="http://en.wikipedia.org/wiki/Mathematical_optimization" rel="nofollow noreferrer">mathematical optimization problem</a>. For that class of problems the goal is to find an optimal solution, presented as a set of argument values which provide the maximum or minimum value of one or more objective functions.<br>
To formulate the problem we must find the root of the question: </p>
<ul>
<li>for each person maximize a number of persons to meet with</li>
<li>for each pair of persons minimize a number of meetings</li>
</ul>
<p>Each of these goals talks about conversations between one pair of persons, so we must formulate the problem in terms of "meets".<br>
Denote <code>P</code> as the number of persons, with <code>i in [1..P]</code> and <code>j in [1..P]</code> as person indexes.<br>
Denote <code>M</code> as the number of meetings, with <code>m in [1 .. M]</code> as the meeting number.<br>
Then let's introduce <code>a(i,j,m) | i < j, i in [1..P], j in [1..P], m in [1..M]</code> as the fact that persons <code>i</code> and <code>j</code> meet at meeting <code>m</code>.
After that it's possible to formulate an objective function and bounding conditions for the problem.</p>
<p><strong>Math approach</strong><br>
Please note that an exact solution (everyone meets any other person only once until the cycle finishes) is possible only in very rare cases.<br>
This is an NP-complete class of problem, and the best-matching formulation is the "optimization problem of perfect matching in k-uniform hypergraphs satisfying a 1-degree co-degree condition".<br>
For further theory research you can ask a question at <a href="https://math.stackexchange.com/">Mathematics</a> or examine <a href="https://www.google.com/webhp?hl=en#hl=en&output=search&sclient=psy-ab&q=site:arxiv.org%20k-uniform%20hypergraphs%20co-degree%20perfect%20matching&oq=site:arxiv.org%20k-uniform%20hypergraphs%20co-degree%20perfect%20matching&gs_l=hp.3...1189.1189.0.2279.1.1.0.0.0.0.83.83.1.1.0....0...1c..23.psy-ab..1.0.0.If26gVMUDU0&pbx=1&bav=on.2,or.r_qf.&bvm=bv.50165853,d.bGE,pv.xjs.s.en_US.seW1cfrvSKg.O&fp=e0dbde624e795fe5&biw=1333&bih=712" rel="nofollow noreferrer">latest works available</a> for k-uniform hypergraph partitioning, e.g. <a href="http://arxiv.org/pdf/1307.2608.pdf" rel="nofollow noreferrer">"Polynomial-time perfect matchings in dense hypergraphs"</a></p>
<p>A complete solution would need <code>(P-1)/(T-1)=(320-1)/(8-1)=45.5714285714</code> meetings, because at each meeting a person meets 7 others and the total number of "others" is 319. So, under the conditions of the question, at most 45 meetings can take place before some pair of persons meets twice.</p>
<p>There is a similar question with a good answer already on StackOverflow (<a href="https://stackoverflow.com/questions/2955318/creating-combinations-that-have-no-more-one-intersecting-element/2955527#2955527">link</a>). Note that this algorithm leaves empty places, because full placement of all persons requires <code>seats * prime = person_count</code>, and 41 is chosen as the prime here.<br>
Below is a query using this solution (<a href="http://www.sqlfiddle.com/#!4/d41d8/15390" rel="nofollow noreferrer">SQLFiddle</a>).</p>
<pre><code>with params as (
select
320 n, -- number of persons
8 k, -- number of seats per table
    41 p   -- least prime which is greater than or equal to n/k
from dual
),
person_set as (
select level person_id from dual connect by level <= (select n from params)
),
person_map as (
select
person_id,
mod( mod(person_id, p.k * p.p), p.k ) x,
trunc( mod(person_id, p.k * p.p) / p.k ) y
from person_set, params p
),
meetings as (
select (level-1) meeting_no
from dual
connect by level <= (select least(k*p, (n-1)/(k-1)) from params)
),
seats as (
select (level-1) seat_no
from dual
connect by level <= (select k from params)
),
tables as (
select (level-1) table_no
from dual
connect by level <= (select p from params)
),
meeting_plan as (
select --+ ordered use_nl(seats tables)
meeting_no,
seat_no,
table_no,
(
select
person_id
from
person_map
where
x = seat_no
and
y = mod(meeting_no*seat_no + table_no, p.p)
) person_id
from
meetings, seats, tables, params p
)
select
meeting_no,
table_no,
max(case when seat_no = 0 then person_id else null end) seat1,
max(case when seat_no = 1 then person_id else null end) seat2,
max(case when seat_no = 2 then person_id else null end) seat3,
max(case when seat_no = 3 then person_id else null end) seat4,
max(case when seat_no = 4 then person_id else null end) seat5,
max(case when seat_no = 5 then person_id else null end) seat6,
max(case when seat_no = 6 then person_id else null end) seat7,
max(case when seat_no = 7 then person_id else null end) seat8
from meeting_plan
group by meeting_no, table_no
order by meeting_no, table_no
</code></pre>
<p><strong>Practical approach</strong><br>
From a practical point of view we don't need an exactly optimal solution with a theoretical proof. If one person meets another more than once it's not a big deal, so it's possible to stop at a nearly optimal solution.<br>
Such a solution can be generated on the basis of empirical rules if we place persons one by one into meetings and tables, trying to keep the number of repeated meetings for each pair of persons as low as possible.<br>
There are many possible placement strategies, and one of them is illustrated below.</p>
<p>For demonstration purposes I use Oracle, because this database is present in the question tags and it's available on the <a href="http://www.sqlfiddle.com/#!4" rel="nofollow noreferrer">SQLFiddle</a> site. </p>
<p>Example database schema setup includes three tables:</p>
<p><code>person</code> - table with list of workers; </p>
<p><code>person_pair</code> - table with the list of all unique pairs of workers and the number of meetings for each pair, <code>floor((P*P)/2) - floor(P/2)</code> rows in total. In the case of <code>P</code>=320 it holds 51040 rows. </p>
<p><code>meeting</code> - table with placement information for each person on each meeting.</p>
<p>In the example code the number of workers is limited to <code>20</code> and the number of seats to <code>4</code>, because of resource consumption limits on the SQLFiddle site and to keep the result datasets observable.</p>
<p>Below is the code for schema setup and fill. Please look through the comments to find out more about the table fields.</p>
<pre><code>-- List of persons
create table person(
person_id number not null -- Unique person identifier.
);
-- primary key
alter table person add constraint pk_person primary key (person_id) using index;
-- List of all possible unique person pairs
create table person_pair(
person1_id number not null, -- 1st person from pair, refers person table.
person2_id number not null, -- 2nd person from pair, refers person table.
-- person1_id always less than person2_id.
meet_count number -- how many times persons in pair meet each other.
);
-- primary key
alter table person_pair add constraint pk_person_pair primary key (person1_id, person2_id) using index;
-- indexes for search
alter table person_pair add constraint idx_pair2 unique (person2_id, person1_id) using index;
-- Placement information for meetings
create table meeting(
meeting_number number not null, -- sequential meeting number
table_number number not null, -- table number
person_id number not null, -- person placed on that table and meeting
seat_no number -- seat number
);
-- primary key: person can seat on the same table only once in one meeting
alter table meeting add constraint pk_meeting primary key (meeting_number, table_number, person_id) using index;
-- disallow duplicate seats on the same table during one meeting
alter table meeting add constraint miting_unique_seat unique (meeting_number, table_number, seat_no) using index;
-- person can participate in meeting only once
alter table meeting add constraint miting_unique_person unique (meeting_number, person_id) using index;
</code></pre>
<p>Fill initial data (<a href="http://www.sqlfiddle.com/#!4/9937a/2" rel="nofollow noreferrer">SQLFiddle</a>):</p>
<pre><code>begin
-- Fill persons list with initial data
insert into person(person_id)
select level from dual connect by level <=20;
-- generate person pairs
insert into
person_pair(person1_id, person2_id, meet_count)
select
p1.person_id,
p2.person_id,
0
from
person p1,
person p2
where
p1.person_id < p2.person_id
;
end;
/
select * from person order by person_id
/
select * from person_pair order by person1_id, person2_id
/
</code></pre>
<p><strong>Generating meetings</strong> </p>
<p>The strategy consists of 2 parts:<br>
1. Select persons in a specific order;<br>
2. Place persons from the list one by one at the most appropriate table.</p>
<p>Ordering people in the selection list is an attempt to handle persons who have met many times before as early as possible, and to place them at separate tables. </p>
<p>Placing persons is more tricky, and the main purpose at that stage is to maximize the number of first meetings and minimize the number of repeated meetings. So it's close to the problem of constructing a proper objective function for an optimization problem, which is non-trivial in most real-world cases.</p>
<p>I chose these criteria:</p>
<p>For each table two factors are counted: "attractive" (<code>A</code>) - why to place the person at that table - and "repellent" (<code>R</code>) - why the person can't sit at that table.<br>
These factors are composed together to get the final table-arranging factor:<br>
<code>-A*A - (if A=0 then 0 else R/2) + R</code><br>
The "attractive" factor is counted as the number of persons already placed at the table whom the current person has not met before.<br>
The "repellent" factor is counted as the sum of the numbers of meetings of the current person with all persons already at the table.</p>
<p>Very probably it is not as good as it could be, but it is enough for the purposes of this example.
For instance, the formula can be extended to take into account how much time has passed since the last meeting.</p>
<p>You can experiment with building a good expression for choosing a table on your own.</p>
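<p>To make the heuristic concrete before the SQL implementation, here is a compact sketch of the same greedy placement idea in Python (my own illustration; the names and data structures are hypothetical and not part of the SQL code below):</p>
<pre><code>import itertools
from collections import defaultdict

def schedule_meeting(persons, n_tables, seats, meet_count):
    """One greedy meeting. meet_count[(i, j)] (with i < j) holds how many
    times the pair met before; it is updated in place. Assumes
    n_tables * seats >= len(persons)."""
    # visit the most "problematic" persons (many repeated meetings) first
    load = {p: sum(c * c for (i, j), c in meet_count.items() if p in (i, j))
            for p in persons}
    tables = [[] for _ in range(n_tables)]
    for p in sorted(persons, key=lambda q: -load[q]):
        best, best_score = None, None
        for t in tables:
            if len(t) >= seats:
                continue
            a = sum(1 for q in t if meet_count[tuple(sorted((p, q)))] == 0)
            r = sum(meet_count[tuple(sorted((p, q)))] for q in t)
            score = -a * a - (0 if a == 0 else r / 2) + r
            if best_score is None or score < best_score:
                best, best_score = t, score
        best.append(p)
    for t in tables:  # record the new meetings
        for i, j in itertools.combinations(sorted(t), 2):
            meet_count[(i, j)] += 1
    return tables

mc = defaultdict(int)
for _ in range(5):  # five meetings for 20 persons, 5 tables of 4 seats
    print(schedule_meeting(list(range(1, 21)), 5, 4, mc))
</code></pre>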
<p>Next is the code for generating the meetings.</p>
<p>Code (<a href="http://www.sqlfiddle.com/#!4/9937a/1" rel="nofollow noreferrer">SQLFiddle</a>)</p>
<pre><code>declare
vMeetingNumber number; -- number of current meeting
vNotMeetPairCount number; -- number of pairs not meet before
vTableCapacity number := 4; -- number of places at one table
vTableCount number; -- number of tables
begin
  -- get next meeting number for the case of continuous generation
select nvl(max(meeting_number),0) + 1 into vMeetingNumber from meeting;
  -- compute the minimum required number of tables
select ceil(count(1)/vTableCapacity) into vTableCount from person;
  -- count the remaining pairs who have not met before
select count(1) into vNotMeetPairCount
from person_pair
where meet_count < 1;
  -- Generate new meetings until every pair of persons has met
while (vNotMeetPairCount > 0) loop
-- select list of persons to place
for cPersons in (
with person_meets as (
select
pp.person1_id, pp.person2_id, pp.meet_count,
( row_number() over (
order by pp.meet_count desc, pp.person1_id
)
) row_priority
from
person_pair pp
)
select person_id from (
select person_id, sum(pair_meet_count*pair_meet_count) pair_meetings from (
select person1_id person_id, meet_count pair_meet_count from person_meets
union all
select person2_id person_id, meet_count pair_meet_count from person_meets
)
group by person_id
)
order by pair_meetings desc
) loop
      -- place the current person at the most suitable table
insert into meeting(meeting_number, table_number, person_id, seat_no)
select
vMeetingNumber, table_number, cPersons.person_id, seat_no
from (
with available_tables as (
select
table_number, places_occupied
from (
select
t.table_number,
(
select count(1)
from meeting m
where
m.meeting_number = vMeetingNumber
and
m.table_number = t.table_number
) places_occupied
from (
select level table_number
from dual connect by level <= vTableCount
) t
)
where places_occupied < vTableCapacity
)
select
table_number,
seat_no,
( row_number() over ( order by
-attractor_factor*attractor_factor - decode(attractor_factor,0,0,repellent_factor/2) + repellent_factor
)
) row_priority
from (
select
t.table_number,
t.places_occupied + 1 seat_no,
(
select
count(1)
from
meeting m,
person_pair pp
where
m.table_number = t.table_number
and
m.meeting_number = vMeetingNumber
and
pp.person1_id = least(m.person_id, cPersons.person_id)
and
pp.person2_id = greatest(m.person_id, cPersons.person_id)
and
pp.meet_count = 0
) attractor_factor,
(
select
nvl(sum(meet_count),0)
from
meeting m,
person_pair pp
where
m.table_number = t.table_number
and
m.meeting_number = vMeetingNumber
and
pp.person1_id = least(m.person_id, cPersons.person_id)
and
pp.person2_id = greatest(m.person_id, cPersons.person_id)
and
pp.meet_count > 0
) repellent_factor,
1 random_factor --trunc(dbms_random.value(0,1000000)) random_factor
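                -- note: random_factor is a placeholder for experiments with
                -- randomized tie-breaking; the ranking expression above does
                -- not currently use it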
from
available_tables t
)
)
where
row_priority = 1
;
end loop;
    -- Update meet counters for every pair seated together in this meeting
update person_pair
set meet_count = meet_count + 1
where
(person1_id, person2_id) in (
select
m1.person_id person1_id,
m2.person_id person2_id
from
meeting m1,
meeting m2
where
m1.meeting_number = vMeetingNumber
and
m2.meeting_number = vMeetingNumber
and
m1.table_number = m2.table_number
and
m1.person_id < m2.person_id
)
;
    -- advance to the next meeting
vMeetingNumber := vMeetingNumber + 1;
    -- recount the pairs who have not met before
select count(1) into vNotMeetPairCount
from person_pair
where meet_count < 1;
end loop;
end;
</code></pre>
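<p>After generation, the result can be sanity-checked with a few queries against the same tables (a minimal sketch, not part of the original SQLFiddle):</p>
<pre><code>-- pairs that have never met (0 once the loop above finishes)
select count(1) not_met from person_pair where meet_count = 0;
-- distribution of repeat meetings across pairs
select meet_count, count(1) pair_count
from person_pair
group by meet_count
order by meet_count;
-- seating chart for the first meeting
select table_number, seat_no, person_id
from meeting
where meeting_number = 1
order by table_number, seat_no;
</code></pre>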
<p><strong>A little bit more theory</strong></p>
<p>The generated solution can be used as a starting point for <a href="http://en.wikipedia.org/wiki/Multi-objective_optimization" rel="nofollow noreferrer">multicriteria optimization methods</a>, but to use them you need a good formal formulation of the problem.</p>
<p>I hope everything stated above helps you resolve the problem.</p> | 2013-08-03 15:10:31.200000+00:00 | 2013-08-05 17:56:14.840000+00:00 | 2017-05-23 12:17:50.790000+00:00 | null | 17,833,851 | <p>EDIT: I am looking for an APL function, or an MS Access VBA function, which takes as arguments the total number of employees, the total number of dining tables, and the number of employees per dining table, and generates rotating seating assignments.</p>
<p>This challenge is a <a href="http://www.logic.at/prolog/mst.pdf" rel="nofollow noreferrer">Social Golfer Problem</a> scenario. I have a company with 280 persons. I recently implemented a Management By Objectives (MBO) program where each worker is assigned goals to be completed on a monthly basis. One of the recurring goals is to arrive at work on time to attend a 30-minute coffee-and-doughnut meeting each morning. The meeting is held in our dining hall, which has 50 tables. Each table can seat up to 12 persons; however, we are only using 6 per table because of the Covid pandemic.</p>
<p>I want to generate unique seating arrangements for each dining table so that each person can meet and collaborate with every other person on a rotating basis until all unique sets are exhausted. Then the cycle starts over, at which point two or more employees might be seated at the same table together again.</p>
<p><strong>(EDIT)</strong> RULE: Unique sets of 6 people are required for each workday. A person cannot be seated again with other persons they have sat with in the past until all possible permutations have been exhausted.</p>
<p><strong>EDIT:</strong> An example of the desired result is:</p>
<pre><code>Day 1:
Table 1 will seat worker numbers 1,2,3,4,5,6.
Table 2 will seat worker numbers 7,8,9,10,11,12.
...
Table 50 will seat worker numbers 275,276,277,278,279,280.
Day 2:
Table 1 will seat worker numbers 7,13,19,26,33,40.
Table 2 will seat worker numbers 14,20,27,34,41,48
...
</code></pre>
<p><strong>NOTE:</strong> (So, on the next workday and thereafter, workers 1 through 6
cannot ever be seated together at the same table with any other workers from that same set until all possible permutations have been exhausted).</p> | 2013-07-24 12:17:58.827000+00:00 | 2022-04-23 09:22:07.900000+00:00 | 2022-04-23 09:22:07.900000+00:00 | arrays|vba|ms-access|mathematical-optimization|apl | ['http://en.wikipedia.org/wiki/Mathematical_optimization', 'https://math.stackexchange.com/', 'https://www.google.com/webhp?hl=en#hl=en&output=search&sclient=psy-ab&q=site:arxiv.org%20k-uniform%20hypergraphs%20co-degree%20perfect%20matching&oq=site:arxiv.org%20k-uniform%20hypergraphs%20co-degree%20perfect%20matching&gs_l=hp.3...1189.1189.0.2279.1.1.0.0.0.0.83.83.1.1.0....0...1c..23.psy-ab..1.0.0.If26gVMUDU0&pbx=1&bav=on.2,or.r_qf.&bvm=bv.50165853,d.bGE,pv.xjs.s.en_US.seW1cfrvSKg.O&fp=e0dbde624e795fe5&biw=1333&bih=712', 'http://arxiv.org/pdf/1307.2608.pdf', 'https://stackoverflow.com/questions/2955318/creating-combinations-that-have-no-more-one-intersecting-element/2955527#2955527', 'http://www.sqlfiddle.com/#!4/d41d8/15390', 'http://www.sqlfiddle.com/#!4', 'http://www.sqlfiddle.com/#!4/9937a/2', 'http://www.sqlfiddle.com/#!4/9937a/1', 'http://en.wikipedia.org/wiki/Multi-objective_optimization'] | 10 |