Dataset schema (column, dtype, observed range):
a_id: int64, 7.84k to 73.8M
a_body: string, length 61 to 33k
a_creation_date: string, length 25 to 32
a_last_activity_date: string, length 25 to 32
a_last_edit_date: string, length 25 to 32
a_tags: float64
q_id: int64, 826 to 73.8M
q_body: string, length 61 to 29.9k
q_creation_date: string, length 25 to 32
q_last_activity_date: string, length 25 to 32
q_last_edit_date: string, length 25 to 32
q_tags: string, length 1 to 103
_arxiv_links: string, length 2 to 6.69k
_n_arxiv_links: int64, 0 to 94
2,855,496
<p>If you are on SQL server 2008 or above then you can use the filestream feature that is like the best of both worlds. It stores the image in the filesystem but has it under transactional control and is also included in your backups when they are taken.</p> <p>If you are not on 2008 or above then I would say to keep the images in the DB, see this old Microsoft white paper for my reasons why.</p> <p><a href="http://arxiv.org/pdf/cs.DB/0701168" rel="nofollow noreferrer">http://arxiv.org/pdf/cs.DB/0701168</a></p>
2010-05-18 08:12:08.320000+00:00
2010-05-18 08:12:08.320000+00:00
null
null
2,850,337
<p>I want to store images of my employees along with their profile details in a SQL Server database, but I have the following reservations:</p> <p>Should I compress the images or not? If yes, can the community provide sample code or point me to a guide that can assist me with this?</p> <p>How should I retrieve the images efficiently? I am worried about ASP.NET application performance: with thousands of employee records, the system may slow down or halt.</p>
2010-05-17 15:22:05.167000+00:00
2012-06-03 21:06:43.593000+00:00
2012-06-03 21:06:43.593000+00:00
c#|asp.net|sql-server|image
['http://arxiv.org/pdf/cs.DB/0701168']
1
61,860,446
<p>To start with, carefully consider whether you need to differentiate across the JPEG compression step. The vast majority of projects do not differentiate across this step, and if you're unsure if you need to, you probably don't.</p> <hr> <p>If you really need to differentiate across an image compressor, you might consider a codec that is easier to implement than JPEG. Wavelett-based compression (the technology behind the ill-fated <a href="https://en.wikipedia.org/wiki/JPEG_2000" rel="nofollow noreferrer">JPEG 2000 format</a>) is mathematically elegant and easy to differentiate across. In a recent application of this technique, <a href="https://arxiv.org/abs/1904.12356" rel="nofollow noreferrer">Thies et al. 2019</a> represent an image as a laplacian pyramid, with a loss component that serves to force sparsity in the higher resolution levels.</p> <hr> <p>Now, as a thought experiment, we can look at the <a href="http://pi.math.cornell.edu/~web6140/TopTenAlgorithms/JPEG.html" rel="nofollow noreferrer">different steps within JPEG compression</a> and determine if they could be implemented in a differentiable way.</p> <ul> <li><p><strong>Color transform (RBG to YCbCr):</strong> We can represent this as a point-wise convolution.</p></li> <li><p><strong>Chroma downsampling:</strong> Easy enough with <code>torch.nn.functional.interpolate</code> on the chroma channels.</p></li> <li><p><strong>Discrete Cosine Transform (DCT):</strong> Now things are getting interesting. Here is a Pytorch implementation of DCT that might work: <a href="https://github.com/zh217/torch-dct" rel="nofollow noreferrer">https://github.com/zh217/torch-dct</a>.</p></li> <li><p><strong>Quantization table:</strong> Easy again. This should just be multiplying output of the DCT with the values in the table.</p></li> <li><p><strong>Huffman encoding:</strong> Hard; I'm not sure this is possible. The number of output elements is going to vary based on the image entropy, which rules out many differentiable building blocks. Depending on your application, you might be able to skip this step (this step is lossless compression; so if you're trying to differentiate across the compression artifacts introduced by JPEG, the previous steps should be sufficient).</p></li> </ul> <p>For an interesting related work on inputting JPEG DCT components directly into a neural net, see <a href="https://eng.uber.com/neural-networks-jpeg/" rel="nofollow noreferrer">Faster Neural Networks Straight from JPEG</a>.</p>
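<p>To make the first two bullets concrete, here is a minimal PyTorch sketch (shapes, constants and the toy input are my own assumptions, not taken from the linked references): the colour transform is a fixed point-wise (1x1) convolution and the chroma downsampling is a bilinear <code>interpolate</code>, so gradients flow back to the input image. The DCT and quantization steps could be appended with torch-dct and an element-wise multiply.</p> <pre class="lang-py prettyprint-override"><code>import torch
import torch.nn.functional as F

def rgb_to_ycbcr(x):
    # x: (N, 3, H, W) in [0, 1]; a fixed 1x1 convolution, so autograd flows through it
    m = torch.tensor([[ 0.299 ,  0.587 ,  0.114 ],
                      [-0.1687, -0.3313,  0.5   ],
                      [ 0.5   , -0.4187, -0.0813]])
    shift = torch.tensor([0.0, 0.5, 0.5]).view(1, 3, 1, 1)
    return F.conv2d(x, m.view(3, 3, 1, 1)) + shift

def subsample_chroma(ycbcr):
    # 4:2:0-style chroma downsampling, differentiable via bilinear interpolation
    y = ycbcr[:, :1]
    cbcr = F.interpolate(ycbcr[:, 1:], scale_factor=0.5, mode='bilinear', align_corners=False)
    return y, cbcr

x = torch.rand(1, 3, 32, 32, requires_grad=True)
y, cbcr = subsample_chroma(rgb_to_ycbcr(x))
(y.sum() + cbcr.sum()).backward()   # gradients reach the input image
print(x.grad.shape)                 # torch.Size([1, 3, 32, 32])
</code></pre>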
2020-05-18 00:07:26.847000+00:00
2020-05-18 00:07:26.847000+00:00
null
null
61,132,905
<p>While training a CNN classification model in PyTorch, I apply JPEG encoding and compression to the images as part of the loss calculation. When I call loss.backward(), it must also backpropagate through the encoding and compression operations performed on the images.</p> <p>Are those compression algorithms (e.g. encoding and JPEG compression) differentiable? If not, how can I backpropagate the loss gradient through those operations?</p> <p><strong>If those operations are not differentiable, is there any differentiable compression algorithm in PyTorch that performs H.264 encoding and JPEG compression?</strong></p> <p>Any suggestions would be highly helpful.</p>
2020-04-10 01:52:26.617000+00:00
2020-05-18 00:07:26.847000+00:00
null
pytorch|conv-neural-network|gradient-descent|image-compression
['https://en.wikipedia.org/wiki/JPEG_2000', 'https://arxiv.org/abs/1904.12356', 'http://pi.math.cornell.edu/~web6140/TopTenAlgorithms/JPEG.html', 'https://github.com/zh217/torch-dct', 'https://eng.uber.com/neural-networks-jpeg/']
5
58,075,942
<p>The difference between the two types of algebras is one between effectful and non effectful algebras. Indeed one can write the UserRepo with a GADT in Dotty (Scala3) like this too: </p> <pre><code>enum UserRepo[A]{ case GetUserById(id: UserID) extends UserRepo[User] case GetUserProfile(user: User) extends UserRepo[UserProfile] case UpdateUserProfile(user: User, profile: UserProfile) extends UserRepo[Unit] } </code></pre> <p>If we leave aside the problem of <a href="https://cstheory.stackexchange.com/questions/45565/what-category-are-tagless-final-algebras-final-in/45566">how final tagless relates to algebras</a> and accept that they are isomorphic to GADTs, then we can rephrase the problem in terms of algebras. There it looks like Andrej Bauer has answered the problem in detail in lecture notes from March 2019 <a href="https://arxiv.org/pdf/1807.05923.pdf" rel="nofollow noreferrer">What is Algebraic about Effects and Handlers</a>.</p> <p>Andrej Bauer clearly explains what algebras are, starting from signatures, and moving on to explain what interpretations and models of a theory are. Then he moves on in "§2 Computational Effects as Algebraic Operations" to model effectful computations by parameterisation of algebras. When that is done we get very similar looking algebras to the ones I was wondering about. </p> <p>In "§4 What is coalgebraic about algebraic effects and Handlers?" he shows how the dual of such parameterised algebras give us co-interpretations, co-models and co-operations for what are quite clearly coalgebras. </p> <p>I am told these ways of handling effects are not the same as using monads, and I need to work out what the difference is, and if this affects the problem.</p>
2019-09-24 08:18:10.480000+00:00
2019-09-24 08:18:10.480000+00:00
null
null
58,025,604
<h2>Background</h2> <p>The Haskell and Scala community have been very enamored recently with what they call tagless <strong>final</strong> 'pattern' of programming. These are referenced as dual to initial free algebras, so I was wondering what Tagless Final was final of. On ncatlab one only finds talk of final coalgebras, not final algebras. </p> <p>Asking the Question <a href="https://cstheory.stackexchange.com/questions/45565/what-category-are-tagless-final-algebras-final-in/45566">What Category are Tagless Final Algebras Final In</a> on CS-Theory Stack Exchange I got a very good answer pointing to this blog post <a href="http://prl.ccs.neu.edu/blog/2017/09/27/final-algebra-semantics-is-observational-equivalence/" rel="nofollow noreferrer">Final Algebra Semantics is Observational Equivalence</a>. So these are indeed final algebras, but not in the same category of algebras as the initial one....</p> <h2>Question</h2> <p>Yet, when we look at how <em>final tagless</em> is <strong>used</strong>, we find that it is very often applied for things that look like coalgebras. See for example the two examples of a <code>Console</code> or a <code>UserRepository</code> in part 1 of <a href="https://dzone.com/articles/the-false-hope-of-managing-effects-with-tagless-fi" rel="nofollow noreferrer">The False Hope of Managing Effects with Tagless-Final in Scala</a>. </p> <p>So instead of having Algebras which in category theory are expressed with endofunctors <code>F</code> as maps of the form <code>F(X) ⟹ X</code>, it looks like many use <code>final tagless</code> with Coalgebras which are maps <code>X ⟹ F(X)</code>, and represent processes. Are these really the same thing? Or is something else going on here?</p> <h2>ADTs and Final Tagless</h2> <h3>On Algebras</h3> <p>Let's start by the explanations of final tagless given by Olivier Blanvillain's <a href="https://gist.github.com/OlivierBlanvillain/48bb5c66dbb0557da50465809564ee80" rel="nofollow noreferrer">Scala translation of examples taken from coursework on in Haskell</a>. One notices that this starts with an Algebraic Data Type that is indeed the archetype of an Algebraic structure: a Group.</p> <p>In category a group can be built out of a the Polynomial Functor <code>F[X] = X×X + X + 1</code> which takes any type to the type that is either the pair of that type or that type or 1. Then selecting one specific type for X, say A an algebra is a function <code>F[A] ⟹ A</code>. The most widely known group is the set of positive and negative natural numbers and 0 denoted ℤ, and so our algebra is:</p> <pre><code>ℤ×ℤ + ℤ + 1 ⟹ ℤ </code></pre> <p>The algebra can be decomposed into 3 function <code>+: ℤ×ℤ ⟹ ℤ</code>, <code>-: ℤ ⟹ ℤ</code> and the constant <code>zero: 1 ⟹ ℤ</code>. If we vary the type X we get different algebras, and these form a category, with morphisms between them, where the most important one is the initial algebra.</p> <p>The initial algebra is the free one which allows one to build all the structure without ever loosing any information. 
In <a href="https://dotty.epfl.ch/docs/reference/enums/adts.html" rel="nofollow noreferrer">Scala 3</a> we can build the initial algebra for a group like this: </p> <pre class="lang-scala prettyprint-override"><code>enum IExp { case Lit(i: Int) case Neg(e: IExp) case Add(r: IExp, l: IExp) } </code></pre> <p>And we can build a simple structure using it:</p> <pre><code>import IExp._ val fe: IExp = Add(Lit(8), Neg(Add(Lit(1), Lit(2)))) </code></pre> <p>The <code>fe</code> structure can then be interpreted by creating functions <code>IExp =&gt; Int</code> or <code>IExp =&gt; String</code>, which are morphisms in the category of algebras, which one reaches by deconstructing the ADT, and recursively mapping these to an algebra with for a specific X (eg <code>String</code> or <code>Int</code>). This morphism is known as a fold. (See the 1997 book <a href="https://themattchan.com/docs/algprog.pdf" rel="nofollow noreferrer">The Algebra of Programming, by Richard Bird and Oege de Moor</a>)</p> <p>In Tagless final this is transformed into the trait</p> <pre class="lang-scala prettyprint-override"><code>trait Exp[T] { def lit(i: Int): T def neg(t: T): T def add(l: T, r: T): T } </code></pre> <p>As is a set of three functions all returning the same type. Expressions are function applications</p> <pre class="lang-scala prettyprint-override"><code>def tf0[T] given (e: Exp[T]): T = import e._ add(lit(8), neg(add(lit(1), lit(2)))) </code></pre> <p>and these can be interpreted with a given instance</p> <pre><code>given as Exp[Int] { def lit(i: Int): Int = i def neg(t: Int): Int = -t def add(l: Int, r: Int): Int = l + r } tf0[Int] // 5 </code></pre> <p>Essentially the interpretation is the implementation of the interface <code>Exp</code> that is <code>given</code> (or in Scala 2 <code>implicit</code>) in the context.</p> <p>So here we are moving from an algebraic structure expressed from an initial ADT to a final tagless version. (See the discussion on <a href="https://cstheory.stackexchange.com/questions/45565/what-category-are-tagless-final-algebras-final-in">cstheory on what that is</a>).</p> <h3>On Coalgebras</h3> <p>Now if we take the <code>UserRepository</code> example from <a href="https://dzone.com/articles/the-false-hope-of-managing-effects-with-tagless-fi" rel="nofollow noreferrer">The False Hope of Managing Effects with Tagless-Final in Scala</a>, we have</p> <pre class="lang-scala prettyprint-override"><code>trait UserRepository { def getUserById(id: UserID): User def getUserProfile(user: User): UserProfile def updateUserProfile(user: User, profile: UserProfile): Unit } </code></pre> <p>this is clearly (for anyone who has read some of Bart Jacobs' work starting with <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.54.7619&amp;rep=rep1&amp;type=pdf" rel="nofollow noreferrer">Objects and Classes Coalgebraically</a>) a coalgebra on the state S of <code>UserRepository</code>. A Coalgebra is the dual of an Algebra: the arrows are reversed. Starting from a Functor F, and a specific type S an coalgebra is a function <code>S ⟹ F[S]</code></p> <p>In the case of a user repository we can see this to be </p> <pre><code>S ⟹ (Uid → User) × (User → Profile) × (User × Profile → S) </code></pre> <p>Here the functor <code>F(X)</code> takes any type <code>X</code> to a 3-tuple of functions. The coalgebra is such a functor F, a set of states S, and a transition morphism <code>S ⟹ F(S)</code>. 
We have 2 observational methods <code>getUserById</code> and <code>getUserProfile</code> and one state changing one <code>updateUserProfile</code> also known as a setter. By varying the type of states we vary the coalgebra. If we look at all coalgebras on such a functor F, and the morphisms between them, we get a category of coalgebras. Of which the most important one is the final one which gives the structure of all observations on the coalgebras of that functor.</p> <p>For more info on coalgebras and their relation to OO see the work by Bart Jacobs such as his <a href="https://pdfs.semanticscholar.org/40bb/e9978e2c4080740f55634ac58033bfb37d36.pdf" rel="nofollow noreferrer">Tutorial on (co)Algebras and (co)Induction</a> or <a href="https://twitter.com/bblfish/status/1172265457153392640" rel="nofollow noreferrer">this Twitter thread</a>.</p> <p>Essentially we have a process such as a UserRepository or a Console that have state and can change state, whereas it does not make sense to think of change of state for a number.</p> <p>Now it is true that in the Tagless Final example of UserRepository it is Genericised by a Functor <code>F[_]</code>, like this:</p> <pre class="lang-scala prettyprint-override"><code>trait UserRepository[F[_]] { def getUserById(id: UserID): F[User] def getUserProfile(user: User): F[UserProfile] def updateUserProfile(user: User, profile: UserProfile): F[Unit] } </code></pre> <p>Is that enough to change UserRepository into an algebra? It does in a way look like the functions all have the same range of type F[_], but does that really work? What if F is the Identity functor? Then we have the previous case.</p> <p>Going the other way, from Final Tagless to an ADT, one should ask what would it be to have an ADT for <code>UserRepository</code>? (I have written something like that myself to model commands to change <a href="https://github.com/read-write-web/rww-play/blob/dev/app/rww/ldp/LDPCommand.scala" rel="nofollow noreferrer">a remote RDF database</a> and found that really helpful, but I am not sure if this is correct mathematically.) </p> <h2>Further References</h2> <ul> <li>The influential Haskell <a href="http://okmij.org/ftp/tagless-final/course/lecture.pdf" rel="nofollow noreferrer">Typed Tagless Final Interpreters</a> lecture notes</li> <li>A translation of that article for Scala (Dotty) <a href="https://gist.github.com/OlivierBlanvillain/48bb5c66dbb0557da50465809564ee80" rel="nofollow noreferrer">Revisiting Tagless Final Interpreters</a></li> <li>A blog post <a href="https://oleksandrmanzyuk.wordpress.com/2014/06/18/from-object-algebras-to-finally-tagless-interpreters-2/" rel="nofollow noreferrer">From Object Algebras to Finally Tagless Interpreters</a> makes the case that Object algebras are equivalent to Tagless Final.</li> <li>It cites the paper <a href="https://www.cs.utexas.edu/~wcook/Drafts/2012/ecoop2012.pdf" rel="nofollow noreferrer">Extensibility for the Masses, practical extensibility with Object Algebras</a>.</li> </ul> <p>Two advantages claimed of Tagless Final are</p> <ul> <li>it makes testing easy: by moving to programming with interfaces one can easily create mock implementations of the interface to test code such as database access, IO, etc...</li> <li>it is extensible: one can easily extend an 'algebra' with new methods overcoming what is known as the expression problem. 
(The expression problem is nicely illustrated in the blog post <a href="https://oleksandrmanzyuk.wordpress.com/2014/06/18/from-object-algebras-to-finally-tagless-interpreters-2/" rel="nofollow noreferrer">From Object Algebras to Finally Tagless Interpreters</a>).</li> </ul> <p>The following looks like it could provide a clue:</p> <p>The recent article <a href="https://link.springer.com/chapter/10.1007/978-3-030-17184-1_5" rel="nofollow noreferrer">Codata in Action</a> claims that codata (a coalgebraic concept) is the bridge between functional and OO programming, and actually uses the visitor pattern (also used in <a href="https://www.cs.utexas.edu/~wcook/Drafts/2012/ecoop2012.pdf" rel="nofollow noreferrer">Extensibility for the Masses</a>) to map between data and codata (<a href="https://twitter.com/bblfish/status/1173269815802441728" rel="nofollow noreferrer">see illustration</a>). The claims made for codata are language-agnostic representation of procedural abstraction (called modularity above), and testability, which comes from the multiple implementations of an interface that Jacobs describes with the category for a coalgebra.</p>
2019-09-20 09:29:42.447000+00:00
2019-09-24 08:18:10.480000+00:00
2019-09-21 21:03:08.917000+00:00
scala|oop|functional-programming|category-theory|tagless-final
['https://cstheory.stackexchange.com/questions/45565/what-category-are-tagless-final-algebras-final-in/45566', 'https://arxiv.org/pdf/1807.05923.pdf']
2
67,701,865
<p>You are mixing two different versions of DeepAR, that's why you have errors. <a href="https://arxiv.org/abs/1704.04110" rel="nofollow noreferrer">DeepAR (Salinas et al.)</a> is actually implemented in 3 places:</p> <ol> <li>In <strong><a href="https://docs.aws.amazon.com/forecast/latest/dg/aws-forecast-recipe-deeparplus.html" rel="nofollow noreferrer">Amazon Forecast</a></strong>: DeepAR+ is a managed implementation of DeepAR, where all the science code is written for you and you only need to use Forecast SDK to launch the service on your S3 data. (<a href="https://github.com/aws-samples/amazon-forecast-samples/blob/master/notebooks/advanced/Getting_started_with_DeepAR%2B/Getting_started_with_DeepAR%2B.ipynb" rel="nofollow noreferrer">example</a>). Use Amazon Forecast if you do not want to write any scientific code and want a managed experience.</li> <li>In <strong><a href="https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html" rel="nofollow noreferrer">Amazon SageMaker</a></strong>: SageMaker has a built-in DeepAR container, that you can use to train on S3 data. See <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_hyperparameters.html" rel="nofollow noreferrer">here the available hyperparameters</a>, and <a href="https://github.com/aws/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/deepar_electricity/DeepAR-Electricity.ipynb" rel="nofollow noreferrer">here examples</a>. Use SageMaker DeepAR if you want more control, while still not having to write model code (more hyperparameters, hardware choice)</li> <li><strong><a href="https://ts.gluon.ai/" rel="nofollow noreferrer">In open-source GluonTS</a></strong>: The GluonTS open-source neural forecasting library created by AWS comes with an implementation of DeepAR (<a href="https://aws.amazon.com/blogs/machine-learning/creating-neural-time-series-models-with-gluon-time-series/" rel="nofollow noreferrer">example</a>). Because it's open-source, you can use it for free, you can browse the code and install it anywhere compatible. For example, you can use the GluonTS DeepAR in a SageMaker container (<a href="https://aws.amazon.com/blogs/machine-learning/training-debugging-and-running-time-series-forecasting-models-with-the-gluonts-toolkit-on-amazon-sagemaker/" rel="nofollow noreferrer">blog post</a>). <a href="https://towardsdatascience.com/deep-demand-forecasting-with-amazon-sagemaker-e0226410763a" rel="nofollow noreferrer">This blog post</a> shows a GluonTS LSTNet model in a SageMaker container. Use GluonTS if you want more freedom. 
But remember that with more freedom comes more responsibility: you will have to choose the training and inference hardware and write science and orchestration code.</li> </ol> <p>There is no evidence that those 3 implementations of DeepAR have anything in common beyond coming from AWS; their codebases may be different.</p> <p>In order to run hyperparameter tuning with DeepAR you have several options:</p> <ol> <li>In Amazon Forecast: use the <a href="https://docs.aws.amazon.com/forecast/latest/dg/API_HyperParameterTuningJobConfig.html" rel="nofollow noreferrer">HyperparameterTuningJobConfig</a></li> <li>In Amazon SageMaker DeepAR: use SageMaker Model Tuning, <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/deepar-tuning.html" rel="nofollow noreferrer">as explained here</a> (see the sketch below)</li> <li>With GluonTS in Amazon SageMaker: use SageMaker Model Tuning with a custom metric.</li> <li>With GluonTS out of SageMaker: use the hyperparameter tuning library of your choice (<a href="https://docs.ray.io/en/master/tune/index.html" rel="nofollow noreferrer">RayTune</a>, <a href="https://optuna.org/" rel="nofollow noreferrer">Optuna</a>, ...) on the infrastructure of your choice</li> </ol>
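<p>As a rough sketch of option 2 (the built-in SageMaker DeepAR container), assuming your train/test channels already sit in S3 in the JSON Lines format the container expects; the bucket paths, role ARN, instance type and tuning ranges below are placeholders, and the names follow the SageMaker Python SDK v2:</p> <pre class="lang-py prettyprint-override"><code>import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import HyperparameterTuner, IntegerParameter, ContinuousParameter

session = sagemaker.Session()
image_uri = sagemaker.image_uris.retrieve('forecasting-deepar', session.boto_region_name)

estimator = Estimator(
    image_uri=image_uri,
    role='arn:aws:iam::111122223333:role/MySageMakerRole',   # placeholder
    instance_count=1,
    instance_type='ml.c5.2xlarge',
    sagemaker_session=session,
)
estimator.set_hyperparameters(time_freq='D', prediction_length=7, context_length=28, epochs=50)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name='test:RMSE',
    objective_type='Minimize',
    hyperparameter_ranges={
        'learning_rate': ContinuousParameter(1e-4, 1e-1),
        'context_length': IntegerParameter(7, 90),
        'mini_batch_size': IntegerParameter(32, 128),
    },
    max_jobs=20,
    max_parallel_jobs=2,
)
tuner.fit({'train': 's3://my-bucket/deepar/train/', 'test': 's3://my-bucket/deepar/test/'})
</code></pre>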
2021-05-26 09:10:00.177000+00:00
2021-05-26 09:10:00.177000+00:00
null
null
67,698,061
<p>This is what I got after running the code:</p> <p>File &quot;C:\Users\admin\anaconda3\envs\tensorflow_env\lib\site-packages\sagemaker\tuner.py&quot;, line 484, in _prepare_estimator_for_tuning estimator._prepare_for_training(job_name)</p> <p>AttributeError: 'DeepAREstimator' object has no attribute '_prepare_for_training'</p> <p>It seems that very few examples of hyperparameters tuning about Amazon sagemaker deepar algorithm are available on the internet. Can anybody help me with this issue ?</p> <pre><code>import mxnet as mx import pandas as pd import numpy as np import matplotlib.pyplot as plt from gluonts.model.deepar import DeepAREstimator from gluonts.mx.trainer import Trainer from gluonts.dataset.common import ListDataset from itertools import islice from gluonts.evaluation.backtest import make_evaluation_predictions from sagemaker.tuner import HyperparameterTuner, IntegerParameter, CategoricalParameter, ContinuousParameter df = pd.read_csv('final.csv', index_col=0,parse_dates=True) training_data = ListDataset( [{&quot;start&quot;: df.index[0], &quot;target&quot;: df.outbound_qty[:pd.to_datetime('2021-01-01')], &quot;feat_dynamic_real&quot;: [df.is_holiday[:pd.to_datetime('2021-01-01')], df.is_salary[:pd.to_datetime('2021-01-01')], df.count_qty[:pd.to_datetime('2021-01-01')], df.shelf_qty[:pd.to_datetime('2021-01-01')]] }], freq=&quot;D&quot; ) estimator = DeepAREstimator(freq=&quot;D&quot;,prediction_length=7,trainer=Trainer(ctx=mx.context.cpu())) hyperparams = {'learning_rate': ContinuousParameter(0.001, 0.1), 'epochs': IntegerParameter(10, 100), 'context_length': IntegerParameter(7, 90), 'mini_batch_size': IntegerParameter(32, 128) } tuner = HyperparameterTuner(estimator=estimator, objective_metric_name=&quot;test:RMSE&quot;, objective_type='Minimize',hyperparameter_ranges=hyperparams) tuner.fit(inputs = training_data) </code></pre>
2021-05-26 03:13:21.347000+00:00
2021-05-26 09:10:00.177000+00:00
2021-05-26 05:42:36.187000+00:00
python|machine-learning|deep-learning|time-series|lstm
['https://arxiv.org/abs/1704.04110', 'https://docs.aws.amazon.com/forecast/latest/dg/aws-forecast-recipe-deeparplus.html', 'https://github.com/aws-samples/amazon-forecast-samples/blob/master/notebooks/advanced/Getting_started_with_DeepAR%2B/Getting_started_with_DeepAR%2B.ipynb', 'https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html', 'https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_hyperparameters.html', 'https://github.com/aws/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/deepar_electricity/DeepAR-Electricity.ipynb', 'https://ts.gluon.ai/', 'https://aws.amazon.com/blogs/machine-learning/creating-neural-time-series-models-with-gluon-time-series/', 'https://aws.amazon.com/blogs/machine-learning/training-debugging-and-running-time-series-forecasting-models-with-the-gluonts-toolkit-on-amazon-sagemaker/', 'https://towardsdatascience.com/deep-demand-forecasting-with-amazon-sagemaker-e0226410763a', 'https://docs.aws.amazon.com/forecast/latest/dg/API_HyperParameterTuningJobConfig.html', 'https://docs.aws.amazon.com/sagemaker/latest/dg/deepar-tuning.html', 'https://docs.ray.io/en/master/tune/index.html', 'https://optuna.org/']
14
61,294,178
<p>I realize this question is older, but I thought a reproducible example might not hurt:</p> <pre><code>library(pdftools)
pdftools::pdf_text(pdf = &quot;http://arxiv.org/pdf/1403.2805.pdf&quot;)
</code></pre> <p><strong>Offline version:</strong></p> <pre><code>pdf(file = &quot;tmp.pdf&quot;)
plot(1, main = &quot;mytext&quot;)
dev.off()
pdftools::pdf_text(pdf = &quot;tmp.pdf&quot;)
</code></pre> <p>I come back to this question from time to time, and even though the current answer is great, I always hope to find reproducible code. So I thought I'd add it. It can be removed if not needed.</p>
2020-04-18 18:28:42.703000+00:00
2021-04-18 20:48:48.353000+00:00
2021-04-18 20:48:48.353000+00:00
null
38,592,600
<p>Can someone help me figure out how to read a PDF file that includes some tables? I want to extract the data in the tables and arrange it into a CSV file.</p> <p>Thanks a lot</p>
2016-07-26 14:26:38.713000+00:00
2022-09-02 14:42:06.077000+00:00
null
r|pdf
[]
0
70,478,112
<p>I think Faiss is exactly what you are looking for. The Github page is <a href="https://github.com/facebookresearch/faiss" rel="nofollow noreferrer">here</a>, if you are interested in the implementation details (this is pretty technical) see <a href="https://arxiv.org/abs/1702.08734" rel="nofollow noreferrer">here</a>, and the tutorial is <a href="https://github.com/facebookresearch/faiss/wiki/Getting-started" rel="nofollow noreferrer">here</a>.</p>
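<p>A minimal sketch of the basic flow (the array sizes and the exact-search index are my own choices; the question asks for the L1/Manhattan metric, which I believe faiss exposes via <code>faiss.IndexFlat(d, faiss.METRIC_L1)</code>, but the common L2 flat index is shown here):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import faiss

d, n, k = 3, 100000, 2
xb = np.random.random((n, d)).astype('float32')    # database of embedding vectors
xq = np.array([[1.5, 2.5, 3.2]], dtype='float32')  # query from the question

index = faiss.IndexFlatL2(d)   # exact search; IVF/HNSW indexes trade a little accuracy for speed
index.add(xb)                  # ids are implicit 0..n-1; wrap in IndexIDMap to supply your own ids
D, I = index.search(xq, k)     # distances and ids of the k nearest stored vectors
print(I[0], D[0])
</code></pre>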
2021-12-25 05:09:58.310000+00:00
2021-12-25 05:09:58.310000+00:00
null
null
69,820,812
<p><strong>background:</strong> I have a machine learning model in which given an object returns an embedding vector with dimension d, the model is trained in a way such that the semantic similarity of two embedding vectors is very close. Now, the verification process is relatively simple, I can take something like the cosine similarity of the two vectors. For recognition, it's a little bit complicated, either I can loop through all the anchor documents and compare the cosine similarity, or use something like kNN (online).</p> <p><strong>problem:</strong> I have a list of embedding vectors, each vector has a dimension d, with length N. Each vector contains floating-point data.</p> <p>What will be an efficient data structure + algorithm that can do the following:</p> <ol> <li>Can add a new vector with a unique ID to the list efficiently (&lt;= logarithmic complexity)</li> <li>Search with a random vector in the list, and retrieve top k vectors, such that the Manhattan distance / L1 norm is minimum for those vectors efficiently (hopefully, &lt;= logarithmic complexity).</li> </ol> <p><strong>example:</strong></p> <pre><code>[ [1., 2., 3.], [5., 6., 8.], [-11., 2., 31.] ] </code></pre> <p><code>k = 2</code> <code>query = [1.5, 2.5, 3.2]</code> <code>results:</code></p> <pre><code>[ [1., 2., 3.], [5., 6., 8.], ] </code></pre>
2021-11-03 07:02:01.397000+00:00
2022-01-19 22:35:46.427000+00:00
2021-11-04 13:45:00.167000+00:00
python|algorithm|search|data-structures|similarity
['https://github.com/facebookresearch/faiss', 'https://arxiv.org/abs/1702.08734', 'https://github.com/facebookresearch/faiss/wiki/Getting-started']
3
53,835,137
<p>I got over 40 fps with this script (on i5-7500 3.4GHz, GTX 1060, 48GB RAM). There are a lot of APIs for capturing the screen. Among them, mss runs much faster and is not difficult to use. Here is an implementation of mss with darkflow (<a href="https://arxiv.org/pdf/1612.08242.pdf" rel="nofollow noreferrer">YOLOv2</a>), in which 'mon' defines the area of the screen you want to apply prediction to.</p> <p><em>options</em> is passed to darkflow; it specifies which config file and checkpoint we want to use, the threshold for detection, and how much of the GPU this process occupies. Before we run this script, we have to have at least one trained model (or TensorFlow checkpoint). Here, <em>load</em> is the checkpoint number.</p> <p>If you think that the network detects too many bounding boxes, I recommend lowering the <em>threshold</em>.</p> <pre><code>import numpy as np
import cv2
import glob
from moviepy.editor import VideoFileClip
from mss import mss
from PIL import Image
from darkflow.net.build import TFNet
import time

options = {
    'model' : 'cfg/tiny-yolo-voc-1c.cfg',
    'load' : 5500,
    'threshold' : 0.1,
    'gpu' : 0.7
}

tfnet = TFNet( options )

color = (0, 255, 0)  # bounding box color

# This defines the area on the screen.
mon = {'top' : 10, 'left' : 10, 'width' : 1000, 'height' : 800}
sct = mss()

previous_time = 0
while True :
    sct.get_pixels(mon)
    frame = Image.frombytes( 'RGB', (sct.width, sct.height), sct.image )
    frame = np.array(frame)
    # frame = frame[ ::2, ::2, : ]  # can be used to downscale the input
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    results = tfnet.return_predict( frame )
    for result in results :
        tl = ( result['topleft']['x'], result['topleft']['y'] )
        br = ( result['bottomright']['x'], result['bottomright']['y'] )
        label = result['label']
        confidence = result['confidence']
        text = '{} : {:.0f}%'.format( label, confidence * 100 )
        frame = cv2.rectangle( frame, tl, br, color, 5 )
        frame = cv2.putText( frame, text, tl, cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 0), 2 )

    cv2.imshow( 'frame', frame )
    if cv2.waitKey( 1 ) &amp; 0xff == ord( 'q' ) :
        cv2.destroyAllWindows()

    txt1 = 'fps: %.1f' % ( 1. / ( time.time() - previous_time ))
    previous_time = time.time()
    print(txt1)
</code></pre>
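<p>If you are on a newer mss release (3.x and later), <code>get_pixels</code> has been replaced by <code>grab</code>; a minimal sketch of the modern capture call, using the same placeholder region as above:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
from mss import mss
from PIL import Image

mon = {'top': 10, 'left': 10, 'width': 1000, 'height': 800}
with mss() as sct:
    shot = sct.grab(mon)                                      # raw BGRA screenshot
    frame = np.array(Image.frombytes('RGB', shot.size, shot.rgb))
print(frame.shape)                                            # (800, 1000, 3)
</code></pre>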
2018-12-18 14:25:06.017000+00:00
2018-12-19 05:02:26.133000+00:00
2018-12-19 05:02:26.133000+00:00
null
51,114,378
<p>I've trained darkflow on my data set and have good result! I can feed it a pre recorded image or video and it draws the bounding boxes around the right things, win!</p> <p>Now I'd like to run it live as has been done with camera feeds, except I'd like my feed to be from the screen, not the camera. I have a specific window, which is launched from a specific process, or I can just take a section of the screen (from coords) either is fine for my application.</p> <p>Currently I use PILs image grab and then feed the images into darkflow, but this feels quite slow (maybe a few frames per second) nothing like the 30 ish fps you can get with video files!</p>
2018-06-30 11:13:44.377000+00:00
2018-12-19 05:02:26.133000+00:00
null
python-imaging-library|screen-capture|darkflow
['https://arxiv.org/pdf/1612.08242.pdf']
1
63,166,311
<p>There are many ways to generate a random integer with a custom distribution (also known as a <em>discrete distribution</em>). The choice depends on many things, including the number of integers to choose from, the shape of the distribution, and whether the distribution will change over time.</p> <p>One of the simplest ways to choose an integer with a custom weight function <code>f(x)</code> is the <em>rejection sampling</em> method. The following assumes that the highest possible value of <code>f</code> is <code>max</code> and each weight is 0 or greater. The time complexity for rejection sampling is constant on average, but depends greatly on the shape of the distribution and has a worst case of running forever. To choose an integer in [1, <code>k</code>] using rejection sampling:</p> <ol> <li>Choose a uniform random integer <code>i</code> in [1, <code>k</code>].</li> <li>With probability <code>f(i)/max</code>, return <code>i</code>. Otherwise, go to step 1. (For example, if all the weights are integers greater than 0, choose a uniform random integer in [1, <code>max</code>] and if that number is <code>f(i)</code> or less, return <code>i</code>, or go to step 1 otherwise.)</li> </ol> <p>Other algorithms have an average sampling time that doesn't depend so greatly on the distribution (usually either constant or logarithmic), but often require you to precalculate the weights in a setup step and store them in a data structure. Some of them are also economical in terms of the number of random bits they use on average. Many of these algorithms were introduced after 2011, and they include—</p> <ul> <li>The Bringmann–Larsen succinct data structure (&quot;Succinct Sampling from Discrete Distributions&quot;, 2012),</li> <li>Yunpeng Tang's multi-level search (&quot;An Empirical Study of Random Sampling Methods for Changing Discrete Distributions&quot;, 2019), and</li> <li>the <a href="https://arxiv.org/abs/2003.03830v2" rel="nofollow noreferrer">Fast Loaded Dice Roller</a> (2020).</li> </ul> <p>Other algorithms include the <em>alias method</em> (already mentioned in your article), the Knuth–Yao algorithm, the MVN data structure, and more. See my section &quot;<a href="https://peteroupc.github.io/randomfunc.html#Weighted_Choice_With_Replacement" rel="nofollow noreferrer">Weighted Choice With Replacement</a>&quot; for a survey.</p>
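<p>For illustration, a minimal Python sketch of the two-step rejection loop described above (the example weights are arbitrary):</p> <pre class="lang-py prettyprint-override"><code>import random

def weighted_choice(f, k, max_weight):
    # Return an integer in [1, k] with probability proportional to f(i), by rejection sampling.
    while True:
        i = random.randint(1, k)                       # step 1: uniform candidate
        if random.random() * max_weight &lt; f(i):        # step 2: accept with probability f(i)/max
            return i

weights = [1, 3, 6]
f = lambda i: weights[i - 1]
samples = [weighted_choice(f, 3, max(weights)) for _ in range(100000)]
print([round(samples.count(v) / len(samples), 2) for v in (1, 2, 3)])   # roughly [0.1, 0.3, 0.6]
</code></pre>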
2020-07-30 04:42:15.353000+00:00
2022-04-06 18:42:46.290000+00:00
2022-04-06 18:42:46.290000+00:00
null
5,027,757
<p>Suppose that I have an <em>n</em>-sided loaded die, where each side <em>k</em> has some probability <em>p</em><sub><em>k</em></sub> of coming up when I roll it. I’m curious if there is a good data structure for storing this information statically (i.e., for a fixed set of probabilities), so that I can efficiently simulate a random roll of the die.</p> <p>Currently, I have an O(lg <em>n</em>) solution for this problem. The idea is to store a table of the cumulative probability of the first <em>k</em> sides for all <em>k</em>, then generate a random real number in the range [0, 1) and perform a binary search over the table to get the largest index whose cumulative value is no greater than the chosen value.</p> <p>I rather like this solution, but it seems odd that the runtime doesn’t take the probabilities into account. In particular, in the extreme cases of one side always coming up or the values being uniformly distributed, it’s possible to generate the result of the roll in O(1) using a naive approach, while my solution will still take logarithmically many steps.</p> <p>Does anyone have any suggestions for how to solve this problem in a way that is somehow “adaptive” in it’s runtime?</p> <p><strong>Update:</strong> Based on the answers to this question, I have written up <strong><a href="http://www.keithschwarz.com/darts-dice-coins/" rel="nofollow noreferrer">an article describing many approaches to this problem</a></strong>, along with their analyses. It looks like Vose’s implementation of the alias method gives Θ(<em>n</em>) preprocessing time and O(1) time per die roll, which is truly impressive. Hopefully this is a useful addition to the information contained in the answers!</p>
2011-02-17 10:33:43.977000+00:00
2022-04-07 02:10:07.750000+00:00
2022-04-07 02:10:07.750000+00:00
algorithm|language-agnostic|data-structures|random|probability
['https://arxiv.org/abs/2003.03830v2', 'https://peteroupc.github.io/randomfunc.html#Weighted_Choice_With_Replacement']
2
42,698,921
<h2>Actually, Softmax functions are already used deep within neural networks, in certain cases, when dealing with differentiable memory and with attention mechanisms!</h2> <p>Softmax layers can be used within neural networks such as in <a href="https://arxiv.org/pdf/1410.5401v2.pdf" rel="noreferrer">Neural Turing Machines (NTM)</a> and an improvement of those which are <a href="http://www.nature.com/articles/nature20101.epdf?author_access_token=ImTXBI8aWbYxYQ51Plys8NRgN0jAjWel9jnR3ZoTv0MggmpDmwljGswxVdeocYSurJ3hxupzWuRNeGvvXnoO8o4jTJcnAyhGuZzXJ1GEaD-Z7E6X_a9R-xqJ9TfJWBqz" rel="noreferrer">Differentiable Neural Computer (DNC)</a>. </p> <p>To summarize, those architectures are <a href="https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition" rel="noreferrer">RNNs/LSTMs</a> which have been modified to contain a differentiable (neural) memory matrix which is possible to write and access through time steps. </p> <p>Quickly explained, the softmax function here enables a normalization of a fetch of the memory and other similar quirks for content-based addressing of the memory. About that, I really liked <a href="http://distill.pub/2016/augmented-rnns/" rel="noreferrer">this article</a> which illustrates the operations in an NTM and other recent RNN architectures with interactive figures. </p> <p>Moreover, Softmax is used in attention mechanisms for, say, machine translation, such as in <a href="https://arxiv.org/pdf/1409.0473.pdf" rel="noreferrer">this paper</a>. There, the Softmax enables a normalization of the places to where attention is distributed in order to "softly" retain the maximal place to pay attention to: that is, to also pay a little bit of attention to elsewhere in a soft manner. However, this could be considered like to be a mini-neural network that deals with attention, within the big one, as explained in the paper. Therefore, it could be debated whether or not Softmax is used only at the end of neural networks.</p> <p>Hope it helps!</p> <p>Edit - More recently, it's even possible to see Neural Machine Translation (NMT) models where only attention (with softmax) is used, without any RNN nor CNN: <a href="http://nlp.seas.harvard.edu/2018/04/03/attention.html" rel="noreferrer">http://nlp.seas.harvard.edu/2018/04/03/attention.html</a></p>
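<p>As a toy illustration of softmax being used inside a network rather than as the output layer, here is a bare-bones content-based attention read in PyTorch (the dimensions and the scaling choice are my own, loosely following the scaled dot-product form):</p> <pre class="lang-py prettyprint-override"><code>import torch
import torch.nn.functional as F

def soft_attention(query, keys, values):
    # softmax turns raw similarity scores into a normalized 'where to look' distribution
    scores = keys @ query / keys.shape[-1] ** 0.5
    weights = F.softmax(scores, dim=0)     # sums to 1 over the memory/source positions
    return weights @ values, weights

keys = torch.randn(5, 16)
values = torch.randn(5, 16)
context, weights = soft_attention(keys[2], keys, values)
print(weights)   # position 2 dominates, but every position keeps a little probability mass
</code></pre>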
2017-03-09 15:14:32.630000+00:00
2018-04-09 00:17:49.817000+00:00
2018-04-09 00:17:49.817000+00:00
null
37,588,632
<p>Most examples of neural networks for classification tasks I've seen use a softmax layer as the output activation function. Normally, the other hidden units use a sigmoid, tanh, or ReLU function as the activation function. Using the softmax function here would - as far as I know - work out mathematically too.</p> <ul> <li>What are the theoretical justifications for not using the softmax function as a hidden layer activation function?</li> <li>Are there any publications about this, something to quote?</li> </ul>
2016-06-02 10:01:08.060000+00:00
2020-03-03 11:59:16.057000+00:00
2017-07-12 23:32:44.227000+00:00
machine-learning|neural-network|classification|softmax|activation-function
['https://arxiv.org/pdf/1410.5401v2.pdf', 'http://www.nature.com/articles/nature20101.epdf?author_access_token=ImTXBI8aWbYxYQ51Plys8NRgN0jAjWel9jnR3ZoTv0MggmpDmwljGswxVdeocYSurJ3hxupzWuRNeGvvXnoO8o4jTJcnAyhGuZzXJ1GEaD-Z7E6X_a9R-xqJ9TfJWBqz', 'https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition', 'http://distill.pub/2016/augmented-rnns/', 'https://arxiv.org/pdf/1409.0473.pdf', 'http://nlp.seas.harvard.edu/2018/04/03/attention.html']
6
65,508,862
<p>You are asking two different questions, I will try to answer both.</p> <ul> <li><p>Indeed, you should first reshape to <code>(c, h, w)</code> where <code>c</code> is the channel dimension In most cases, you will need that extra dimension because most 'image' layers are built to receive 3d dimensional tensors - not counting the batch dimension - such as <a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html" rel="nofollow noreferrer"><code>nn.Conv2d</code></a>, <a href="https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html?highlight=batch%20norm#torch.nn.BatchNorm2d" rel="nofollow noreferrer"><code>BatchNorm2d</code></a>, etc... I don't believe there's anyways around it, and doing so would restrict yourself to one-layer image datasets.</p> <p>You can broadcast to the desired shape with <a href="https://pytorch.org/docs/stable/generated/torch.reshape.html?highlight=reshape#torch.reshape" rel="nofollow noreferrer"><code>torch.reshape</code></a> or <a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view" rel="nofollow noreferrer"><code>Tensor.view</code></a>:</p> <pre><code>X = X.reshape(1, *X.shape) </code></pre> <p>Or by adding an additional dimension using <a href="https://pytorch.org/docs/stable/generated/torch.unsqueeze.html" rel="nofollow noreferrer"><code>torch.unsqueeeze</code></a>:</p> <pre><code>X.unsqueeze(0) </code></pre> </li> <li><p>About normalization. <em>Batch-normalization</em> and <em>dataset-normalization</em> are two different approaches.</p> <p><strong>The former</strong> is <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">a technique</a> that can achieve improved performance in convolution networks. This kind of operation can be implemented using a <a href="https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html?highlight=batch%20norm#torch.nn.BatchNorm2d" rel="nofollow noreferrer"><code>nn.BatchNorm2d</code></a> layer and is done using learnable parameters: a scale factor (~ std) and a bias (~ mean). This type of normalization is applied when the model is called and is applied per-batch.</p> <p><strong>The latter</strong> is a pre-processing technique which allows making different features have the same scale. This normalization can be applied inside the dataset per-element. It requires you measure the mean and standard deviation of your training set.</p> </li> </ul>
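<p>A small sketch pulling the two points together, using the (8, 25) feature shape from the question (the statistics and values are dummies; in practice, measure the mean and standard deviation on your training split only; torchvision is assumed):</p> <pre class="lang-py prettyprint-override"><code>import torch
from torchvision import transforms

features = torch.rand(100, 8, 25)           # N samples of shape (8, 25), as in the question
mean = features.mean().item()               # dataset-level statistics
std = features.std().item()

normalize = transforms.Normalize(mean=[mean], std=[std])
x = features[0].unsqueeze(0)                # add the channel dimension: (8, 25) becomes (1, 8, 25)
x = normalize(x)                            # per-sample application of dataset normalization
print(x.shape)                              # torch.Size([1, 8, 25])
</code></pre>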
2020-12-30 14:50:38.947000+00:00
2020-12-30 16:42:31.333000+00:00
2020-12-30 16:42:31.333000+00:00
null
65,508,577
<p>For more robustnes of my model I want to normalize my feature tensor.</p> <p>I tried doing it the way that is to the best of my knowledge standard for pictures:</p> <pre><code>class Dataset(torch.utils.data.Dataset): 'Characterizes a dataset for PyTorch' def __init__(self, input_tensor, transform = transforms.Normalize(mean= 0.5, std=0.5)): self.labels = input_tensor[:,:,-1] self.features = input_tensor[:,:,:-1] self.transform = transform def __len__(self): return self.labels_planned.shape[0] def __getitem__(self, index): # Load data and get label X = self.features[index] y = self.labelslabels[index] if self.transform: X = self.transform(X) return X, y </code></pre> <p>But receive this error message:</p> <pre><code>ValueError: Expected tensor to be a tensor image of size (C, H, W). Got tensor.size() = torch.Size([8, 25]). </code></pre> <p>Everywhere I looked people suggest that one should use .view to generate the third dimension in order to comply with the standard shape of pictures, but this seems very odd to me. Is there maybe a cleaner way to do this. Also where should I best place the normalization? Just for the batch or for the entire train dataset?</p>
2020-12-30 14:29:29.297000+00:00
2020-12-30 16:42:31.333000+00:00
null
python|machine-learning|pytorch
['https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html', 'https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html?highlight=batch%20norm#torch.nn.BatchNorm2d', 'https://pytorch.org/docs/stable/generated/torch.reshape.html?highlight=reshape#torch.reshape', 'https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view', 'https://pytorch.org/docs/stable/generated/torch.unsqueeze.html', 'https://arxiv.org/abs/1502.03167', 'https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html?highlight=batch%20norm#torch.nn.BatchNorm2d']
7
46,452,247
<p>I think it will depend on the specific context of your problem. What are you trying to predict based on what kind of input?</p> <p>For example, <a href="https://en.wikipedia.org/wiki/Recommender_system" rel="nofollow noreferrer">recommender systems</a> are used by companies like Netflix to predict a user's rating of, for example, movies based on a very sparse feature vector (user's existing ratings of a tiny percentage of all of the movies in the catalog).</p> <p>Another option is to develop some mapping algorithm from your sparse feature space to a common latent space on which you perform your classification with, e.g., an SVM or neural network. I believe <a href="http://www.mirlab.org/conference_papers/International_Conference/ISMIR%202011/papers/PS6-5.pdf" rel="nofollow noreferrer">this paper</a> does something similar. You can also look in to papers like <a href="https://arxiv.org/pdf/1702.03431.pdf" rel="nofollow noreferrer">this one</a> for a classifier that translates data from two different domains (your training vs. testing set, for example, where both contain similar information, but one has complete data and the other does not) into a common latent space for classification. There is a lot out there actually on domain-independent classification.</p> <p>Keywords to look up (with some links to get you started): <a href="https://arxiv.org/abs/1406.2661" rel="nofollow noreferrer">generative adversarial networks (GAN)</a>, <a href="https://arxiv.org/abs/1505.07818" rel="nofollow noreferrer">domain-adversarial training</a>, domain-independent classification, transfer learning.</p>
2017-09-27 15:53:27.303000+00:00
2017-09-27 15:53:27.303000+00:00
null
null
46,450,574
<p>Eg: For training, you use data for which users have filled up all the fields (around 40 fields) in a form along with an expected output. </p> <p>We now build a model (could be an artificial neural net or SVM or logistic regression, etc). </p> <p>Finally, a user now enters 3 fields in the form and expects a prediction. </p> <p>In this scenario, what is the best ML algorithm I can use? </p>
2017-09-27 14:32:24.033000+00:00
2017-09-27 15:53:27.303000+00:00
2017-09-27 15:22:17.143000+00:00
algorithm|machine-learning|neural-network
['https://en.wikipedia.org/wiki/Recommender_system', 'http://www.mirlab.org/conference_papers/International_Conference/ISMIR%202011/papers/PS6-5.pdf', 'https://arxiv.org/pdf/1702.03431.pdf', 'https://arxiv.org/abs/1406.2661', 'https://arxiv.org/abs/1505.07818']
5
32,539,381
<p>"...translation in other languages? Where are its synonyms?"</p> <p>There are three bad news for you.</p> <ol> <li><p>All this information (translations, synonyms) are a plain text of the Wiktionary article. </p></li> <li><p>Different Wiktionaries have different structure of the dictionary article. For example, compare the structure of the article in the <a href="https://en.wiktionary.org/wiki/Wiktionary:ELE" rel="nofollow">English Wiktioinary</a> and in the <a href="https://ru.wiktionary.org/wiki/%D0%92%D0%B8%D0%BA%D0%B8%D1%81%D0%BB%D0%BE%D0%B2%D0%B0%D1%80%D1%8C:%D0%9F%D1%80%D0%B0%D0%B2%D0%B8%D0%BB%D0%B0_%D0%BE%D1%84%D0%BE%D1%80%D0%BC%D0%BB%D0%B5%D0%BD%D0%B8%D1%8F_%D1%81%D1%82%D0%B0%D1%82%D0%B5%D0%B9" rel="nofollow">Russian Wiktionary</a>.</p></li> <li><p>The structure of Wiktionary article is not presented in the XML-file, it is just a simple plain text, see item 1. Thus you need to parse this text in order to extract synonyms or translation.</p></li> </ol> <p>You are welcome to read my paper about transforming (parsing) texts of Wiktionary articles to machine-readable database: <a href="http://arxiv.org/abs/1011.1368" rel="nofollow">http://arxiv.org/abs/1011.1368</a></p>
2015-09-12 13:28:43.623000+00:00
2015-09-12 13:28:43.623000+00:00
null
null
32,511,244
<p>I'm going to parse a Wiktionary file in many languages (English, Japanese, etc.). From here (<a href="https://stackoverflow.com/questions/25200094/parse-wiktionary-data-dump-xml-into-mysql-database-using-php">Parse Wiktionary XML data dump into MySQL database using PHP</a>) I see its basic structure. But my question is: what do these elements stand for?</p> <p>For example, I think the title under the page element is a word in the vocabulary. But where is its translation into other languages? Where are its synonyms?</p>
2015-09-10 20:41:40.230000+00:00
2015-09-12 13:28:43.623000+00:00
2017-05-23 11:58:08.073000+00:00
xml|wiktionary
['https://en.wiktionary.org/wiki/Wiktionary:ELE', 'https://ru.wiktionary.org/wiki/%D0%92%D0%B8%D0%BA%D0%B8%D1%81%D0%BB%D0%BE%D0%B2%D0%B0%D1%80%D1%8C:%D0%9F%D1%80%D0%B0%D0%B2%D0%B8%D0%BB%D0%B0_%D0%BE%D1%84%D0%BE%D1%80%D0%BC%D0%BB%D0%B5%D0%BD%D0%B8%D1%8F_%D1%81%D1%82%D0%B0%D1%82%D0%B5%D0%B9', 'http://arxiv.org/abs/1011.1368']
3
52,662,424
<p>The algorithms you described for computing the <em>n</em>-th Fibonacci number are not the fastest. The fastest possible algorithms (for large <em>n</em>) are based on different recursive formulations. Since you are probably not interested in the theoretical details underlying the algorithms, here is, for your reference, a "practical" paper discussing the Python implementation of 12 different algorithms. These algorithms are compared against each other for different ranges of <em>n</em>, and the results are discussed. You will see that, depending on the value of <em>n</em>, the best algorithm changes.</p> <p><a href="https://arxiv.org/pdf/1803.07199.pdf" rel="nofollow noreferrer">Twelve Simple Algorithms to Compute Fibonacci Numbers</a></p>
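<p>One well-known fast formulation (not necessarily under the paper's exact naming) is fast doubling, which uses the identities F(2m) = F(m) * (2*F(m+1) - F(m)) and F(2m+1) = F(m)^2 + F(m+1)^2; a short sketch:</p> <pre class="lang-py prettyprint-override"><code>def fib_pair(n):
    # Return (F(n), F(n+1)) via fast doubling: O(log n) arithmetic operations.
    if n == 0:
        return 0, 1
    a, b = fib_pair(n // 2)              # a = F(m), b = F(m+1), with m = n // 2
    c = a * (2 * b - a)                  # F(2m)
    d = a * a + b * b                    # F(2m+1)
    return (c, d) if n % 2 == 0 else (d, c + d)

def fib(n):
    return fib_pair(n)[0]

print([fib(i) for i in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(fib(100))                      # 354224848179261915075
</code></pre>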
2018-10-05 09:25:14.793000+00:00
2018-10-05 09:25:14.793000+00:00
null
null
52,659,521
<p>There are two functions that implement the Fibonacci sequence. <code>fibo()</code> is written in a recursive style, and <code>iterfibo()</code> is implemented iteratively with a loop. I compared the execution times of the two functions.</p> <pre><code>import matplotlib.pyplot as plt
import time

# iteration style
def iterfibo(count):
    if count &lt;= 1 :
        return count
    left, right = 0, 1
    for i in range(count - 1):
        temp = left + right
        left = right
        right = temp
    return temp

# recursion style
def fibo(n):
    if n &lt;= 1:
        return n
    return fibo(n - 1) + fibo(n - 2)

length = [x for x in range(25)]
iterfibo_time = []
fibo_time = []

for i in length:
    # fibo's execution time
    ts = time.time()
    fibo(i)
    fibo_time.append(time.time() - ts)

    # iterfibo's execution time
    ts = time.time()
    iterfibo(i)
    iterfibo_time.append(time.time() - ts)

plt.plot(length, iterfibo_time)
plt.show()
</code></pre> <p>However, I noticed that the graph of <code>iterfibo()</code> is not a smooth curve, and in some cases the execution time decreased rather than increased.</p> <p><code>fibo()</code> (recursive) time: <img src="https://i.stack.imgur.com/McBmH.png" alt="fibo_time.png"></p> <p><code>iterfibo()</code> (iterative) time: <img src="https://i.stack.imgur.com/V0mYz.png" alt="iterfibo_time.png"></p> <p>So I wonder why the graph takes this form.</p>
2018-10-05 06:28:03.403000+00:00
2018-10-05 09:25:14.793000+00:00
2018-10-05 06:33:02.823000+00:00
python|algorithm
['https://arxiv.org/pdf/1803.07199.pdf']
1
43,467,836
<p>I found this: <a href="https://arxiv.org/abs/1510.01378" rel="nofollow noreferrer">https://arxiv.org/abs/1510.01378</a>. If you normalize, it may improve convergence, so you will get lower training times.</p>
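<p>A minimal sketch of what that normalization looks like with scikit-learn (the numbers are just the open/volume columns from the question; the key point is to fit the scaler on the training split only and reuse it at prediction time):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[20.64, 163623.62],
                    [20.92, 218505.95],
                    [21.00, 269101.41]])   # e.g. open price and volume: wildly different scales
X_test = np.array([[20.70, 645855.38]])

scaler = StandardScaler().fit(X_train)     # fit on the training data only, to avoid leakage
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)   # reuse the training mean/std for new data
print(X_train_scaled.round(2))
</code></pre>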
2017-04-18 08:58:57.640000+00:00
2017-04-18 08:58:57.640000+00:00
null
null
43,467,597
<p>I am playing some demos about recurrent neural network. </p> <p>I noticed that the scale of my data in each column differs a lot. So I am considering to do some preprocess work before I throw data batches into my RNN. The close column is the target I want to predict in the future.</p> <pre><code> open high low volume price_change p_change ma5 ma10 \ 0 20.64 20.64 20.37 163623.62 -0.08 -0.39 20.772 20.721 1 20.92 20.92 20.60 218505.95 -0.30 -1.43 20.780 20.718 2 21.00 21.15 20.72 269101.41 -0.08 -0.38 20.812 20.755 3 20.70 21.57 20.70 645855.38 0.32 1.55 20.782 20.788 4 20.60 20.70 20.20 458860.16 0.10 0.48 20.694 20.806 ma20 v_ma5 v_ma10 v_ma20 close 0 20.954 351189.30 388345.91 394078.37 20.56 1 20.990 373384.46 403747.59 411728.38 20.64 2 21.022 392464.55 405000.55 426124.42 20.94 3 21.054 445386.85 403945.59 473166.37 21.02 4 21.038 486615.13 378825.52 461835.35 20.70 </code></pre> <p>My question is, is preprocessing the data with, say StandardScaler in sklearn necessary in my case? And why?</p> <p>(You are welcome to edit my question)</p>
2017-04-18 08:46:17.210000+00:00
2017-04-20 17:40:39.990000+00:00
null
machine-learning|neural-network|deep-learning|recurrent-neural-network
['https://arxiv.org/abs/1510.01378']
1
66,176,972
<p>tf-idf does not [attempt to] capture semantic information about individual words - it is a purely frequency-based model. As such, you shouldn't expect to see neat word analogies pop up (think about it: why should the relative frequencies of 'man', 'woman', 'king' and 'queen' be so neatly related?).</p> <p>In a Word2Vec model, word analogies such as queen ~= king + woman - man emerge in part because words are represented as n-dimensional vectors that (hopefully) encode the semantics of each word.</p> <p>In a tf-idf matrix, on the other hand, each element of a word vector just represents a function of the word's frequency in a particular document, so the constraint you're placing is not only that the relative frequencies of these words be strongly correlated, but that this occurs at the level of individual documents, which is a big ask for a model that just counts word frequencies.</p> <p>If you'd like to understand why word analogies emerge in word embedding models like Word2Vec, I'd recommend having a look at this <a href="https://arxiv.org/abs/1901.09813" rel="nofollow noreferrer">paper</a> and the associated <a href="https://icml.cc/Conferences/2019/ScheduleMultitrack?event=4883" rel="nofollow noreferrer">talk</a>.</p>
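<p>For comparison, this is roughly how the analogy query looks against dense Word2Vec vectors with gensim (gensim 4 parameter names; the toy corpus is far too small for real analogies to emerge):</p> <pre class="lang-py prettyprint-override"><code>from gensim.models import Word2Vec

sentences = [['the', 'king', 'rules', 'the', 'land'],
             ['the', 'queen', 'rules', 'the', 'land'],
             ['a', 'man', 'walks'],
             ['a', 'woman', 'walks']]
model = Word2Vec(sentences, vector_size=50, min_count=1, epochs=100, seed=0)

# 'king - man + woman', ranked by cosine similarity over the learned dense vectors
print(model.wv.most_similar(positive=['king', 'woman'], negative=['man'], topn=3))
</code></pre>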
2021-02-12 18:11:01.630000+00:00
2021-02-12 18:11:01.630000+00:00
null
null
66,122,290
<p>I have fit a TF-IDF model using Python's sklearn library using my own dataset:</p> <pre class="lang-py prettyprint-override"><code>tfidf_featuriser = sklearn.feature_extraction.text.TfidfVectorizer(stop_words=None) tfidf_featuriser.fit(documents) tfidf_docterm_matrix = tfidf_featuriser.transform(documents) </code></pre> <p>I am trying to solve word analogies (man::king as woman::queen) as it's possible to do with gensim's Word2Vec model. I have tried the following so far:</p> <pre class="lang-py prettyprint-override"><code>vec1 = tfidf_docterm_matrix.transpose()[tfidf_featuriser.vocabulary_['man'], :] vec2 = tfidf_docterm_matrix.transpose()[tfidf_featuriser.vocabulary_['woman'], :] vec3 = tfidf_docterm_matrix.transpose()[tfidf_featuriser.vocabulary_['king'], :] vec4 = vec2 + vec3 - vec1 </code></pre> <p>How can I retrieve similar vectors to vec4, hoping that one of the word vectors is of &quot;queen&quot;?</p>
2021-02-09 15:44:09.513000+00:00
2021-02-12 18:11:01.630000+00:00
null
python|scikit-learn|nlp
['https://arxiv.org/abs/1901.09813', 'https://icml.cc/Conferences/2019/ScheduleMultitrack?event=4883']
2
64,697,663
<p>In case someone is having a similar issue with future sequential (or temporal) data, University of Oxford and Google Cloud AI have come up with a new architecture to handle all three types of input (past temporal, future temporal as well as static). It is called <strong>Temporal Fusion Transformer</strong> and, at least from reading the <a href="https://arxiv.org/abs/1912.09363" rel="nofollow noreferrer">paper</a>, looks like a neat fit. However, I have yet to implement and test it. There is also a <a href="https://pytorch-forecasting.readthedocs.io/en/latest/tutorials/stallion.html" rel="nofollow noreferrer">PyTorch Tutorial</a> available.</p>
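<p>For a sense of how the three input types map onto the pytorch-forecasting API used in the linked tutorial (the column names, sizes and toy data below are invented placeholders, so treat this purely as a sketch):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
from pytorch_forecasting import TimeSeriesDataSet, TemporalFusionTransformer

# toy long-format frame: one row per (product, day)
df = pd.DataFrame({
    'time_idx': np.tile(np.arange(60), 2),
    'product_id': ['a'] * 60 + ['b'] * 60,
    'price': np.random.rand(120),
    'is_holiday': np.random.randint(0, 2, 120).astype(float),
    'sales': np.random.rand(120),
})

training = TimeSeriesDataSet(
    df,
    time_idx='time_idx',
    target='sales',
    group_ids=['product_id'],
    max_encoder_length=30,                                # past window (your n days)
    max_prediction_length=7,                              # forecast horizon (your m days)
    static_categoricals=['product_id'],                   # non-sequential product features
    time_varying_known_reals=['price', 'is_holiday'],     # future-known sequential inputs
    time_varying_unknown_reals=['sales'],                 # past-only sequential inputs
)
tft = TemporalFusionTransformer.from_dataset(training, hidden_size=16)
</code></pre>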
2020-11-05 12:55:51.880000+00:00
2020-11-05 12:55:51.880000+00:00
null
null
64,242,747
<p>I am trying to do multi-step (i.e., sequence-to-sequence) forecasts for product sales using both (multivariate) sequential and non-sequential inputs.</p> <p>Specifically, I am using sales numbers as well as some other sequential inputs (e.g., price, is day before holiday, etc...) of the past n days to predict the sales for future m days. Additionally, I have some non-sequential features characterizing the product itself.</p> <p>Definitions:</p> <ul> <li>n_seq_features &lt;- number of sequential features (in the multivariate time-series) including sales</li> <li>n_non_seq_features &lt;- number of non-sequential features characterizing a product</li> </ul> <p>I got as far as building a hybrid-model, where first the sequential input is passed through some LSTM layers. The output of the final LSTM layer is then concatenated with the non-sequential features and fed into some dense layers.</p> <p>What I can't quite get my head around, though, is how to input future sequntial input (everything except sales numbers for the following m days) in a way that efficiently utilizes the sequential information (i.e., causality, etc...). For m=1, I can simply input the sequential data for this one day together with the non-sequential input after the LSTM layers, however as soon as m becomes greater than 1 this appears to be a waste of causal information.</p> <p>The only ways I could think of were:</p> <ul> <li>to incorporate the sequential information for future m days as features in the LSTM input blowing up the input shape from (..., n, n_seq_features) to (..., n, n_seq_features + m*(n_seq_features-1))</li> <li>add a separate LSTM branch handling the future data, the output of which is then 'somehow' fed into the dense layers at the last stage of the model</li> </ul> <p>I only started using LSTM networks a while ago so I unfortunately have only limited intuition on how they are best utilized (especially in hybrid approaches). For this reason, I would like to ask:</p> <ol> <li>Is the general approach of injecting sequential and non-sequential input at different stages of the same model (i.e., trained concurrently) useful or would one rather split it into separate models which can be trained independently for more fine-grained control?</li> <li>How is future sequential input injected into an LSTM network to preserve causal information? Can this be achieved with a high-level frontend like KERAS or does it require a very deep dive into the tensorflow backend?</li> <li>Are LSTM networks not the way to go for this specific problem in the first place?</li> </ol> <p>Cheers and thanks in advance for any advice, resources or thoughts on the matter. :)</p>
2020-10-07 11:05:28.203000+00:00
2020-11-05 12:55:51.880000+00:00
null
tensorflow|keras|time-series|lstm|forecasting
['https://arxiv.org/abs/1912.09363', 'https://pytorch-forecasting.readthedocs.io/en/latest/tutorials/stallion.html']
2
69,286,469
<blockquote> <p>when batch_size = 1, variance will be 0</p> </blockquote> <p>No, because when you compute the mean and variance for BN (for example using <code>tf.nn.moments</code>) you compute them over the axes <code>[0, 1, 2]</code> (assuming NHWC tensor channel order), not per element.</p> <p>From the &quot;Group Normalization&quot; paper: <a href="https://arxiv.org/pdf/1803.08494.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1803.08494.pdf</a> <a href="https://i.stack.imgur.com/8fokp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8fokp.png" alt="normalization schemes compared in the Group Normalization paper" /></a></p> <p>With batch_size=1, batch normalization is equal to instance normalization, and it can be helpful in some tasks.</p> <p>But if you are using some sort of encoder-decoder and in some layer you have a tensor with a spatial size of 1x1, it will be a problem: each channel then has only one value, its mean equals that value, and BN will zero out the information.</p>
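<p>A quick way to convince yourself of this (a minimal sketch, assuming TensorFlow 2.x): with a single NHWC sample, <code>tf.nn.moments</code> over axes <code>[0, 1, 2]</code> still reduces over all spatial positions, so you get one generally non-zero variance per channel.</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf

x = tf.random.normal([1, 8, 8, 3])            # one NHWC sample, 3 channels
mean, var = tf.nn.moments(x, axes=[0, 1, 2])  # reduce over batch + spatial dims
print(mean.shape, var.shape)                  # (3,) (3,) -- one statistic per channel
print(var.numpy())                            # non-zero unless a channel is constant
</code></pre>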
2021-09-22 14:42:22.117000+00:00
2021-09-22 14:53:08.223000+00:00
2021-09-22 14:53:08.223000+00:00
null
59,648,509
<p>What will happen when I use batch normalization but set <code>batch_size = 1</code>?</p> <p>Because I am using 3D medical images as training dataset, the batch size can only be set to 1 because of GPU limitation. Normally, I know, when <code>batch_size = 1</code>, variance will be 0. And <code>(x-mean)/variance</code> will lead to error because of division by 0.</p> <p>But why did errors not occur when I set <code>batch_size = 1</code>? Why my network was trained as good as I expected? Could anyone explain it?</p> <p><a href="https://stackoverflow.com/questions/59648509/batch-normalization-when-batch-size-1?noredirect=1#comment105458690_59648509">Some people</a> argued that:</p> <blockquote> <p>The <code>ZeroDivisionError</code> may not be encountered because of two cases. First, the exception is caught in a <code>try</code> catch block. Second, a small rational number is added ( <code>1e-19</code> ) to the variance term so that it is never zero. </p> </blockquote> <p>But <a href="https://stackoverflow.com/questions/59651396/how-to-calculate-batch-normalization-with-python/59654494?noredirect=1#comment105470721_59654494">some people</a> disagree. They said that:</p> <blockquote> <p>You should calculate mean and std across all pixels in the images of the batch. (So even <code>batch_size = 1</code>, there are still a lot of pixels in the batch. So the reason why <code>batch_size=1</code> can still work is not because of <code>1e-19</code>)</p> </blockquote> <p>I have checked the Pytorch source code, and from the code I think the latter one is right. </p> <p>Does anyone have different opinion???</p>
2020-01-08 14:57:55.127000+00:00
2021-12-26 14:16:32.820000+00:00
2020-01-11 23:47:45.273000+00:00
python|tensorflow|keras|deep-learning|batch-normalization
['https://arxiv.org/pdf/1803.08494.pdf', 'https://i.stack.imgur.com/8fokp.png']
2
59,699,724
<blockquote> <p>variance will be 0</p> </blockquote> <p>No, it won't; <code>BatchNormalization</code> computes statistics only with respect to a <em>single axis</em> (usually the channels axis, <code>=-1</code> (last) by default); every other axis is <em>collapsed</em>, i.e. summed over for averaging; details below.</p> <p>More importantly, however, unless you can explicitly justify it, I advise against using <code>BatchNormalization</code> with <code>batch_size=1</code>; there are strong theoretical reasons against it, and multiple publications have shown BN performance degrade for <code>batch_size</code> under 32, and severely for &lt;=8. In a nutshell, batch statistics &quot;averaged&quot; over a single sample vary greatly sample-to-sample (high variance), and BN mechanisms don't work as intended.</p> <p><strong>Small mini-batch alternatives</strong>: <a href="https://arxiv.org/abs/1702.03275" rel="nofollow noreferrer">Batch Renormalization</a> -- <a href="https://arxiv.org/abs/1607.06450" rel="nofollow noreferrer">Layer Normalization</a> -- <a href="https://arxiv.org/abs/1602.07868" rel="nofollow noreferrer">Weight Normalization</a></p> <hr> <p><strong>Implementation details</strong>: from <a href="https://github.com/keras-team/keras/blob/master/keras/layers/normalization.py#L137" rel="nofollow noreferrer">source code</a>:</p> <pre class="lang-py prettyprint-override"><code>reduction_axes = list(range(len(input_shape))) del reduction_axes[self.axis] </code></pre> <p>Eventually, <a href="https://github.com/tensorflow/tensorflow/blob/r2.1/tensorflow/python/ops/nn_impl.py#L1220" rel="nofollow noreferrer"><code>tf.nn.monents</code></a> is called with <code>axes=reduction_axes</code>, which performs a <code>reduce_sum</code> to compute <code>variance</code>. Then, in the TensorFlow backend, <code>mean</code> and <code>variance</code> are <a href="https://github.com/keras-team/keras/blob/master/keras/backend/tensorflow_backend.py#L2191" rel="nofollow noreferrer">passed</a> to <a href="https://github.com/tensorflow/tensorflow/blob/r2.1/tensorflow/python/ops/nn_impl.py#L1373-L1434" rel="nofollow noreferrer"><code>tf.nn.batch_normalization</code></a> to return train- or inference-normalized inputs.</p> <p>In other words, if your input is <code>(batch_size, height, width, depth, channels)</code>, or <code>(1, height, width, depth, channels)</code>, then BN will run calculations over the <code>1</code>, <code>height</code>, <code>width</code>, and <code>depth</code> dimensions.</p> <p><strong>Can variance ever be zero?</strong> - yes, if every single datapoint for any given <code>channel</code> slice (along every dimension) is the same. But this should be near-impossible for real data.</p> <hr> <p><strong>Other answers</strong>: first one is misleading:</p> <blockquote> <p>a small rational number is added (<code>1e-19</code>) to the variance</p> </blockquote> <p>This doesn't happen in computing variance, but it is added <em>to</em> variance when normalizing; nonetheless, it is rarely necessary, as <code>variance</code> is far from zero. 
Also, the epsilon term is actually defaulted to <code>1e-3</code> by Keras; it serves roles in regularizing, beyond mere avoiding zero-division.</p> <hr> <p><strong>Update</strong>: I failed to address an important piece of intuition with suspecting variance to be 0; indeed, the <em>batch statistics</em> variance is zero, since there is only <em>one statistic</em> - but the &quot;statistic&quot; itself concerns the mean &amp; variance of the channel + spatial dimensions. In other words, the variance <em>of</em> the mean &amp; variance (<em>of</em> the single train sample) is zero, but the mean &amp; variance themselves aren't.</p>
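<p>As a small illustration of the alternatives listed above (a sketch, assuming TF 2.x): <code>LayerNormalization</code> normalizes each sample over its own feature axes and therefore involves no batch statistics at all, which is why it is a common drop-in when memory limits force <code>batch_size=1</code>.</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf

x = tf.random.normal([1, 16, 16, 16, 8])                   # one 3D volume, NDHWC
ln = tf.keras.layers.LayerNormalization(axis=[1, 2, 3, 4])
y = ln(x)                                                   # per-sample statistics only
print(y.shape)                                              # (1, 16, 16, 16, 8)
</code></pre>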
2020-01-11 23:47:12.530000+00:00
2021-12-26 14:16:32.820000+00:00
2021-12-26 14:16:32.820000+00:00
null
59,648,509
<p>What will happen when I use batch normalization but set <code>batch_size = 1</code>?</p> <p>Because I am using 3D medical images as training dataset, the batch size can only be set to 1 because of GPU limitation. Normally, I know, when <code>batch_size = 1</code>, variance will be 0. And <code>(x-mean)/variance</code> will lead to error because of division by 0.</p> <p>But why did errors not occur when I set <code>batch_size = 1</code>? Why my network was trained as good as I expected? Could anyone explain it?</p> <p><a href="https://stackoverflow.com/questions/59648509/batch-normalization-when-batch-size-1?noredirect=1#comment105458690_59648509">Some people</a> argued that:</p> <blockquote> <p>The <code>ZeroDivisionError</code> may not be encountered because of two cases. First, the exception is caught in a <code>try</code> catch block. Second, a small rational number is added ( <code>1e-19</code> ) to the variance term so that it is never zero. </p> </blockquote> <p>But <a href="https://stackoverflow.com/questions/59651396/how-to-calculate-batch-normalization-with-python/59654494?noredirect=1#comment105470721_59654494">some people</a> disagree. They said that:</p> <blockquote> <p>You should calculate mean and std across all pixels in the images of the batch. (So even <code>batch_size = 1</code>, there are still a lot of pixels in the batch. So the reason why <code>batch_size=1</code> can still work is not because of <code>1e-19</code>)</p> </blockquote> <p>I have checked the Pytorch source code, and from the code I think the latter one is right. </p> <p>Does anyone have different opinion???</p>
2020-01-08 14:57:55.127000+00:00
2021-12-26 14:16:32.820000+00:00
2020-01-11 23:47:45.273000+00:00
python|tensorflow|keras|deep-learning|batch-normalization
['https://arxiv.org/abs/1702.03275', 'https://arxiv.org/abs/1607.06450', 'https://arxiv.org/abs/1602.07868', 'https://github.com/keras-team/keras/blob/master/keras/layers/normalization.py#L137', 'https://github.com/tensorflow/tensorflow/blob/r2.1/tensorflow/python/ops/nn_impl.py#L1220', 'https://github.com/keras-team/keras/blob/master/keras/backend/tensorflow_backend.py#L2191', 'https://github.com/tensorflow/tensorflow/blob/r2.1/tensorflow/python/ops/nn_impl.py#L1373-L1434']
7
65,655,743
<blockquote> <p>How come the stack cannot be increased during runtime in most operating system?</p> </blockquote> <p>This is wrong for Linux. On recent Linux systems, each thread has its own <a href="https://en.wikipedia.org/wiki/Call_stack" rel="noreferrer">call stack</a> (see <a href="https://man7.org/linux/man-pages/man7/pthreads.7.html" rel="noreferrer">pthreads(7)</a>), and an application could (with clever tricks) increase some call stacks using <a href="https://man7.org/linux/man-pages/man2/mmap.2.html" rel="noreferrer">mmap(2)</a> and <a href="https://man7.org/linux/man-pages/man2/mremap.2.html" rel="noreferrer">mremap(2)</a> after querying the call stacks thru <code>/proc/</code> (see <a href="https://man7.org/linux/man-pages/man5/proc.5.html" rel="noreferrer">proc(5)</a> and use <code>/proc/self/maps</code>) like e.g. <a href="https://man7.org/linux/man-pages/man1/pmap.1.html" rel="noreferrer">pmap(1)</a> does.</p> <p>Of course, such code is architecture specific, since in some cases the call stack grows towards increasing addresses and in other cases towards decreasing addresses.</p> <p>Read also <a href="http://pages.cs.wisc.edu/%7Eremzi/OSTEP/" rel="noreferrer"><em>Operating Systems: Three Easy Pieces</em></a> and the <a href="https://www.osdev.org/" rel="noreferrer">OSDEV</a> wiki, and study the source code of <a href="https://www.gnu.org/software/libc/" rel="noreferrer">GNU libc</a>.</p> <p>BTW, Appel's book <a href="https://www.cambridge.org/core/books/compiling-with-continuations/7CA9C36DCE78AD82218E745F43A4E740" rel="noreferrer"><em>Compiling with Continuations</em></a>, his old paper <a href="https://www.cs.princeton.edu/%7Eappel/papers/45.pdf" rel="noreferrer"><em>Garbage Collection can be faster than Stack Allocation</em></a> and this paper on <a href="https://arxiv.org/pdf/1805.08842.pdf" rel="noreferrer"><em>Compiling with Continuations and LLVM</em></a> could interest you, and both are very related to your question: sometimes, there is almost &quot;no call stack&quot; and it makes no sense to &quot;increase it&quot;.</p>
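<p>As a tiny illustration of the /proc query mentioned above (Linux only; Python is used here just for brevity): the main thread's stack mapping can be located like this, while actually growing a stack would then need the architecture-specific mmap/mremap tricks the answer alludes to, which are not shown.</p> <pre class="lang-py prettyprint-override"><code># Print the address range the kernel currently reserves for the main stack
with open('/proc/self/maps') as maps:
    for line in maps:
        if '[stack]' in line:
            print(line.strip())   # an address range ending in 'rw-p ... [stack]'
</code></pre>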
2021-01-10 16:42:28.423000+00:00
2021-01-10 16:54:57.600000+00:00
2021-01-10 16:54:57.600000+00:00
null
65,642,325
<p>Is it to avoid fragmentation? Or some other reason? A set lifetime for a memory allocation is a pretty useful construct, compared to <code>malloc()</code> which has a manual lifetime.</p>
2021-01-09 11:58:03.720000+00:00
2021-01-10 16:54:57.600000+00:00
2021-01-09 14:06:44.097000+00:00
c|operating-system|stack-allocation
['https://en.wikipedia.org/wiki/Call_stack', 'https://man7.org/linux/man-pages/man7/pthreads.7.html', 'https://man7.org/linux/man-pages/man2/mmap.2.html', 'https://man7.org/linux/man-pages/man2/mremap.2.html', 'https://man7.org/linux/man-pages/man5/proc.5.html', 'https://man7.org/linux/man-pages/man1/pmap.1.html', 'http://pages.cs.wisc.edu/%7Eremzi/OSTEP/', 'https://www.osdev.org/', 'https://www.gnu.org/software/libc/', 'https://www.cambridge.org/core/books/compiling-with-continuations/7CA9C36DCE78AD82218E745F43A4E740', 'https://www.cs.princeton.edu/%7Eappel/papers/45.pdf', 'https://arxiv.org/pdf/1805.08842.pdf']
12
52,033,273
<p>First of all, Deep Learning isn't a mythical hammer you can throw at every problem and expect better results. It requires careful analysis of your problem, choosing the right method, crafting your network, properly setting up your training, and <em>only then, <a href="https://arxiv.org/abs/1803.03635" rel="nofollow noreferrer">with a lot of luck</a></em> will you see significantly better results than classical methods.</p> <p>From what you describe (and without any more details about your implementation), it seems to me that there could have been several things going wrong:</p> <ol> <li>Your task is simply not designed for a neural network. Some tasks are still better solved with classical methods, since they <em>manually</em> account for patterns in your data, or distill your advanced reasoning/knowledge into a prediction. You might not be directly aware of it, but sometimes neural networks are just overkill.</li> <li>You don't describe how your 11000 instances are distributed with respect to the target classes, how big the input is, what kind of preprocessing you are performing for either method, etc, etc. Maybe your data is simply processed wrong, your training is diverging due to unfortunate parameter setups, or plenty of other things.</li> </ol> <p>To expect a reasonable answer, you would have to share at least a bit of code regarding the implementation of your task, and parameters you are using for training.</p>
2018-08-27 05:55:30.677000+00:00
2018-08-27 05:55:30.677000+00:00
null
null
52,033,171
<p>I have a dataset with 11k instances containing 0s,1s and -1s. I heard that deep learning can be applied to feature values.Hence applied the same for my dataset but surprisingly it resulted in less accuracy (&lt;50%) compared to traditional machine learning algos (RF,SVM,ELM). Is it appropriate to apply deep learning algos to feature values for classification task? Any suggestion is greatly appreciated.</p>
2018-08-27 05:45:23.413000+00:00
2018-08-27 06:19:25.460000+00:00
2018-08-27 06:19:25.460000+00:00
python|tensorflow|machine-learning|deep-learning|computer-vision
['https://arxiv.org/abs/1803.03635']
1
59,826,721
<p>The group that presented ShapeNet also presented a paper illustrating how to generate watertight meshes from ShapeNet; see: <a href="https://arxiv.org/abs/1802.01698" rel="nofollow noreferrer">https://arxiv.org/abs/1802.01698</a></p> <p>The code is available here: <a href="https://github.com/hjwdzh/Manifold" rel="nofollow noreferrer">https://github.com/hjwdzh/Manifold</a> On the github page you will also find preprocessed data for 13 ShapeNet classes.</p>
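<p>Independently of the Manifold tool linked above, a quick way to check whether a converted mesh actually came out watertight is the <code>trimesh</code> Python package (a sketch, assuming trimesh is installed; the file path is hypothetical):</p> <pre class="lang-py prettyprint-override"><code>import trimesh

mesh = trimesh.load('model_normalized.obj', force='mesh')
print(mesh.is_watertight)   # True only if every edge is shared by exactly two faces
print(mesh.euler_number)    # handy extra sanity check on the topology
</code></pre>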
2020-01-20 15:47:37.850000+00:00
2020-01-20 15:47:37.850000+00:00
null
null
57,344,344
<p>I would like to convert ShapeNet meshes to watertight meshes. Meshlab claims to be able to generate watertight meshes (for 3d printing preparations) but even after following tutorials for operations like Duplicate Face / Vertex removal (As meshlab meshes have double sided faces), I have not been able to get the mesh to a state that watertight-requiring operations can run on it.</p> <p>Is there a tutorial on what can be applied to a mesh to make it watertight?</p>
2019-08-04 06:29:23.693000+00:00
2020-01-20 15:47:37.850000+00:00
null
meshlab
['https://arxiv.org/abs/1802.01698', 'https://github.com/hjwdzh/Manifold']
2
55,353,859
<p>As Asterisk explained in his comment, there is a fundamental difference between dropout within a recurrent unit and dropout after the unit's output. This is the architecture from the <a href="https://github.com/keras-team/keras/blob/master/examples/imdb_bidirectional_lstm.py" rel="noreferrer">keras tutorial</a> you linked in your question:</p> <pre><code>model = Sequential() model.add(Embedding(max_features, 128, input_length=maxlen)) model.add(Bidirectional(LSTM(64))) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) </code></pre> <p>You're adding a dropout layer <strong>after</strong> the LSTM finished its computation, meaning that there won't be any more recurrent passes in that unit. Imagine this dropout layer as teaching the network not to rely on the output for a specific feature of a specific time step, but to generalize over information in different features and time steps. Dropout here is no different to feed-forward architectures.</p> <p>What <a href="https://arxiv.org/abs/1512.05287" rel="noreferrer">Gal &amp; Ghahramani</a> propose in their paper (which you linked in the question) is dropout <strong>within</strong> the recurrent unit. There, you're dropping input information between the time steps of a sequence. I found <a href="https://becominghuman.ai/learning-note-dropout-in-recurrent-networks-part-1-57a9c19a2307" rel="noreferrer">this blogpost</a> to be very helpful to understand the paper and how it relates to the keras implementation.</p>
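<p>A short Keras sketch of the two placements being contrasted (assuming tf.keras 2.x; layer sizes are copied loosely from the tutorial, otherwise hypothetical): variant (a) applies ordinary dropout after the LSTM output, while variant (b) uses the layer's own <code>dropout</code>/<code>recurrent_dropout</code> arguments, which apply the same mask at every time step as in Gal &amp; Ghahramani.</p> <pre class="lang-py prettyprint-override"><code>from tensorflow.keras import layers, models

max_features, maxlen = 20000, 100

# (a) dropout after the unit's output -- plain feed-forward dropout
model_a = models.Sequential([
    layers.Embedding(max_features, 128, input_length=maxlen),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid'),
])

# (b) dropout inside the recurrent unit, applied between time steps
model_b = models.Sequential([
    layers.Embedding(max_features, 128, input_length=maxlen),
    layers.Bidirectional(layers.LSTM(64, dropout=0.25, recurrent_dropout=0.25)),
    layers.Dense(1, activation='sigmoid'),
])
</code></pre>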
2019-03-26 09:37:51.247000+00:00
2019-03-26 09:37:51.247000+00:00
null
null
50,458,428
<p>I am confused between how to correctly use dropout with RNN in keras, specifically with GRU units. The keras documentation refers to this paper (<a href="https://arxiv.org/abs/1512.05287" rel="nofollow noreferrer">https://arxiv.org/abs/1512.05287</a>) and I understand that same dropout mask should be used for all time-steps. This is achieved by dropout argument while specifying the GRU layer itself. What I don't understand is:</p> <ol> <li><p>Why there are several examples over the internet including keras own example (<a href="https://github.com/keras-team/keras/blob/master/examples/imdb_bidirectional_lstm.py" rel="nofollow noreferrer">https://github.com/keras-team/keras/blob/master/examples/imdb_bidirectional_lstm.py</a>) and "Trigger word detection" assignment in Andrew Ng's Coursera Seq. Models course, where they add a dropout layer explicitly "model.add(Dropout(0.5))" which, in my understanding, will add a different mask to every time-step.</p></li> <li><p>The paper mentioned above suggests that doing this is inappropriate and we might lose the signal as well as long-term memory due to the accumulation of this dropout noise over all the time-steps. But then, how are these models (using different dropout masks at every time-step) are able to learn and perform well.</p></li> </ol> <p>I myself have trained a model which uses different dropout masks at every time-step, and although I haven't gotten results as I wanted, the model is able to overfit the training data. This, in my understanding, invalidates the "accumulation of noise" and "signal getting lost" over all the time-steps (I have 1000 time-step series being input to the GRU layers).</p> <p>Any insights, explanations or experience with the situation will be helpful. Thanks.</p> <p>UPDATE:</p> <p>To make it more clear I'll mention an extract from keras documentation of Dropout Layer ("noise_shape: 1D integer tensor representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if your inputs have shape (batch_size, timesteps, features) and you want the dropout mask to be the same for all timesteps, you can use noise_shape=(batch_size, 1, features"). So, I believe, it can be seen that when using Dropout layer explicitly and needing the same mask at every time-step (as mentioned in the paper), we need to edit this noise_shape argument which is not done in the examples I linked earlier.</p>
2018-05-22 01:08:17.753000+00:00
2019-03-26 09:37:51.247000+00:00
2018-05-23 04:08:50.107000+00:00
machine-learning|keras|deep-learning|recurrent-neural-network|dropout
['https://github.com/keras-team/keras/blob/master/examples/imdb_bidirectional_lstm.py', 'https://arxiv.org/abs/1512.05287', 'https://becominghuman.ai/learning-note-dropout-in-recurrent-networks-part-1-57a9c19a2307']
3
60,797,977
<p>There is a far simpler approach to generating random numbers in a range from a random bit stream, which is not only optimally efficient, but also exact. It's called the "Fast Dice Roller" method of J. Lumbroso:</p> <p>"<a href="https://arxiv.org/abs/1304.1916" rel="nofollow noreferrer">Optimal Discrete Uniform Generation from Coin Flips, and Applications</a>", 2013.</p> <p>See also <a href="https://stackoverflow.com/questions/60777414/uniformly-distributed-bit-sequence/60777779#60777779">this question</a>.</p>
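<p>For reference, the Fast Dice Roller is only a few lines; here is a sketch in Python, where the <code>secrets</code> module merely stands in for the TRNG bit stream in the question:</p> <pre class="lang-py prettyprint-override"><code>import secrets

def flip():
    return secrets.randbits(1)            # one unbiased bit from the entropy source

def fast_dice_roller(n):
    # Returns an exactly uniform integer in [0, n), consuming bits one at a time
    v, c = 1, 0
    while True:
        v, c = 2 * v, 2 * c + flip()
        if v &gt;= n:
            if c &lt; n:
                return c                  # accept
            v, c = v - n, c - n           # reject, but recycle the leftover entropy

print([fast_dice_roller(9) for _ in range(10)])
</code></pre>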
2020-03-22 10:03:36.983000+00:00
2020-03-22 10:03:36.983000+00:00
null
null
60,796,887
<p>I am trying to implement a range mapper for TRNG output files for a C application with ranges of up to 4 bits in size. Due to the pigeonhole bias problem I have settled on using a discard algorithm.</p> <p>My idea for a parsimonious algorithm would be something like:</p> <p>-- Read 16 bytes from file and store as an indexed 128 bit unsigned integer bitbucket to be bitmask selected n bits at a time.<br> -- Predetermine as much as possible the ranges/buckets required for each input and store in an array.<br> -- For each n bits in the bitbucket select an input from the array that will not discard it if one exists. If 2 bits cannot find an input try 3 bits and if that cannot find an input try with 4 bits. At first when there are many inputs it should be easy not to discard, but as the choice of inputs gets low discards will become more common. I am not entirely sure if it is better to start with fewer bits and work my way up or to do the opposite.</p> <p>The downside of this bit sipping range mapper seems to be that I need to assume about twice as much random input data as would be required with biased scaling methods. For instance a 9 bucket input from a 4 bit rand output will miss about 43% of the time.</p> <p>Existing implementations/algorithms: <a href="http://mathforum.org/library/drmath/view/65653.html" rel="nofollow noreferrer">This</a> seems like an example of a more complex and efficient method of parsimonious range mapping but I find his explanation entirely impenetrable. Can anyone explain it to me in English or suggest a book I might read or a university class I might take that would give me a background to understand it?</p> <p>There is also <a href="http://cvsweb.openbsd.org/cgi-bin/cvsweb/~checkout~/src/lib/libc/crypt/arc4random_uniform.c?rev=1.1&amp;content-type=text/plain" rel="nofollow noreferrer">arc4random</a> which seems to be a runtime optimized unbiased modulo discard implementation. Like almost all unbiased range mapper implementations I have found this seems not to particularly care about how much data it uses. That does not however mean that it is necessarily less data efficient because it has the advantage of fewer misses.</p> <p>The basic idea of arc4random seems to be that as long as the number of pigeons (max_randvalue_output) is evenly divisible by the number of holes (rangeupperbound) the modulo function itself is an elegant <em>and unbiased</em> range mapper. However modulo only seems to be relevant when you are not bit sipping, i.e. when the output from the random source is more than ceil(log2(buckets)) bits.</p> <p>There seems to be a tradeoff between the number of 'wasted' random bits and the percentage of discards. The percentage of misses is inversely proportional to the number of excess bits in the input to the range mapper. It seems like there should be a mathematical way to compare the data efficiency of a bit sipping range mapper with a more bit hungry version with fewer misses, but I don't know it.</p> <p>So my plan is to just write two implementations: a bit sipping parsimonious type of range mapper that may or may not be a little like the mathforum example (which I don't understand) and an invariant byte input modulo range mapper which accepts byte inputs from a TRNG and uses a discard-from-the-top-of-largest-multiple modulo method of debiasing to match (x)n pigeons to n holes which is intended to be like arc4random. 
When finished I plan to post them on codereview.</p> <p>I am basically looking for help or advice with any of these issues that might help me to write a more parsimonious but still unbiased range mapper particularly with respect to my parsimonious algorithm. Runtime efficiency is not a priority.</p>
2020-03-22 07:25:24.340000+00:00
2020-03-28 16:55:01.690000+00:00
2020-03-22 07:55:40.583000+00:00
algorithm|random|range|mapping|parsimonious
['https://arxiv.org/abs/1304.1916', 'https://stackoverflow.com/questions/60777414/uniformly-distributed-bit-sequence/60777779#60777779']
2
46,257,775
<p>Repast isn't the best for open libraries, but I've had some luck searching GitHub. Here's a basic ped agent I built once, you'll have to build a context with a scheduler class to call the pedestrians:</p> <p>context:</p> <pre><code>public class RoadBuilder extends DefaultContext&lt;Object&gt; implements ContextBuilder&lt;Object&gt; { context.setId("driving1"); ContinuousSpaceFactory spaceFactory = ContinuousSpaceFactoryFinder.createContinuousSpaceFactory(null); ContinuousSpace&lt;Object&gt; space = spaceFactory.createContinuousSpace("space",context, new SimpleCartesianAdder&lt;Object&gt;(), new StrictBorders(), roadL, worldW); clock = RunEnvironment.getInstance().getCurrentSchedule(); flowSource = new Scheduler(); context.add(flowSource); return context; } </code></pre> <p>the scheduler: </p> <pre><code>public class Scheduler { static ArrayList&lt;Ped&gt; allPeds; @ScheduledMethod(start = 1, interval = 1, priority = 1) public void doStuff() { Ped addedPed = addPed(1); allPeds.add(addedPed); for (Ped a : allPeds) { a.calc();} for (Ped b : allPeds) { b.walk();} public Ped addPed(int direction) { Context&lt;Object&gt; context = ContextUtils.getContext(this); ContinuousSpace&lt;Object&gt; space = (ContinuousSpace&lt;Object&gt;) context.getProjection("space"); Ped newPed = new Ped(space,direction); context.add(newPed); space.moveTo(newPed,xPlacement,yPlacement); newPed.myLoc = space.getLocation(newPed); return(newPed); } </code></pre> <p>The pedestrians - This is based on a "generalized force model" (source: Simulating Dynamical Features of Escape Panic - Helbing, Farkas, and Vicsek - <a href="https://arxiv.org/pdf/cond-mat/0009448.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/cond-mat/0009448.pdf</a>)</p> <p>and here's the pedestrian class</p> <pre><code>public class Ped { private ContinuousSpace&lt;Object&gt; space; private List&lt;Double&gt; forcesX, forcesY; private NdPoint endPt; private Random rnd = new Random(); private int age; private double endPtDist, endPtTheta, critGap; private double side = RoadBuilder.sidewalk; private double wS, etaS, wV, etaV, sigR; //errors private double m, horiz, A, B, k, r; //interactive force constants (accT is also) public NdPoint myLoc, destination; public double[] v, dv, newV; public double xTime, accT, maxV, xLoc, yLoc; public int dir; // dir = 1 walks up, -1 walks down public void calc() { myLoc = space.getLocation(this); dv = accel(myLoc,dir,destination); newV = sumV(v,dv); newV = limitV(newV); } public void walk() { v = newV; move(myLoc,v); } public double[] accel(NdPoint location, int direct, NdPoint endPt) { forcesX = new ArrayList&lt;Double&gt;(); forcesY = new ArrayList&lt;Double&gt;(); double xF, yF; double[] acc; xF = yF = 0; //calculate heading to endpoint endPtDist = space.getDistance(location, endPt); double endPtDelX = endPt.getX()-location.getX(); endPtTheta = FastMath.asin((double)direct*endPtDelX/endPtDist); if (direct == -1) { endPtTheta += Math.PI;} //calculate motive force Double motFx = (maxV*Math.sin(endPtTheta) - v[0])/accT; Double motFy = (maxV*Math.cos(endPtTheta) - v[1])/accT; forcesX.add(motFx); forcesY.add(motFy); //calculate interactive forces //TODO: write code to make a threshold for interaction instead of the arbitrary horizon for (Ped a : Scheduler.allPeds) { if (a != this) { NdPoint otherLoc = space.getLocation(a); double otherY = otherLoc.getY(); double visible = Math.signum((double)dir*(otherY-yLoc)); if (visible == 1) { //peds only affected by those in front of them double absDist = 
space.getDistance(location, otherLoc); if (absDist &lt; horiz) { double delX = location.getX()-otherLoc.getX(); double delY = location.getY()-otherLoc.getY(); double delXabs = Math.abs(delX); double signFx = Math.signum(delX); double signFy = Math.signum(delY); double theta = FastMath.asin(delXabs/absDist); double rij = r + a.r; Double interFx = signFx*A*Math.exp((rij-absDist)/B)*Math.sin(theta)/m; Double interFy = signFy*A*Math.exp((rij-absDist)/B)*Math.cos(theta)/m; forcesX.add(interFx); forcesY.add(interFy);}}}} //sum all forces for (Double b : forcesX) { xF += b;} for (Double c : forcesY) { yF += c;} acc = new double[] {xF, yF}; return acc; } public void move(NdPoint loc, double[] displacement) { double[] zero = new double[] {0,0}; double yl = loc.getY(); if (displacement != zero) { space.moveByDisplacement(this,displacement); myLoc = space.getLocation(this);} } public double[] limitV(double[] input) { double totalV, norm; if (this.dir == 1) { if (input[1] &lt; 0) { input[1] = 0;}} else { if (input[1] &gt; 0) { input[1] = 0;}} totalV = Math.sqrt(input[0]*input[0] + input[1]*input[1]); if (totalV &gt; maxV) { norm = maxV/totalV; input[0] = input[0]*norm; input[1] = input[1]*norm;} return input; } public double[] sumV(double[] a, double[] b) { double[] c = new double[2]; for (int i = 0; i &lt; 2; i++) { c[i] = a[i] + b[i];} return c; } public Ped(ContinuousSpace&lt;Object&gt; contextSpace, int direction) { space = contextSpace; maxV = rnd.nextGaussian() * UserPanel.pedVsd + UserPanel.pedVavg; dir = direction; // 1 moves up, -1 moves down v = new double[] {0,(double)dir*.5*maxV}; age = 0; //3-circle variables - from Helbing, et al (2000) [r from Rouphail et al 1998] accT = 0.5/UserPanel.tStep; //acceleration time m = 80; //avg ped mass in kg horiz = 5/RoadBuilder.spaceScale; //distance at which peds affect each other A = 2000*UserPanel.tStep*UserPanel.tStep/RoadBuilder.spaceScale; //ped interaction constant (kg*space units/time units^2) B = 0.08/RoadBuilder.spaceScale; //ped distance interaction constant (space units) k = 120000*UserPanel.tStep*UserPanel.tStep; //wall force constant r = 0.275/RoadBuilder.spaceScale; //ped radius (space units) } } </code></pre>
2017-09-16 19:38:26.410000+00:00
2017-09-16 19:38:26.410000+00:00
null
null
44,942,828
<p>Are there any examples of pedestrian modelling in repast simphony? I am novice in repast and was trying to model a simple pedestrian movement simulation. Any pointers to useful resources/ examples?</p>
2017-07-06 07:49:59.820000+00:00
2017-09-16 19:38:26.410000+00:00
null
simulation|repast-simphony
['https://arxiv.org/pdf/cond-mat/0009448.pdf']
1
47,937,903
<p>If you take a look to <a href="https://arxiv.org/pdf/1711.08506.pdf" rel="nofollow noreferrer">W-net</a>, you can see that you can do unsupervised segmentation with deep learning.</p>
2017-12-22 08:06:19.503000+00:00
2017-12-22 08:06:19.503000+00:00
null
null
47,927,713
<p>Is there a crucial distinction between semantic and just normal image segmentation with neural networks? Is non-semantic segmentation some type of unsupervised pixel-clustering method?</p>
2017-12-21 15:07:10.243000+00:00
2017-12-22 08:06:19.503000+00:00
null
image-segmentation|unsupervised-learning
['https://arxiv.org/pdf/1711.08506.pdf']
1
51,846,672
<p>This appears to be an example of the <a href="https://en.wikipedia.org/wiki/Bin_packing_problem" rel="nofollow noreferrer">bin packing problem</a>.</p> <p>This isn't a particularly easy problem to solve and more precise fits are <a href="https://arxiv.org/ftp/arxiv/papers/1508/1508.01376.pdf" rel="nofollow noreferrer">likely to be more complicated.</a></p> <p>Below is a greedy algorithm that should solve your problem with a rough estimate. It's possible to get better matches but, as you do, you make things more complicated and computationally expensive.</p> <p>This solution happens to be recursive and somewhat functional, but that's only my preference; it's probably possible to make a neater and less expensive algorithm if you're not interested in making the code functional or recursive.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>const matrixExample = [{ size: 10, type: 'card' }, { size: 4, type: 'card' }, { size: 2, type: 'card' }, { size: 11, type: 'card' }, { size: 6, type: 'card' }]; const sumCardList = cardList =&gt; cardList.reduce((prev, curr) =&gt; prev + curr.size, 0); const packNextToBin = (cards, bins, max) =&gt; { if (cards.length === 0) { // there are no more cards to pack, use bins as is return bins; } // get the next card to pack into the bins const cardToPack = cards[0]; // get the indices of bins which can still be filled const availableBinIndices = bins .map((bin, i) =&gt; ({sum: sumCardList(bin), index: i})) .filter(binData =&gt; binData.sum + cardToPack.size &lt; max) .map(binData =&gt; binData.index); // if there are no more bins which can fit this card, makea new bin if (availableBinIndices.length === 0) { const updatedBins = [ ...bins, [ cardToPack ] ]; return packNextToBin( cards.slice(1), updatedBins, max ); } // get the first available bin which can accept the card const binToPack = availableBinIndices[0]; // get a version of the matched bin with the new card added const binWithInsertion = [ ...bins[binToPack], cardToPack, ]; // get the bins with the updated bin updated const updatedBins = bins .map((bin, i) =&gt; i === binToPack ? binWithInsertion : bin ); // pack the next card into the bins return packNextToBin( cards.slice(1), updatedBins, max ); } const results = packNextToBin(matrixExample, [[]], 12) console.dir(results)</code></pre> </div> </div> </p>
2018-08-14 17:15:22.107000+00:00
2018-08-14 17:15:22.107000+00:00
null
null
51,825,352
<p>How would one organize a dynamic matrix for best fit? So, let say you are attempting to always display the best fit for a display, and need to organize all cells so that there are no gaps between each item. Each item can either have a size from 1 - 12, and the max width of each row is 12. Using the example dataset, how can will dynamic sort and generate a new array that best fits the display? </p> <pre><code>let matrixExample = [{ size: 10, type: 'card' }, { size: 4, type: 'card' }, { size: 2, type: 'card' }, { size: 11, type: 'card' }, { size: 6, type: 'card' }]; let endingResult = [ [{ size: 10, type: 'card' }, { size: 2, type: 'card' }], [{ size: 4, type: 'card' }, { size: 6, type: 'card' }], [{ size: 11, type: 'card' }] ]; </code></pre> <p><strong>The user purpose of this?</strong> When generating dynamic data to a UI, and the UI needs to optimize for component space.</p>
2018-08-13 15:05:40.070000+00:00
2018-08-14 17:15:22.107000+00:00
2018-08-13 15:10:35.797000+00:00
javascript|arrays|sorting
['https://en.wikipedia.org/wiki/Bin_packing_problem', 'https://arxiv.org/ftp/arxiv/papers/1508/1508.01376.pdf']
2
37,556,379
<p>As mentioned in the comments, the easiest and most promising way is to switch to a Convolutional Neural Network. But with your current model you can:</p> <ul> <li><p>Add more layers with fewer neurons each, which increases learning capacity and should increase accuracy by a bit. The problem is that you might start overfitting. Use regularization to counter this.</p></li> <li><p>Use <a href="http://arxiv.org/abs/1502.03167" rel="nofollow">batch normalization</a> (BN). While you are already using regularization, BN accelerates training and also acts as a regularizer, and it is an NN-specific technique that might work better.</p></li> <li><p>Make an ensemble. Train several NNs on the same dataset, but with different initializations. This will produce slightly different classifiers and you can combine their outputs to get a small increase in accuracy.</p></li> <li><p>Cross-entropy loss. You don't mention what loss function you are using; if it's not cross-entropy, then you should start using it. All the high-accuracy classifiers use cross-entropy loss.</p></li> <li><p>Switch to backpropagation and Stochastic Gradient Descent. I do not know the effect of using a different optimization algorithm, but backpropagation might outperform the optimization algorithm you are currently using, and you could combine this with other optimizers such as Adagrad or <a href="http://arxiv.org/abs/1412.6980" rel="nofollow">ADAM</a>.</p></li> <li><p>Other small changes that might increase accuracy are changing the activation functions (like ReLU), shuffling training samples after every epoch, and doing data augmentation.</p></li> </ul>
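<p>Purely for illustration (written in Keras, since the ideas above are framework-agnostic and the asker's MATLAB code is not shown), a network combining several of these suggestions -- ReLU activations, batch normalization, dropout, cross-entropy loss and the Adam optimizer -- could look roughly like this; all sizes except the 3136 inputs and 36 labels are hypothetical:</p> <pre class="lang-py prettyprint-override"><code>from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(3136,)),                 # 56*56 flattened pixels
    layers.Dense(220),
    layers.BatchNormalization(),
    layers.Activation('relu'),
    layers.Dropout(0.3),
    layers.Dense(120),
    layers.BatchNormalization(),
    layers.Activation('relu'),
    layers.Dense(36, activation='softmax'),      # 36 labels
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
</code></pre>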
2016-05-31 22:03:45.663000+00:00
2016-05-31 22:03:45.663000+00:00
null
null
37,528,015
<p>I've made digit recognition (56x56 digits) using Neural Networks, but I'm getting 89.5% accuracy on test set and 100% on training set. I know that it's possible to get >95% on test set using this training set. Is there any way to improve my training so I can get better predictions? Changing iterations from 300 to 1000 gave me +0.12% accuracy. I'm also file size limited so increasing number of nodes can be impossible, but if that's the case maybe I could cut some pixels/nodes from the input layer.</p> <p>To train I'm using:</p> <ul> <li>input layer: 3136 nodes</li> <li>hidden layer: 220 nodes</li> <li>labels: 36</li> <li>regularized cost function with lambda=0.1</li> <li>fmincg to calculate weights (1000 iterations)</li> </ul>
2016-05-30 14:23:12.250000+00:00
2016-05-31 22:03:45.663000+00:00
null
matlab|neural-network|classification
['http://arxiv.org/abs/1502.03167', 'http://arxiv.org/abs/1412.6980']
2
54,314,977
<p>The merge compaction behavior and strategy used by Google Cloud Bigtable is currently not tunable by end users via the Cloud Bigtable APIs, although the underlying system which backs the Cloud Bigtable product is dynamic and tunable by our engineering and operations teams. </p> <p>Here's a somewhat recent paper on different approaches to merge compaction algorithms which have been explored in Bigtable: </p> <p>Online Bigtable merge compaction <a href="https://arxiv.org/pdf/1407.3008.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1407.3008.pdf</a></p> <p>We employ a number of proprietary approaches to dynamically adjusting and tuning merge compaction behavior. If you do have more specific questions related to your use of the system or are experiencing issues with merge compaction behavior you can of course feel free to file a support case. </p>
2019-01-22 19:15:01.050000+00:00
2019-01-22 19:15:01.050000+00:00
null
null
54,313,062
<p>Google BigTable is a system that uses LSM-tree as its core data structure for storage. LSM-tree can use different merge strategies. The two most common ones are (1) leveled merging which is more read-optimized, and (2) tiered merging which is more write-optimized. These merge strategies can further be configured by adjusting the size ratio between adjacent levels. </p> <p>I have not been able to find anywhere what is BigTable's default behavior in these respects, and whether it can be tuned or not. As a result, it is hard to understand it's default performance properties and how to adapt them to different workloads. </p> <p>With tiered merging, a level of LSM-tree gathers runs until it reaches capacity. It then merges these runs and flushes the resulting run to the next larger level. There are at most O(T * log_T(N)) runs at each level, and write cost is O(log_T(N) / B), where N is the data size, B is the block size, and T is the size ratio between levels. </p> <p>With leveled merging, there is one run at each level of LSM-tree. A merge takes place as soon as a new run comes into the level, and if the level exceeds capacity the resulting run is flushed to the next larger level. There are at most O(log_T(N)) runs at each level, and write cost is O((T * log_T(N)) / B). </p> <p>As a result, these schemes have different read/write performance properties. However, I have been unable to find sources on whether Google's BigTable uses leveled or tiered merging, and what is the default size ratio T? Also, are these aspects of the system fixed, or are they tunable? </p>
2019-01-22 16:59:49.707000+00:00
2019-01-22 19:15:01.050000+00:00
null
merge|bigtable|google-cloud-bigtable|lsm-tree
['https://arxiv.org/pdf/1407.3008.pdf']
1
54,307,013
<p>You don't have to use a pretrained network in order to train a model for your task. However, in practice using a pretrained network and retraining it on your task/dataset is usually <strong>faster</strong> and often you end up with better models yielding <strong>higher accuracy</strong>. This is especially the case if you do not have a lot of training data.</p> <p><strong>Why faster?</strong></p> <p>It turns out that, relatively independent of the dataset and target classes, the first couple of layers converge to similar results. This is due to the fact that low-level layers usually act as edge, corner and other simple structure detectors. <a href="https://www.researchgate.net/profile/Jeff_Clune/publication/279068412/figure/fig2/AS:614020520349697@1523405308298/Visualization-of-example-features-of-eight-layers-of-a-deep-convolutional-neural.png" rel="nofollow noreferrer">Check out this example</a> that visualizes the structures that filters of different layers "react" to. Having already trained the lower layers, adapting the higher-level layers to your use case is much faster.</p> <p><strong>Why more accurate?</strong></p> <p>This question is harder to answer. IMHO it is due to the fact that the pretrained models you use as the basis for transfer learning were trained on massive datasets. This means that the knowledge acquired flows into your retrained network and helps you find a better local minimum of your loss function.</p> <p>If you are in the comfortable situation that you have a lot of training data, you should probably train a model from scratch, as the pretrained model might "point you in the wrong direction". In <a href="https://arxiv.org/abs/1610.05567" rel="nofollow noreferrer">this master thesis</a> you can find a bunch of tasks (small datasets, medium datasets, small semantic gap, large semantic gap) where three methods are compared: fine-tuning, feature extraction + SVM, and training from scratch. Fine-tuning a model pretrained on ImageNet is almost always a better choice.</p>
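<p>A minimal transfer-learning sketch in tf.keras (the two-class cat/dog head and all sizes are hypothetical): the pretrained convolutional base is kept frozen so only the new classifier on top is learned, which is exactly the "faster" effect described above; unfreezing the top few base layers afterwards gives the usual fine-tuning step.</p> <pre class="lang-py prettyprint-override"><code>from tensorflow.keras import layers, models, applications

base = applications.VGG16(weights='imagenet', include_top=False,
                          input_shape=(224, 224, 3))
base.trainable = False                       # keep the pretrained low-level filters

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation='relu'),
    layers.Dense(2, activation='softmax'),   # your own classes, not ImageNet's 1000
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
</code></pre>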
2019-01-22 11:11:24.577000+00:00
2019-01-22 13:48:26.037000+00:00
2019-01-22 13:48:26.037000+00:00
null
54,305,791
<p>I'm learning transfer learning with some pre-trained models (vgg16, vgg19,…), and I wonder why I need to load pre-trained weight to train my own dataset. </p> <p>I can understand if the classes in my dataset are included in the dataset that the pre-trained model is trained with. For example, VGG model was trained with 1000 classes in Imagenet dataset, and my model is to classify cat-dog, which are also in the Imagenet dataset. But here the classes in my dataset are not in this dataset. So how the pre-trained weight can help?</p>
2019-01-22 10:07:24.597000+00:00
2019-01-22 13:48:26.037000+00:00
null
machine-learning|deep-learning|classification|vgg-net|transfer-learning
['https://www.researchgate.net/profile/Jeff_Clune/publication/279068412/figure/fig2/AS:614020520349697@1523405308298/Visualization-of-example-features-of-eight-layers-of-a-deep-convolutional-neural.png', 'https://arxiv.org/abs/1610.05567']
2
61,891,412
<p>There is a polynomial-time algorithm to find hamiltonian cycles in graphs where every vertex degree is at least N/2.</p> <p>It's described in <a href="https://arxiv.org/abs/1606.03687" rel="nofollow noreferrer">"A Simple Extension of Dirac’s Theorem on Hamiltonicity" Yasemin Büyükçolak, Didem Gözüpek, Sibel Özkan, Mordechai Shalom</a>.</p>
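<p>Not the algorithm of the linked paper, but worth noting: the classical rotation argument behind Dirac's theorem is itself constructive and already runs in polynomial time. A sketch, assuming <code>adj</code> maps each person to the set of their friends and every degree is at least N/2:</p> <pre class="lang-py prettyprint-override"><code>def hamiltonian_cycle_dirac(adj):
    cycle = list(adj)                       # any initial cyclic order of the vertices
    n = len(cycle)
    while True:
        # Find a gap: a consecutive pair in the cycle that is not an edge
        gap = next((i for i in range(n)
                    if cycle[(i + 1) % n] not in adj[cycle[i]]), None)
        if gap is None:
            return cycle                    # every consecutive pair is an edge
        cycle = cycle[gap:] + cycle[:gap]   # rotate so the gap sits at positions 0, 1
        u, v = cycle[0], cycle[1]
        # Degrees of at least n/2 guarantee some j with cycle[j] adjacent to u and
        # cycle[j+1] adjacent to v; reversing the segment between them removes the
        # gap without creating a new one, so the loop terminates.
        for j in range(2, n - 1):
            if cycle[j] in adj[u] and cycle[j + 1] in adj[v]:
                cycle[1:j + 1] = reversed(cycle[1:j + 1])
                break

# Example with a 4-person group where everyone has exactly 2 friends:
print(hamiltonian_cycle_dirac({1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}))
</code></pre>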
2020-05-19 12:48:22.140000+00:00
2020-05-19 12:48:22.140000+00:00
null
null
61,891,258
<p>I am trying to solve the Hamiltonian Cycle problem.</p> <p>The condition of my task is:</p> <p>The group consists of N people. In it, everyone has exactly N / 2 friends. Friendship is symmetrical (if A is friend B, then B is friend A). One person in the group has a book (his number X), which everyone would like to read and then discuss with some of the others.</p> <p>It is necessary to determine the method of transferring the book, in which it would visit everyone exactly once, passing only from friend to friend, and finally return to its owner.</p> <p>That is, it satisfies the condition of the Dirac's theorem.</p> <p>On small graphs <a href="https://www.geeksforgeeks.org/hamiltonian-cycle-backtracking-6/" rel="nofollow noreferrer">my solutions</a> works properly, but on big graphs my solution gives time limit exception.</p> <p>Is there any method how it can be solved faster than O(n!)?</p>
2020-05-19 12:38:54.753000+00:00
2020-05-19 12:48:22.140000+00:00
null
algorithm|math|graph|graph-algorithm|hamiltonian-cycle
['https://arxiv.org/abs/1606.03687']
1
35,967,589
<p>In fact, training recurrent nets is often done by unrolling the net. That is, replicating the net over the temporal steps (sharing weights across the temporal steps) and simply doing forward-backward passes on the unrolled model.</p> <p>To unroll LSTM (or any other unit) you don't have to use <a href="http://jeffdonahue.com/" rel="noreferrer">Jeff Donahue</a>'s recurrent branch, but rather use <code>NetSpec()</code> to explicitly unroll the model.</p> <p>Here's a simple example:</p> <pre><code>from caffe import layers as L, params as P, to_proto import caffe # some utility functions def add_layer_to_net_spec(ns, caffe_layer, name, *args, **kwargs): kwargs.update({'name':name}) l = caffe_layer(*args, **kwargs) ns.__setattr__(name, l) return ns.__getattr__(name) def add_layer_with_multiple_tops(ns, caffe_layer, lname, ntop, *args, **kwargs): kwargs.update({'name':lname,'ntop':ntop}) num_in = len(args)-ntop # number of input blobs tops = caffe_layer(*args[:num_in], **kwargs) for i in xrange(ntop): ns.__setattr__(args[num_in+i],tops[i]) return tops # implement single time step LSTM unit def single_time_step_lstm( ns, h0, c0, x, prefix, num_output, weight_names=None): """ see arXiv:1511.04119v1 """ if weight_names is None: weight_names = ['w_'+prefix+nm for nm in ['Mxw','Mxb','Mhw']] # full InnerProduct (incl. bias) for x input Mx = add_layer_to_net_spec(ns, L.InnerProduct, prefix+'lstm/Mx', x, inner_product_param={'num_output':4*num_output,'axis':2, 'weight_filler':{'type':'uniform','min':-0.05,'max':0.05}, 'bias_filler':{'type':'constant','value':0}}, param=[{'lr_mult':1,'decay_mult':1,'name':weight_names[0]}, {'lr_mult':2,'decay_mult':0,'name':weight_names[1]}]) Mh = add_layer_to_net_spec(ns, L.InnerProduct, prefix+'lstm/Mh', h0, inner_product_param={'num_output':4*num_output, 'axis':2, 'bias_term': False, 'weight_filler':{'type':'uniform','min':-0.05,'max':0.05}, 'bias_filler':{'type':'constant','value':0}}, param={'lr_mult':1,'decay_mult':1,'name':weight_names[2]}) M = add_layer_to_net_spec(ns, L.Eltwise, prefix+'lstm/Mx+Mh', Mx, Mh, eltwise_param={'operation':P.Eltwise.SUM}) raw_i1, raw_f1, raw_o1, raw_g1 = \ add_layer_with_multiple_tops(ns, L.Slice, prefix+'lstm/slice', 4, M, prefix+'lstm/raw_i', prefix+'lstm/raw_f', prefix+'lstm/raw_o', prefix+'lstm/raw_g', slice_param={'axis':2,'slice_point':[num_output,2*num_output,3*num_output]}) i1 = add_layer_to_net_spec(ns, L.Sigmoid, prefix+'lstm/i', raw_i1, in_place=True) f1 = add_layer_to_net_spec(ns, L.Sigmoid, prefix+'lstm/f', raw_f1, in_place=True) o1 = add_layer_to_net_spec(ns, L.Sigmoid, prefix+'lstm/o', raw_o1, in_place=True) g1 = add_layer_to_net_spec(ns, L.TanH, prefix+'lstm/g', raw_g1, in_place=True) c1_f = add_layer_to_net_spec(ns, L.Eltwise, prefix+'lstm/c_f', f1, c0, eltwise_param={'operation':P.Eltwise.PROD}) c1_i = add_layer_to_net_spec(ns, L.Eltwise, prefix+'lstm/c_i', i1, g1, eltwise_param={'operation':P.Eltwise.PROD}) c1 = add_layer_to_net_spec(ns, L.Eltwise, prefix+'lstm/c', c1_f, c1_i, eltwise_param={'operation':P.Eltwise.SUM}) act_c = add_layer_to_net_spec(ns, L.TanH, prefix+'lstm/act_c', c1, in_place=False) # cannot override c - it MUST be preserved for next time step!!! 
h1 = add_layer_to_net_spec(ns, L.Eltwise, prefix+'lstm/h', o1, act_c, eltwise_param={'operation':P.Eltwise.PROD}) return c1, h1, weight_names </code></pre> <p>Once you have the single time step, you can unroll it as many times you want...</p> <pre><code>def exmaple_use_of_lstm(): T = 3 # number of time steps B = 10 # batch size lstm_output = 500 # dimension of LSTM unit # use net spec ns = caffe.NetSpec() # we need initial values for h and c ns.h0 = L.DummyData(name='h0', dummy_data_param={'shape':{'dim':[1,B,lstm_output]}, 'data_filler':{'type':'constant','value':0}}) ns.c0 = L.DummyData(name='c0', dummy_data_param={'shape':{'dim':[1,B,lstm_output]}, 'data_filler':{'type':'constant','value':0}}) # simulate input X over T time steps and B sequences (batch size) ns.X = L.DummyData(name='X', dummy_data_param={'shape': {'dim':[T,B,128,10,10]}} ) # slice X for T time steps xt = L.Slice(ns.X, name='slice_X',ntop=T,slice_param={'axis':0,'slice_point':range(1,T)}) # unroling h = ns.h0 c = ns.c0 lstm_weights = None tops = [] for t in xrange(T): c, h, lstm_weights = single_time_step_lstm( ns, h, c, xt[t], 't'+str(t)+'/', lstm_output, lstm_weights) tops.append(h) ns.__setattr__('c'+str(t),c) ns.__setattr__('h'+str(t),h) # concat all LSTM tops (h[t]) to a single layer ns.H = L.Concat( *tops, name='concat_h',concat_param={'axis':0} ) return ns </code></pre> <p>Writing the prototxt:</p> <pre><code>ns = exmaple_use_of_lstm() with open('lstm_demo.prototxt','w') as W: W.write('name: "LSTM using NetSpec example"\n') W.write('%s\n' % ns.to_proto()) </code></pre> <p>The resulting unrolled net (for three time steps) looks like</p> <p><a href="https://i.stack.imgur.com/K11tK.png" rel="noreferrer"><img src="https://i.stack.imgur.com/K11tK.png" alt="LSTM"></a></p>
2016-03-13 07:10:34.850000+00:00
2016-04-21 08:20:26.010000+00:00
2016-04-21 08:20:26.010000+00:00
null
32,225,388
<p>Does anyone know if there exists a nice LSTM module for Caffe? I found one from a github account by russell91, but apparently the webpage containing the examples and explanations disappeared (formerly <a href="http://apollo.deepmatter.io/" rel="noreferrer">http://apollo.deepmatter.io/</a> --> it now redirects only to the <a href="https://github.com/russell91/apollocaffe" rel="noreferrer">github page</a>, which has no examples or explanations anymore).</p>
2015-08-26 11:27:22.137000+00:00
2017-06-14 09:57:30.740000+00:00
2017-06-14 09:57:30.740000+00:00
neural-network|deep-learning|caffe|lstm|recurrent-neural-network
['http://jeffdonahue.com/', 'https://i.stack.imgur.com/K11tK.png']
2
62,371,144
<p>You can try using the <a href="https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#work-with-qat-networks" rel="nofollow noreferrer">TensorRT library</a>.</p> <p>One of the features of the library is quantization. In general, MobileNets are difficult to quantize (see <a href="https://arxiv.org/pdf/2004.09602.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2004.09602.pdf</a>), but the library should do a good job.</p>
2020-06-14 10:16:10.023000+00:00
2020-06-14 10:35:29.420000+00:00
2020-06-14 10:35:29.420000+00:00
null
50,565,880
<p>I wanted to quantize (change all the floats into INT8) a ssd-mobilenet model and then want to deploy it onto my raspberry-pi. So far, I have not yet found any thing which can help me with it. Any help would be highly appreciated. I saw tensorflow-lite but it seems it only supports android and iOS. Any library/framweork is acceptable.</p> <p>Thanks in advance.</p>
2018-05-28 11:56:07.303000+00:00
2020-06-14 10:35:29.420000+00:00
null
tensorflow|raspberry-pi|deep-learning
['https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#work-with-qat-networks', 'https://arxiv.org/pdf/2004.09602.pdf']
2
28,445,719
<p>I'm going to go ahead and promote my comment to an answer.</p> <ul> <li><p>there may be a terminology issue about exactly what you mean by "latent trait scores", but I'm 99% certain that you want <code>ranef()</code>: see <code>?ranef.merMod</code>. (In the linear mixed model world these are called BLUPs; Doug Bates prefers to call them <em>conditional modes</em> so that the terminology extends to GLMMs (where they are no longer necessarily best, linear, or unbiased).)</p></li> <li><p>I'm sorry to give you a link rather than an explicit answer, but the best source to find out about the guts of <code>merMod</code> objects is probably <a href="http://arxiv.org/abs/1406.5823" rel="nofollow">this ArXiv preprint</a>, in press at <em>Journal of Statistical Software</em> (hopefully, out any day now). <code>?getME</code> may be useful, too -- it has the advantage that anything you find there can be safely used without worrying that the guts of <code>merMod</code> objects will change in future releases. (Once you use the <code>@</code>-accessor, all bets are off.)</p></li> </ul>
2015-02-11 02:45:27.347000+00:00
2015-02-11 02:45:27.347000+00:00
null
null
28,367,857
<p>this is probably a "stupid" question, but I need to obtain the <strong>latent trait scores</strong> from a <em>merMod</em> object (<em>lme4</em> package). Also, I don't seem to find any explanation of the values in the merMod object. It would be helpful for me to know what the mu, wtres, eta, u, LUtx, Utx, Utr, V, and Xwts are supposed to be (generally).</p> <p>I guess(ed) that the latent trait scores are not in the object, but need to be computed. Irtoys offers a function that does that (e.g., dpv), but requires me to have a matrix of responses that can only be 0 or 1 with no NAs. My data includes NA by design though. Also, I will be working with models that have values other than 0 and 1. Any ideas on workarounds? The Irtoys package seems to offer all I need, but if I'm not able to use it because of these limitations, that would be a pity.</p> <p>Thank you in advance, KH</p>
2015-02-06 14:23:01.613000+00:00
2015-02-11 02:45:27.347000+00:00
null
r|lme4
['http://arxiv.org/abs/1406.5823']
1
70,035,448
<p>This formula is for a mixed-effects model with a random intercept for &quot;period&quot;, assuming that each observation is grouped by that variable. Note that &quot;year&quot; was included as a fixed effect.</p> <pre><code>library(lme4) summary(fm &lt;- lmer(nfc ~ nform + p + namount + lime + pH + year + (1|period), data = parkglm)) </code></pre> <p>However, the model may not be appropriate given the low number of categories in the variable &quot;period&quot;.</p> <p>I suggest you take a look at some references about linear models and think about strategies to model your data, perhaps building categories or exploring different models. Two good manuscripts you may be interested in:</p> <p><a href="http://arxiv.org/pdf/1308.5499.pdf" rel="nofollow noreferrer">Winter, B. (2013). Linear models and linear mixed effects models in R with linguistic applications. arXiv:1308.5499.</a></p> <p><a href="https://www.jstatsoft.org/article/view/v067i01" rel="nofollow noreferrer">Douglas Bates, Martin Maechler, Ben Bolker, Steve Walker (2015). Fitting Linear Mixed-Effects Models Using lme4. Journal of Statistical Software, 67(1), 1-48. doi:10.18637/jss.v067.i01.</a></p>
2021-11-19 13:15:22.120000+00:00
2021-11-19 13:23:35.337000+00:00
2021-11-19 13:23:35.337000+00:00
null
70,034,460
<p>I'm playing around with models.</p> <p>If I run a basic GLM:</p> <pre><code>summary(glm(nfc~nform+p+namount+lime+pH+year, data=parkglm,family=gaussian())) </code></pre> <p>I get results that include year as an explanatory variable:</p> <p><a href="https://i.stack.imgur.com/2u1av.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2u1av.png" alt="enter image description here" /></a></p> <p>However, I need year to be nested in another factor, &quot;period&quot;. There are 17 years, each year falls within one of two periods. I tried this using lme4 and lmer.</p> <pre><code>anova(lmer(nfc~nform+p+namount+lime+pH+(year|period), data=parkglm)) </code></pre> <p>That gave me the below output which seems pretty reasonable, however you'll see that year or period aren't listed. I assume they've been controlled for, but I'm interested in their effects. I'm very iffy on nesting with in the lmer package.</p> <p><a href="https://i.stack.imgur.com/uBiDZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uBiDZ.png" alt="lmer output" /></a></p> <p>I appreciate there might be <em>alot</em> wrong going on here and apologies for that. If it's just a mess, fine, don't be afraid to say! I'm new to linear models in r and have struggled to find the info I need in a manner I can implement.</p> <p>Here is a slight snapshot of the top of the data with types of data in each column.</p> <ul> <li>Period = categoric and is either &quot;pre&quot; or &quot;post&quot;</li> <li>Year = continuous</li> <li>Plot = ignore</li> <li>Nform = categoric</li> <li>namount = continuous</li> <li>p = binary/categoric (0 or 1)</li> <li>k = ignore</li> <li>lime = binary / categoric (0 or 1)</li> <li>ph:nfc = all continuous</li> </ul> <p><a href="https://i.stack.imgur.com/N37lC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N37lC.png" alt="data snap." /></a></p>
2021-11-19 11:53:35.583000+00:00
2021-11-19 13:23:35.337000+00:00
null
r|lme4
['http://arxiv.org/pdf/1308.5499.pdf', 'https://www.jstatsoft.org/article/view/v067i01']
2
69,344,241
<p>Larger models are often significantly harder to train [0], so if you simply increase the size of the model or stack more models, don't expect much improvement over a simpler model.</p> <p>Furthermore, how large is your dataset? In both models there is a sign of overfitting (or at least a stagnating loss value on the test set).</p> <p>[0] <a href="https://arxiv.org/abs/1512.03385" rel="nofollow noreferrer">https://arxiv.org/abs/1512.03385</a></p>
2021-09-27 09:24:08.500000+00:00
2021-09-27 09:24:08.500000+00:00
null
null
69,341,891
<p>I have made a stacking model using 5 efficientNet models for a Kaggle competition. Given below is the architecture of the stacking model:</p> <pre><code>Model: &quot;model&quot; __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1_0 (InputLayer) [(None, 600, 600, 3) 0 __________________________________________________________________________________________________ input_3_1 (InputLayer) [(None, 600, 600, 3) 0 __________________________________________________________________________________________________ input_5_2 (InputLayer) [(None, 600, 600, 3) 0 __________________________________________________________________________________________________ input_7_3 (InputLayer) [(None, 600, 600, 3) 0 __________________________________________________________________________________________________ input_9_4 (InputLayer) [(None, 600, 600, 3) 0 __________________________________________________________________________________________________ effnet_layer0_0 (Functional) (None, None, None, 2 64097680 input_1_0[0][0] __________________________________________________________________________________________________ effnet_layer1_1 (Functional) (None, None, None, 2 64097680 input_3_1[0][0] __________________________________________________________________________________________________ effnet_layer2_2 (Functional) (None, None, None, 2 64097680 input_5_2[0][0] __________________________________________________________________________________________________ effnet_layer3_3 (Functional) (None, None, None, 2 64097680 input_7_3[0][0] __________________________________________________________________________________________________ effnet_layer4_4 (Functional) (None, None, None, 2 64097680 input_9_4[0][0] __________________________________________________________________________________________________ global_average_pooling2d_0 (Glo (None, 2560) 0 effnet_layer0_0[0][0] __________________________________________________________________________________________________ global_average_pooling2d_1_1 (G (None, 2560) 0 effnet_layer1_1[0][0] __________________________________________________________________________________________________ global_average_pooling2d_2_2 (G (None, 2560) 0 effnet_layer2_2[0][0] __________________________________________________________________________________________________ global_average_pooling2d_3_3 (G (None, 2560) 0 effnet_layer3_3[0][0] __________________________________________________________________________________________________ global_average_pooling2d_4_4 (G (None, 2560) 0 effnet_layer4_4[0][0] __________________________________________________________________________________________________ dropout_0 (Dropout) (None, 2560) 0 global_average_pooling2d_0[0][0] __________________________________________________________________________________________________ dropout_1_1 (Dropout) (None, 2560) 0 global_average_pooling2d_1_1[0][0 __________________________________________________________________________________________________ dropout_2_2 (Dropout) (None, 2560) 0 global_average_pooling2d_2_2[0][0 __________________________________________________________________________________________________ dropout_3_3 (Dropout) (None, 2560) 0 global_average_pooling2d_3_3[0][0 __________________________________________________________________________________________________ dropout_4_4 
(Dropout) (None, 2560) 0 global_average_pooling2d_4_4[0][0 __________________________________________________________________________________________________ dense_0 (Dense) (None, 4) 10244 dropout_0[0][0] __________________________________________________________________________________________________ dense_1_1 (Dense) (None, 4) 10244 dropout_1_1[0][0] __________________________________________________________________________________________________ dense_2_2 (Dense) (None, 4) 10244 dropout_2_2[0][0] __________________________________________________________________________________________________ dense_3_3 (Dense) (None, 4) 10244 dropout_3_3[0][0] __________________________________________________________________________________________________ dense_4_4 (Dense) (None, 4) 10244 dropout_4_4[0][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 20) 0 dense_0[0][0] dense_1_1[0][0] dense_2_2[0][0] dense_3_3[0][0] dense_4_4[0][0] __________________________________________________________________________________________________ dense (Dense) (None, 10) 210 concatenate[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (None, 4) 44 dense[0][0] ================================================================================================== Total params: 320,539,874 Trainable params: 254 Non-trainable params: 320,539,620 </code></pre> <p><em>Performance metrics of the stacking model:</em></p> <p><a href="https://i.stack.imgur.com/3JwV2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3JwV2.png" alt="stacking model accuracy" /></a></p> <p><a href="https://i.stack.imgur.com/VeF8a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VeF8a.png" alt="stacking model loss" /></a></p> <p><em>Performance metrics of a base model:</em></p> <p><a href="https://i.stack.imgur.com/JzoFN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JzoFN.png" alt="accuracy" /></a></p> <p><a href="https://i.stack.imgur.com/5FCM8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5FCM8.png" alt="loss" /></a></p> <p>But, when I use the stacking model for the Kaggle predictions, I am getting a score of 0.551, whereas when I use one of the base models, I get a score of 0.581.</p> <p><strong>Why does this happen? Isn't the stacking model supposed to give better results than the base model?</strong></p>
2021-09-27 06:05:05.713000+00:00
2021-10-08 06:17:39.470000+00:00
2021-10-08 06:17:39.470000+00:00
python|tensorflow|deep-learning|ensemble-learning|efficientnet
['https://arxiv.org/abs/1512.03385']
1
44,510,094
<p>While SqueezeNet has 50x fewer parameters than AlexNet, it is still a very large network. <a href="https://arxiv.org/pdf/1602.07360.pdf" rel="nofollow noreferrer">The original paper</a> does not mention a training time, but the SqueezeNet-based <a href="https://www.researchgate.net/profile/Thomas_Unterthiner2/publication/309935608_Speeding_up_Semantic_Segmentation_for_Autonomous_Driving/links/58524adf08ae7d33e01a58a7.pdf" rel="nofollow noreferrer">SQ</a> required 22 hours to train using two Titan X graphics cards - and that was with some of the weights pre-trained! I haven't gone over your code in detail, but what you describe is expected behavior - your network is able to learn on the single batch, just not as quickly as you expected.</p> <p>I suggest reusing as many of the weights as possible instead of reinitializing them, just as the creators of SQ did. This is known as transfer learning, and it works because many of the lower-level features (lines, curves, basic shapes) in an image are the same regardless of the image's content, and reusing the weights for these layers saves the network from having to re-learn them from scratch.</p>
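<p>To make the transfer-learning suggestion concrete, here is a minimal sketch. It uses Keras rather than Caffe2 purely for brevity, and the base network, input size and layer choices are illustrative assumptions, not the setup from the question; the only fixed number is the 101 Food-101 classes:</p> <pre><code>import tensorflow as tf

# Load a network pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.MobileNetV2(weights='imagenet', include_top=False,
                                         input_shape=(224, 224, 3))
base.trainable = False  # reuse the low-level features instead of re-learning them

# Train only a fresh head for the 101 Food-101 classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(101, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
</code></pre> <p>The same principle applies in Caffe2: load the published pre-trained SqueezeNet weights into the workspace and only reinitialize (and train) the final layer you changed.</p>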
2017-06-12 23:23:57.530000+00:00
2017-06-12 23:23:57.530000+00:00
null
null
44,509,951
<p>I am trying to classify the ETH Food-101 dataset using squeezenet in Caffe2. My model is imported from the Model Zoo and I made two types of modifications to the model: </p> <p>1) Changing the dimensions of the last layer to have 101 outputs</p> <p>2) The images from the database are in NHWC form and I just flipped the dimensions of the weights to match. (I plan on changing this)</p> <p>The Food101 dataset has 75,000 images for training and I am currently using a batch size of 128 and a starting learning rate of -0.01 with a gamma of 0.999 and stepsize of 1. What I noticed is that for the first 2000 iterations of the network the accuracy hovered around 1/128 and this took an hour or so to complete. </p> <p>I added all the weights to the model.params so they can get updated during gradient descent(except for data) and reinitialized all weights as Xavier and biases to constant. I would expect the accuracy to grow fairly quickly in the first hundred to thousand iterations and then tail off as the number of iterations grow. In my case, the learning is staying constant around 0.</p> <p>When I look at the gradient file I find that the average is on the order of 10^-6 with a standard deviation of 10^-7. This explains the slow learning rate, but I haven't been able to get the gradient to start much higher.</p> <p>These are the gradient statistics for the first convolution after a few iterations</p> <pre><code> Min Max Avg Sdev -1.69821e-05 2.10922e-05 1.52149e-06 5.7707e-06 -1.60263e-05 2.01478e-05 1.49323e-06 5.41754e-06 -1.62501e-05 1.97764e-05 1.49046e-06 5.2904e-06 -1.64293e-05 1.90508e-05 1.45681e-06 5.22742e-06 </code></pre> <p>Here are the core parts of my code:</p> <pre><code>#init_path is path to init_net protobuf #pred_path is path to pred_net protobuf def main(init_path, pred_path): ws.ResetWorkspace() data_folder = '/home/myhome/food101/' #some debug code here arg_scope = {"order":"NCHW"} train_model = model_helper.ModelHelper(name="food101_train", arg_scope=arg_scope) if not debug: data, label = AddInput( train_model, batch_size=128, db=os.path.join(data_folder, 'food101-train-nchw-leveldb'), db_type='leveldb') init_net_def, pred_net_def = update_squeeze_net(init_path, pred_path) #print str(init_net_def) train_model.param_init_net.AppendNet(core.Net(init_net_def)) train_model.net.AppendNet(core.Net(pred_net_def)) ws.RunNetOnce(train_model.param_init_net) add_params(train_model, init_net_def) AddTrainingOperators(train_model, 'softmaxout', 'label') AddBookkeepingOperators(train_model) ws.RunNetOnce(train_model.param_init_net) if debug: ws.FeedBlob('data', data) ws.FeedBlob('label', label) ws.CreateNet(train_model.net) total_iters = 10000 accuracy = np.zeros(total_iters) loss = np.zeros(total_iters) # Now, we will manually run the network for 200 iterations. 
for i in range(total_iters): #try: conv1_w = ws.FetchBlob('conv1_w') print conv1_w[0][0] ws.RunNet("food101_train") #except RuntimeError: # print ws.FetchBlob('conv1').shape # print ws.FetchBlob('pool1').shape # print ws.FetchBlob('fire2/squeeze1x1_w').shape # print ws.FetchBlob('fire2/squeeze1x1_b').shape #softmax = ws.FetchBlob('softmaxout') #print softmax[i] #print softmax[i][0][0] #print softmax[i][0][:5] #print softmax[64*i] accuracy[i] = ws.FetchBlob('accuracy') loss[i] = ws.FetchBlob('loss') print accuracy[i], loss[i] </code></pre> <p>My add_params function initializes the weights as follows</p> <pre><code>#ops allows me to only initialize the weights of specific ops because i initially was going to do last layer training def add_params(model, init_net_def, ops=[]): def add_param(op): for output in op.output: if "_w" in output: weight_shape = [] for arg in op.arg: if arg.name == 'shape': weight_shape = arg.ints weight_initializer = initializers.update_initializer( None, None, ("XavierFill", {})) model.create_param( param_name=output, shape=weight_shape, initializer=weight_initializer, tags=ParameterTags.WEIGHT) elif "_b" in output: weight_shape = [] for arg in op.arg: if arg.name == 'shape': weight_shape = arg.ints weight_initializer = initializers.update_initializer( None, None, ("ConstantFill", {})) model.create_param( param_name=output, shape=weight_shape, initializer=weight_initializer, </code></pre> <p>I find that my loss function fluctuates when I use the full training set, but If i use just one batch and iterate over it several times I find that the loss function goes down but very slowly. </p>
2017-06-12 23:05:59.673000+00:00
2017-07-18 10:05:57.687000+00:00
2017-06-12 23:15:49.213000+00:00
python|neural-network|caffe|conv-neural-network|caffe2
['https://arxiv.org/pdf/1602.07360.pdf', 'https://www.researchgate.net/profile/Thomas_Unterthiner2/publication/309935608_Speeding_up_Semantic_Segmentation_for_Autonomous_Driving/links/58524adf08ae7d33e01a58a7.pdf']
2
68,033,186
<p>This isn't a great SO question because it's more exploratory. Did you lower your ODE tolerances? That would improve your gradient calculation which could help. What activation function are you using? I would use something like <code>softplus</code> instead of <code>tanh</code> so that you don't have the saturating behavior. Did you scale the eigenvalues and take into account <a href="https://arxiv.org/abs/2103.15341" rel="nofollow noreferrer">the issues explored in the stiff neural ODE paper</a>? Larger neural networks? Different learning rates? ADAM? Etc.</p> <p>This is much better suited for a forum for discussion like <a href="https://discourse.julialang.org/" rel="nofollow noreferrer">the JuliaLang Discourse</a>. We can continue there since walking through this will not be fruitful without some back and forth.</p>
2021-06-18 10:03:15.650000+00:00
2021-06-18 10:03:15.650000+00:00
null
null
68,004,457
<p>About a month ago I asked a question about strategies for better convergence when training a neural differential equation. I've since gotten that example to work using the advice I was given, but when I applied what the same advice to a more difficult model, I got stuck again. All of my code is in Julia, primarily making use of the DiffEqFlux library. In effort to keep this post as brief as possible, I won't share all of my code for everything I've tried, but if anyone wants access to it to troubleshoot I can provide it.</p> <p><strong>What I'm Trying to Do</strong></p> <p>The data I'm trying to learn comes from an SIRx model:</p> <pre><code>function SIRx!(du, u, p, t) β, μ, γ, a, b = Float32.([280, 1/50, 365/22, 100, 0.05]) S, I, x = u du[1] = μ*(1-x) - β*S*I - μ*S du[2] = β*S*I - (μ+γ)*I du[3] = a*I - b*x nothing end; </code></pre> <p>The initial condition I used was <code>u0 = Float32.([0.062047128, 1.3126149f-7, 0.9486445]);</code>. I generated data from t=0 to 25, sampled every 0.02 (in training, I only use every 20 points or so for speed, and using more doesn't improve results). The data looks like this: <a href="https://i.stack.imgur.com/utUGd.png" rel="nofollow noreferrer">Training Data</a></p> <p>The UDE I'm training is</p> <pre><code>function SIRx_ude!(du, u, p, t) μ, γ = Float32.([1/50, 365/22]) S,I,x = u du[1] = μ*(1-x) - μ*S + ann_dS(u, @view p[1:lenS])[1] du[2] = -(μ+γ)*I + ann_dI(u, @view p[lenS+1:lenS+lenI])[1] du[3] = ann_dx(u, @view p[lenI+1:end])[1] nothing end; </code></pre> <p>Each of the neural networks (<code>ann_dS, ann_dI, ann_dx</code>) are defined using <code>FastChain(FastDense(3, 20, tanh), FastDense(20, 1))</code>. I tried using a single neural network with 3 inputs and 3 outputs, but it was slower and didn't perform any better. I also tried normalizing inputs to the network first, but it doesn't make a significant difference outside of slowing things down.</p> <p><strong>What I've Tried</strong></p> <ul> <li><strong>Single shooting</strong> The network just fits a line through the middle of the data. This happens even when I weight the earlier datapoints more in the loss function. <a href="https://i.stack.imgur.com/KIEnU.png" rel="nofollow noreferrer">Single-shot Training</a></li> <li><strong>Multiple Shooting</strong> The best result I had was with multiple shooting. As seen here, it's not simply fitting a straight line, but it's not exactly fitting the data either<a href="https://i.stack.imgur.com/kSW1R.png" rel="nofollow noreferrer">Multiple Shooting Result</a>. I've tried continuity terms ranging from 0.1 to 100 and group sizes from 3 to 30 and it doesn't make a significant difference.</li> <li><strong>Various Other Strategies</strong> I've also tried iteratively growing the fit, 2-stage training with a collocation, and mini-batching as outlined here: <a href="https://diffeqflux.sciml.ai/dev/examples/local_minima" rel="nofollow noreferrer">https://diffeqflux.sciml.ai/dev/examples/local_minima</a>, <a href="https://diffeqflux.sciml.ai/dev/examples/collocation/" rel="nofollow noreferrer">https://diffeqflux.sciml.ai/dev/examples/collocation/</a>, <a href="https://diffeqflux.sciml.ai/dev/examples/minibatch/" rel="nofollow noreferrer">https://diffeqflux.sciml.ai/dev/examples/minibatch/</a>. Iteratively growing the fit works well the first couple of iterations, but as the length increases it goes back to fitting a straight line again. 
2-stage collocation training works really well for stage 1, but it doesn't actually improve performance on the second stage (I've tried both single and multiple shooting for the second stage). Finally, mini-batching worked about as well as single-shooting (which is to say not very well) but much more quickly.</li> </ul> <p><strong>My Question</strong></p> <p>In summary, I have no idea what to try. There are so many strategies, each with so many parameters that can be tweaked. I need a way to diagnose the problem more precisely so I can better decide how to proceed. If anyone has experience with this sort of problem, I'd appreciate any advice or guidance I can get.</p>
2021-06-16 14:14:47.510000+00:00
2021-06-18 10:03:15.650000+00:00
null
neural-network|julia|differentialequations.jl|flux-machine-learning
['https://arxiv.org/abs/2103.15341', 'https://discourse.julialang.org/']
2
45,036,315
<p>You can read about the work of Jacques Mattheij, he actually uses a customized version of Xception<sup>1</sup> running on <a href="https://keras.io/" rel="nofollow noreferrer">https://keras.io/</a>.</p> <p>The introduction is <a href="https://jacquesmattheij.com/sorting-two-metric-tons-of-lego" rel="nofollow noreferrer">Sorting 2 Metric Tons of Lego</a>.</p> <p>In <a href="https://jacquesmattheij.com/sorting-lego-the-software-side" rel="nofollow noreferrer">Sorting 2 Tons of Lego, The software Side</a> you can read:</p> <blockquote> <p>The hard challenge to deal with next was to get a training set large enough to make working with 1000+ classes possible. At first this seemed like an insurmountable problem. I could not figure out how to make enough images and to label them by hand in acceptable time, even the most optimistic calculations had me working for 6 months or longer full-time in order to make a data set that would allow the machine to work with many classes of parts rather than just a couple.</p> <p>In the end the solution was staring me in the face for at least a week before I finally clued in: it doesn’t matter. All that matters is that the machine labels its own images most of the time and then all I need to do is correct its mistakes. As it gets better there will be fewer mistakes. This very rapidly expanded the number of training images. The first day I managed to hand-label about 500 parts. The next day the machine added 2000 more, with about half of those labeled wrong. The resulting 2500 parts where the basis for the next round of training 3 days later, which resulted in 4000 more parts, 90% of which were labeled right! So I only had to correct some 400 parts, rinse, repeat… So, by the end of two weeks there was a dataset of 20K images, all labeled correctly.</p> <p>This is far from enough, some classes are severely under-represented so I need to increase the number of images for those, perhaps I’ll just run a single batch consisting of nothing but those parts through the machine. No need for corrections, they’ll all be labeled identically.</p> </blockquote> <p>A recent update is <a href="https://jacquesmattheij.com/sorting-lego-many-questions-and-this-is-what-the-result-looks-like" rel="nofollow noreferrer">Sorting 2 Tons of Lego, Many Questions, Results</a>.</p> <p><hr> <sup>1</sup><a href="https://arxiv.org/abs/1610.02357" rel="nofollow noreferrer">CHOLLET, François. Xception: Deep Learning with Depthwise Separable Convolutions. <em>arXiv preprint arXiv:1610.02357</em>, 2016.</a></p>
2017-07-11 13:44:48.717000+00:00
2017-07-11 13:51:05.753000+00:00
2017-07-11 13:51:05.753000+00:00
null
39,486,320
<p>Having read <a href="https://cloud.google.com/blog/big-data/2016/08/how-a-japanese-cucumber-farmer-is-using-deep-learning-and-tensorflow" rel="nofollow" title="this article">this article</a> about someone who uses TensorFlow to sort cucumbers into nine different classes, I was wondering if this type of process could be applied to a large number of classes. My idea would be to use it to identify Lego parts.</p> <p>At the moment, a site like Bricklink describes more than <a href="https://www.bricklink.com/catalogList.asp?catType=P&amp;catLike=W" rel="nofollow">40,000 different parts</a>, so it would be a bit different from the cucumber example, but I am wondering if it sounds suitable. There is no easy way to get hundreds of pictures for each part, but does the following process sound feasible:</p> <ul> <li>take pictures of a part;</li> <li>try to identify the part using TensorFlow;</li> <li>if it does not identify the correct part, take more pictures and feed them to the neural network;</li> <li>go on with the next part.</li> </ul> <p>That way, each time we encounter a new piece we "teach" the network its reference so that it can be better recognized the next time. After hundreds of such iterations monitored by a human, could TensorFlow learn to recognize the parts? At least the most common ones?</p> <p>My question might sound stupid, but I am not into neural networks, so any advice is welcome. At the moment I have not found any way to identify a Lego part based on pictures, and this "cucumber example" sounds promising, so I am looking for some feedback.</p> <p>Thanks.</p>
2016-09-14 08:55:00.137000+00:00
2017-07-11 13:51:05.753000+00:00
null
tensorflow|lego-mindstorms
['https://keras.io/', 'https://jacquesmattheij.com/sorting-two-metric-tons-of-lego', 'https://jacquesmattheij.com/sorting-lego-the-software-side', 'https://jacquesmattheij.com/sorting-lego-many-questions-and-this-is-what-the-result-looks-like', 'https://arxiv.org/abs/1610.02357']
5
5,091,273
<p>It turns out that for 0-1 matrices, 2x2 swaps are sufficient to get from one matrix to any other. This was proved by H J Ryser as Theorem 3.1 in a paper called "Combinatorial Properties of Matrices of Zeros and Ones": <a href="http://cms.math.ca/cjm/v9/cjm1957v09.0371-0377.pdf" rel="nofollow">http://cms.math.ca/cjm/v9/cjm1957v09.0371-0377.pdf</a> . People have been trying to prove for a while that the Markov chain based on 2x2 swaps mixes rapidly; this paper <a href="http://arxiv.org/pdf/1004.2612v3" rel="nofollow">http://arxiv.org/pdf/1004.2612v3</a> seems to come the closest.</p> <p>If one could prove the generalization of Ryser's theorem to your case (maybe with up to 4x4 "swaps"), then on account of the symmetry of the swaps, it wouldn't be too hard to get a chain whose steady state distribution is uniform on the matrices of interest. I don't think there's any hope at the moment of proving that it mixes rapidly for all possible row/column distributions, but perhaps you know something about the distributions that we don't...</p>
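<p>For the 0-1 case described above, one step of the 2x2-swap chain is short to implement. A minimal sketch in Python/NumPy (how many steps are needed for approximate uniformity is exactly the mixing-time question mentioned above):</p> <pre><code>import numpy as np

def swap_step(M, rng=np.random):
    # Pick two distinct rows and two distinct columns uniformly at random.
    r1, r2 = rng.choice(M.shape[0], 2, replace=False)
    c1, c2 = rng.choice(M.shape[1], 2, replace=False)
    a, b = M[r1, c1], M[r1, c2]
    c, d = M[r2, c1], M[r2, c2]
    # The corners form the pattern XY;YX with X != Y exactly when
    # a == d, b == c and a != b.  Flipping them to YX;XY preserves
    # every row sum and every column sum.
    if a == d and b == c and a != b:
        M[r1, c1], M[r1, c2] = b, a
        M[r2, c1], M[r2, c2] = d, c

# Example: run many steps starting from some 0-1 matrix.
M = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 1, 0]])
for _ in range(10000):
    swap_step(M)
</code></pre>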
2011-02-23 13:02:45.167000+00:00
2011-02-23 13:02:45.167000+00:00
null
null
5,086,872
<p>Suppose I have a 2D array like the following:</p> <pre><code>GACTG AGATA TCCGA </code></pre> <p>Each array element is taken from a small finite set (in my case, DNA nucleotides -- <code>{A, C, G, T}</code>). I would like to randomly shuffle this array somehow while preserving both row <em>and</em> column nucleotide frequencies. Is this possible? Can it be done efficiently?</p> <p><strong>[EDIT]</strong>: By this I mean I want to produce a new matrix where each row has the same number of <code>A</code>s, <code>C</code>s, <code>G</code>s and <code>T</code>s as the corresponding row of the original matrix, and where each column has the same number of <code>A</code>s, <code>C</code>s, <code>G</code>s and <code>T</code>s as the corresponding column of the original matrix. <strong>Permuting the rows or columns of the original matrix will not achieve this in general.</strong> (E.g. for the example above, the top row has 2 <code>G</code>s, and 1 each of <code>A</code>, <code>C</code> and <code>T</code>; if this row was swapped with row 2, the top row in the resulting matrix would have 3 <code>A</code>s, 1 <code>G</code> and 1 <code>T</code>.)</p> <p>It's simple enough to preserve just column frequencies by shuffling a column at a time, and likewise for rows. But doing this will in general alter the frequencies of the other kind.</p> <p><strong>My thoughts so far:</strong> If it's possible to pick 2 rows and 2 columns so that the 4 elements at the corners of this rectangle have the pattern</p> <pre><code>XY YX </code></pre> <p>for some pair of distinct elements <code>X</code> and <code>Y</code>, then replacing these 4 elements with</p> <pre><code>YX XY </code></pre> <p>will maintain both row and column frequencies. In the example at the top, this can be done for (at least) rows 1 and 2 and columns 2 and 5 (whose corners give the 2x2 matrix <code>AG;GA</code>), and for rows 1 and 3 and columns 1 and 4 (whose corners give <code>GT;TG</code>). Clearly this could be repeated a number of times to produce some level of randomisation.</p> <p>Generalising a bit, any "subrectangle" induced by a subset of rows and a subset of columns, in which the frequencies of all rows are the same and the frequencies of all columns are the same, can have both its rows and columns permuted to produce a valid complete rectangle. (Of these, only those subrectangles in which at least 1 element is changed are actually interesting.) Big questions:</p> <ol> <li><strong>Are all valid complete matrices reachable by a series of such "subrectangle rearrangements"?</strong> I suspect the answer is yes.</li> <li><strong>Are all valid subrectangle rearrangements decomposable into a series of 2x2 swaps?</strong> <strong>[EDIT]</strong>: <a href="https://stackoverflow.com/questions/5086872/is-it-possible-to-shuffle-a-2d-matrix-while-preserving-row-and-column-frequencies/5087786#5087786">mhum's counterexample</a> shows that the answer is <em>no</em>. Unfortunate, because this would seem to make it harder to come up with an efficient algorithm, but important to know.</li> <li><strong>Can some or all of the valid rearrangements be computed efficiently?</strong></li> </ol> <p><a href="https://stackoverflow.com/questions/2133268/randomize-matrix-in-perl-keeping-row-and-column-totals-the-same">This question</a> addresses a special case in which the set of possible elements is <code>{0, 1}</code>. 
The solutions people have come up with there are similar to what I have come up with myself, and are probably usable, but not ideal as they require an arbitrary amount of backtracking to work correctly. Also I'm concerned that only 2x2 swaps are considered.</p> <p>Finally, I would ideally like a solution that can be proven to select a matrix uniformly at random from the set of all matrices having identical row frequencies and column frequencies to the original. I know, I'm asking for a lot :)</p>
2011-02-23 04:03:41.463000+00:00
2011-02-23 13:02:45.167000+00:00
2017-05-23 10:33:08.147000+00:00
algorithm|random|shuffle
['http://cms.math.ca/cjm/v9/cjm1957v09.0371-0377.pdf', 'http://arxiv.org/pdf/1004.2612v3']
2
63,189,141
<p>I think you should read the <a href="https://arxiv.org/pdf/1703.09039.pdf" rel="nofollow noreferrer">DNN</a> article.</p> <p>Why? Why do you want to use a Random Forest before DNN training?</p> <p>Yes, you can display the feature importances of a <code>random-forest</code> using</p> <pre><code>from pandas import DataFrame
from sklearn.ensemble import RandomForestClassifier

random_forest = RandomForestClassifier(random_state=42).fit(x_train, y_train)
feature_importances = DataFrame(random_forest.feature_importances_,
                                index=x_train.columns,
                                columns=['importance']).sort_values('importance', ascending=False)
print(feature_importances)
</code></pre> <p>But this is a <code>feature-extraction</code> method, whereas the DNN is a <code>neural-network</code> method.</p> <p>A DNN is more complex than a <code>random-forest</code>; while the <code>random-forest</code> handles <code>feature-extraction</code>, a DNN handles</p> <ul> <li><code>feature-extraction</code>,</li> <li><code>back-propagation</code>,</li> <li><code>feed-forward</code> methods.</li> </ul> <p>If you feed the DNN enough training samples, you will get higher accuracy.</p> <ul> <li>Does the use of tree-based feature importance prevent the use of other training algorithms?</li> </ul> <p>No; depending on the problem, the sufficient feature size and number of samples vary. Usually, you don't use a <code>random-forest</code> to extract feature importances from 1M images.</p> <p>Also, you don't use a DNN for small datasets.</p>
2020-07-31 09:23:45.337000+00:00
2020-07-31 09:23:45.337000+00:00
null
null
62,958,052
<p>My question is straightforward: is it possible to use a tree-based dimensionality reduction, such as the feature importances embedded in a Random Forest, before training a DNN on the dataset?</p> <p>In other words, does the use of tree-based feature importance prevent the use of training algorithms other than trees/Random Forests?</p>
2020-07-17 16:22:23.950000+00:00
2020-07-31 09:23:45.337000+00:00
null
neural-network|random-forest|feature-selection
['https://arxiv.org/pdf/1703.09039.pdf']
1
56,440,329
<p>Looking at the source code of the Adam optimizer in Keras, the actual "decay" is performed at <a href="https://github.com/keras-team/keras/blob/master/keras/optimizers.py#L476" rel="nofollow noreferrer">this line</a>. The code you reported is executed only afterwards and is not the decay itself.<br> If the question is "why is it like that", I would suggest you read some theory about Adam, such as <a href="https://arxiv.org/abs/1412.6980" rel="nofollow noreferrer">the original paper</a>.</p> <p>EDIT<br> To be clear: the update equation of the Adam optimizer does NOT include a decay by itself. The decay has to be applied separately.</p>
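<p>For reference, the decay applied at that line amounts to the following schedule (a paraphrase rather than a verbatim excerpt; see the linked source for the exact code):</p> <pre><code># Decayed learning rate as a function of the iteration count t.
# This is separate from the bias-correction factor
# sqrt(1 - beta2**t) / (1 - beta1**t) that the question plots.
lr_decayed = lr * (1.0 / (1.0 + decay * t))
</code></pre> <p>With the default <code>decay=0</code> this leaves the learning rate untouched, and the bias-correction factor itself converges to 1 as <code>t</code> grows, so the effective rate approaches <code>lr</code> rather than increasing without bound.</p>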
2019-06-04 08:42:41.380000+00:00
2019-06-04 09:41:32.260000+00:00
2019-06-04 09:41:32.260000+00:00
null
56,440,068
<p>I have been using the following piece of code to print the lr_t learning_rate in Adam() optimizer for my trainable_model.</p> <pre><code>if(np.random.uniform()*100 &lt; 3 and self.training): model = self.trainable_model _lr = tf.to_float(model.optimizer.lr, name='ToFloat') _decay = tf.to_float(model.optimizer.decay, name='ToFloat') _beta1 = tf.to_float(model.optimizer.beta_1, name='ToFloat') _beta2 = tf.to_float(model.optimizer.beta_2, name='ToFloat') _iterations = tf.to_float(model.optimizer.iterations, name='ToFloat') t = K.cast(_iterations, K.floatx()) + 1 _lr_t = lr * (K.sqrt(1. - K.pow(_beta2, t)) / (1. - K.pow(_beta1, t))) print(" - LR_T: "+str(K.eval(_lr_t))) </code></pre> <p>What I don't understand is that this learning rate increases. (with decay at default value of 0).</p> <p>If we look at the learning_rate equation in Adam, we find this:</p> <pre><code> lr_t = lr * (K.sqrt(1. - K.pow(self.beta_2, t)) / (1. - K.pow(self.beta_1, t))) </code></pre> <p>which corresponds to the equation (with default values for parameters):</p> <pre><code>= 0.001*sqrt(1-0.999^x)/(1-0.99^x) </code></pre> <p>If we print this equation we obtain : <a href="https://i.stack.imgur.com/9plBo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9plBo.png" alt="enter image description here"></a></p> <p>which clearly shows that the learning_rate is increasing exponentially over time (since t starts at 1)</p> <p>can someone explain why this is the case ? I have read everywhere that we should use a learning_rate that decays over time, not increase.</p> <p>Does it means that my neural network makes bigger updates over time as Adam's learning_rate increases ?</p>
2019-06-04 08:26:44.663000+00:00
2019-06-04 09:41:32.260000+00:00
2019-06-04 08:52:23.793000+00:00
machine-learning|keras|neural-network|deep-learning|adam
['https://github.com/keras-team/keras/blob/master/keras/optimizers.py#L476', 'https://arxiv.org/abs/1412.6980']
2
45,311,515
<p>You can choose any function approximator that is differentiable. Two commonly used classes of value function approximators are:</p> <ol> <li><p>Linear function approximators: Linear combinations of features</p> <pre><code> For approximating Q (the action-value) 1. Find features that are functions of states and actions. 2. Represent q as a weighted combination of these features. </code></pre> <p><a href="https://i.stack.imgur.com/8jZuk.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8jZuk.gif" alt="enter image description here"></a></p> <p>where <a href="https://i.stack.imgur.com/ujHyW.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ujHyW.gif" alt="phi_sa"></a> is a vector in <a href="https://i.stack.imgur.com/gcc4q.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gcc4q.gif" alt="Rd"></a> with <a href="https://i.stack.imgur.com/Enalq.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Enalq.gif" alt="ith"></a> component given by <a href="https://i.stack.imgur.com/lNjix.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lNjix.gif" alt="enter image description here"></a> and <a href="https://i.stack.imgur.com/zPEq4.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zPEq4.gif" alt="w"></a> is the weight vector <a href="https://i.stack.imgur.com/akfxk.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/akfxk.gif" alt="enter image description here"></a> whose <a href="https://i.stack.imgur.com/Enalq.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Enalq.gif" alt="ith"></a> componenet is given by <a href="https://i.stack.imgur.com/ZJ1o0.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZJ1o0.gif" alt="enter image description here"></a>.</p></li> <li><p>Neural Network</p> <p>Represent <a href="https://i.stack.imgur.com/zlo1v.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zlo1v.gif" alt="qSAW"></a> using a neural network. You can either approximate using <em>action-in</em> (left of figure below) type or <em>action-out</em> (right of figure below) type. The difference being that the neural network can either take as input representations of both the state and the action and produce a single value (<em>Q-value</em>) as the output or take as input only the representation of state <code>s</code> and provide as output one value for each action, <em>a</em> in the action space (This type is easier to realize if the action space is discrete and finite).</p> <p><a href="https://i.stack.imgur.com/c5xAJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c5xAJ.png" alt="enter image description here"></a></p> <p>Using the first type (<em>action-in</em>) for the example as it is close to the example in the linear case, you could create a Q-value approximator using a neural network with the following approach:</p> <pre><code> Represent the state-action value as a normalized vector (or as a one-hot vector representing the state and action) 1. Input layer : Size= number of inputs 2. `n` hidden layers with `m` neurons 3. Output layer: single output neuron Sigmoid activation function. Update weights using gradient descent as per the * semi-gradient Sarsa algorithm*. </code></pre> <p>You could also directly use the visuals (if available) as the input and use convolutional layers like in the <a href="https://arxiv.org/pdf/1312.5602" rel="nofollow noreferrer">DQN paper</a>. 
But read the note below regarding the convergence and additional tricks to stabilize such non-linear approximator based method. </p></li> </ol> <hr> <p>Graphically the function approximator looks like this:</p> <p><a href="https://i.stack.imgur.com/pSa4o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pSa4o.png" alt="linearFA"></a></p> <p>Note that <a href="https://i.stack.imgur.com/mMdzE.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mMdzE.gif" alt="varphi_eqphi"></a> is an <a href="https://en.wikipedia.org/wiki/Elementary_function" rel="nofollow noreferrer">elementary function</a> and <a href="https://i.stack.imgur.com/O8yd5.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O8yd5.gif" alt="xi"></a> is used to represent elements of the state-action vector. You can use any elementary function in place of <a href="https://i.stack.imgur.com/E44Fl.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E44Fl.gif" alt="enter image description here"></a>. Some common ones are linear regressors, <a href="https://en.wikipedia.org/wiki/Radial_basis_function#Approximation" rel="nofollow noreferrer">Radial Basis Functions</a> etc.</p> <p>A <em>good differentiable function</em> depends on the context. But in reinforcement learning settings, convergence properties and the error bounds are important. The <em>Episodic semi-gradient Sarsa</em> algorithm discussed in the book has similar convergence properties as of TD(0) for a constant policy.</p> <p>Since you specifically asked for on-policy prediction, using a <em>linear</em> function approximator is advisable to use because it is guaranteed to converge. The following are some of the other properties that make the Linear function approximators suitable:</p> <ul> <li>The error surface becomes a quadratic surface with a single minimum with mean square error function. This makes it a sure-shot solution as gradient descent is guaranteed to find the minima which is the global optimum.</li> <li><p>The error bound (as proved by <a href="http://castlelab.princeton.edu/ORF544/Readings/Tsitsiklis%20van%20Roy%20-%20Analysis%20of%20Temporal-Difference%20Learning%20with%20Function%20Approximations-IEEE%20TAC.pdf" rel="nofollow noreferrer">Tsitsiklis &amp; Roy,1997</a> for the general case of TD(lambda) ) is:</p> <p><a href="https://i.stack.imgur.com/ZH7HR.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZH7HR.gif" alt="enter image description here"></a></p> <p>Which means that the asymptotic error will be no more than <a href="https://i.stack.imgur.com/Y78BK.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y78BK.gif" alt="enter image description here"></a> times the smallest possible error. Where <a href="https://i.stack.imgur.com/2Upqn.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2Upqn.gif" alt="gamma"></a> is the discount factor. The gradient is simple to calculate! </p></li> </ul> <p>Using a non-linear approximator (like a (deep) neural network) however does not inherently guarantee convergence. 
Gradient TD method uses the true gradient of the projected bellman error for the updates instead of the <em>semi-gradient</em> used in the <em>Episodic semi-gradient Sarsa algorithm</em> which is known to provide <a href="https://papers.nips.cc/paper/3809-convergent-temporal-difference-learning-with-arbitrary-smooth-function-approximation.pdf" rel="nofollow noreferrer">convergence even with non-linear function approximators</a> (even for off-policy prediction) if certain conditions are met.</p>
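<p>As a concrete illustration of the linear case, here is a minimal sketch of one episodic semi-gradient Sarsa update with a linear approximator; the feature function producing <code>phi_sa</code>, the step size <code>alpha</code> and the discount <code>gamma</code> are assumed to be supplied by you:</p> <pre><code>import numpy as np

def semi_gradient_sarsa_update(w, phi_sa, reward, phi_next_sa, alpha, gamma, terminal):
    # One semi-gradient Sarsa update for q(s, a; w) = w . phi(s, a).
    q = np.dot(w, phi_sa)
    q_next = 0.0 if terminal else np.dot(w, phi_next_sa)
    td_error = reward + gamma * q_next - q
    # For a linear approximator, the gradient of q with respect to w
    # is simply phi(s, a), so the semi-gradient step is:
    w += alpha * td_error * phi_sa
    return w
</code></pre> <p>Swapping in a neural network means replacing <code>phi_sa</code> in the update step with the gradient of the network output with respect to its weights, which is where the convergence caveats above come in.</p>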
2017-07-25 18:58:03.413000+00:00
2017-07-25 19:07:07.140000+00:00
2017-07-25 19:07:07.140000+00:00
null
45,298,898
<p>I am currently reading Sutton's introduction to reinforcement learning. After arriving at chapter 10 (On-policy Prediction with Approximation), I am now wondering how to choose the form of the function <code>q</code> whose optimal weights <code>w</code> are to be approximated.</p> <p>I am referring to the first line of the pseudocode below from Sutton: How do I choose a good differentiable function <a href="https://i.stack.imgur.com/xrbyk.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xrbyk.gif" alt="enter image description here"></a>? Are there any standard strategies for choosing it?</p> <p><a href="https://i.stack.imgur.com/NM3wo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NM3wo.png" alt="enter image description here"></a></p>
2017-07-25 09:15:25.170000+00:00
2017-07-25 19:07:07.140000+00:00
null
reinforcement-learning|approximation
['https://i.stack.imgur.com/8jZuk.gif', 'https://i.stack.imgur.com/ujHyW.gif', 'https://i.stack.imgur.com/gcc4q.gif', 'https://i.stack.imgur.com/Enalq.gif', 'https://i.stack.imgur.com/lNjix.gif', 'https://i.stack.imgur.com/zPEq4.gif', 'https://i.stack.imgur.com/akfxk.gif', 'https://i.stack.imgur.com/Enalq.gif', 'https://i.stack.imgur.com/ZJ1o0.gif', 'https://i.stack.imgur.com/zlo1v.gif', 'https://i.stack.imgur.com/c5xAJ.png', 'https://arxiv.org/pdf/1312.5602', 'https://i.stack.imgur.com/pSa4o.png', 'https://i.stack.imgur.com/mMdzE.gif', 'https://en.wikipedia.org/wiki/Elementary_function', 'https://i.stack.imgur.com/O8yd5.gif', 'https://i.stack.imgur.com/E44Fl.gif', 'https://en.wikipedia.org/wiki/Radial_basis_function#Approximation', 'http://castlelab.princeton.edu/ORF544/Readings/Tsitsiklis%20van%20Roy%20-%20Analysis%20of%20Temporal-Difference%20Learning%20with%20Function%20Approximations-IEEE%20TAC.pdf', 'https://i.stack.imgur.com/ZH7HR.gif', 'https://i.stack.imgur.com/Y78BK.gif', 'https://i.stack.imgur.com/2Upqn.gif', 'https://papers.nips.cc/paper/3809-convergent-temporal-difference-learning-with-arbitrary-smooth-function-approximation.pdf']
23
7,832,756
<p>It sounds like you could use <a href="http://en.wikipedia.org/wiki/Multiple_buffering" rel="nofollow">double buffering</a>. Basically, you'd maintain pointers to two arrays of particle objects &mdash; call them, say, <code>accepted</code> and <code>trial</code>. At the beginning of a trial, you copy the properties of the particles on the <code>accepted</code> array to those on the <code>trial</code> array, and make any modifications you want. If the trial is successful, you then just swap the pointers, so that what used to be the <code>trial</code> array becomes <code>accepted</code> and vice versa.</p> <p>Also, you say that only <em>some</em> of your trials involve costly updates. If so, you might be interested in techniques like <a href="http://arxiv.org/abs/math.ST/0502099" rel="nofollow">fast variable dragging</a> or <a href="http://arxiv.org/abs/1101.0387" rel="nofollow">ensemble updating</a>.</p>
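<p>A minimal sketch of the double-buffering idea, written in Python for brevity (the same structure carries over directly to a C++ class holding two <code>std::vector&lt;Particle&gt;</code> buffers, or two pointers to them, and swapping on acceptance); it assumes each particle's properties live in a dict-like object:</p> <pre><code>class ParticleBuffers:
    def __init__(self, particles):
        self.accepted = particles                   # current accepted state
        self.trial = [p.copy() for p in particles]  # scratch buffer for proposals

    def propose(self, update_fn):
        # Copy the accepted properties into the trial buffer, then modify the trial.
        for trial_p, accepted_p in zip(self.trial, self.accepted):
            trial_p.update(accepted_p)
        update_fn(self.trial)

    def accept(self):
        # O(1): just swap which buffer counts as "accepted".
        self.accepted, self.trial = self.trial, self.accepted

    def reject(self):
        # Nothing to do: the accepted buffer was never touched.
        pass
</code></pre>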
2011-10-20 07:56:40.633000+00:00
2011-10-20 07:56:40.633000+00:00
null
null
7,832,610
<p>I'm doing a <a href="http://en.wikipedia.org/wiki/Monte_Carlo_method" rel="nofollow">Monte-Carlo simulation</a> of some particles. There are several bottlenecks in my code, but the main one is that in some of the tries I make, I need to update <strong>all</strong> the particles' properties. The code is written in C++ and currently I have several loops to achieve that:<br> 1. a loop to store the old properties of all the particles and update the new properties.<br> 2. a 2D loop of interactions.<br> 3. another 2D loop of interactions (I can't combine it with the first one).<br> 4. a loop to accept the step / a loop to reject the step. </p> <p>I am hoping to remove step 4 using a swap, but I can't find a way to do so. Each particle is an instance of a class which has several members named <code>properties</code> and <code>nextProperties</code> or <code>oldProperties</code>. How would you approach that?</p>
2011-10-20 07:41:40.263000+00:00
2011-10-20 16:47:20.090000+00:00
null
c++|algorithm|scientific-computing
['http://en.wikipedia.org/wiki/Multiple_buffering', 'http://arxiv.org/abs/math.ST/0502099', 'http://arxiv.org/abs/1101.0387']
3
46,176,378
<p>Maybe look at what Karpathy has done with Arxiv Sanity.</p> <p><a href="https://github.com/karpathy/arxiv-sanity-preserver" rel="nofollow noreferrer">https://github.com/karpathy/arxiv-sanity-preserver</a></p>
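<p>If I remember right, the heart of that project's 'similar papers' feature is just TF-IDF features plus a similarity ranking over the corpus. A minimal sketch of that idea with scikit-learn (the article texts here are placeholders):</p> <pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

articles = [
    'neural networks for image classification',
    'convolutional networks applied to images',
    'bayesian inference for time series',
]  # replace with your ~1000 article texts

vectorizer = TfidfVectorizer(stop_words='english')
tfidf = vectorizer.fit_transform(articles)

# TF-IDF rows are L2-normalised, so the linear kernel equals cosine similarity.
similarities = linear_kernel(tfidf, tfidf)

def recommend(article_index, k=5):
    # Indices of the k most similar articles, excluding the article itself.
    order = similarities[article_index].argsort()[::-1]
    return [i for i in order if i != article_index][:k]

print(recommend(0, k=2))
</code></pre>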
2017-09-12 12:26:44.663000+00:00
2017-09-12 12:26:44.663000+00:00
null
null
46,174,448
<p>I am looking for the best algorithm to use for article suggestion in my project. We have a set of 1000 articles. I would like to recommend similar articles to users based on the article they are reading. Which algorithm best suits this? I tried content-based recommendation, which involves training a model. In my case it can be simple text-based similarity to the article the user is reading, not the history of articles read by users.</p>
2017-09-12 10:48:33.160000+00:00
2017-09-12 12:26:44.663000+00:00
null
python-2.7|machine-learning|recommendation-engine
['https://github.com/karpathy/arxiv-sanity-preserver']
1
44,062,894
<p>If you read the source code of a decoder such as <a href="https://github.com/tensorflow/tensorflow/blob/r1.1/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py#L648" rel="nofollow noreferrer">this one</a>, you will see that it represents the number of attention heads, i.e. how many separate attention read vectors are computed over the encoder states.</p> <p>Sometimes several attentions are used (hierarchical attention), for instance in <a href="http://arxiv.org/abs/1602.06023" rel="nofollow noreferrer">this paper</a> (as depicted below).<br> TL;DR: the first one is over words and the second one is over sentences.<br> Please check this graph: <a href="https://i.stack.imgur.com/aXPni.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aXPni.png" alt="enter image description here"></a></p>
2017-05-19 06:28:13.513000+00:00
2017-06-09 06:38:12.650000+00:00
2017-06-09 06:38:12.650000+00:00
null
38,113,303
<p>I'm new to TensorFlow and trying to implement the "seq2seq" model according to the tutorial. I'm not sure about one argument, "num_heads" (default=1), of the function "embedding_attention_seq2seq". What does it represent? I didn't find it in the related papers.</p>
2016-06-30 03:04:11.107000+00:00
2017-06-09 06:38:12.650000+00:00
null
nlp|tensorflow
['https://github.com/tensorflow/tensorflow/blob/r1.1/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py#L648', 'http://arxiv.org/abs/1602.06023', 'https://i.stack.imgur.com/aXPni.png']
3
51,576,488
<p>This method is simply <em>impact coding</em> of the categorical variables.</p> <p>If your categorical column has the categories {C1, C2, C3, ...}, then impact coding is done as follows:</p> <pre><code> Impact(category = Ci) = E[y|Ci] - E[y] </code></pre> <p>During training, for each category Ci, it calculates the difference between the mean of the dependent variable given that category (a posteriori) and the overall expected value of the dependent variable (a priori). For more on impact coding, you can refer to this paper: <a href="https://arxiv.org/abs/1611.09477v3" rel="nofollow noreferrer">https://arxiv.org/abs/1611.09477v3</a> (page 10).</p> <p>In the testing phase, to convert the categorical variables of the testing data to their impact codes, it uses the same expected values of 'y' that it computed on the training data. Since these are expected values, it doesn't matter if the training data has more samples than the testing data (as long as the distribution of 'y' is similar in both datasets).</p>
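<p>A minimal pandas sketch of that formula; the column names <code>cat</code> and <code>y</code> are placeholders for your categorical feature and dependent variable:</p> <pre><code>import pandas as pd

train = pd.DataFrame({'cat': ['a', 'a', 'b', 'b', 'c'],
                      'y':   [1.0, 0.0, 1.0, 1.0, 0.0]})
test = pd.DataFrame({'cat': ['a', 'b', 'c', 'c']})

prior = train['y'].mean()                     # E[y]
posterior = train.groupby('cat')['y'].mean()  # E[y | Ci]
impact = posterior - prior                    # Impact(Ci) = E[y|Ci] - E[y]

# Apply the *training* impact codes to both sets; categories unseen in
# training fall back to an impact of 0 (i.e. the prior).
train['cat_impact'] = train['cat'].map(impact)
test['cat_impact'] = test['cat'].map(impact).fillna(0.0)
</code></pre>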
2018-07-29 01:21:50.310000+00:00
2018-07-29 01:21:50.310000+00:00
null
null
49,849,304
<p>According to this website (<a href="http://www.statsmodels.org/dev/contrasts.html" rel="nofollow noreferrer">http://www.statsmodels.org/dev/contrasts.html</a>), the definition of backward difference encoding is '<strong>In backward difference coding, the mean of the dependent variable for a level is compared with the mean of the dependent variable for the prior level. This type of coding may be useful for a nominal or an ordinal variable.</strong>'</p> <p>What I don't understand is: if this encoding method relies on the dependent variable (the same thing as the output variable, if I understand correctly), how can we perform backward difference encoding on the testing set when the dependent variable is not given to the model ahead of time? In the training set, values for the dependent variable are given, but in the testing set they are not. Can anybody advise?</p>
2018-04-16 03:41:26.123000+00:00
2018-07-29 01:21:50.310000+00:00
null
python|machine-learning|hash|encoding|scikit-learn
['https://arxiv.org/abs/1611.09477v3']
1
64,539,109
<p>That is one motivation behind the paper &quot;<a href="https://arxiv.org/abs/2010.10392" rel="noreferrer">CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters</a>&quot;, where BERT's wordpiece system is discarded and replaced with a CharacterCNN (just like in ELMo). This way, a word-level tokenization can be used without any OOV issues (since the model attends to each token's characters) and the model produces a single embedding for any arbitrary input token.</p> <p>Performance-wise, the paper shows that CharacterBERT is generally at least as good as BERT while at the same time being more robust to noisy texts.</p>
2020-10-26 14:29:18.200000+00:00
2020-10-26 14:29:18.200000+00:00
null
null
60,942,550
<p>Does it make sense to change the tokenization paradigm in the BERT model, to something else? Maybe just a simple word tokenization or character level tokenization?</p>
2020-03-31 02:30:42.230000+00:00
2020-10-26 14:29:18.200000+00:00
null
nlp|pytorch|tokenize|transformer-model
['https://arxiv.org/abs/2010.10392']
1
20,245,479
<p>See the following paper, which contains links to the source code for various shadow detection/removal algorithms:</p> <p>A. Sanin, C. Sanderson, B.C. Lovell. "Shadow Detection: A Survey and Comparative Evaluation of Recent Methods", Pattern Recognition, Vol. 45, No. 4, pp. 1684-1695, 2012.</p> <p>Official version at: <a href="http://dx.doi.org/10.1016/j.patcog.2011.10.001" rel="nofollow">http://dx.doi.org/10.1016/j.patcog.2011.10.001</a></p> <p>There is also a pre-print of the above paper on the Arxiv server: <a href="http://arxiv.org/abs/1304.1233" rel="nofollow">http://arxiv.org/abs/1304.1233</a></p>
2013-11-27 14:46:30.893000+00:00
2013-11-27 14:46:30.893000+00:00
null
null
8,821,200
<p>I have implemented foreground subtraction to detect moving cars and the results look pretty good. The only issue is in removing the shadows , which form a part of the foreground.</p> <p>I searched online to find a way to fix this and found links to many papers :</p> <p>1) Moving Shadow Detection with Low- and Mid-Level Reasoning</p> <p>2)J.-F. Lalonde, A. A. Efros, and S. G. Narasimhan. Detecting Ground Shadows in Outdoor Consumer Photographs. in European Conference on Computer Vision, 2010.</p> <p><a href="http://www.yourfilelink.com/get.php?fid=744441" rel="nofollow">Please watch the video</a> for a better idea of what I am looking for. Though the papers make for great learning, they are beyond my level of comprehension at this point. Could someone point me to some open source code which could help me understand and implement shadow removal?</p>
2012-01-11 14:46:15.763000+00:00
2022-09-01 11:02:35.433000+00:00
2022-09-01 11:02:35.433000+00:00
c++|.net|opencv|image-processing
['http://dx.doi.org/10.1016/j.patcog.2011.10.001', 'http://arxiv.org/abs/1304.1233']
2
71,954,640
<p>Running <code>eval_process</code> for multiple rounds on the same <code>test_data</code> will not produce new information and is expected to yield the same result every time. These results will be <em>stable</em> in the sense that they don't change, but they are probably not interesting.</p> <p>Running <code>eval_process</code> for multiple rounds, using different <code>test_data</code> each round, can be thought of as sampling a cohort of clients from the larger population to get an <em>estimate</em> of model quality. Combining many estimates from multiple samples with statistical techniques means more rounds lead to more <em>stable</em> and improved estimates of model quality.</p> <p>Presumably this is the technique used in <a href="https://proceedings.mlsys.org/paper/2019/hash/bd686fd640be98efaae0091fa301e613-Abstract.html" rel="nofollow noreferrer">1</a> and <a href="https://arxiv.org/pdf/2102.08503.pdf" rel="nofollow noreferrer">2</a>, which describe later aggregation services.</p>
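<p>A minimal sketch of the second approach, reusing the names from the question (<code>test</code>, <code>reshape_data</code>, <code>model_fn</code>, <code>state</code>) and assuming the question's imports (e.g. <code>tensorflow_federated as tff</code>); the cohort size and the number of evaluation rounds are arbitrary choices here:</p> <pre><code>import numpy as np

eval_process = tff.learning.build_federated_evaluation(model_fn)

accuracies = []
for _ in range(10):  # number of evaluation rounds
    cohort = np.random.choice(test.client_ids, size=5, replace=False)
    test_data = [test.create_tf_dataset_for_client(c).map(reshape_data).batch(10)
                 for c in cohort]
    metrics = eval_process(state.model, test_data)
    accuracies.append(metrics['eval']['sparse_categorical_accuracy'])

print('mean accuracy over cohorts:', np.mean(accuracies))
print('spread of the estimate:', np.std(accuracies))
</code></pre>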
2022-04-21 12:39:24.440000+00:00
2022-04-21 12:39:24.440000+00:00
null
null
71,898,178
<p>I want to evaluate my federated learning model using <code>tff.learning.build_federated_evaluation</code>. Initially, I got reasonable results, but can I run the evaluation process for multiple rounds (as in the training phase done <a href="https://stackoverflow.com/questions/71822452/epochs-vs-rounds-in-federated-learning">here</a>) to get more stable results?</p> <p>The evaluation code is provided below.</p> <pre><code>train, test = source.train_test_client_split(source, 2, seed=0)
test_client_ids = test.client_ids
test_data = [test.create_tf_dataset_from_all_clients().map(reshape_data)
             .batch(batch_size=10) for c in test_client_ids]
eval_process = tff.learning.build_federated_evaluation(model_fn)
eval_process(state.model, test_data)
</code></pre> <p>The evaluation output:</p> <pre><code>OrderedDict([('eval', OrderedDict([('sparse_categorical_accuracy', 0.53447974), ('loss', 1.0230521), ('num_examples', 11514), ('num_batches', 1152)]))]) </code></pre>
2022-04-16 23:24:04.750000+00:00
2022-04-21 12:39:24.440000+00:00
null
python|tensorflow|tensorflow-federated
['https://proceedings.mlsys.org/paper/2019/hash/bd686fd640be98efaae0091fa301e613-Abstract.html', 'https://arxiv.org/pdf/2102.08503.pdf']
2
44,421,137
<p>The state for LSTMs really consists of two parts</p> <ol> <li>State for the cell(s)</li> <li>Previous outputs</li> </ol> <p>This is alluded to in <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/BasicLSTMCell" rel="nofollow noreferrer">the docs</a> for BasicLSTMCell. <a href="https://arxiv.org/pdf/1503.04069.pdf" rel="nofollow noreferrer">This paper</a> has a good explanation of how LSTMs work which will help you understand why you need to keep two sets of states in an LSTM implementation. The reason an error is being thrown is because you need to supply a tuple of tensors for the initial state.</p> <p>That said you have two options:</p> <ol> <li>Supply an initial state that consists of two tensors.</li> <li>Let the RNN cell generate its own initial state.</li> </ol> <p>You would usually only do 1. if you wanted to override default behavior. In this case you are using the default (zero) initial state so you can do 2.</p> <pre><code>lstm_outputs, final_state = tf.nn.dynamic_rnn(cell=lstm, inputs=lstm_inputs, dtype=tf.float32) </code></pre>
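<p>If you did want option 1 instead (an explicit initial state), one way, sketched with the shapes and names from the question, is to build the two-part state yourself via <code>LSTMStateTuple</code>:</p> <pre><code>c_init = tf.zeros([batch_partition_length, state_size])
h_init = tf.zeros([batch_partition_length, state_size])
init_state = tf.contrib.rnn.LSTMStateTuple(c_init, h_init)

lstm_outputs, final_state = tf.nn.dynamic_rnn(cell=lstm,
                                              inputs=lstm_inputs,
                                              initial_state=init_state)
</code></pre>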
2017-06-07 19:20:13.053000+00:00
2017-06-07 19:26:15.580000+00:00
2017-06-07 19:26:15.580000+00:00
null
44,420,520
<p>I have the following code: </p> <pre><code>def dense_layers(pool3): with tf.variable_scope('local1') as scope: # Move everything into depth so we can perform a single matrix multiply. shape_d = pool3.get_shape() shape = shape_d[1] * shape_d[2] * shape_d[3] # tf_shape = tf.stack(shape) tf_shape = 1024 print("shape:", shape, shape_d[1], shape_d[2], shape_d[3]) # So note that tf_shape = 1024, this means that we have 1024 features are fed into the network. And # the batch size = 1024. Therefore, the aim is to divide the batch_size into num_steps so that reshape = tf.reshape(pool3, [-1, tf_shape]) # Now we need to reshape/divide the batch_size into num_steps so that we would be feeding a sequence # And note that most importantly is to have batch_partition_length followed by step_size in the parameter list. lstm_inputs = tf.reshape(reshape, [batch_partition_length, step_size, tf_shape]) # print('RNN inputs shape: ', lstm_inputs.get_shape()) # -&gt; (128, 8, 1024). # Note that the state_size is the number of neurons. lstm = tf.contrib.rnn.BasicLSTMCell(state_size) lstm_outputs, final_state = tf.nn.dynamic_rnn(cell=lstm, inputs=lstm_inputs, initial_state=init_state) tf.assign(init_state, final_state) </code></pre> <p>So, I am taking the output of the pool layer and try to feed it into the LSTM in the network. </p> <p>Initially I have declared the following: </p> <pre><code>state_size = 16 step_size = 8 batch_partition_length = int(batch_size / step_size) init_state = tf.Variable(tf.zeros([batch_partition_length, state_size])) # -&gt; [128, 16]. </code></pre> <p>Therefore, I am getting an error on:</p> <pre><code>lstm_outputs, final_state = tf.nn.dynamic_rnn(cell=lstm, inputs=lstm_inputs, initial_state=init_state) </code></pre> <p>As follows: </p> <pre><code>Traceback (most recent call last): File "C:/Users/user/PycharmProjects/AffectiveComputing/Brady_with_LSTM.py", line 197, in &lt;module&gt; predictions = dense_layers(conv_nets_output) File "C:/Users/user/PycharmProjects/AffectiveComputing/Brady_with_LSTM.py", line 162, in dense_layers lstm_outputs, final_state = tf.nn.dynamic_rnn(cell=lstm, inputs=lstm_inputs, initial_state=init_state) File "C:\Users\user\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn.py", line 553, in dynamic_rnn dtype=dtype) File "C:\Users\user\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn.py", line 720, in _dynamic_rnn_loop swap_memory=swap_memory) File "C:\Users\user\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2623, in while_loop result = context.BuildLoop(cond, body, loop_vars, shape_invariants) File "C:\Users\user\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2456, in BuildLoop pred, body, original_loop_vars, loop_vars, shape_invariants) File "C:\Users\user\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2406, in _BuildLoop body_result = body(*packed_vars_for_body) File "C:\Users\user\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn.py", line 705, in _time_step (output, new_state) = call_cell() File "C:\Users\user\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn.py", line 691, in &lt;lambda&gt; call_cell = lambda: cell(input_t, state) File "C:\Users\user\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\contrib\rnn\python\ops\core_rnn_cell_impl.py", line 238, in __call__ c, h 
= state File "C:\Users\user\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 504, in __iter__ raise TypeError("'Tensor' object is not iterable.") TypeError: 'Tensor' object is not iterable. </code></pre> <p>Any help is much appreciated!! </p>
2017-06-07 18:44:33.463000+00:00
2017-06-07 19:26:15.580000+00:00
null
tensorflow|lstm
['https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/BasicLSTMCell', 'https://arxiv.org/pdf/1503.04069.pdf']
2
52,741,708
<p><strike>The actual question is rather ambiguous. I am guessing correctly, that you want someone to implement the missing two lines of code for the network?</strike></p> <pre><code>model = Sequential() model.add(Conv2D(40, (15, 15), activation='relu', padding='same', input_shape=(64, 64, 1))) model.add(MaxPooling2D((2, 2), padding='same')) model.add(Conv2D(40, (15, 15), activation='relu', padding='same')) # layer 3 model.add(Conv2D(1, (15, 15), activation='linear', padding='same')) # layer 4 print(model.summary()) </code></pre> <p>To get 40 feature maps after layer 3, we just convolve with 40 different kernels. After layer 4, there should be only one feature map / channel, so 1 kernel is enough here.</p> <p>By the way, the figure seems to be from <a href="https://www.nature.com/articles/nmeth.4405" rel="nofollow noreferrer">Convolutional neural networks for automated annotation of cellular cryo-electron tomograms</a> (<a href="https://arxiv.org/pdf/1701.05567.pdf" rel="nofollow noreferrer">PDF</a>) by Chen et al., a Nature article from 2017.</p> <p><strong>Update:</strong></p> <blockquote> <p>Comment: [...] why the authors say 1600 kernels in total and there is a summation?</p> </blockquote> <p>Actually, the authors seem to follow a rather strange notation here. They have an (imho) incorrect way to count kernels. What they rather mean is weights (if given 1x1 kernels...).</p> <p>Maybe they did not understand that the shape of the kernels are in fact 3-D, due to the last dimension equal to the number of feature maps.</p> <p>When we break it down there are</p> <ul> <li>40 kernels of size 15x15x1 for the 1st layer (which makes 40 * 15 ** 2 trainable weights)</li> <li>No kernels in the 2nd layer</li> <li>40 kernels of size 15x15x40 in the 3rd layer (which makes 1600 * 15 ** 2 trainable weights)</li> <li>1 kernel of size 15x15x40 for the 4th layer (which makes 40 * 15 ** 2 trainable weights)</li> </ul>
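<p>As a quick sanity check of the weight counts above, here is a minimal arithmetic sketch in Python (biases left out) that reproduces the per-layer numbers:</p> <pre><code># kernel weights per layer, following the breakdown above (biases excluded)
layer1 = 40 * 15 * 15 * 1     # 40 kernels of size 15x15x1
layer3 = 40 * 15 * 15 * 40    # 40 kernels of size 15x15x40 (the "1600 kernels" of 15x15)
layer4 = 1 * 15 * 15 * 40     # 1 kernel of size 15x15x40
print(layer1, layer3, layer4)      # 9000 360000 9000
print(layer3 == 1600 * 15 ** 2)    # True
</code></pre>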
2018-10-10 13:45:45.520000+00:00
2018-10-11 11:35:18.220000+00:00
2018-10-11 11:35:18.220000+00:00
null
52,741,291
<p>I am trying to implement the artificial convolutional neural network in order to perform a two-class pixel-wise classification as seen in the figure attached (from Chen et al. Nature 2017). <img src="https://i.stack.imgur.com/fvXjE.jpg" alt=""></p> <p>Can you give me a hint on what the third and fourth layers should look like?</p> <p>This is how far I've got already:</p> <pre><code>from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D model = Sequential() model.add(Conv2D(40, (15, 15), activation='relu', padding='same', input_shape = (64, 64, 1))) # first layer model.add(MaxPooling2D((2, 2), padding='same')) # second layer # model.add(...) # third layer &lt;-- how to implement this? # model.add(...) # fourth layer &lt;-- how to implement this? print(model.summary()) </code></pre> <p>How many kernels did they use for the remaining layers and how should I interpret the summation symbols in the image?</p> <p>Thanks in advance!</p>
2018-10-10 13:24:11.620000+00:00
2018-10-11 19:51:18.290000+00:00
2018-10-11 13:10:59.383000+00:00
keras|deep-learning|conv-neural-network|keras-layer
['https://www.nature.com/articles/nmeth.4405', 'https://arxiv.org/pdf/1701.05567.pdf']
2
58,500,541
<p>This is a tricky question to answer, but in theory a CNN is able to do it. It mainly depends on the training data itself. In the case of a child vs. an adult, you can gather a dataset that includes a lot of variance in sizes and ages, in order to make sure that you end up with a CNN model that is able to find patterns and generalize. In the end, the CNN will learn many other features that make the classification scale- or size-invariant (independent of size), such as shapes, colors, clothes and face features, etc. Such intra-class classification problems are not easily tackled with traditional supervised learning, and therefore some researchers apply an approach called "<a href="https://arxiv.org/abs/1412.6622" rel="nofollow noreferrer">Deep Metric Learning</a>". </p> <blockquote> <p>Metric learning is the task of learning a distance function over objects. A metric or distance function has to obey four axioms: non-negativity, identity of indiscernibles, symmetry and subadditivity (or the triangle inequality). In practice, metric learning algorithms ignore the condition of identity of indiscernibles and learn a pseudo-metric. <a href="https://en.wikipedia.org/wiki/Similarity_learning" rel="nofollow noreferrer">Wiki Definition</a></p> </blockquote> <p><a href="https://www.arxiv-vanity.com/papers/1708.01682/" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KdIYX.png" alt="Example for Metric Learning"></a></p>
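<p>If you want to experiment with the deep-metric-learning idea above, here is a minimal NumPy sketch of the triplet loss, one common metric-learning objective (the margin of 0.2 and the 128-dimensional embeddings are arbitrary placeholder choices):</p> <pre><code>import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull same-class embedding pairs together, push different-class pairs
    at least `margin` further apart (batch-averaged)."""
    d_ap = np.sum((anchor - positive) ** 2, axis=-1)   # squared distance anchor-positive
    d_an = np.sum((anchor - negative) ** 2, axis=-1)   # squared distance anchor-negative
    return np.mean(np.maximum(d_ap - d_an + margin, 0.0))

# toy usage with random 128-d embeddings for a batch of 8 images
rng = np.random.RandomState(0)
a, p, n = (rng.randn(8, 128) for _ in range(3))
print(triplet_loss(a, p, n))
</code></pre>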
2019-10-22 08:56:41.003000+00:00
2019-10-22 08:56:41.003000+00:00
null
null
58,492,959
<p>Could a CNN tell the difference between different size range of the same organism? Like a puppy vs a adult or a child vs an adult? Or more like a large fly vs a small fly, where they look identical but one is just larger than the other?</p>
2019-10-21 19:27:59.733000+00:00
2019-10-23 08:13:07.010000+00:00
null
image|size|conv-neural-network|convolution
['https://arxiv.org/abs/1412.6622', 'https://en.wikipedia.org/wiki/Similarity_learning', 'https://www.arxiv-vanity.com/papers/1708.01682/']
3
35,613,047
<p>I think you can use an iterative EM-type algorithm:</p> <blockquote> <p>Initialize missing values to their column means</p> <p>Repeat until convergence:</p> <ul> <li><p>Perform K-means clustering on the filled-in data</p></li> <li><p>Set the missing values to the centroid coordinates of the clusters to which they were assigned</p></li> </ul> </blockquote> <h2>Implementation</h2> <pre><code>import numpy as np from sklearn.cluster import KMeans def kmeans_missing(X, n_clusters, max_iter=10): """Perform K-Means clustering on data with missing values. Args: X: An [n_samples, n_features] array of data to cluster. n_clusters: Number of clusters to form. max_iter: Maximum number of EM iterations to perform. Returns: labels: An [n_samples] vector of integer labels. centroids: An [n_clusters, n_features] array of cluster centroids. X_hat: Copy of X with the missing values filled in. """ # Initialize missing values to their column means missing = ~np.isfinite(X) mu = np.nanmean(X, 0, keepdims=1) X_hat = np.where(missing, mu, X) for i in xrange(max_iter): if i &gt; 0: # initialize KMeans with the previous set of centroids. this is much # faster and makes it easier to check convergence (since labels # won't be permuted on every iteration), but might be more prone to # getting stuck in local minima. cls = KMeans(n_clusters, init=prev_centroids) else: # do multiple random initializations in parallel cls = KMeans(n_clusters, n_jobs=-1) # perform clustering on the filled-in data labels = cls.fit_predict(X_hat) centroids = cls.cluster_centers_ # fill in the missing values based on their cluster centroids X_hat[missing] = centroids[labels][missing] # when the labels have stopped changing then we have converged if i &gt; 0 and np.all(labels == prev_labels): break prev_labels = labels prev_centroids = cls.cluster_centers_ return labels, centroids, X_hat </code></pre> <h2>Example with fake data</h2> <pre><code>from sklearn.datasets import make_blobs from matplotlib import pyplot as plt from mpl_toolkits.mplot3d import Axes3D def make_fake_data(fraction_missing, n_clusters=5, n_samples=1500, n_features=3, seed=None): # complete data gen = np.random.RandomState(seed) X, true_labels = make_blobs(n_samples, n_features, n_clusters, random_state=gen) # with missing values missing = gen.rand(*X.shape) &lt; fraction_missing Xm = np.where(missing, np.nan, X) return X, true_labels, Xm X, true_labels, Xm = make_fake_data(fraction_missing=0.3, n_clusters=5, seed=0) labels, centroids, X_hat = kmeans_missing(Xm, n_clusters=5) # plot the inferred points, color-coded according to the true cluster labels fig, ax = plt.subplots(1, 2, subplot_kw={'projection':'3d', 'aspect':'equal'}) ax[0].scatter3D(X[:, 0], X[:, 1], X[:, 2], c=true_labels, cmap='gist_rainbow') ax[1].scatter3D(X_hat[:, 0], X_hat[:, 1], X_hat[:, 2], c=true_labels, cmap='gist_rainbow') ax[0].set_title('Original data') ax[1].set_title('Imputed (30% missing values)') fig.tight_layout() </code></pre> <p><a href="https://i.stack.imgur.com/HPSb7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/HPSb7.png" alt="enter image description here"></a></p> <h2>Benchmark</h2> <p>To assess the algorithm's performance, we can use the <a href="http://scikit-learn.org/stable/modules/clustering.html#mutual-information-based-scores" rel="noreferrer">adjusted mutual information</a> between the true and inferred cluster labels. 
A score of 1 is perfect performance and 0 represents chance:</p> <pre><code>from sklearn.metrics import adjusted_mutual_info_score fraction = np.arange(0.0, 1.0, 0.05) n_repeat = 10 scores = np.empty((2, fraction.shape[0], n_repeat)) for i, frac in enumerate(fraction): for j in range(n_repeat): X, true_labels, Xm = make_fake_data(fraction_missing=frac, n_clusters=5) labels, centroids, X_hat = kmeans_missing(Xm, n_clusters=5) any_missing = np.any(~np.isfinite(Xm), 1) scores[0, i, j] = adjusted_mutual_info_score(labels, true_labels) scores[1, i, j] = adjusted_mutual_info_score(labels[any_missing], true_labels[any_missing]) fig, ax = plt.subplots(1, 1) scores_all, scores_missing = scores ax.errorbar(fraction * 100, scores_all.mean(-1), yerr=scores_all.std(-1), label='All labels') ax.errorbar(fraction * 100, scores_missing.mean(-1), yerr=scores_missing.std(-1), label='Labels with missing values') ax.set_xlabel('% missing values') ax.set_ylabel('Adjusted mutual information') ax.legend(loc='best', frameon=False) ax.set_ylim(0, 1) ax.set_xlim(-5, 100) </code></pre> <p><a href="https://i.stack.imgur.com/xm2UU.png" rel="noreferrer"><img src="https://i.stack.imgur.com/xm2UU.png" alt="enter image description here"></a></p> <h3>Update:</h3> <p>In fact, after a quick Google search it seems that what I've come up with above is pretty much the same as the <em>k</em>-POD algorithm for K-means clustering of missing data <a href="http://arxiv.org/pdf/1411.7013.pdf" rel="noreferrer">(Chi, Chi &amp; Baraniuk, 2016)</a>.</p>
2016-02-24 21:03:33.460000+00:00
2016-12-07 23:10:50.383000+00:00
2016-12-07 23:10:50.383000+00:00
null
35,611,465
<p>I want to cluster data with missing columns. Doing it manually I would calculate the distance in case of a missing column simply without this column.</p> <p>With scikit-learn, missing data is not possible. There is also no chance to specify a user distance function.</p> <p>Is there any chance to cluster with missing data?</p> <p>Example data:</p> <pre><code>n_samples = 1500 noise = 0.05 X, _ = make_swiss_roll(n_samples, noise) rnd = np.random.rand(X.shape[0],X.shape[1]) X[rnd&lt;0.1] = np.nan </code></pre>
2016-02-24 19:39:02.570000+00:00
2020-03-11 03:46:41.427000+00:00
null
python|scikit-learn|cluster-analysis|missing-data
['https://i.stack.imgur.com/HPSb7.png', 'http://scikit-learn.org/stable/modules/clustering.html#mutual-information-based-scores', 'https://i.stack.imgur.com/xm2UU.png', 'http://arxiv.org/pdf/1411.7013.pdf']
4
50,305,922
<p>Not aware of any widely-used standard for this. Here’s a non-widely-used one:</p> <p>Proquints</p> <p><a href="https://arxiv.org/html/0901.4016" rel="noreferrer">https://arxiv.org/html/0901.4016</a></p> <p><a href="https://github.com/dsw/proquint" rel="noreferrer">https://github.com/dsw/proquint</a></p> <p>A UUID4 (128 bit) would be converted into 8 proquints. If that’s too much, you can take the last 64 bits of the UUID4 (= just take 64 random bits). This doesn’t make it magically lose uniqueness; only increases the <em>likelihood</em> of collisions, which was non-zero to begin with, and which you can estimate mathematically to decide if it’s still OK for your purposes.</p>
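<p>Here is a minimal Python sketch of the encoding, following the alphabets given in the proquint proposal linked above (each 16-bit word becomes one consonant-vowel-consonant-vowel-consonant group); treat it as an illustration and check it against the reference implementation before relying on it:</p> <pre><code>import uuid

CONSONANTS = "bdfghjklmnprstvz"   # 16 consonants, 4 bits each
VOWELS = "aiou"                   # 4 vowels, 2 bits each

def proquint16(word):
    """Encode one 16-bit integer as a 5-letter proquint."""
    return (CONSONANTS[(word &gt;&gt; 12) &amp; 0xF] + VOWELS[(word &gt;&gt; 10) &amp; 0x3] +
            CONSONANTS[(word &gt;&gt; 6) &amp; 0xF] + VOWELS[(word &gt;&gt; 4) &amp; 0x3] +
            CONSONANTS[word &amp; 0xF])

def uuid_to_proquints(u, n_words=4):
    """Encode the last n_words * 16 bits of a UUID (64 random bits by default)."""
    bits = u.int &amp; ((1 &lt;&lt; (16 * n_words)) - 1)
    words = [(bits &gt;&gt; (16 * i)) &amp; 0xFFFF for i in reversed(range(n_words))]
    return "-".join(proquint16(w) for w in words)

print(uuid_to_proquints(uuid.uuid4()))   # four dash-separated, pronounceable 5-letter groups
</code></pre>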
2018-05-12 11:52:10.660000+00:00
2018-05-12 11:59:23.693000+00:00
2018-05-12 11:59:23.693000+00:00
null
49,506,313
<p>I am working on a system that makes heavy use of pseudonyms to make privacy-critical data available to researchers. These pseudonyms should have the following properties:</p> <ol> <li>They should not contain any information (e.g. time of creation, relation to other pseudonyms, encoded data, …).</li> <li>It should be easy to create unique pseudonyms.</li> <li>They should be human readable. That means they should be easy for humans to compare, copy, and understand when read out aloud.</li> </ol> <p>My first idea was to use <a href="https://www.rfc-editor.org/rfc/rfc4122" rel="noreferrer">UUID4</a>. They are quite good on (1) and (2), but not so much on (3).</p> <p>An variant is to encode UUIDs with a wider alphabet, resulting in shorter strings (see for example <a href="https://github.com/skorokithakis/shortuuid" rel="noreferrer">shortuuid</a>). But I am not sure whether this actually improves readability.</p> <p>Another approach I am currently looking into is a paper from 2005 titled <a href="https://www.sciencedirect.com/science/article/pii/S0169260705000672" rel="noreferrer">&quot;An optimal code for patient identifiers&quot;</a> which aims to tackle exactly my problem. The algorithm described there creates 8-character pseudonyms with 30 bits of entropy. I would prefer to use a more widely reviewed standard though.</p> <p>Then there is also the git approach: only display the first few characters of the actual pseudonym. But this would mean that a pseudonym could lose its uniqueness after some time.</p> <p>So my question is: Is there any widely-used standard for human-readable unique ids?</p>
2018-03-27 07:03:18.887000+00:00
2021-03-14 23:11:33.197000+00:00
2021-10-07 07:59:29.080000+00:00
standards|uuid|human-readable
['https://arxiv.org/html/0901.4016', 'https://github.com/dsw/proquint']
2
24,284,222
<p>A node containing elements and text node is a node with <em><a href="http://www.w3.org/TR/REC-xml/#sec-mixed-content" rel="nofollow">mixed content</a></em>. You can declare that using the <code>mixed</code> attribute in <code>complexType</code>:</p> <pre><code>&lt;complexType name="authorsType" mixed="true"&gt; &lt;sequence&gt; &lt;element name="author" minOccurs="1" maxOccurs="unbounded" type="arXiv:authorType"/&gt; &lt;/sequence&gt; &lt;/complexType&gt; </code></pre> <p>Now <code>authorsType</code> accepts elements and text nodes containing character content.</p> <p>See also: <a href="http://www.w3.org/TR/2001/REC-xmlschema-0-20010502/#mixedContent" rel="nofollow">XSD Spec - Mixed content</a></p>
2014-06-18 11:21:06.450000+00:00
2014-06-18 11:21:06.450000+00:00
null
null
24,284,043
<p>I have the following XML snippet and corresponding XML Schema:</p> <pre><code>&lt;authors&gt; &lt;author&gt; &lt;keyname&gt;Foo&lt;/keyname&gt; &lt;forenames&gt;Bar&lt;/forenames&gt; &lt;/author&gt; &lt;/authors&gt; </code></pre> <p>Schema:</p> <pre><code>&lt;element name="authors" minOccurs="0" maxOccurs="1" type="arXiv:authorsType"/&gt; &lt;complexType name="authorsType"&gt; &lt;sequence&gt; &lt;element name="author" minOccurs="1" maxOccurs="unbounded" type="arXiv:authorType"/&gt; &lt;/sequence&gt; &lt;/complexType&gt; &lt;complexType name="authorType"&gt; &lt;sequence&gt; &lt;element name="keyname" minOccurs="1" maxOccurs="1" type="string"/&gt; &lt;element name="forenames" minOccurs="0" maxOccurs="1" type="string"/&gt; &lt;element name="suffix" minOccurs="0" maxOccurs="1" type="string"/&gt; &lt;element name="affiliation" minOccurs="0" maxOccurs="unbounded" type="string"/&gt; &lt;/sequence&gt; &lt;/complexType&gt; </code></pre> <p>But I am curious how would the schema look like to allow this:</p> <pre><code>&lt;authors&gt; Text. &lt;author&gt; &lt;keyname&gt;Foo&lt;/keyname&gt; &lt;forenames&gt;Bar&lt;/forenames&gt; &lt;/author&gt; &lt;/authors&gt; </code></pre>
2014-06-18 11:12:38.393000+00:00
2014-06-18 11:21:06.450000+00:00
null
xml|xsd
['http://www.w3.org/TR/REC-xml/#sec-mixed-content', 'http://www.w3.org/TR/2001/REC-xmlschema-0-20010502/#mixedContent']
2
50,901,544
<p><code> So my question is How to effectively use BLOB and for which purposes it is suitable? </code></p> <p>Quick and dirty answer:</p> <p><code> The simple answer is: BLOBs smaller than 256KB are more efficiently handled by a database, while a filesystem is more efficient for those greater than 1MB. Of course, this will vary between different databases and filesystems </code></p> <p>There is a Microsoft technical report here: <a href="https://arxiv.org/ftp/cs/papers/0701/0701168.pdf" rel="nofollow noreferrer">Compare blob and ntfs filesystem</a>. The report is quite old (2006), but I don't think much has changed since then.</p> <p>Imagine you want to read a file that is stored as a blob. You have to send a request to your database software, and the software then reads the blob data, which is itself stored on the filesystem. Instead of reading directly from the filesystem, you go through a two-step process. So as your files get bigger, blobs will slow down your database a lot, and we all know that speed is the main concern for a database.</p> <p>Hope that helps.</p>
2018-06-18 00:47:28.943000+00:00
2018-06-18 00:47:28.943000+00:00
null
null
50,884,525
<p>What I know, in Database Context, BLOB or Binary Large OBject is nothing but actually a stored binary code for a given data. Can Reserves spaces in GBs and can be used to store virtually any data type. But What's actually a use of it?</p> <p>My major is Computer Vision and I'm fairly novice at databases and web development. Currently, I'm working on a sentiment analysis project and want to collect a large dataset for this purpose i.e. huge number of images and also want to keep record of whether a image has been used for the analysis purpose or not. I thought storing images in database with separate column for access record is the best thing I can do to have an organized and systematic approach. But Everyone I talked with recommends not to store image as a blob in database but just have its URL or name there and should have images in a dedicated folder.</p> <p>Moreover, since BLOB is just binary encoding of a file how would we decode it into an image file? I found codes like following to convert a BLOB value into an image:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>echo '&lt;img src="data:image/png;base64,' . base64_encode($image-&gt;getimageblob()) . '" /&gt;';</code></pre> </div> </div> </p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>echo '&lt;img src="data:image/jpg;base64,' . base64_encode($image-&gt;getimageblob()) . '" /&gt;';</code></pre> </div> </div> </p> <p>But these codes are specific to the extension (And personally I haven't been successful with any such codes). As all extensions for sure have some different schemes and thus a code cannot be used for image of all those extensions. My dataset targets visuals of an image and not on extension thus contains images of various extensions so how can one deal with them using a BLOB?</p> <p>So the approach of storing just names in database and and images in a dedicated folder sounds good but then what is the use of database itself? Can not we have some renaming mechanism for images via PHP and store them directly into that folder. Why use database when we can rename images like <strong>img_1_accesses_5.png</strong> and split image name to get the ID and number of times it accessed?</p> <p>If BLOB can store virtually every type of data, why the use of BLOB is such horrible and everyone recommends not to use it? And what is the problem if we directly inject images into database as BLOB? And finally If its suitable for images then how to deal with it?</p> <p>So my question is <strong>How to effectively use BLOB and for which purposes it is suitable?</strong></p>
2018-06-16 02:39:25.350000+00:00
2018-06-18 00:47:28.943000+00:00
2018-06-16 06:11:02.203000+00:00
php|database|blob
['https://arxiv.org/ftp/cs/papers/0701/0701168.pdf']
1
47,140,581
<p>Essentially yes: unlike Cholesky decomposition, LUP decomposition only uses generic field operations, so it can be applied over finite fields as well (and, moreover, the result is useful). For pseudocode and further discussion of linear algebra over finite fields, see for example <a href="https://arxiv.org/pdf/1204.3735.pdf" rel="nofollow noreferrer">Computational linear algebra over finite fields</a>.</p> <p>Even more specifically relevant is <a href="https://pdfs.semanticscholar.org/0e71/9c72fe2f2e9cd5c9c1ffe27dc546fd0a15c0.pdf" rel="nofollow noreferrer">Fast matrix decomposition in F₂</a>.</p> <p>FFPACK has a ready-to-use implementation of several BLAS-like routines over finite fields, including LU decomposition.</p>
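<p>To make the idea concrete, here is a small NumPy sketch of LUP over GF(2) (subtraction becomes XOR and every non-zero pivot equals 1, so no division is needed); it is meant as an illustration rather than an optimized routine like FFPACK's:</p> <pre><code>import numpy as np

def lup_gf2(a):
    """LUP decomposition over GF(2): returns P, L, U with P@A == L@U (mod 2)."""
    U = np.array(a, dtype=np.uint8) % 2
    n = U.shape[0]
    L = np.eye(n, dtype=np.uint8)
    P = np.eye(n, dtype=np.uint8)
    for k in range(n):
        pivots = np.nonzero(U[k:, k])[0]   # rows at or below k with a 1 in column k
        if pivots.size == 0:
            continue                        # nothing to eliminate in this column
        p = k + pivots[0]
        if p != k:                          # swap rows k and p in U, P and the left part of L
            U[[k, p]] = U[[p, k]]
            P[[k, p]] = P[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]
        for i in range(k + 1, n):
            if U[i, k]:
                L[i, k] = 1                 # the multiplier U[i,k]/U[k,k] is always 1
                U[i, :] ^= U[k, :]          # subtraction over GF(2) is XOR
    return P, L, U

A = np.array([[0, 1, 1], [1, 1, 0], [1, 0, 1]])
P, L, U = lup_gf2(A)
print(np.array_equal(P @ A % 2, L @ U % 2))   # True
</code></pre>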
2017-11-06 15:40:50.943000+00:00
2017-11-06 15:40:50.943000+00:00
null
null
47,138,672
<p>I'm currently working on the following problem: I pretty much have to perform (find) LUP decomposition of the boolean matrix. My question is: in LUP decomposition algorithm (LUP), if I somehow substitute operations for division and subtraction to those, defined for boolean scope - will it produce the desired result? Also, looking for any pointers - the problem seems unsolvable to me at the moment. Thanks for any response in advance.</p>
2017-11-06 14:00:51.190000+00:00
2017-11-06 15:40:50.943000+00:00
null
algorithm|matrix|boolean-logic
['https://arxiv.org/pdf/1204.3735.pdf', 'https://pdfs.semanticscholar.org/0e71/9c72fe2f2e9cd5c9c1ffe27dc546fd0a15c0.pdf']
2
48,152,917
<p>Your problem is closely related to Pareto-optimal path computation in multi-criteria networks, e.g., as described in <a href="http://www.dbs.ifi.lmu.de/Publikationen/Papers/arXiv-SheJosSch14.pdf" rel="nofollow noreferrer">this paper</a>. </p> <p>If you had just one criterion (like distance) associated with each edge, then <a href="https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm" rel="nofollow noreferrer">Dijkstra</a> would let you quickly find all optimal paths (optimizing distance). This is possible since you can "discard" a path that arrives at a node if another path reaching that node already has a lower distance.</p> <p>The problem arises when you have two or more criteria (e.g., distance and reward) associated with each edge. Now, if two paths (starting from your start node) lead to the same node, and path_1 has a lower distance than path_2 but path_2 has a higher reward than path_1, you cannot discard either. However, if both criteria of a path are worse than those of another path, you are able to discard it. </p> <p>One possible algorithm to do the complete search is described in <a href="http://www.dbs.ifi.lmu.de/Publikationen/Papers/arXiv-SheJosSch14.pdf" rel="nofollow noreferrer">the above paper</a>.</p> <p><strong>Edit</strong></p> <p>My answer above does not consider elements reappearing during the route. If you want to include this, you would have to know when and where elements reappear during route planning. This, however, will make things a lot more complicated, since you could achieve a higher reward by "waiting" for elements to respawn.</p>
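<p>To make the dominance idea concrete, here is a rough Python sketch of a bi-criteria label search. The graph format is a hypothetical one (a dict mapping each node to a list of (neighbor, distance, reward) edges); it ignores respawning elements and assumes strictly positive edge distances plus a maximum planning distance, otherwise the label sets could grow without bound:</p> <pre><code>import heapq

def pareto_labels(graph, start, max_distance):
    """Keep, per node, only (distance, reward) labels that are not dominated,
    i.e. for which no other label is both at most as long and at least as rewarding."""
    labels = {start: [(0, 0)]}
    queue = [(0, 0, start)]                       # expanded in order of distance
    while queue:
        dist, reward, node = heapq.heappop(queue)
        if (dist, reward) not in labels.get(node, []):
            continue                              # this label was dominated after being queued
        for nbr, d, r in graph.get(node, []):
            cand = (dist + d, reward + r)
            if cand[0] &gt; max_distance:
                continue
            current = labels.setdefault(nbr, [])
            # discard the candidate if an existing label is at least as good on both criteria
            if any(cd &lt;= cand[0] and cr &gt;= cand[1] for cd, cr in current):
                continue
            # drop existing labels that the candidate dominates, then keep the candidate
            current[:] = [(cd, cr) for cd, cr in current
                          if not (cand[0] &lt;= cd and cand[1] &gt;= cr)]
            current.append(cand)
            heapq.heappush(queue, (cand[0], cand[1], nbr))
    return labels

# toy usage: two routes from 'a' to 'c', one shorter and one more rewarding -- both survive
graph = {'a': [('b', 1, 0), ('c', 5, 3)], 'b': [('c', 1, 1)]}
print(pareto_labels(graph, 'a', max_distance=10))
</code></pre>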
2018-01-08 15:05:05.073000+00:00
2018-01-08 15:43:19.480000+00:00
2018-01-08 15:43:19.480000+00:00
null
48,152,370
<p>I'm wondering if there is a more elegant solution to this problem. The brute-force approach (depth-first search) is too computationally intensive.</p> <p>You are given a network of nodes interconnected with paths. Each path has a distance and zero or more elements along the path that can only be collected once every five minutes. Collecting those elements increases your score.</p> <p>The goal is to plan out the next five minutes of path traversal, keeping in mind the paths that have been traversed already in the last five minutes, so as to maximize the score increase.</p> <p>The brute force algorithm is to try every possible route from the current location, avoiding places we have already been, stopping when we have traveled our max planning distance or time, and keep a virtual tally of rewards collected. Then all we have to do is choose the route with the highest score.</p> <p>Unfortunately, the number of nodes and paths in the graph is high enough that planning out even just five minutes worth of travel requires too much computation. </p> <p>Is there a known algorithm that solves this problem more efficiently than the brute-force method? Even if it only finds an approximate solution, and not an optimal one.</p> <p><strong>EDIT</strong></p> <p>Thank you @SaiBot, here is my final solution, in case anyone should ever find themselves asking this same question:</p> <p>I assigned every path, going from node A to node B, a unique ID. The path from B to A had its own ID. Outside the DFS search function but accessible to it, I kept a hash keyed by the ID, and the value consists of both the distance traveled prior to taking this path, and the size of the reward received so far. To minimize extra work, I made sure that at each node, the outgoing paths were sorted shortest to longest. Then, when the DFS algorithm was asked to evaluate a path it has evaluated before, the first thing it inspects is that cached result. If the cached result arrived with:</p> <pre><code>( reward &lt;= previous_reward &amp;&amp; distance &gt;= previous_distance ) || reward / distance &lt;= previous_score </code></pre> <p>Then it is reasoned that there will be no benefit to recursing this path again, so it returns immediately with a score of 0 to immediately disqualify it from consideration. Otherwise, it records the new incoming reward, distance, and score in the cache, and proceeds normally.</p> <p>In addition, I did one other thing. I reasoned that I wanted a certain amount of novelty in the path, meaning I didn't want it to just find one tiny little path that gets maximum reward, I wanted it to explore the map. So I added a filter to outgoing nodes, saying that if the node has been visited in the past X minutes, remove it from consideration. This had the side-effect of allowing the algorithm to route itself into a corner, so I added a fall-back, where if there were no available options, it would sort the outgoing paths by last visited, oldest first, and try in that order. </p> <p>The result was decent, but I'm going to do some more experiments to see if I can get even better results. </p>
2018-01-08 14:30:17.900000+00:00
2018-01-13 13:42:23.883000+00:00
2018-01-13 13:42:23.883000+00:00
algorithm|machine-learning|graph-traversal
['http://www.dbs.ifi.lmu.de/Publikationen/Papers/arXiv-SheJosSch14.pdf', 'https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm', 'http://www.dbs.ifi.lmu.de/Publikationen/Papers/arXiv-SheJosSch14.pdf']
3
43,661,512
<p>Update as of 2017: The answer is YES. The <a href="https://arxiv.org/abs/1703.03864" rel="nofollow noreferrer">most downloaded paper over the past month in Reinforcement Learning</a>, aptly named "<strong>Evolution Strategies as a Scalable Alternative to Reinforcement Learning</strong>" is indeed the talk of the town.</p>
2017-04-27 15:06:01.823000+00:00
2017-04-27 15:06:01.823000+00:00
null
null
12,411,197
<p>What is <em>evolutionary computation</em>? Is it a method of reinforcement learning? Or a separate method of machine learning? Or maybe none?</p> <p>Please, cite references used to answer this question.</p>
2012-09-13 16:56:12.800000+00:00
2018-01-25 00:44:56.203000+00:00
2018-01-25 00:44:56.203000+00:00
machine-learning|artificial-intelligence|reinforcement-learning|evolutionary-algorithm
['https://arxiv.org/abs/1703.03864']
1
46,569,428
<p>Your results and accuracy curve seem quite normal to me, so the model is learning fine. Few suggestions:</p> <ul> <li>As already pointed out in the comments, you probably need a bigger data set. Compare your data set to <a href="https://www.cs.toronto.edu/~kriz/cifar.html" rel="nofollow noreferrer">CIFAR-10</a>, which has 50000 training and 10000 test images, also 32x32. It's just possible that your training data doesn't contain that much of a variation to predict your validation/test images. Consider <a href="https://www.tensorflow.org/api_guides/python/image" rel="nofollow noreferrer">image augmentation</a> techniques to expand your data set artificially.</li> <li>When you have enough data, use most of it for training. For example, out of 10000 images, I'd split it like this: 7000 for training, 1500 for validation and 1500 for testing. This will make less likely to overfit.</li> <li>If you are sure that your training dataset represents target population well, you might want to play with your regularization hyperparameters: I noticed dropout probability and L2 regularizer. In general, by increasing these parameters you fight overfitting and improve generalization. Early layers usually need a smaller dropout value than later ones. Also consider trying <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">batchnorm</a>, another technique that helps generalization.</li> <li>You might also want to tweak your other hyper-parameters as well (learning rate, filter size, number of filters, batch size, etc) to get a better performance. Here's a <a href="https://stackoverflow.com/questions/41860817/hyperparameter-optimization-for-deep-learning-structures-using-bayesian-optimiza/46318446">good discussion</a> how to do it efficiently.</li> <li>Did you stop training after 10 epochs (this is a limit on your charts)? You probably should give it more time, because for CIFAR-10 it sometimes takes 30-50 epochs to learn well.</li> </ul>
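<p>For the augmentation point above, one common option is Keras' <code>ImageDataGenerator</code> (a different route than the raw TensorFlow image ops linked in the list); the parameter values below are just placeholders to tune for the 32x32 ball images, and the random arrays stand in for your real data:</p> <pre><code>import numpy as np
from keras.preprocessing.image import ImageDataGenerator

# stand-ins for your real data: x_train of shape (N, 32, 32, 3), y_train of shape (N,)
x_train = np.random.rand(100, 32, 32, 3).astype("float32")
y_train = np.random.randint(0, 2, size=100)

datagen = ImageDataGenerator(
    rotation_range=15,        # small random rotations
    width_shift_range=0.1,    # horizontal shifts (fraction of the width)
    height_shift_range=0.1,   # vertical shifts
    horizontal_flip=True,     # mirror images left/right
    zoom_range=0.1)           # slight zoom in/out

# every iteration yields a freshly augmented mini-batch to feed to the training loop
batches = datagen.flow(x_train, y_train, batch_size=32)
x_batch, y_batch = next(batches)
print(x_batch.shape, y_batch.shape)   # (32, 32, 32, 3) (32,)
</code></pre>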
2017-10-04 16:06:49.903000+00:00
2017-10-04 16:06:49.903000+00:00
null
null
45,865,260
<p>I am trying to design a convolution neural network for detecting a small red football ball. I have captured aproxx 4000 pictures of a scene in different configurations (adding chairs, bottles,etc…) without the ball inside and 4000 pictures of the scene in also different configurations but with the ball inside somewhere. I am using the resolution 32x32 px. The ball can be seen visually in picture where present. These are some positive example pictures (here are upside down):</p> <p>I have tried numerous combination of designing the Convolutional NN but I cannot find a decent one. I will present 2 architectures I have tried (a “normal” size one and very small one). I kept designing small and small networks because it thought I would help me with over-fitting problem. So, I have tried: <strong>Normal Network Design</strong></p> <pre><code>Input: 32x32x3 First Conv Layer: W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 3, 32], stddev=0.1), name=“w1”) b_conv1 = tf.Variable(tf.constant(0.1, shape=[32]), name=“b1”) _ h_conv1 = tf.nn.relu(tf.nn.conv2d(x, W_conv1, strides=[1, 1, 1, 1], padding=‘SAME’)+ b_conv1, name=“conv1”) h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding=‘SAME’, name=“pool1”) </code></pre> <p>2nd Conv Layer:</p> <pre><code>W_conv2 = tf.Variable(tf.truncated_normal([5, 5, 32, 16], stddev=0.1), name=“w2”) b_conv2 = tf.Variable(tf.constant(0.1, shape=[16]), name=“b2”) h_conv2 = tf.nn.relu(tf.nn.conv2d(h_pool1, W_conv2, strides=[1, 1, 1, 1], padding=‘SAME’)+ b_conv2, name=“conv2”) h_pool2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding=‘SAME’, name=“pool2”) </code></pre> <p>Fully connected layer:</p> <pre><code>W_fc1 = tf.Variable(tf.truncated_normal([8 * 8* 16, 16], stddev=0.1), name=“w3”) b_fc1 = tf.Variable(tf.constant(0.1, shape=[16]), name=“b3”) h_pool2_flat = tf.reshape(h_pool2, [-1, 8816], name=“flat3”) h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1, name=“conv3”) </code></pre> <p>Dropout</p> <pre><code>keep_prob = tf.placeholder(tf.float32, name=“keep3”) h_fc2_drop = tf.nn.dropout(h_fc1, keep_prob, name=“drop3”) </code></pre> <p>Readout Layer</p> <pre><code>W_fc3 = tf.Variable(tf.truncated_normal([16, 2], stddev=0.1), name=“w4”) b_fc3 = tf.Variable(tf.constant(0.1, shape=([2]), name=“b4”) ) y_conv = tf.matmul(h_fc2_drop, W_fc3, name=“yconv”) + b_fc3 </code></pre> <p>Other info</p> <pre><code>cross_entropy = tf.reduce_mean( _ tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_conv)+ 0.005 * tf.nn.l2_loss(W_conv1)+ 0.005 * tf.nn.l2_loss(W_fc1) + 0.005 * tf.nn.l2_loss(W_fc3)) _ train_step = tf.train.AdamOptimizer(1e-5,name=“trainingstep”).minimize(cross_entropy) _#Percentage of correct _ prediction = tf.nn.softmax(y_conv, name=“y_prediction”) _ correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y,1), name=“correct_pred”) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name=“acc”) </code></pre> <p>Parameters</p> <pre><code>keep_prob: 0.4 batch_size=500 training time in generations=55 </code></pre> <p>Results</p> <pre><code>Training set final accuracy= 90.2% Validation set final accuracy= 52.2% </code></pre> <p>Graph link : <a href="https://i.stack.imgur.com/YVWFn.png" rel="nofollow noreferrer">Link to accuracy graph</a></p> <p><strong>Small Network Design</strong></p> <pre><code>Input: 32x32x3 </code></pre> <p>First Conv Layer:</p> <pre><code>W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 3, 16], stddev=0.1), name=“w1”) _b_conv1 = tf.Variable(tf.constant(0.1, 
shape=[16]), name=“b1”) _ h_conv1 = tf.nn.relu(tf.nn.conv2d(x, W_conv1, strides=[1, 1, 1, 1], padding=‘SAME’)+ b_conv1, name=“conv1”) h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding=‘SAME’, name=“pool1”) </code></pre> <p>Fully connected layer:</p> <pre><code>W_fc1 = tf.Variable(tf.truncated_normal([16 * 16* 16, 8], stddev=0.1), name=“w3”) b_fc1 = tf.Variable(tf.constant(0.1, shape=[8]), name=“b3”) h_pool2_flat = tf.reshape(h_pool1, [-1, 161616], name=“flat3”) h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1, name=“conv3”) </code></pre> <p>Dropout</p> <pre><code>keep_prob = tf.placeholder(tf.float32, name=“keep3”) h_fc2_drop = tf.nn.dropout(h_fc1, keep_prob, name=“drop3”) </code></pre> <p>Readout Layer</p> <pre><code>W_fc3 = tf.Variable(tf.truncated_normal([8, 2], stddev=0.1), name=“w4”) b_fc3 = tf.Variable(tf.constant(0.1, shape=([2]), name=“b4”) ) y_conv = tf.matmul(h_fc2_drop, W_fc3, name=“yconv”) + b_fc3 </code></pre> <p>Other info</p> <pre><code>cross_entropy = tf.reduce_mean( _ tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv)+ 0.005 * tf.nn.l2_loss(W_conv1)+ 0.005 * tf.nn.l2_loss(W_fc1) + 0.005 * tf.nn.l2_loss(W_fc3)) _ train_step = tf.train.AdamOptimizer(1e-5,name=“trainingstep”).minimize(cross_entropy) _#Percentage of correct _ prediction = tf.nn.softmax(y_conv, name=“y_prediction”) _ correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y,1), name=“correct_pred”) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name=“acc”) </code></pre> <p>Parameters</p> <pre><code>keep_prob: 0.4 batch_size=500 training time in generations=55 </code></pre> <p>Results</p> <pre><code>Training set final accuracy= 87% Validation set final accuracy= 60.6% </code></pre> <p>Graph <a href="https://i.stack.imgur.com/8n91q.png" rel="nofollow noreferrer">Link to accuracy graph</a></p> <p>So, everything I do, I cannot get a decent accuracy on validation test. I am sure that is something that is missing but I cannot identify what. I am using dropout and l2 but it seems to overfit anyway</p> <p>Thanks for reading and amateur or advanced in CNN, please leave a feedback</p>
2017-08-24 15:15:22.113000+00:00
2017-10-04 16:06:49.903000+00:00
2017-08-24 15:20:26.800000+00:00
python|machine-learning|tensorflow|neural-network
['https://www.cs.toronto.edu/~kriz/cifar.html', 'https://www.tensorflow.org/api_guides/python/image', 'https://arxiv.org/abs/1502.03167', 'https://stackoverflow.com/questions/41860817/hyperparameter-optimization-for-deep-learning-structures-using-bayesian-optimiza/46318446']
4
60,668,210
<p>I'm not exactly an expert on optimization, but: it depends on what you mean by "nondifferentiable".</p> <p>For many mathematical functions that are used, "nondifferentiable" will just mean "not everywhere differentiable" -- but that's still "differentiable almost everywhere, except on countably many points" (e.g., <code>abs</code>, <code>relu</code>). These functions are not a problem at all -- you can just chose <a href="https://en.wikipedia.org/wiki/Subgradient_method" rel="nofollow noreferrer">any subgradient</a> and apply any normal gradient method. That's what basically all AD systems for machine learning do. The case for non-singular subgradients will happen with low probability anyway. An alternative for certain forms of convex objectives are <a href="https://en.wikipedia.org/wiki/Proximal_gradient_methods_for_learning" rel="nofollow noreferrer">proximal gradient methods</a>, which "smooth" the objective in an efficient way that preserves optima (cf. <a href="https://github.com/kul-forbes/ProximalOperators.jl" rel="nofollow noreferrer">ProximalOperators.jl</a>).</p> <p>Then there's those functions that seem like they can't be differentiated at all, since they seem "combinatoric" or discrete, but are in fact piecewise differentiable (if seen from the correct point of view). This includes <a href="https://arxiv.org/abs/2002.08871" rel="nofollow noreferrer">sorting and ranking</a>. But you have to find them, and describing and implementing the derivative is rather complicated. Whether such functions are supported by an AD system depends on how sophisticated its "standard library" is. Some variants of this, like "permute", can just fall out AD over control structures, while move complex ones require the primitive adjoints to be manually defined.</p> <p>For certain kinds of problems, though, we just work in an intrinsically discrete space -- like, integer parameters of some probability distributions. In these case, differentiation makes no sense, and hence AD libraries define their primitives not to work on these parameters. Possible alternatives are to use (mixed) integer programming, approximations, search, and model selection. This case also occurs for problems where the optimized space itself depends on the parameter in question, like the second argument of <code>fill</code>. We also have things like the <a href="https://en.wikipedia.org/wiki/Lp_space#When_p_=_0" rel="nofollow noreferrer">ℓ0 "norm"</a> or the rank of a matrix, for which there exist well-known continuous relaxations, but that's outside of the scope of AD).</p> <p>(In the specific case of MCMC for discrete or dimensional parameters, there's other ways to deal with that, like combining HMC with other MC methods in a Gibbs sampler, or using a nonparametric model instead. Other tricks <a href="http://arxiv.org/abs/2003.00704" rel="nofollow noreferrer">are possible for VI</a>.)</p> <p>That being said, you will rarely encounter complicated <a href="https://en.wikipedia.org/wiki/Weierstrass_function" rel="nofollow noreferrer">nowhere differentiable continuous</a> functions in optimization. They are already complicated to describe, are just unlikely to arise in the kind of math we use for modelling. </p>
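<p>As a tiny numeric illustration of the subgradient idea from the first paragraph (in Python rather than Julia, with f(x) = |x - 3| as the toy nondifferentiable objective and a standard diminishing step size):</p> <pre><code>import numpy as np

# f is convex but not differentiable at its minimizer x = 3
f = lambda x: abs(x - 3.0)
subgrad = lambda x: np.sign(x - 3.0)   # any value in [-1, 1] is a valid subgradient at x = 3

x = -10.0
best_x, best_f = x, f(x)
for t in range(1, 1001):
    x = x - subgrad(x) / np.sqrt(t)    # step size shrinking like 1/sqrt(t)
    if f(x) &lt; best_f:                  # track the best iterate; f(x_t) itself need not decrease
        best_x, best_f = x, f(x)

print(best_x, best_f)                  # best_x approaches 3, best_f approaches 0
</code></pre>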
2020-03-13 09:50:18.307000+00:00
2020-03-14 09:04:33.357000+00:00
2020-03-14 09:04:33.357000+00:00
null
60,664,875
<p>I am testing performance of different solvers on minimizing an objective function derived from simulated method of moments. Given that my objective function is not differentiable, I wonder if automatic differentiation would work in this case? I tried my best to read some introduction on this method, but I couldn't figure it out.</p> <p>I am actually trying to use Ipopt+JuMP in Julia for this test. Previously, I have tested it using BlackBoxoptim in Julia. I will also appreciate if you could provide some insights on optimization of non-differentiable functions in Julia.</p> <hr> <p>It seems that I am not clear on "non-differentiable". Let me give you an example. Consider the following <a href="https://i.stack.imgur.com/UzMDf.png" rel="nofollow noreferrer">objective function</a>. X is dataset, B is unobserved random errors which will be integrated out, \theta is parameters. However, A is discrete and therefore not differentiable.</p>
2020-03-13 04:29:43.843000+00:00
2020-03-14 09:04:33.357000+00:00
2020-03-13 15:50:28.353000+00:00
julia|numerical-methods|ipopt|autodiff
['https://en.wikipedia.org/wiki/Subgradient_method', 'https://en.wikipedia.org/wiki/Proximal_gradient_methods_for_learning', 'https://github.com/kul-forbes/ProximalOperators.jl', 'https://arxiv.org/abs/2002.08871', 'https://en.wikipedia.org/wiki/Lp_space#When_p_=_0', 'http://arxiv.org/abs/2003.00704', 'https://en.wikipedia.org/wiki/Weierstrass_function']
7
65,529,352
<p>For this you can extract <a href="https://arxiv.org/abs/1610.02391" rel="nofollow noreferrer">Grad-CAM</a> features. <code>Kears</code> already has published an official documentation for <code>Grad-CAM extraction</code> you can find it <a href="https://keras.io/examples/vision/grad_cam/" rel="nofollow noreferrer">here</a>. So for your task steps need to followed are</p> <ol> <li><code>Extract Grad-CAM from the images</code></li> <li><code>Based on Grad-CAM create a segmentation mask using simple image processing technique</code></li> </ol> <p>In this method <code>you can easily create segmentation mask for images</code> but masks <code>may not be so accurate </code>. Beacuse, see this picture,</p> <p><a href="https://i.stack.imgur.com/zXTzx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zXTzx.png" alt="enter image description here" /></a></p> <p>it is for <code>Xception model (ImageNet).</code></p> <p>Hope you will understand and you will be helpful.</p>
2021-01-01 11:38:52.617000+00:00
2021-01-01 20:49:14.483000+00:00
2021-01-01 20:49:14.483000+00:00
null
65,486,631
<p>I Have a trained classifier: VGG16 on say Image Net (or my own images DB and classes). I want to segment my images automatically knowing there are classes on images my classifier knows. How to automate image segmentation?</p>
2020-12-29 02:37:18.437000+00:00
2021-01-01 20:49:14.483000+00:00
null
automation|neural-network|classification|image-segmentation
['https://arxiv.org/abs/1610.02391', 'https://keras.io/examples/vision/grad_cam/', 'https://i.stack.imgur.com/zXTzx.png']
3
58,007,592
<p>Building a character-level self-attentive model is a challenging task. Character-level models are usually based on RNNs. Whereas in a word/subword model it is clear from the beginning which units carry meaning (and therefore which units the attention mechanism can attend to), a character-level model needs to learn word meaning in the following layers. This makes it quite difficult for the model to learn.</p> <p>Text generation models are nothing more than conditional language models. Google AI recently published a paper on a <a href="https://arxiv.org/abs/1908.10322" rel="nofollow noreferrer">Transformer character language model</a>, but it is the only work I know of.</p> <p>Anyway, you should consider either using subword units (such as BPE or SentencePiece) or, if you really need to go character-level, using RNNs instead.</p>
2019-09-19 09:12:21.517000+00:00
2019-09-19 09:12:21.517000+00:00
null
null
58,007,391
<p>I am searching the web for a couple of days for any <strong>text generation</strong> model that would use only attention mechanisms.</p> <p>The <strong>Transformer</strong> architecture that made waves in the context of <strong>Seq-to-Seq</strong> models is actually based solely on <strong>Attention</strong> mechanisms but is mainly designed and used for translation or chat bot tasks so it doesn't fit to the purpose, but the principle does.</p> <p>My question is:</p> <p>Does anyone knows or heard of a text generation model <strong>based solely on Attention without any recurrence</strong>?</p> <p>Thanks a lot!</p> <p>P.S. I'm familiar with <strong>PyTorch</strong>.</p>
2019-09-19 09:01:32.667000+00:00
2019-09-19 09:12:21.517000+00:00
null
neural-network|nlp|pytorch|transformer-model|attention-model
['https://arxiv.org/abs/1908.10322']
1
45,645,440
<p>The assumption is that each layer has its own set of weights. See equations (1) and (2) on page 4 <a href="https://arxiv.org/pdf/1308.0850.pdf" rel="nofollow noreferrer">here</a>. As you can see, the weights depend on the layer (the equations there deal with vanilla rnn, but the same assumption is done with LSTM).</p>
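<p>A quick way to see this in Keras (assuming the TensorFlow-bundled Keras; exact variable names may differ between versions) is to print the weight tensors of a two-layer stacked LSTM; each layer reports its own kernel, recurrent kernel and bias:</p> <pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

model = Sequential([
    LSTM(32, return_sequences=True, input_shape=(10, 8)),  # first layer in the stack
    LSTM(32),                                              # second layer in the stack
])

for layer in model.layers:
    print(layer.name, [w.shape for w in layer.weights])
# two separate sets of (kernel, recurrent_kernel, bias) are printed, one per layer
</code></pre>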
2017-08-12 00:37:20.347000+00:00
2017-08-12 00:37:20.347000+00:00
null
null
45,645,371
<p>In deep-learning literature, I have encountered many examples of using stacked RNN ( stacked LSTM ) networks and while the details of the cell itself is explored, usually there is no information whether the weights are shared across different layers in a stacked architecture or not. </p> <p>What I try to understand is that when the author does not specify this, what would be the default behavior? Should we assume that they have shared the weights across the layers? or each layer would have its own set of weights for it's cell?</p>
2017-08-12 00:21:57.137000+00:00
2017-08-12 07:48:23.093000+00:00
2017-08-12 07:48:23.093000+00:00
machine-learning|neural-network|deep-learning|recurrent-neural-network|stacked
['https://arxiv.org/pdf/1308.0850.pdf']
1
47,969,200
<p>While clustering lets you classify your text and identify topics in them, unsupervised methods often lead to reduced flexibility in controlling the performance of your classification but they remain the best tools if you do not have labeled data.</p> <p>However, recent advances in zero-shot and few-shot learning can let you build your classifier with little (100 - 200 training data) or no training data at all. Your classifier still retains all the benefits of a supervised classifier and gives you all the control on your categories. </p> <p>I have built one such system and you can try out the <a href="https://www.paralleldots.com/custom-classifier" rel="nofollow noreferrer">demo</a> on your own categories and data to see the system in action.</p> <p>Additional resources: </p> <ol> <li><a href="https://www.quora.com/Whats-the-difference-between-one-shot-learning-and-zero-shot-learning" rel="nofollow noreferrer">https://www.quora.com/Whats-the-difference-between-one-shot-learning-and-zero-shot-learning</a></li> <li><a href="https://arxiv.org/abs/1710.10280" rel="nofollow noreferrer">https://arxiv.org/abs/1710.10280</a></li> </ol>
2017-12-25 13:44:51.287000+00:00
2017-12-25 13:44:51.287000+00:00
null
null
16,518,998
<p>I have a use case in which chat text is to be classified. I want to use DocumentCategorizer in Apache OpenNLP to categorize chat. But for that i must have Training Data that should have Chats already classified. Do i have to manually categorize hundreds of chats to prepare Training and Test Data? What else can i do? I intend the chat categories to be service related PROBLEMS. This list of Categories would then be domain specific. Should the provider of this data, provide me with the categorized chat data? Thanks, in advance.</p>
2013-05-13 09:41:16.270000+00:00
2021-07-12 11:13:23.967000+00:00
null
classification|opennlp|categorization
['https://www.paralleldots.com/custom-classifier', 'https://www.quora.com/Whats-the-difference-between-one-shot-learning-and-zero-shot-learning', 'https://arxiv.org/abs/1710.10280']
3
50,780,875
<p>The current answer is wrong in that it doesn't give you proper "weight decay as in cuda-convnet/caffe" but instead L2-regularization, which is different.</p> <p>When using pure SGD (without momentum) as an optimizer, weight decay is the same thing as adding a L2-regularization term to the loss. <strong>When using any other optimizer, this is not true.</strong></p> <p>Weight decay (don't know how to TeX here, so excuse my pseudo-notation):</p> <pre><code>w[t+1] = w[t] - learning_rate * dw - weight_decay * w </code></pre> <p>L2-regularization:</p> <pre><code>loss = actual_loss + lambda * 1/2 sum(||w||_2 for w in network_params) </code></pre> <p>Computing the gradient of the extra term in L2-regularization gives <code>lambda * w</code> and thus inserting it into the SGD update equation</p> <pre><code>dloss_dw = dactual_loss_dw + lambda * w w[t+1] = w[t] - learning_rate * dw </code></pre> <p>gives the same as weight decay, but mixes <code>lambda</code> with the <code>learning_rate</code>. Any other optimizer, even SGD with momentum, gives a different update rule for weight decay as for L2-regularization! See the paper <a href="/https://arxiv.org/abs/1711.05101">Fixing weight decay in Adam</a> for more details. (Edit: AFAIK, <a href="http://www.cs.toronto.edu/~hinton/absps/parle.pdf" rel="nofollow noreferrer">this 1987 Hinton paper</a> introduced "weight decay", literally as "each time the weights are updated, their magnitude is also decremented by 0.4%" at page 10)</p> <p>That being said, there doesn't seem to be support for "proper" weight decay in TensorFlow yet. There are a few issues discussing it, specifically because of above paper.</p> <p>One possible way to implement it is by writing an op that does the decay step manually after every optimizer step. A different way, which is what I'm currently doing, is using an additional SGD optimizer just for the weight decay, and "attaching" it to your <code>train_op</code>. Both of these are just crude work-arounds, though. My current code:</p> <pre><code># In the network definition: with arg_scope([layers.conv2d, layers.dense], weights_regularizer=layers.l2_regularizer(weight_decay)): # define the network. loss = # compute the actual loss of your problem. train_op = optimizer.minimize(loss, global_step=global_step) if args.weight_decay not in (None, 0): with tf.control_dependencies([train_op]): sgd = tf.train.GradientDescentOptimizer(learning_rate=1.0) train_op = sgd.minimize(tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))) </code></pre> <p>This somewhat makes use of TensorFlow's provided bookkeeping. Note that the <code>arg_scope</code> takes care of appending an L2-regularization term for every layer to the <code>REGULARIZATION_LOSSES</code> graph-key, which I then all sum up and optimize using SGD which, as shown above, corresponds to actual weight-decay.</p> <p>Hope that helps, and if anyone gets a nicer code snippet for this, or TensorFlow implements it better (i.e. in the optimizers), please share.</p> <p><strong>Edit:</strong> see also <a href="https://github.com/tensorflow/tensorflow/pull/17438" rel="nofollow noreferrer">this PR</a> which just got merged into TF.</p>
2018-06-10 05:38:52.180000+00:00
2018-06-15 10:23:50.567000+00:00
2018-06-15 10:23:50.567000+00:00
null
38,882,629
<p>In Caffe we have a decay_ratio which is usually set as 0.0005. Then all trainable parameters, e.g., W matrix in FC6 will be decayed by: W = W * (1 - 0.0005) after we applied the gradient to it.</p> <p>I go through many tutorial tensorflow codes, but do not see how people implement this weight decay to prevent numerical problems (very large absolute values)</p> <p>I my experiences, I often run into numerical problems aften 100k iterations during training.</p> <p>I also go through related questions at stackoverflow, e.g., <a href="https://stackoverflow.com/questions/34986911/how-to-set-weight-cost-strength-in-tensorflow">How to set weight cost strength in TensorFlow?</a> However, the solution seems a little different as implemented in Caffe.</p> <p>Does anyone has similar concerns? Thank you.</p>
2016-08-10 20:09:09.967000+00:00
2018-06-15 10:23:50.567000+00:00
2017-05-23 12:17:28.077000+00:00
neural-network|tensorflow|deep-learning
['/https://arxiv.org/abs/1711.05101', 'http://www.cs.toronto.edu/~hinton/absps/parle.pdf', 'https://github.com/tensorflow/tensorflow/pull/17438']
3
50,080,257
<p>The two models have no structural difference; they both consist of an encoder followed by a decoder implemented by LSTM layers. The difference is notational; the first model is defined on the <a href="https://keras.io/getting-started/functional-api-guide/" rel="nofollow noreferrer">functional API</a> with the input being considered a layer, whereas the second is defined using the <a href="https://keras.io/models/sequential/" rel="nofollow noreferrer">sequential API</a>. As for the encoder-decoder (otherwise known as seq2seq) architecture, it was originally proposed <a href="https://arxiv.org/abs/1406.1078" rel="nofollow noreferrer">here</a>, and has since evolved greatly, with the most significant improvement being the attention layer.</p>
2018-04-28 19:23:34.983000+00:00
2018-04-28 19:23:34.983000+00:00
null
null
50,080,087
<p>I would like to know the difference between these 2 Models. the one above has 4 Layers looking into the model summary and you can also define the unit numbers for dimensionality reduction. But what is with the 2nd Model it has 3 layers and you cant directly define the number of hidden units? Are both LSTM Autoencoders for dimensionality reduction and regression analysis ? Are there any good papers describing these two examples that I found from <a href="https://blog.keras.io/building-autoencoders-in-keras.html" rel="nofollow noreferrer">keras</a> and<a href="http://rickyhan.com/jekyll/update/2017/09/14/autoencoders.html" rel="nofollow noreferrer">here</a>. I did nowhere defined the variables, infact that I am not asking for a coding question directly. I hope this also a good place for this topic. 1. Model: </p> <pre><code>from keras.layers import * from keras.models import Model from keras.layers import Input, LSTM, Dense, RepeatVector samples=1000 timesteps=300 features=input_dim=1 data_shape=np.reshape(data,(samples,timestep,input_dim) inputs = Input(shape=(timestep, input_dim)) encoded = LSTM(units, return_sequences=False, name="encoder")(inputs) decoded = RepeatVector(timestep)(encoded) decoded = LSTM(input_dim, return_sequences=True, name='decoder')(decoded) autoencoder = Model(inputs, decoded) encoder = Model(inputs, encoded) print (autoencoder.summary()) </code></pre> <p>2. Model: </p> <pre><code>x = np.random.random((1000, 300, 1)) </code></pre> <p>2.model: </p> <pre><code>m = Sequential() m.add(LSTM(100, input_shape=(300,1))) m.add(RepeatVector(300)) m.add(LSTM(100, return_sequences=True)) print (m.summary()) m.compile(loss='mse', optimizer='rmsprop', metrics=['mse', 'mape']) history = m.fit(x, x, nb_epoch=2000, batch_size=100) </code></pre> <p>When I try to add to both of them a data with the shape e.g. (1000, 300, 1) the first one is accepting it the second not, I get the error expected lstm_4 to have shape (None, 300, 100) but got array with shape (1000, 300, 1). With the choosen input_dim 1 and units =100. what am I doing wrong ? This is what I want to be: </p> <pre><code>LSTM(100, input_shape=(300, 1)) </code></pre> <p>with units=100 When I run the model, I get the following error: Error when checking target: expected lstm_2 to have shape (None, 300, 100) but got array with shape (1000, 300, 1)</p> <p>Where is my mistake that the model does not accept my data shape and my units size?</p>
2018-04-28 19:03:25.230000+00:00
2018-04-29 11:35:05.747000+00:00
2018-04-29 11:35:05.747000+00:00
python|keras|autoencoder
['https://keras.io/getting-started/functional-api-guide/', 'https://keras.io/models/sequential/', 'https://arxiv.org/abs/1406.1078']
3
46,904,315
<p>If you are specifically looking for classifiers in sklearn, you can have a look at this link : <a href="http://scikit-learn.org/stable/modules/scaling_strategies.html" rel="nofollow noreferrer">Scaling Strategies for large datasets</a>.</p> <p>Generally, the classifiers do incremental learning on your dataset by creating mini-batches. Here are some link for reference :</p> <p><strong>Incremental Learning links</strong></p> <ul> <li><a href="http://www.ra.cs.uni-tuebingen.de/lehre/ss12/advanced_ml/lecture8.pdf" rel="nofollow noreferrer">Advanced ML lecture on Incremental Learning</a></li> <li><a href="https://blog.bigml.com/2013/03/12/machine-learning-from-streaming-data-two-problems-two-solutions-two-concerns-and-two-lessons/" rel="nofollow noreferrer">ML on streaming data</a></li> <li><a href="https://en.wikipedia.org/wiki/Incremental_learning" rel="nofollow noreferrer">Incremental Leanring</a></li> <li><a href="https://arxiv.org/ftp/arxiv/papers/0709/0709.3965.pdf" rel="nofollow noreferrer">Microsoft paper on Incremental Learning</a></li> </ul> <p>You can have a look at these classifiers in SKlearn for more info</p> <ul> <li><a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html#sklearn.linear_model.SGDClassifier" rel="nofollow noreferrer">SGD Classifier</a></li> <li><a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.PassiveAggressiveClassifier.html#sklearn.linear_model.PassiveAggressiveClassifier" rel="nofollow noreferrer">Passive Agrressive Classifier</a></li> <li><a href="http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html#sklearn.naive_bayes.MultinomialNB" rel="nofollow noreferrer">Multinomial Naive Bayes Incremental Learning</a></li> <li><a href="http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.BernoulliNB.html#sklearn.naive_bayes.BernoulliNB" rel="nofollow noreferrer">BErnoulli Naive Bayes</a></li> </ul> <p>If your data is given as a stream during input, you can have a look at <a href="https://spark.apache.org/docs/latest/streaming-programming-guide.html" rel="nofollow noreferrer">Apache Spark Streaming</a> and jump to <a href="https://spark.apache.org/docs/latest/ml-guide.html" rel="nofollow noreferrer">MlLib in Apache Spark</a> for more info.</p> <p>You can also have a look at <a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.FeatureHasher.html#sklearn.feature_extraction.FeatureHasher" rel="nofollow noreferrer">Feature Hasher</a> for large scale feature hashing in sklearn.</p>
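<p>As a concrete example of the mini-batch / incremental-learning point above, here is a small sketch with scikit-learn's <code>SGDClassifier.partial_fit</code>; the synthetic batch generator is just a stand-in for reading your real dataset in chunks:</p> <pre><code>import numpy as np
from sklearn.linear_model import SGDClassifier

def stream_batches(n_batches, batch_size=1000, n_features=50, seed=0):
    """Stand-in for reading a huge dataset chunk by chunk (e.g. from disk)."""
    rng = np.random.RandomState(seed)
    for _ in range(n_batches):
        X = rng.randn(batch_size, n_features)
        y = (X[:, 0] + 0.1 * rng.randn(batch_size) &gt; 0).astype(int)
        yield X, y

clf = SGDClassifier()                       # linear model trained with stochastic gradient descent
classes = np.array([0, 1])                  # all classes must be declared on the first call

for X, y in stream_batches(n_batches=20):
    clf.partial_fit(X, y, classes=classes)  # incremental update, one mini-batch at a time

X_test, y_test = next(stream_batches(n_batches=1, seed=42))
print("held-out accuracy:", clf.score(X_test, y_test))
</code></pre>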
2017-10-24 07:12:19.273000+00:00
2017-10-24 07:12:19.273000+00:00
null
null
46,884,602
<p>There are many supervised classifier algorithms available in scikit-learn, but I couldn't find any information about their scalability regarding large datasets. I know that, for instance, support vector machines don't behave well with huge datasets, but what about others? Which supervised/semi-supervised classifier algorithms are most suitable for large datasets?</p>
2017-10-23 08:08:34.680000+00:00
2017-10-24 09:46:01.143000+00:00
2017-10-24 09:46:01.143000+00:00
machine-learning|scikit-learn|large-data|large-files|large-data-volumes
['http://scikit-learn.org/stable/modules/scaling_strategies.html', 'http://www.ra.cs.uni-tuebingen.de/lehre/ss12/advanced_ml/lecture8.pdf', 'https://blog.bigml.com/2013/03/12/machine-learning-from-streaming-data-two-problems-two-solutions-two-concerns-and-two-lessons/', 'https://en.wikipedia.org/wiki/Incremental_learning', 'https://arxiv.org/ftp/arxiv/papers/0709/0709.3965.pdf', 'http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html#sklearn.linear_model.SGDClassifier', 'http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.PassiveAggressiveClassifier.html#sklearn.linear_model.PassiveAggressiveClassifier', 'http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html#sklearn.naive_bayes.MultinomialNB', 'http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.BernoulliNB.html#sklearn.naive_bayes.BernoulliNB', 'https://spark.apache.org/docs/latest/streaming-programming-guide.html', 'https://spark.apache.org/docs/latest/ml-guide.html', 'http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.FeatureHasher.html#sklearn.feature_extraction.FeatureHasher']
12
61,775,631
<p>The attention layer in Keras is not a trainable layer (unless we use the scale parameter); it only computes matrix operations. In my opinion, this layer can lead to mistakes if applied directly to time series, but let's proceed in order...</p> <p>The most natural choice for replicating the attention mechanism on our time-series problem is to adopt the solution presented <a href="https://arxiv.org/pdf/1409.0473.pdf" rel="nofollow noreferrer">here</a> and explained again <a href="https://towardsdatascience.com/intuitive-understanding-of-attention-mechanism-in-deep-learning-6c9482aecf4f" rel="nofollow noreferrer">here</a>. It's the classical application of attention in an encoder-decoder structure in NLP.</p> <p>Following the TF implementation, for our attention layer we need the query, value, and key tensors in 3D format. We obtain these values directly from our recurrent layer; more specifically, we utilize the sequence output and the hidden state. These are all we need to build an attention mechanism.</p> <p>The query is the output sequence [batch_dim, time_step, features].</p> <p>The value is the hidden state [batch_dim, features], to which we add a temporal dimension for the matrix operation: [batch_dim, 1, features].</p> <p>As the key, we utilize the hidden state as before, so key = value.</p> <p>In the above definition and implementation I found two problems:</p> <ul> <li>The scores are calculated with softmax(dot(sequence, hidden)). The dot product is fine, but the softmax, following the Keras implementation, is computed over the last dimension and not over the temporal dimension. This forces the scores to all be 1, so they are useless.</li> <li>The attention output is dot(scores, hidden) and not dot(scores, sequences) as we need.</li> </ul> <p>An example:</p> <pre><code>def attention_keras(query_value):
    query, value = query_value  # key == value
    score = tf.matmul(query, value, transpose_b=True)  # (batch, timestamp, 1)
    score = tf.nn.softmax(score)  # softmax on -1 axis ==&gt; score always = 1 !!!
    print((score.numpy() != 1).any())  # False ==&gt; score always = 1 !!!
    score = tf.matmul(score, value)  # (batch, timestamp, feat)
    return score

np.random.seed(33)
time_steps = 20
features = 50
sample = 5

X = np.random.uniform(0, 5, (sample, time_steps, features))
state = np.random.uniform(0, 5, (sample, features))

attention_keras([X, tf.expand_dims(state, 1)])
# ==&gt; the same as Attention(dtype='float64')([X, tf.expand_dims(state, 1)])
</code></pre> <p>For this reason, for time-series attention I propose this solution:</p> <pre><code>def attention_seq(query_value, scale):
    query, value = query_value
    score = tf.matmul(query, value, transpose_b=True)  # (batch, timestamp, 1)
    score = scale * score  # scale with a fixed number (it can be fine-tuned or learned during training)
    score = tf.nn.softmax(score, axis=1)  # softmax on the timestamp axis
    score = score * query  # (batch, timestamp, feat)
    return score

np.random.seed(33)
time_steps = 20
features = 50
sample = 5

X = np.random.uniform(0, 5, (sample, time_steps, features))
state = np.random.uniform(0, 5, (sample, features))

attention_seq([X, tf.expand_dims(state, 1)], scale=0.05)
</code></pre> <p>The query is the output sequence [batch_dim, time_step, features].</p> <p>The value is the hidden state [batch_dim, features], to which we add a temporal dimension for the matrix operation: [batch_dim, 1, features].</p> <p>The weights are calculated with softmax(scale*dot(sequence, hidden)). The scale parameter is a scalar value that can be used to scale the weights before applying the softmax operation. The softmax is calculated correctly over the time dimension. 
The attention output is the weighted product of the input sequence and the scores. I use the scale parameter as a fixed value, but it can be tuned or inserted as a learnable weight in a custom layer (like the scale parameter in Keras attention).</p> <p>In terms of network implementation, these are the two possibilities available:</p> <pre><code>######### KERAS #########
inp = Input((time_steps, features))
seq, state = GRU(32, return_state=True, return_sequences=True)(inp)
att = Attention()([seq, tf.expand_dims(state, 1)])

######### CUSTOM #########
inp = Input((time_steps, features))
seq, state = GRU(32, return_state=True, return_sequences=True)(inp)
att = Lambda(attention_seq, arguments={'scale': 0.05})([seq, tf.expand_dims(state, 1)])
</code></pre> <p><strong>CONCLUSION</strong></p> <p>I don't know how much added value introducing an attention layer brings in simple problems. If you have short sequences, I suggest you leave everything as it is. What I report here is an answer in which I express my considerations; I welcome comments or suggestions about possible mistakes or misunderstandings.</p> <hr> <p>In your model, these solutions can be embedded in this way:</p> <pre><code>######### KERAS #########
inp = Input((n_features, n_steps))
seq, state = GRU(n_units, activation='relu', return_state=True, return_sequences=True)(inp)
att = Attention()([seq, tf.expand_dims(state, 1)])
x = GRU(n_units, activation='relu')(att)
x = Dense(64, activation='relu')(x)
x = Dropout(0.5)(x)
out = Dense(n_steps_out)(x)
model = Model(inp, out)
model.compile(optimizer='adam', loss='mse', metrics=['mse'])
model.summary()

######### CUSTOM #########
inp = Input((n_features, n_steps))
seq, state = GRU(n_units, activation='relu', return_state=True, return_sequences=True)(inp)
att = Lambda(attention_seq, arguments={'scale': 0.05})([seq, tf.expand_dims(state, 1)])
x = GRU(n_units, activation='relu')(att)
x = Dense(64, activation='relu')(x)
x = Dropout(0.5)(x)
out = Dense(n_steps_out)(x)
model = Model(inp, out)
model.compile(optimizer='adam', loss='mse', metrics=['mse'])
model.summary()
</code></pre>
2020-05-13 13:15:15.487000+00:00
2020-05-15 23:11:29.750000+00:00
2020-05-15 23:11:29.750000+00:00
null
61,757,475
<p>I've tried to build a sequence-to-sequence model to predict a sensor signal over time based on its first few inputs (see figure below) <a href="https://i.stack.imgur.com/FCqFj.png" rel="noreferrer"><img src="https://i.stack.imgur.com/FCqFj.png" alt="enter image description here"></a></p> <p>The model works OK, but I want to 'spice things up' and try to add an attention layer between the two LSTM layers.</p> <p>Model code:</p> <pre><code>def train_model(x_train, y_train, n_units=32, n_steps=20, epochs=200, n_steps_out=1): filters = 250 kernel_size = 3 logdir = os.path.join(logs_base_dir, datetime.datetime.now().strftime("%Y%m%d-%H%M%S")) tensorboard_callback = TensorBoard(log_dir=logdir, update_freq=1) # get number of features from input data n_features = x_train.shape[2] # setup network # (feel free to use other combination of layers and parameters here) model = keras.models.Sequential() model.add(keras.layers.LSTM(n_units, activation='relu', return_sequences=True, input_shape=(n_steps, n_features))) model.add(keras.layers.LSTM(n_units, activation='relu')) model.add(keras.layers.Dense(64, activation='relu')) model.add(keras.layers.Dropout(0.5)) model.add(keras.layers.Dense(n_steps_out)) model.compile(optimizer='adam', loss='mse', metrics=['mse']) # train network history = model.fit(x_train, y_train, epochs=epochs, validation_split=0.1, verbose=1, callbacks=[tensorboard_callback]) return model, history </code></pre> <p>I've looked at the <a href="https://keras.io/api/layers/attention_layers/attention/" rel="noreferrer">documentation</a> but I'm a bit lost. Any help adding the attention layer or comments on the current model would be appreciated. </p> <hr> <p><strong>Update:</strong> After Googling around, I'm starting to think I got it all wrong, so I rewrote my code.</p> <p>I'm trying to migrate a seq2seq model that I've found in this <a href="https://i.stack.imgur.com/FCqFj.png" rel="noreferrer">GitHub repository</a>. In the repository code the problem demonstrated is predicting a randomly generated sine wave based on some early samples. </p> <p>I have a similar problem, and I'm trying to change the code to fit my needs. </p> <p>Differences:</p> <ul> <li>My training data shape is (439, 5, 20): 439 different signals, 5 time steps each, with 20 features </li> <li>I'm not using <code>fit_generator</code> when fitting my data</li> </ul> <hr> <p>Hyper Params:</p> <pre><code>layers = [35, 35] # Number of hidden neurons in each layer of the encoder and decoder learning_rate = 0.01 decay = 0 # Learning rate decay optimiser = keras.optimizers.Adam(lr=learning_rate, decay=decay) # Other possible optimiser "sgd" (Stochastic Gradient Descent) num_input_features = train_x.shape[2] # The dimensionality of the input at each time step. In this case a 1D signal. num_output_features = 1 # The dimensionality of the output at each time step. In this case a 1D signal. # There is no reason for the input sequence to be of same dimension as the output sequence. # For instance, using 3 input signals: consumer confidence, inflation and house prices to predict the future house prices. loss = "mse" # Other loss functions are possible, see Keras documentation. 
# Regularisation isn't really needed for this application lambda_regulariser = 0.000001 # Will not be used if regulariser is None regulariser = None # Possible regulariser: keras.regularizers.l2(lambda_regulariser) batch_size = 128 steps_per_epoch = 200 # batch_size * steps_per_epoch = total number of training examples epochs = 100 input_sequence_length = n_steps # Length of the sequence used by the encoder target_sequence_length = 31 - n_steps # Length of the sequence predicted by the decoder num_steps_to_predict = 20 # Length to use when testing the model </code></pre> <hr> <p>Encoder code:</p> <pre><code># Define an input sequence. encoder_inputs = keras.layers.Input(shape=(None, num_input_features), name='encoder_input') # Create a list of RNN Cells, these are then concatenated into a single layer # with the RNN layer. encoder_cells = [] for hidden_neurons in layers: encoder_cells.append(keras.layers.GRUCell(hidden_neurons, kernel_regularizer=regulariser, recurrent_regularizer=regulariser, bias_regularizer=regulariser)) encoder = keras.layers.RNN(encoder_cells, return_state=True, name='encoder_layer') encoder_outputs_and_states = encoder(encoder_inputs) # Discard encoder outputs and only keep the states. # The outputs are of no interest to us, the encoder's # job is to create a state describing the input sequence. encoder_states = encoder_outputs_and_states[1:] </code></pre> <hr> <p>Decoder code:</p> <pre><code># The decoder input will be set to zero (see random_sine function of the utils module). # Do not worry about the input size being 1, I will explain that in the next cell. decoder_inputs = keras.layers.Input(shape=(None, 20), name='decoder_input') decoder_cells = [] for hidden_neurons in layers: decoder_cells.append(keras.layers.GRUCell(hidden_neurons, kernel_regularizer=regulariser, recurrent_regularizer=regulariser, bias_regularizer=regulariser)) decoder = keras.layers.RNN(decoder_cells, return_sequences=True, return_state=True, name='decoder_layer') # Set the initial state of the decoder to be the ouput state of the encoder. # This is the fundamental part of the encoder-decoder. 
decoder_outputs_and_states = decoder(decoder_inputs, initial_state=encoder_states) # Only select the output of the decoder (not the states) decoder_outputs = decoder_outputs_and_states[0] # Apply a dense layer with linear activation to set output to correct dimension # and scale (tanh is default activation for GRU in Keras, our output sine function can be larger then 1) decoder_dense = keras.layers.Dense(num_output_features, activation='linear', kernel_regularizer=regulariser, bias_regularizer=regulariser) decoder_outputs = decoder_dense(decoder_outputs) </code></pre> <hr> <p>Model Summary:</p> <pre><code>model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs], outputs=decoder_outputs) model.compile(optimizer=optimiser, loss=loss) model.summary() </code></pre> <hr> <pre><code>Layer (type) Output Shape Param # Connected to ================================================================================================== encoder_input (InputLayer) (None, None, 20) 0 __________________________________________________________________________________________________ decoder_input (InputLayer) (None, None, 20) 0 __________________________________________________________________________________________________ encoder_layer (RNN) [(None, 35), (None, 13335 encoder_input[0][0] __________________________________________________________________________________________________ decoder_layer (RNN) [(None, None, 35), ( 13335 decoder_input[0][0] encoder_layer[0][1] encoder_layer[0][2] __________________________________________________________________________________________________ dense_5 (Dense) (None, None, 1) 36 decoder_layer[0][0] ================================================================================================== Total params: 26,706 Trainable params: 26,706 Non-trainable params: 0 __________________________________________________________________________________________________ </code></pre> <p>When trying to fit the model:</p> <pre><code>history = model.fit([train_x, decoder_inputs],train_y, epochs=epochs, validation_split=0.3, verbose=1) </code></pre> <p>I get the following error:</p> <pre><code>When feeding symbolic tensors to a model, we expect the tensors to have a static batch size. Got tensor with shape: (None, None, 20) </code></pre> <p>What am I doing wrong?</p>
2020-05-12 16:56:33.990000+00:00
2020-05-17 17:38:13.997000+00:00
2020-05-17 17:38:13.997000+00:00
tensorflow|machine-learning|keras|attention-model|sequence-to-sequence
['https://arxiv.org/pdf/1409.0473.pdf', 'https://towardsdatascience.com/intuitive-understanding-of-attention-mechanism-in-deep-learning-6c9482aecf4f']
2
54,364,498
<p>In deep learning, <code>1x1</code> and <code>3x3</code> convolutions are used for different purposes. <code>3x3</code> corresponds to a conventional convolution that applies some filters to the input data, whereas <code>1x1</code> is something like a <a href="https://arxiv.org/abs/1312.4400" rel="nofollow noreferrer">Network in Network</a>. Conceptually it is close to an MLP (with no hidden layer) applied to the channel values of every pixel. It is often used to shrink or expand the number of feature-map channels (dimensionality reduction or expansion) and thus might serve an auxiliary role for the following <code>3x3</code> convolution: <a href="https://stats.stackexchange.com/questions/194142/what-does-1x1-convolution-mean-in-a-neural-network">What does 1x1 convolution mean in a neural network?</a> </p> <p>Another well-known use of <code>1x1</code> convolutions is mixing together information from separate groups of convolutions or their extreme version, the depth-wise separable convolutions: <a href="https://arxiv.org/abs/1704.04861" rel="nofollow noreferrer">MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications</a></p> <p>To summarize, <code>1x1</code> convolutions often have a different meaning than <code>3x3</code> ones. In the original model they are probably used for a purpose, and switching to <code>3x3</code> will change the concept. That does not necessarily mean that the accuracy will be worse; indeed, it is likely to improve or remain the same.</p> <p>It will, however, definitely result in longer computation time. But if you can afford it, go ahead and try.</p>
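<p>For concreteness, here is what the 1x1-before-3x3 pattern looks like in Keras (just an illustrative sketch; the shapes and filter counts are arbitrary):</p> <pre><code># A 1x1 convolution used as a channel "bottleneck" before a 3x3 convolution,
# in the spirit of Network-in-Network / Inception-style blocks.
from tensorflow.keras import layers, Input, Model

inp = Input(shape=(64, 64, 256))                        # feature map with 256 channels
x = layers.Conv2D(64, (1, 1), activation='relu')(inp)   # 1x1: mixes/shrinks channels, keeps spatial size
x = layers.Conv2D(64, (3, 3), padding='same', activation='relu')(x)  # 3x3: spatial filtering
model = Model(inp, x)
model.summary()  # compare parameter counts with and without the 1x1 layer
</code></pre>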
2019-01-25 11:32:34.520000+00:00
2019-01-25 11:32:34.520000+00:00
null
null
54,359,135
<p>In semantic segmentation, <code>1x1</code> convolutions are often used to replace fully connected layers in order to maintain spatial information. Should I use a larger kernel size, for example <code>3x3</code>, instead of <code>1x1</code>? A <code>3x3</code> kernel sees a larger context when making the final decision. Thanks</p>
2019-01-25 04:56:37.500000+00:00
2019-01-25 11:32:34.520000+00:00
null
machine-learning|deep-learning|computer-vision
['https://arxiv.org/abs/1312.4400', 'https://stats.stackexchange.com/questions/194142/what-does-1x1-convolution-mean-in-a-neural-network', 'https://arxiv.org/abs/1704.04861']
3
40,116,502
<p>The literature on data augmentation is very large and very dependent on your kind of application. The first things that come to my mind are the galaxy competition's rotations and Jasper Snoek's data augmentation. </p> <p>But really, all papers have their own tricks to get good scores on specific datasets, for example stretching the image to a specific size before cropping it, and doing this in a very specific order.</p> <p>More practically, to train models on the likes of CIFAR or ImageNet, use random crops, random contrast and luminosity perturbations, in addition to the obvious flips and noise addition. </p> <p>Look at the CIFAR-10 tutorial on the TF website; it is a good start. Plus TF now has <code>random_crop_and_resize()</code> which is quite useful.</p> <p><strong>EDIT:</strong> The papers I am referencing <a href="https://arxiv.org/pdf/1503.07077v1.pdf" rel="nofollow">here</a> and <a href="https://arxiv.org/pdf/1502.05700v2.pdf" rel="nofollow">there</a>.</p>
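<p>To make the "random crops, random contrast and luminosity perturbations" part concrete, here is a rough per-image sketch using <code>tf.image</code> (exact function names and locations vary between TensorFlow versions, so treat this as an outline rather than the tutorial's exact code):</p> <pre><code>import tensorflow as tf

def augment(image):
    # image: a [32, 32, 3] CIFAR-style tensor
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.2)      # luminosity perturbation
    image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
    image = tf.image.random_crop(image, size=[24, 24, 3])         # random 24x24 crop
    return image
</code></pre>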
2016-10-18 19:28:14.737000+00:00
2016-10-18 19:28:14.737000+00:00
null
null
40,111,970
<p>To prepare large data sets for training deep-learning-based image classification models, we usually have to rely on image augmentation methods. I would like to know what the usual image augmentation algorithms are, and whether there are any considerations when choosing them. </p>
2016-10-18 15:13:37.817000+00:00
2016-10-18 19:28:14.737000+00:00
null
image-processing|tensorflow|deep-learning
['https://arxiv.org/pdf/1503.07077v1.pdf', 'https://arxiv.org/pdf/1502.05700v2.pdf']
2
47,542,460
<p>Quantum computers, much like classical ones, can represent 2^n different values with n (qu)bits. Shor's algorithm, in its "period-finding subroutine", uses two registers, possibly as big as <code>2n + 1</code> qubits, where n is the number of bits needed to represent the number to factor. In total you need <code>4n + 2</code> qubits to run Shor's algorithm.</p> <p>There was some work done on <a href="https://arxiv.org/abs/quant-ph/0205095" rel="nofollow noreferrer">lowering the qubit requirements</a>. That implementation works with just <code>2n + 3</code> qubits for a general number.</p> <p>To answer your question, you would need 4 classical (or quantum) bits to represent 15, and thus about 18 qubits (4n + 2 with n = 4) with the basic algorithm (you would possibly not use some). There are of course some workarounds for this, and there were <a href="http://cryptome.org/shor-nature.pdf" rel="nofollow noreferrer">successful experimental implementations</a> that used as few as 7 qubits by exploiting special properties of 15 known beforehand, but that cannot be done for a general number factored with Shor's algorithm.</p> <p>When you simulate a quantum computer on a classical one, you usually want to represent its state in a state space where each basis state corresponds to one possible output. This needs 2^n-dimensional vectors of complex numbers; the actual number of classical bits depends on your representation of vectors and complex numbers.</p>
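<p>A quick back-of-the-envelope calculation of those numbers (a sketch; it assumes a plain state-vector simulation with one complex amplitude per basis state, stored as two 64-bit floats):</p> <pre><code>N = 15
n = N.bit_length()        # bits needed to represent 15 -&gt; 4

basic = 4 * n + 2         # basic circuit: 18 qubits
reduced = 2 * n + 3       # reduced circuit: 11 qubits

# Classical state-vector simulation: 2**q complex amplitudes, 16 bytes each.
for q in (basic, reduced):
    amplitudes = 2 ** q
    print(q, "qubits:", amplitudes, "amplitudes,",
          amplitudes * 16 / 2**20, "MiB")
</code></pre>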
2017-11-28 23:08:42.720000+00:00
2017-11-28 23:08:42.720000+00:00
null
null
41,397,576
<p>According to <a href="https://stackoverflow.com/questions/4595156/software-simulation-of-a-quantum-computer">this</a>, I understand we need 4^n bits to simulate an n-qubit quantum computer. I was wondering if it's possible to simulate Shor's algorithm on a classical computer to factor 15? How many qubits are required to factor 15 using Shor's algorithm?</p>
2016-12-30 14:07:29.647000+00:00
2018-12-17 10:34:53.463000+00:00
2017-05-23 11:45:49.133000+00:00
simulation|quantum-computing
['https://arxiv.org/abs/quant-ph/0205095', 'http://cryptome.org/shor-nature.pdf']
2
23,774,175
<p>I would say yes; I can't think of a situation where it is not possible. WHERE in itself can be replaced with a join:</p> <pre><code>select ... from A where x=10
&lt;=&gt;
select ... from A join ( values (10) ) B (x) on A.x = B.x
</code></pre> <p>Perhaps off topic, but for transformations in general, Vadim Tropashko (<a href="http://arxiv.org/abs/cs/0501053" rel="nofollow">http://arxiv.org/abs/cs/0501053</a>) shows that it is possible to reduce the set of classic relational algebra operators to two binary operations: natural join and generalized union.</p>
2014-05-21 04:49:02.363000+00:00
2014-05-21 04:49:02.363000+00:00
null
null
23,774,024
<p>I'm wondering if its always possible in SQL to factor a where condition through a join to a subquery. For instance, if I have</p> <pre><code>select ... from a join b on ... where p and q </code></pre> <p>and <code>p</code> pertains only to <code>a</code>, <code>q</code> to <code>b</code>, then can I always rewrite as?</p> <pre><code>select ... from (select ... from a where p) as a join (select ... from b where q) as b on ... </code></pre> <p>Thanks!</p> <p>[Notes: 1) I'm using postgres in case this affects the answer. 2) Readability is not an important consideration, as these are automatically generated queries. <strong>Edit</strong>: 3) I'm not only interested in inner join but other joins as well.]</p>
2014-05-21 04:36:09.863000+00:00
2014-05-21 20:06:35.870000+00:00
2014-05-21 06:07:00.690000+00:00
sql|postgresql
['http://arxiv.org/abs/cs/0501053']
1
42,789,695
<p><strong><code>Intel SGX</code> is designed for securing data, not for loading an entire application</strong>. You can perform secure computations on your data inside <code>SGX</code> enclaves by sending temporary buffers from the user-space program (<code>app.cpp</code>) to your <code>SGX</code> enclave (<code>Enclave.cpp</code>). But why?</p> <ol> <li>The enclave size is small, so you can't load all your data inside it at the same time.</li> <li>Inside enclaves, you're limited to a set of programming primitives like if-then-else, for loops, etc. Also, you can't make syscalls like <code>open</code> for opening a file.</li> </ol> <p>Thus, if your application is large, or contains syscalls, or even some standard C library functions forbidden by the <code>SGX</code> implementation, it is impossible to import it entirely into an enclave. But if your application only performs primitive operations without needing any special syscall or function call, you can freely port it into an enclave. Still, you can't directly load it into an enclave; you have to change your implementation to expose it as a <strong>trusted enclave call</strong> inside <code>Enclave.cpp</code>.</p> <p>As an example, I've implemented a set of cryptographic operations, e.g. SHA-2, HMAC SHA-2, AES, etc., inside an enclave. I send/receive temporary buffers of plaintext/ciphertext data to/from the enclave, perform the encryption/decryption operations inside the enclave, and store the results of the computation, such as a hash digest or ciphertexts, in user space. In this way, I ensure that no one can tamper with the results of the operations, because they run inside the enclave, which is secured by CPU instructions.</p> <p>You can read more about this example <a href="https://arxiv.org/abs/1705.04706" rel="nofollow noreferrer">here</a> and check the implementation <a href="https://github.com/hmofrad/CryptoEnclave" rel="nofollow noreferrer">here</a>.</p>
2017-03-14 15:10:39.397000+00:00
2017-09-28 02:05:18+00:00
2017-09-28 02:05:18+00:00
null
42,786,731
<p>Is there a way to load an existing application into an <code>Intel SGX</code> enclave directly?</p>
2017-03-14 13:00:26.947000+00:00
2020-07-16 07:04:56.753000+00:00
2017-03-15 04:45:36.707000+00:00
sgx|enclave
['https://arxiv.org/abs/1705.04706', 'https://github.com/hmofrad/CryptoEnclave']
2
40,479,317
<p>This is completely possible and actually quite common. You just select the output of a layer of the neural network and use that as a feature vector to train a SVM. Generally one normalizes the feature vectors as well.</p> <p>Features learned by (Convolutional) Neural Networks are powerful enough that they generalize to different kinds of objects and even completely different images. For examples see the paper <a href="https://arxiv.org/abs/1403.6382" rel="noreferrer">CNN Features off-the-shelf: an Astounding Baseline for Recognition</a>.</p> <p>About implementation, you just have to train a neural network, then select one of the layers (usually the ones right before the fully connected layers or the first fully connected one), run the neural network on your dataset, store all the feature vectors, then train an SVM with a different library (e.g sklearn).</p>
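<p>A minimal sketch of that pipeline (illustrative only: the choice of VGG16, the pooled layer, and the <code>X_images</code>/<code>labels</code> placeholders are assumptions; substitute your own trained network and data):</p> <pre><code># Use a pretrained CNN as a fixed feature extractor, then train an SVM on top.
from tensorflow.keras.applications import VGG16
from sklearn.preprocessing import normalize
from sklearn.svm import LinearSVC

# X_images: preprocessed images, shape (num_samples, 224, 224, 3); labels: class labels.
extractor = VGG16(weights='imagenet', include_top=False, pooling='avg')
features = extractor.predict(X_images)   # one 512-dimensional feature vector per image
features = normalize(features)           # L2-normalize the feature vectors

svm = LinearSVC()
svm.fit(features, labels)
</code></pre>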
2016-11-08 05:01:52.893000+00:00
2016-11-08 05:01:52.893000+00:00
null
null
40,401,008
<p>Lately I was at a Data Science meetup in my city, where there was a talk about connecting neural networks with SVMs. Unfortunately the presenter had to leave right after the presentation, so I wasn't able to ask some questions.</p> <p>I was wondering how that is possible? He was talking about using neural networks for his classification, and later on he used an SVM classifier to improve his accuracy and precision by about 10%. </p> <p>I am using Keras for the neural networks and sklearn for the rest of my ML. </p>
2016-11-03 12:03:55.553000+00:00
2017-04-01 09:58:50.427000+00:00
null
python|machine-learning|scikit-learn|keras
['https://arxiv.org/abs/1403.6382']
1
16,653,712
<p>What do you want to do, exactly? Detect communities, or bridges between them? Those are two different problems. Once you have the communities, it's straightforward enough identifying the edges connecting nodes from two distinct communities. So, I guess you want to detect communities.</p> <p>There are actually thousands methods for this purpose, some of them implemented in Matlab, such as the one you cite, or the <a href="http://netwiki.amath.unc.edu/GenLouvain/GenLouvain" rel="nofollow">generalized Louvain algorithm</a> (also based on modularity optimization). However, most of them are rather available as C or C++ programs, such as <a href="http://www.tp.umu.se/~rosvall/code.html" rel="nofollow">InfoMap</a> (based on a data compression paradigm), <a href="http://www-rp.lip6.fr/~latapy/PP/walktrap.html" rel="nofollow">WalkTrap</a> (clustering using a random walk-based distance), <a href="http://micans.org/mcl/" rel="nofollow">Markov Cluster</a> (simulates some propagation mechanism), and the list goes on...</p> <p>Those tools formalize the notion of community structure more or less differently, potentially leading to different (estimated) community structures, when applied on the same network. And of course, different communities means different bridges, too. So the question is rather to know how to pick the appropriate method for your data. You seem to have <em>a priori</em> knowledge regarding the networks you are studying, so you should use that to make your choice (rather than the programming language). For instance, even if you don't state it explicitly, you seem to be looking for a hierarchical community structure: not all tools are able to detect this kind of structure. Similarly, if you think one node can belong to several communities at the same time, then you should consider looking for overlapping communities, for instance using <a href="http://www.cfinder.org/" rel="nofollow">CFinder</a> (based on clique percolation).</p> <p>I'd advise you to have a look at this excellent review of community detection, you might find some interesting information allowing you to pick a method: <a href="http://arxiv.org/abs/0906.0612" rel="nofollow">Community Detection in Graphs</a>. Also, from a programming point of view, I'd advise you to play with the <a href="http://igraph.sourceforge.net/" rel="nofollow">igraph library</a> (available for C, R and Python): it contains several standard community detection tools. You can try them on your data and see what you get.</p>
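<p>If you end up trying the igraph route from Python, a minimal sketch looks like this (the random graph is only a placeholder for your adjacency-matrix data, and the method names are those of recent python-igraph releases):</p> <pre><code>import igraph as ig

g = ig.Graph.Erdos_Renyi(n=200, p=0.05)   # placeholder; build your graph from your adjacency matrix

louvain = g.community_multilevel()                  # modularity optimization (Louvain)
infomap = g.community_infomap()                     # data-compression / random-walk based
walktrap = g.community_walktrap().as_clustering()   # random-walk distance, hierarchical

for name, comm in [("louvain", louvain), ("infomap", infomap), ("walktrap", walktrap)]:
    print(name, "found", len(comm), "communities, modularity =", comm.modularity)
</code></pre>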
2013-05-20 16:11:07.833000+00:00
2015-01-03 11:11:13.390000+00:00
2015-01-03 11:11:13.390000+00:00
null
15,475,116
<p>I have networks of roughly 10K to 100K nodes which are all connected. These nodes are typically grouped into clusters of communities which are strongly connected with many edges between them and there are hubs etc. Between the communities there are nodes with a few edges <em>bridging</em> / <em>connecting</em> the communities together. These datasets are in adjacency matrices</p> <p>I have tried spectral clustering (<a href="http://www.cc.gatech.edu/~mihail/D.8802readings/kdd3a.pdf" rel="nofollow">Ding et al 2001</a>) but it is really slow on large data sets and seems to stop working when there is a lot of ambiguity (bridges which are not the only bridge route to another cluster- other communities can act as alternative proxy routes). </p> <p>I have tried some of the methods from <a href="http://www.elemartelot.org/index.php/programming/cd-code" rel="nofollow">martelot</a> such as the Newman algorithm for modularity optimisation but have not incorporated the stability optimisation functions in that effort (could that be crucial?). On synthetic data sets where the clusters are created by random graphs (ER graphs) the methods work but on real ones where there is nested hierarchy the results are scattered. Using a standalone visualization application/tool the bridges are evident though.</p> <p>What methods would you recommend/advise to try? I am using MATLAB. </p>
2013-03-18 10:55:03.287000+00:00
2015-01-03 11:11:13.390000+00:00
2013-03-18 11:01:14.387000+00:00
matlab|search|math|graph|social-networking
['http://netwiki.amath.unc.edu/GenLouvain/GenLouvain', 'http://www.tp.umu.se/~rosvall/code.html', 'http://www-rp.lip6.fr/~latapy/PP/walktrap.html', 'http://micans.org/mcl/', 'http://www.cfinder.org/', 'http://arxiv.org/abs/0906.0612', 'http://igraph.sourceforge.net/']
7
37,545,055
<p>You might find the paper by Yoshua Bengio on <a href="http://arxiv.org/pdf/1206.5533.pdf" rel="nofollow">Practical Recommendations for Gradient-Based Training of Deep Architectures</a> helpful to learn more about hyperparameters and their settings.</p> <p>If you're asking specifically for settings that have more guaranteed success, I advise you to read up on Batch Normalization. I find that it decreases the failure rate for bad picks of the learning rate and weight initialization.</p> <p>Some people also discourage the use of non-linearities like sigmoid() and tanh(), as they suffer from the vanishing gradient problem.</p>
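<p>As a concrete illustration, a batch-normalized feedforward network in Keras might look like this (a sketch, not tuned for the FizzBuzz setup; the 12-bit input and 4 output classes simply mirror the problem described in the question):</p> <pre><code># A small fully-connected network with Batch Normalization and ReLU, which tends
# to be less sensitive to the choice of learning rate and weight initialization.
from tensorflow.keras import Sequential, layers

model = Sequential([
    layers.Dense(64, input_shape=(12,)),    # 12-bit binary encoding of the number
    layers.BatchNormalization(),
    layers.Activation('relu'),
    layers.Dense(64),
    layers.BatchNormalization(),
    layers.Activation('relu'),
    layers.Dense(4, activation='softmax'),  # none / divisible by 3 / by 5 / by 15
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
</code></pre>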
2016-05-31 11:54:12.780000+00:00
2016-05-31 11:54:12.780000+00:00
null
null
37,536,833
<p>I've been using tensorflow on and off for various things that I guess are considered rather easy these days. Captcha cracking, basic OCR, things I remember from my AI education at university. They are problems that are reasonably large and therefore don't really lend themselves to experimenting efficiently with different NN architectures.</p> <p>As you probably know, Joel Grus came out with FizzBuzz in tensorflow. TLDR: learning from a binary representation of a number (ie. 12 bits encoding the number) into 4 bits (none_of_the_others, divisible by 3, divisible by 5, divisible by 15). For this toy problem, you can quickly compare different networks.</p> <p>So I've been trying a simple feedforward network and wrote a program to compare various architectures. Things like a 2-hidden-layer feedforward network, then 3 layers, different activation functions, ... Most architectures, well, suck. They get somewhere near 50-60 success rate and remain there, independent of how much training you do.</p> <p>A few perform really well. For instance, a sigmoid-activated double hidden layer with 23 neurons each works really well (89-90% correct after 2000 training epochs). Unfortunately anything close to it is rather disastrously bad. Take one neuron out of the second or first layer and it drops to 30% correct. Same for taking it out of the first layer ... Single hidden layer, 20 neurons tanh activated does pretty well as well. But most have a little over half this performance.</p> <p>Now given that for real problems I can't realistically do these sorts of studies of different architectures, are there ways to get good architectures guaranteed to work ?</p>
2016-05-31 04:27:36.857000+00:00
2016-05-31 11:54:12.780000+00:00
null
tensorflow|data-science
['http://arxiv.org/pdf/1206.5533.pdf']
1
45,536,083
<p>The word2vec algorithm itself is what incrementally learns the real-valued vectors, with different values in each dimension. </p> <p>In contrast to the one-hot encoding, these vectors are often called "dense embeddings". They're "dense" because, unlike the one-hot encoding, which is "sparse" with many dimensions and mostly zero values, they have fewer dimensions and (usually) no zero values. They're an "embedding" because they "embed" a discrete set of words into another, continuous coordinate system.</p> <p>You'd want to read the <a href="https://arxiv.org/abs/1301.3781" rel="nofollow noreferrer">original word2vec paper</a> for a full formal description of how the dense embeddings are made. </p> <p>But the gist is that the dense vectors start totally random, so at first the algorithm's internal neural network is useless for predicting neighboring words. But each (context)->(target) word training example from a text corpus is tried against the network, and each time the difference from the desired prediction is used to apply a tiny nudge, towards a better prediction, to both the word-vector and the internal-network-weight values. </p> <p>Repeated many times, initially with larger nudges (a higher learning rate) and then with ever-smaller nudges, the dense vectors rearrange their coordinates from their initial randomness into a useful relative arrangement: one that's about as good as possible for predicting the training text, given the limits of the model itself. (That is, any further nudge that improves predictions on some examples worsens them on others, so you might as well consider training done.)</p> <p>You then read the resulting dense real-valued embedding vectors out of the model and use them for purposes other than just nearby-word prediction. </p>
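<p>If you want to see these dense vectors being learned in practice, here is a minimal gensim sketch (the two-sentence corpus is only a toy, and parameter names differ across gensim versions: older releases use <code>size</code>/<code>iter</code> instead of <code>vector_size</code>/<code>epochs</code>):</p> <pre><code>from gensim.models import Word2Vec

sentences = [["the", "quick", "brown", "fox"],
             ["the", "lazy", "dog"]]          # toy corpus; use your own tokenized text

model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=100)

vec = model.wv["fox"]                  # a dense 50-dimensional vector, not a one-hot vector
print(vec.shape)
print(model.wv.most_similar("fox"))    # nearest words in the learned vector space
</code></pre>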
2017-08-06 20:00:08.953000+00:00
2017-08-06 20:00:08.953000+00:00
null
null
45,531,476
<p>In word2vec, I've learned that both CBOW and Skip-gram start from a one-hot encoding to create a vector (correct me if I'm wrong). I wonder how a one-hot encoded value is calculated or converted into a real-valued vector, for example (source: <a href="http://blog.districtdatalabs.com/nlp-research-lab-part-1-distributed-representations" rel="nofollow noreferrer">DistrictDataLab's Blog about Distributed Representations</a>) from this: <img src="https://s1.postimg.org/naavbr5lr/Lev_Konstantinovskiy_Next_generation_of_word_embeddings_in_Gensim_2.jpg" alt="One-Hot Encoding&#39;s example"> into: <img src="https://s1.postimg.org/734kvoylb/Lev_Konstantinovskiy_Next_generation_of_word_embeddings_in_Gensim_2.jpg" alt="One-Hot Encoding&#39;s example"> Please help; I've been struggling to find this information.</p>
2017-08-06 11:18:31.030000+00:00
2017-08-06 20:00:08.953000+00:00
2017-08-06 15:02:15.010000+00:00
nlp|deep-learning|word2vec|word-embedding
['https://arxiv.org/abs/1301.3781']
1
42,451,260
<p>The <a href="https://github.com/dganguli/robust-pca" rel="noreferrer"><code>robust-pca</code> code</a> factors the data matrix <code>D</code> into two matrices, <code>L</code> and <code>S</code> which are &quot;low-rank&quot; and &quot;sparse&quot; matrices (see <a href="https://arxiv.org/pdf/0912.3599.pdf" rel="noreferrer">the paper</a> for details). <code>L</code> is what's mostly constant between the various observations, while <code>S</code> is what varies. Figures 2 and 3 in <a href="https://arxiv.org/pdf/0912.3599.pdf" rel="noreferrer">the paper</a> give a really nice example from a couple of security cameras, picking out the static background (<code>L</code>) and variability such as passing people (<code>S</code>).</p> <p>If you just want the eigenvectors, treat the <code>S</code> as junk (the &quot;large outliers&quot; you're wanting to clip out) and do an eigenanalysis on the <code>L</code> matrix.</p> <p>Here's an example using the <a href="https://github.com/dganguli/robust-pca" rel="noreferrer"><code>robust-pca</code> code</a>:</p> <pre><code> L, S = RPCA(data).fit() rcomp, revals, revecs = pca(L) print(&quot;Normalised robust eigenvalues: %s&quot; % (revals/np.sum(revals),)) </code></pre> <p>Here, the <code>pca</code> function is:</p> <pre><code>def pca(data, numComponents=None): &quot;&quot;&quot;Principal Components Analysis From: http://stackoverflow.com/a/13224592/834250 Parameters ---------- data : `numpy.ndarray` numpy array of data to analyse numComponents : `int` number of principal components to use Returns ------- comps : `numpy.ndarray` Principal components evals : `numpy.ndarray` Eigenvalues evecs : `numpy.ndarray` Eigenvectors &quot;&quot;&quot; m, n = data.shape data -= data.mean(axis=0) R = np.cov(data, rowvar=False) # use 'eigh' rather than 'eig' since R is symmetric, # the performance gain is substantial evals, evecs = np.linalg.eigh(R) idx = np.argsort(evals)[::-1] evecs = evecs[:,idx] evals = evals[idx] if numComponents is not None: evecs = evecs[:, :numComponents] # carry out the transformation on the data using eigenvectors # and return the re-scaled data, eigenvalues, and eigenvectors return np.dot(evecs.T, data.T).T, evals, evecs </code></pre>
2017-02-25 02:22:16.140000+00:00
2020-11-24 19:16:08.190000+00:00
2020-11-24 19:16:08.190000+00:00
null
40,721,260
<p>I am using PCA to reduce the dimensionality of an N-dimensional dataset, but I want to build in robustness to large outliers, so I've been looking into Robust PCA codes. </p> <p>For traditional PCA, I'm using python's sklearn.decomposition.PCA, which nicely returns the principal components as vectors, onto which I can then project my data (to be clear, I've also coded my own versions using SVD so I know how the method works). I found a few pre-coded RPCA python codes out there (like <a href="https://github.com/dganguli/robust-pca" rel="noreferrer">https://github.com/dganguli/robust-pca</a> and <a href="https://github.com/jkarnows/rpcaADMM" rel="noreferrer">https://github.com/jkarnows/rpcaADMM</a>).</p> <p>The 1st code is based on the Candes et al. (2009) method, and returns low-rank L and sparse S matrices for a dataset D. The 2nd code uses the ADMM method of matrix decomposition (Parikh, N., &amp; Boyd, S. 2013) and returns X_1, X_2, X_3 matrices. I must admit, I'm having a very hard time figuring out how to connect these to the principal axes that are returned by a standard PCA algorithm. Can anyone provide any guidance? </p> <p>Specifically, in one dataset X, I have a cloud of N 3-D points. I run it through PCA:</p> <pre><code>pca = sklearn.decomposition.PCA(n_components=3)
pca.fit(X)
comps = pca.components_
</code></pre> <p>and these 3 components are 3-D vectors that define the new basis onto which I project all my points. With Robust PCA, I get matrices L+S=X. Does one then run pca.fit(L)? I would have thought that RPCA would have given me back the eigenvectors, but with internal steps to throw out outliers as part of building the covariance matrix or performing SVD. </p> <p>Maybe what I think of as "Robust PCA" isn't how other people are using/coding it?</p>
2016-11-21 13:23:56.647000+00:00
2020-11-24 19:16:08.190000+00:00
2016-11-21 16:16:24.637000+00:00
python|pca
['https://github.com/dganguli/robust-pca', 'https://arxiv.org/pdf/0912.3599.pdf', 'https://arxiv.org/pdf/0912.3599.pdf', 'https://github.com/dganguli/robust-pca']
4
40,100,655
<p>I will assume the question is as follows: We are given a number specified as a sequence of decimal digits that possibly includes a decimal fraction, and possibly makes use of scientific notation. How do we correctly convert this number into one of the binary floating-point formats specified by the <a href="https://en.wikipedia.org/wiki/IEEE_floating_point#IEEE_754-2008" rel="nofollow">IEEE 754</a> floating-point standard, i.e. <code>binary16</code> (half precision), <code>binary32</code> (single precision), <code>binary64</code> (double precision), or <code>binary128</code> (quadruple precision)?</p> <p>As you noted, most decimal numbers cannot be represented exactly in a binary floating-point format. That means we need to choose which of the IEEE-754 rounding modes should be used to determine the final result: round towards positive infinity ("up"), round towards negative infinity ("down"), round towards zero (truncate), or round to nearest-or-even ("nearest"). Decimal-to-binary conversion typically uses the last mode listed, round to nearest-or-even, as this minimizes overall error in the conversion.</p> <p>Conceptually, our task is simple. Carry out the conversion process until we have generated enough bits to make a correct rounding decision. Clearly, we will often need more bits than provided by the target format. However, we cannot tell a priori exactly how many bits we will need, as some hard-to-round cases will generate results very close to a tie case. The take-home message is that some parts of our algorithm will require the use of some sort of extended-precision (or multi-precision) arithmetic, and we need to develop a criterion for determining when we have generated enough bits for correct rounding.</p> <p>The fundamental algorithms for correct conversions were developed over a couple of decades in the past century, and are described in the following publications:</p> <p>David W. Matula, "In-and-out conversions". <em>Communications of the ACM</em>, Vol. 11, No. 1 (Jan. 1968), pp. 47-50<br></p> <p>David W. Matula, "A Formalization of Floating-Point Numeric Base Conversion". <em>IEEE Transactions on Computers</em>, Vol. 10, No. 8 (Aug. 1970), pp. 681-692 (<a href="http://www.acsel-lab.com/arithmetic/arith1/papers/ARITH1_Matula.pdf" rel="nofollow">online</a>)</p> <p>William D. Clinger, "How to Read Floating Point Numbers Accurately". <em>SIGPLAN Notices</em>, Vol. 25, No. 6 (June 1990), pp. 92-101 (<a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.45.4152&amp;rep=rep1&amp;type=pdf" rel="nofollow">online</a>)</p> <p>David M. Gay, "Correctly rounded binary-decimal and decimal-binary conversions". Technical Report 90-10, AT&amp;T Bell Laboratories, November 1990. (<a href="http://ampl.com/REFS/rounding.pdf" rel="nofollow">online</a>)</p> <p>A fresh look at this research area is provided by the following publications:</p> <p>Michel Hack, "On Intermediate Precision Required for Correctly-Rounding Decimal-to-Binary Floating-Point Conversion." In <em>Proceedings of Real Numbers and Computers (RNC'6)</em>, Nov. 2004, pp. 113-133 (<a href="https://www.researchgate.net/profile/Jean-Michel_Muller/publication/253481041_A_proven_correctly_rounded_logarithm_in_double-precision/links/54db2cc50cf2ba88a68f5354.pdf#page=114" rel="nofollow">online</a>)</p> <p>Aubrey Jaffer, "Easy Accurate Reading and Writing of Floating-Point Numbers". arXiv:1310.8121, draft v6 (Jan. 2015), (<a href="https://arxiv.org/abs/1310.8121" rel="nofollow">online</a>)</p> <p>Although the fundamental algorithms have been around for twenty-five years, they are of considerable complexity, and the "devil is in the details". <em>Correct</em> implementations of decimal-to-binary conversion continue to prove elusive. Over the past 5 years, Rick Regan's blog <a href="http://www.exploringbinary.com/" rel="nofollow">"Exploring Binary"</a> has chronicled a number of defects in the decimal-to-binary conversion functionality of widely used software, such as <a href="http://www.exploringbinary.com/visual-c-plus-plus-strtod-still-broken/" rel="nofollow">Microsoft Visual C/C++</a>, <a href="http://www.exploringbinary.com/glibc-strtod-incorrectly-converts-2-to-the-negative-1075/" rel="nofollow">glibc</a>, and <a href="http://www.exploringbinary.com/a-better-fix-for-the-php-2-2250738585072011e-308-bug/" rel="nofollow">PHP</a>, where the last item would cause an infinite loop that might be exploited for denial-of-service attacks.</p> <p>A paper by Vern Paxson and William Kahan addresses the issue of hard-to-round cases in decimal-to-binary conversion, and gives some examples that demonstrate how many additional bits beyond the target precision may be required for correct rounding:</p> <p>V. Paxson and W. Kahan, "A Program for Testing IEEE Decimal–Binary Conversion". Manuscript, May 1991 (<a href="http://www.icir.org/vern/papers/testbase-report.pdf" rel="nofollow">online</a>)</p> <p>Additional hard-to-round cases for IEEE-754 <code>binary64</code> were listed in a 1996 <a href="https://groups.google.com/forum/#!original/comp.arch.arithmetic/uDGiskofu4I/stg9JR8vXE0J" rel="nofollow">posting</a> to the newsgroup <code>comp.arch.arithmetic</code> by Fred Tydeman.</p> <p>The following paper describes a test framework for testing conversions; however, the files containing the test vectors were no longer accessible online the last time I checked:</p> <p>Brigitte Verdonk, Annie Cuyt, and Dennis Verschaeren. "A precision- and range-independent tool for testing floating-point arithmetic II: conversions." <em>ACM Transactions on Mathematical Software</em>, Vol. 27, No. 1 (Mar. 2001), pp. 119-140. (<a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.35.7408&amp;rep=rep1&amp;type=pdf" rel="nofollow">draft online</a>)</p>
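<p>As a small, practical illustration of what a correctly rounded decimal-to-<code>binary64</code> conversion produces for the number in the question, you can let Python's float parser (which is correctly rounding in modern CPython) do the work and then inspect the result exactly, using only the standard library:</p> <pre><code>import struct
from fractions import Fraction

x = float("98765.4321")   # decimal-to-binary64, round-to-nearest-even

print(x.hex())                        # exact significand and exponent, in hex
print(struct.pack("&gt;d", x).hex())     # the raw 64-bit IEEE-754 encoding (big-endian)
print(Fraction(x))                    # the exact rational value actually stored
print(Fraction(x) - Fraction("98765.4321"))   # the (tiny) rounding error
</code></pre>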
2016-10-18 06:02:42.153000+00:00
2016-10-19 02:00:47.743000+00:00
2016-10-19 02:00:47.743000+00:00
null
40,092,773
<p>Everything I can find on this says to simply multiply by 2 until the decimal resolves to zero, but this only works if the last decimal is 5. </p> <p>In my particular case the number to convert is 98765.4321, how would I convert this (or any other decimal that doesn't resolve) to IEEE754?</p>
2016-10-17 17:57:21.163000+00:00
2016-10-19 02:00:47.743000+00:00
null
math|binary|floating-point|ieee-754
['https://en.wikipedia.org/wiki/IEEE_floating_point#IEEE_754-2008', 'http://www.acsel-lab.com/arithmetic/arith1/papers/ARITH1_Matula.pdf', 'http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.45.4152&rep=rep1&type=pdf', 'http://ampl.com/REFS/rounding.pdf', 'https://www.researchgate.net/profile/Jean-Michel_Muller/publication/253481041_A_proven_correctly_rounded_logarithm_in_double-precision/links/54db2cc50cf2ba88a68f5354.pdf#page=114', 'https://arxiv.org/abs/1310.8121', 'http://www.exploringbinary.com/', 'http://www.exploringbinary.com/visual-c-plus-plus-strtod-still-broken/', 'http://www.exploringbinary.com/glibc-strtod-incorrectly-converts-2-to-the-negative-1075/', 'http://www.exploringbinary.com/a-better-fix-for-the-php-2-2250738585072011e-308-bug/', 'http://www.icir.org/vern/papers/testbase-report.pdf', 'https://groups.google.com/forum/#!original/comp.arch.arithmetic/uDGiskofu4I/stg9JR8vXE0J', 'http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.35.7408&rep=rep1&type=pdf']
13
64,495,737
<p>When I've used BERT for text classification, my model has generally behaved as you describe. In part this is expected, because pre-trained models tend to require few epochs to fine-tune; in fact, if you check <a href="https://arxiv.org/pdf/1810.04805.pdf" rel="nofollow noreferrer">BERT's paper</a>, the recommended number of fine-tuning epochs is between 2 and 4.</p> <p>On the other hand, I've usually found the optimum at just 1 or 2 epochs, which coincides with your case as well. My guess is: there is a trade-off when fine-tuning pre-trained models between fitting your downstream task and forgetting the weights learned during pre-training. Depending on the data you have, the equilibrium point may happen sooner or later, and overfitting starts after that. But this paragraph is speculation based on my experience.</p>
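<p>In practice this usually means keeping the checkpoint with the best development loss rather than the last one. A minimal sketch of that idea (here <code>train_one_epoch()</code> and <code>evaluate()</code> are hypothetical helpers standing in for your own training and validation loops):</p> <pre><code># Fine-tune for a few epochs and keep the weights with the lowest dev loss.
import copy

best_dev_loss = float("inf")
best_state = None

for epoch in range(4):   # 2-4 epochs is usually enough for BERT fine-tuning
    train_one_epoch(model, train_dataloader)     # your training loop
    dev_loss = evaluate(model, dev_dataloader)   # your evaluation loop
    if dev_loss &lt; best_dev_loss:
        best_dev_loss = dev_loss
        best_state = copy.deepcopy(model.state_dict())

model.load_state_dict(best_state)   # roll back to the best epoch
</code></pre>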
2020-10-23 07:29:33.530000+00:00
2020-10-23 07:29:33.530000+00:00
null
null
61,566,646
<p>I designed a network for a text classification problem. To do this, I'm using huggingface transformet's BERT model with a linear layer above that for fine-tuning. My problem is that the loss on the training set is decreasing which is fine, but when it comes to do the evaluation after each epoch on the development set, the loss is increasing with epochs. I'm posting my code to investigate if there's something wrong with it.</p> <pre><code>for epoch in range(1, args.epochs + 1): total_train_loss = 0 trainer.set_train() for step, batch in enumerate(train_dataloader): loss = trainer.step(batch) total_train_loss += loss avg_train_loss = total_train_loss / len(train_dataloader) logger.info(('Training loss for epoch %d/%d: %4.2f') % (epoch, args.epochs, avg_train_loss)) print("\n-------------------------------") logger.info('Start validation ...') trainer.set_eval() y_hat = list() y = list() total_dev_loss = 0 for step, batch_val in enumerate(dev_dataloader): true_labels_ids, predicted_labels_ids, loss = trainer.validate(batch_val) total_dev_loss += loss y.extend(true_labels_ids) y_hat.extend(predicted_labels_ids) avg_dev_loss = total_dev_loss / len(dev_dataloader) print(("\n-Total dev loss: %4.2f on epoch %d/%d\n") % (avg_dev_loss, epoch, args.epochs)) print("Training terminated!") </code></pre> <p>Following is the trainer file, which I use for doing a forward pass on a given batch and then backpropagate accordingly.</p> <pre><code>class Trainer(object): def __init__(self, args, model, device, data_points, is_test=False, train_stats=None): self.args = args self.model = model self.device = device self.loss = nn.CrossEntropyLoss(reduction='none') if is_test: # Should load the model from checkpoint self.model.eval() self.model.load_state_dict(torch.load(args.saved_model)) logger.info('Loaded saved model from %s' % args.saved_model) else: self.model.train() self.optim = AdamW(model.parameters(), lr=2e-5, eps=1e-8) total_steps = data_points * self.args.epochs self.scheduler = get_linear_schedule_with_warmup(self.optim, num_warmup_steps=0, num_training_steps=total_steps) def step(self, batch): batch = tuple(t.to(self.device) for t in batch) batch_input_ids, batch_input_masks, batch_labels = batch self.model.zero_grad() outputs = self.model(batch_input_ids, attention_mask=batch_input_masks, labels=batch_labels) loss = self.loss(outputs, batch_labels) loss = loss.sum() (loss / loss.numel()).backward() torch.nn.utils.clip_grad_norm_(self.model.parameters(), 1.0) self.optim.step() self.scheduler.step() return loss def validate(self, batch): batch = tuple(t.to(self.device) for t in batch) batch_input_ids, batch_input_masks, batch_labels = batch with torch.no_grad(): model_output = self.model(batch_input_ids, attention_mask=batch_input_masks, labels=batch_labels) predicted_label_ids = self._predict(model_output) label_ids = batch_labels.to('cpu').numpy() loss = self.loss(model_output, batch_labels) loss = loss.sum() return label_ids, predicted_label_ids, loss def _predict(self, logits): return np.argmax(logits.to('cpu').numpy(), axis=1) </code></pre> <p>Finally, the following is my model (i.e., Classifier) class:</p> <pre><code>import torch.nn as nn from transformers import BertModel class Classifier(nn.Module): def __init__(self, args, is_eval=False): super(Classifier, self).__init__() self.bert_model = BertModel.from_pretrained( args.init_checkpoint, output_attentions=False, output_hidden_states=True, ) self.is_eval_mode = is_eval self.linear = nn.Linear(768, 2) # binary classification def 
switch_state(self): self.is_eval_mode = not self.is_eval_mode def forward(self, input_ids, attention_mask=None, labels=None): bert_outputs = self.bert_model(input_ids, token_type_ids=None, attention_mask=attention_mask) # Should give the logits to the linear layer model_output = self.linear(bert_outputs[1]) return model_output </code></pre> <p>For visualization, here is the loss throughout the epochs:</p> <p><a href="https://i.stack.imgur.com/nifDl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nifDl.png" alt="enter image description here"></a></p>
2020-05-02 21:39:18.283000+00:00
2020-10-23 07:29:33.530000+00:00
2020-05-02 21:46:51.023000+00:00
python|deep-learning|neural-network|pytorch|loss-function
['https://arxiv.org/pdf/1810.04805.pdf']
1
54,974,763
<p>People believe that the answer is "NO".</p> <p>Assume that your <code>k</code> is <code>2^s - 1</code> (so it's <code>111...111</code> in binary) and all numbers are at most <code>k</code>. Then</p> <pre><code>a or b = k &lt;=&gt; (~a) and (~b) = 0. </code></pre> <p>where <code>~</code> is a "bitwise not". E.g.</p> <pre><code>110 or 101 = 111 &lt;=&gt; 001 and 010 = 0 </code></pre> <p>This is the general Orthogonal Vectors Problem (OVP), and a popular conjecture states that it's not solvable faster than <code>O(n^2)</code> (there are some details I omit).</p> <p>See Conjecture 1 here: <a href="https://arxiv.org/pdf/1510.02824.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1510.02824.pdf</a>.</p>
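<p>To make the reduction concrete on the numbers from the question (k = 31 = 2^5 - 1, so the complement is taken within 5 bits), a quick check in Python:</p> <pre><code># Verify (a | b == k)  &lt;=&gt;  ((~a &amp; ~b) &amp; k == 0) for k = 31 and the example array.
k = 31
arr = [21, 10, 29, 8]

for i in range(len(arr)):
    for j in range(i + 1, len(arr)):
        a, b = arr[i], arr[j]
        lhs = (a | b) == k
        rhs = ((~a &amp; ~b) &amp; k) == 0   # the complements are "orthogonal" within k's bits
        assert lhs == rhs
        if lhs:
            print(a, b)              # prints the pairs (21, 10) and (10, 29)
</code></pre>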
2019-03-03 23:13:29.287000+00:00
2019-03-03 23:19:35.593000+00:00
2019-03-03 23:19:35.593000+00:00
null
54,974,356
<p>Is it possible to write a function that takes an array of <em>n</em> integers and an integer <em>k</em> and returns the number of pairs of array elements with BITWISE OR value equal to <em>k</em> in better than O(<em>n</em><sup>2</sup>) time?</p> <p>Example: If we have an array = [21, 10, 29, 8] and k = 31, then the function should return 2, since the valid pairs are (21, 10) and (10, 29).</p> <p><strong>* for clarity *</strong> 21 OR 10 = 31 , 21 OR 29 = 29 , 21 OR 8 = 29, 10 OR 29 = 31, 10 OR 8 = 10,29 OR 8 = 29, so answer is 2.</p> <p>**** k is a constant which is always 31 .****</p>
2019-03-03 22:21:56.563000+00:00
2019-03-05 23:26:08.747000+00:00
2019-03-03 23:59:38.627000+00:00
algorithm|bit-manipulation
['https://arxiv.org/pdf/1510.02824.pdf']
1
56,727,927
<p>To add to what has been stated, I recommend reading through <a href="https://arxiv.org/abs/1709.01922" rel="nofollow noreferrer">A Comparison of Audio Signal Preprocessing Methods for Deep Neural Networks on Music Tagging</a> by Keunwoo Choi, György Fazekas, Kyunghyun Cho, and Mark Sandler. </p> <p>For their data, they achieved nearly identical classification accuracy between simple STFTs and melspectrograms. So melspectrograms seem to be the clear winner for dimension reduction if you don't mind the preprocessing. The authors also found, as jonner mentions, that log-scaling (essentially converting amplitude to a db scale) improves accuracy. You can easily do this with Librosa (using your code) like this:</p> <pre><code>y,sr= librosa.core.load(r'C:\Users\Tej\Desktop\NoiseWork\NoiseOnly\song.wav') S = librosa.feature.melspectrogram(y=y, sr=sr) S_db = librosa.core.power_to_db(S) </code></pre> <p>As for normalization after db-scaling, that seems hit or miss depending on your data. From the paper above, the authors found nearly no difference using various normalization techniques for their data.</p> <p>One last thing that should be mentioned is a somewhat new method called Per-Channel Energy Normalization. I recommend reading <a href="http://www.justinsalamon.com/uploads/4/3/9/4/4394963/lostanlen_pcen_spl2018.pdf" rel="nofollow noreferrer">Per-Channel Energy Normalization: Why and How</a> by Vincent Lostanlen, Justin Salamon, Mark Cartwright, Brian McFee, Andrew Farnsworth, Steve Kelling, and Juan Pablo Bello. Unfortunately, there are some parameters that need adjusting depending on the data, but in many cases seems to do as well as or better than logmelspectrograms. You can implement it in Librosa like this:</p> <pre><code>y,sr= librosa.core.load(r'C:\Users\Tej\Desktop\NoiseWork\NoiseOnly\song.wav') S = librosa.feature.melspectrogram(y=y, sr=sr) S_pcen = librosa.pcen(S) </code></pre> <p>Although, like I mentioned, there are parameters within pcen that need adjusting! Here is <a href="https://librosa.github.io/librosa/generated/librosa.core.pcen.html" rel="nofollow noreferrer">Librosa's documentation on PCEN</a> to get you started if you are interested.</p>
2019-06-23 21:39:30.960000+00:00
2019-06-23 22:05:57.380000+00:00
2019-06-23 22:05:57.380000+00:00
null
55,513,652
<p>I am looking to understand various spectrograms for audio analysis. I want to convert an audio file into 10 second chunks, generate spectrograms for each and use a CNN model to train on top of those images to see if they are good or bad. </p> <p>I have looked at linear, log, mel, etc and read somewhere that mel based spectrogram is best to be used for this. But with no proper verifiable information. I have used the simple following code to generate mel spectrogram. </p> <pre><code>y,sr= librosa.core.load(r'C:\Users\Tej\Desktop\NoiseWork\NoiseOnly\song.wav') S = librosa.feature.melspectrogram(y=y, sr=sr) librosa.display.specshow(librosa.power_to_db(S, ref=np.max)) </code></pre> <p>My question is which spectrogram best represents features of an audio file for training with CNN? I have used linear but some audio files the linear spectrogram seems to be the same</p>
2019-04-04 10:28:26.930000+00:00
2019-06-23 22:05:57.380000+00:00
null
python-3.x|machine-learning|audio|spectrogram|librosa
['https://arxiv.org/abs/1709.01922', 'http://www.justinsalamon.com/uploads/4/3/9/4/4394963/lostanlen_pcen_spl2018.pdf', 'https://librosa.github.io/librosa/generated/librosa.core.pcen.html']
3
66,054,593
<p>Peter's answer is true but might lack a few details. Let me add on top of it.</p> <p>Autopadding = SAME means that: o = ceil(i/s), where o = output size, i = input size, s = stride.</p> <p>In addition, the generic output size formula is:</p> <pre><code>o = floor( (i + p - k) / s) + 1 </code></pre> <p>where the new terms are p (padding) and k, i.e., the effective kernel size (including dilation, or just the kernel size if dilation is disabled).</p> <p>If you develop that formula to solve for p, you get:</p> <pre><code>p_min = (o-1) s - i + k      # i.e., when the floor is removed from the previous equation
p_max = o s - i + k - 1      # i.e., when the numerator of the floor % s is s-1 </code></pre> <p>Any padding value p in the range [p_min, p_max] will satisfy the condition o = ceil(i/s), meaning that for a stride s there are s total solutions satisfying the formula.</p> <p>It is the norm to use p_min as padding, so you can ignore all other s-1 solutions.</p> <p>PS: This would be for 1D, but for nD, simply repeat these formulas independently for each dimension, i.e.,</p> <pre><code>p_min[dimension_index] = (o[dimension_index]-1) s[dimension_index] - i[dimension_index] + k[dimension_index] </code></pre> <p>For reference, these links are really useful:</p> <ul> <li><a href="https://arxiv.org/abs/1603.07285" rel="nofollow noreferrer">https://arxiv.org/abs/1603.07285</a></li> <li><a href="https://towardsdatascience.com/a-comprehensive-introduction-to-different-types-of-convolutions-in-deep-learning-669281e58215" rel="nofollow noreferrer">https://towardsdatascience.com/a-comprehensive-introduction-to-different-types-of-convolutions-in-deep-learning-669281e58215</a></li> <li><a href="https://mmuratarat.github.io/2019-01-17/implementing-padding-schemes-of-tensorflow-in-python" rel="nofollow noreferrer">https://mmuratarat.github.io/2019-01-17/implementing-padding-schemes-of-tensorflow-in-python</a></li> </ul>
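<p>A small Python sketch of the 1D case (mine, with arbitrary example sizes), checking that the minimal padding indeed reproduces o = ceil(i/s):</p> <pre><code>import math

def same_pad_1d(i, k, s):
    # output size that SAME padding targets
    o = math.ceil(i / s)
    # minimal total padding so that floor((i + p - k) / s) + 1 == o
    p_min = max((o - 1) * s - i + k, 0)
    return o, p_min

# illustrative input size / kernel / stride combinations
for i, k, s in [(224, 3, 2), (13, 5, 1), (13, 5, 2)]:
    o, p = same_pad_1d(i, k, s)
    assert (i + p - k) // s + 1 == o
    print(f"i={i} k={k} s={s}: output={o}, total padding={p}")
</code></pre>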
2021-02-04 22:10:42.510000+00:00
2021-02-04 22:52:52.193000+00:00
2021-02-04 22:52:52.193000+00:00
null
48,491,728
<p>My understanding of SAME padding in Tensorflow is that padding is added such that the output dimensions (for width and height) will be the same as the input dimensions. However, this understanding only really makes sense when stride=1, because if stride is >1 then output dimensions will almost certainly be lower. </p> <p>So I'm wondering what the algorithm is for calculating padding in this case. Is it simply that padding is added so that the filter is applied to every input value, rather than leaving some off on the right?</p>
2018-01-28 21:52:43.630000+00:00
2021-06-03 13:23:24.453000+00:00
null
tensorflow
['https://arxiv.org/abs/1603.07285', 'https://towardsdatascience.com/a-comprehensive-introduction-to-different-types-of-convolutions-in-deep-learning-669281e58215', 'https://mmuratarat.github.io/2019-01-17/implementing-padding-schemes-of-tensorflow-in-python']
3
41,811,517
<blockquote> <p>Does a machine learning algorithm copy the data it learns from?</p> </blockquote> <p>There are many different machine learning algorithms. If you are talking about <a href="https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm" rel="nofollow noreferrer">k nearest neighbor</a> (k-NN) then the answer is simply <strong>yes</strong>.</p> <p>However, k-NN is rarely used. Most (all?) other models are not that simple. Usually, a machine learning developer wants the training data to be compressed (a lot, lossily) by the model for several reasons: (1) the amount of training data is large (many GB), (2) generalization might be better if the training data is compressed, and (3) inference on new examples might take really long if the data is not compressed. (By "compress", I mean that the relevant information for the task is extracted and irrelevant data is removed. Not compression in the usual sense.)</p> <p>For models other than k-NN, the answer is more complicated. <strong>It depends</strong> on what you consider a "copy". For example, from artificial neural networks (especially the sub-type of <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network" rel="nofollow noreferrer">convolutional neural networks</a>, short: CNNs) the training data can partially be restored. Those models were state of the art for many (all?) computer vision tasks.</p> <p>I could not find papers which show that you can (partially) restore / extract training data from CNNs with a focus on possible privacy / copyright problems, but I'm ~70% certain I have read an abstract about this problem. I think I've also heard a talk where a researcher said this was a problem when building a detector for child pornography. However, I don't think that talk was recorded or that anything was published about this.</p> <p>Here are two papers which indicate that restoring training data from CNNs might be possible:</p> <ul> <li><a href="https://arxiv.org/abs/1611.03530" rel="nofollow noreferrer">Understanding deep learning requires rethinking generalization</a></li> <li><a href="https://arxiv.org/abs/1512.02017" rel="nofollow noreferrer">Visualizing Deep Convolutional Neural Networks Using Natural Pre-Images</a> and the <a href="https://arxiv.org/abs/1311.2901" rel="nofollow noreferrer">Zeiler &amp; Fergus paper</a></li> </ul>
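<p>As a minimal illustration of the k-NN point (my own sketch, not tied to any particular library), note that a nearest-neighbour "model" is essentially just a stored copy of its training data:</p> <pre><code>import numpy as np

class NearestNeighbor:
    """1-NN classifier: 'training' is literally keeping a copy of the data."""
    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)   # verbatim copy of every training example
        self.y = np.asarray(y)
        return self

    def predict(self, x):
        # distance from the query to every stored training example
        dists = np.linalg.norm(self.X - np.asarray(x, dtype=float), axis=1)
        return self.y[np.argmin(dists)]

model = NearestNeighbor().fit([[0, 0], [10, 10]], ["cat", "dog"])
print(model.predict([1, 2]))   # predicts "cat"
print(model.X)                 # the training data is still fully recoverable
</code></pre>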
2017-01-23 16:52:56.410000+00:00
2017-01-23 17:20:49.373000+00:00
2017-01-23 17:20:49.373000+00:00
null
41,797,569
<p>I am not a programmer, rather a law student, but I am currently researching for a project involving artificial intelligence and copyright law. I am currently looking at whether the learning process of a machine learning algorithm may be copyright infringement if a protected work is used by the algorithm. However, this relies on whether or not the algorithm copies the work or does something else.</p> <p>Can anyone tell me whether machine learning algorithms typically copy the data (picture/text/video/etc.) they are analysing (even if only briefly) or if they are able to obtain the required information from the data through other methods that do not require copying (akin to a human looking at a stop sign and recognising it as a stop sign without necessarily copying the image).</p> <p>Apologies for my lack of knowledge and I'm sorry if any of my explanation flies in the face of any established machine learning knowledge. As I said, I am merely a lowly law student.</p> <p>Thanks in advance!</p>
2017-01-23 00:10:19.880000+00:00
2017-01-23 17:20:49.373000+00:00
null
algorithm|machine-learning|neural-network|artificial-intelligence|deep-learning
['https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm', 'https://en.wikipedia.org/wiki/Convolutional_neural_network', 'https://arxiv.org/abs/1611.03530', 'https://arxiv.org/abs/1512.02017', 'https://arxiv.org/abs/1311.2901']
5
60,353,722
<p>Adding different regularizations in different layers is not a problem. There are papers regarding this topic <a href="https://arxiv.org/abs/1711.07592" rel="nofollow noreferrer">Sparse input neural network</a>. However, a few things need attention here.</p> <ul> <li>Adding l1 regularization itself in the first layer does not do feature selection. If a feature is not selected, it can not connect to any of the nodes in the next layer. l1 regularization won't be able to drop the connections of a feature totally. You will need a <a href="https://icml.cc/2012/papers/110.pdf" rel="nofollow noreferrer">group lasso regularization (also called the l_{1,p} norm)</a>. </li> <li>The implementation of these regularizations, especially for sparsity, is not well supported in keras itself. You will need to add thresholding functions manually in each iteration. An algorithm can be found in <a href="https://arxiv.org/abs/1711.07592" rel="nofollow noreferrer">Sparse input neural network</a>.</li> </ul>
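<p>As a rough sketch of the manual thresholding idea (my own illustration, not the exact algorithm from the paper): the proximal step of a row-wise group lasso can be applied to the first layer's kernel between training steps. The regularization strength <code>lam</code> and the layer indexing are assumptions.</p> <pre><code>import numpy as np

def group_lasso_prox(W, lam):
    """Row-wise soft thresholding: can zero out an entire input feature."""
    W = W.copy()
    for i in range(W.shape[0]):                 # one row per input feature
        norm = np.linalg.norm(W[i])
        scale = max(0.0, 1.0 - lam / norm) if norm else 0.0
        W[i] *= scale                           # shrink the whole row, possibly to exactly zero
    return W

# after each training step (or epoch), applied to the first Dense layer:
# W, b = model.layers[0].get_weights()
# model.layers[0].set_weights([group_lasso_prox(W, lam=0.01), b])
</code></pre>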
2020-02-22 15:32:53.887000+00:00
2020-02-22 15:32:53.887000+00:00
null
null
60,264,823
<p>Does it make sense to mix regularizers? For example using L1 to select features in the first layer and use L2 for the rest?</p> <p>I created this model:</p> <pre><code>model = Sequential() # the input layer uses L1 to partially serve as a feature selection layer model.add(Dense(10, input_dim = train_x.shape[1], activation = 'swish', kernel_regularizer=regularizers.l1(0.001))) model.add(Dense(20, activation = 'swish', kernel_regularizer=regularizers.l2(0.001))) model.add(Dense(20, activation = 'swish', kernel_regularizer=regularizers.l2(0.001))) model.add(Dense(10, activation = 'softmax')) </code></pre> <p>But I'm not sure if it is a good idea to mix L1&amp;L2, to me it seems logical to have L1 as feature selector in the input layer. But everywhere, I'm just seeing code that uses the same regularizer for all layers.</p> <p>(the model seems to give quite good results, >95% correct predictions in a multiclass classification problem)</p>
2020-02-17 14:41:52.297000+00:00
2020-02-22 15:32:53.887000+00:00
null
keras|neural-network|regularized
['https://arxiv.org/abs/1711.07592', 'https://icml.cc/2012/papers/110.pdf', 'https://arxiv.org/abs/1711.07592']
3
55,024,312
<p>I would recommend using Kafka, just to avoid the single point of failure you get when using Solo. </p> <p>In addition to that, the ordering service is unlikely to be a performance bottleneck, as mentioned in the <a href="https://arxiv.org/abs/1801.10228" rel="nofollow noreferrer">Hyperledger Fabric paper</a> (Section 5.2). You're more likely going to be limited by computationally intensive signature verification in the validation phase or network bandwidth.</p>
2019-03-06 13:31:05.533000+00:00
2019-03-06 13:31:05.533000+00:00
null
null
54,932,616
<p>In Fabric there are two ordering types: Solo and Kafka. When using Kafka, it is possible to have multiple orderers per channel. </p> <p>In addition to fault tolerance, would having more than one orderer per channel have speed improvements? My understanding is solo would actually be faster because it requires less overhead?</p> <p>The official docs is pretty light on discussing performance implications regarding this topic. </p>
2019-02-28 19:07:19.277000+00:00
2019-03-06 13:31:05.533000+00:00
null
apache-kafka|hyperledger-fabric|hyperledger|blockchain|ibm-blockchain
['https://arxiv.org/abs/1801.10228']
1
55,950,282
<p>Given the imbalanced data, I think it is better to create a custom data generator for your model so that each generated batch contains at least one sample from each class (a sketch follows below). It is also better to use a <code>Dropout</code> layer after each <code>dense</code> layer rather than after each <code>conv</code> layer. For data augmentation it is better to use at least a combination of rotation, horizontal flip and vertical flip. There are some other approaches for data augmentation, like using a <code>GAN</code> or random pixel replacement. For <code>GAN</code>s you can check <a href="https://stackoverflow.com/questions/55534025/how-to-generate-new-image-using-deep-learning-from-new-features/55534488#55534488">this SO post</a>.</p> <p>For using a <code>GAN</code> as a data augmenter you can read <a href="https://arxiv.org/pdf/1711.04340.pdf" rel="nofollow noreferrer">this article</a>. For a combination of pixel-level augmentation and <code>GAN</code>s, see <a href="https://arxiv.org/pdf/1811.00174.pdf" rel="nofollow noreferrer">pixel level data augmentation</a>.</p>
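<p>A minimal sketch of such a balanced-batch generator (assuming a NumPy array <code>X</code>, integer labels <code>y</code>, and <code>tf.keras</code>; the class name and the fill-up strategy are my own assumptions, not from the answer above):</p> <pre><code>import numpy as np
from tensorflow.keras.utils import Sequence

class BalancedBatchGenerator(Sequence):
    """Yields batches that contain at least one sample from every class."""
    def __init__(self, X, y, batch_size=80):
        self.X, self.y, self.batch_size = X, y, batch_size
        self.classes = np.unique(y)
        self.idx_per_class = {c: np.where(y == c)[0] for c in self.classes}

    def __len__(self):
        return int(np.ceil(len(self.X) / self.batch_size))

    def __getitem__(self, i):
        # one guaranteed sample per class ...
        forced = [np.random.choice(self.idx_per_class[c]) for c in self.classes]
        # ... plus random samples to fill the rest of the batch
        n_fill = max(self.batch_size - len(forced), 0)
        filler = np.random.randint(0, len(self.X), size=n_fill)
        batch = np.concatenate([np.array(forced), filler])
        return self.X[batch], self.y[batch]

# usage (with integer labels): model.fit(BalancedBatchGenerator(X, y), epochs=20)
</code></pre>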
2019-05-02 10:19:03.130000+00:00
2019-05-02 10:53:47.357000+00:00
2019-05-02 10:53:47.357000+00:00
null
55,949,613
<p>I want to classify pattern on image. My original image shape are 200 000*200 000 i reshape it to 96*96, pattern are still recognizable with human eyes. Pixel value are 0 or 1. </p> <p>i'm using the following neural network.</p> <pre><code> train_X, test_X, train_Y, test_Y = train_test_split(cnn_mat, img_bin["Classification"], test_size = 0.2, random_state = 0) class_weights = class_weight.compute_class_weight('balanced', np.unique(train_Y), train_Y) train_Y_one_hot = to_categorical(train_Y) test_Y_one_hot = to_categorical(test_Y) train_X,valid_X,train_label,valid_label = train_test_split(train_X, train_Y_one_hot, test_size=0.2, random_state=13) model = Sequential() model.add(Conv2D(24,kernel_size=3,padding='same',activation='relu', input_shape=(96,96,1))) model.add(MaxPool2D()) model.add(Conv2D(48,kernel_size=3,padding='same',activation='relu')) model.add(MaxPool2D()) model.add(Conv2D(64,kernel_size=3,padding='same',activation='relu')) model.add(MaxPool2D()) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dense(256, activation='relu')) model.add(Dense(16, activation='softmax')) model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"]) train = model.fit(train_X, train_label, batch_size=80,epochs=20,verbose=1,validation_data=(valid_X, valid_label),class_weight=class_weights) </code></pre> <p>I have already run some experiment to find a "good" number of hidden layer and fully connected layer. it's probably not the most optimal architecture since my computer is slow, i just ran different model once and selected best one with matrix confusion, i didn't use cross validation,<strong>I didn't try more complex architecture since my number of data is small, i have read small architecture are the best, is it worth to try more complex architecture?</strong></p> <p>here the result with 5 and 12 epoch, bach size 80. This is the confusion matrix for my <strong>test set</strong></p> <p>As you can see it's look like i'm overfiting. When i only run 5 epoch, most of the class are assigned to class 0; With more epoch, class 0 is less important but classification is still bad</p> <p>I added 0.8 dropout after each convolutional layer</p> <p>e.g</p> <pre><code> model.add(Conv2D(48,kernel_size=3,padding='same',activation='relu')) model.add(MaxPool2D()) model.add(Dropout(0.8)) model.add(Conv2D(64,kernel_size=3,padding='same',activation='relu')) model.add(MaxPool2D()) model.add(Dropout(0.8)) </code></pre> <p>With drop out, 95% of my image are classified in class 0.</p> <p>I tryed image augmentation; i made rotation of all my training image, still used weighted activation function, result didnt improve. <strong>Should i try to augment only class with small number of image? Most of the thing i read says to augment all the dataset...</strong></p> <p>To resume my question are: Should i try more complex model?</p> <p>Is it usefull to do image augmentation only on unrepresented class? then should i still use weight class (i guess no)?</p> <p>Should i have hope to find a "good" model with cnn when we see the size of my dataset?</p>
2019-05-02 09:39:25.793000+00:00
2019-05-16 21:07:55.003000+00:00
2019-05-16 21:07:55.003000+00:00
python|machine-learning|keras|computer-vision|conv-neural-network
['https://stackoverflow.com/questions/55534025/how-to-generate-new-image-using-deep-learning-from-new-features/55534488#55534488', 'https://arxiv.org/pdf/1711.04340.pdf', 'https://arxiv.org/pdf/1811.00174.pdf']
3
32,241,498
<ul> <li><strong><code>LATENCY -</code></strong> an amount of <strong>time</strong> to get the response <code>[us]</code></li> <li><strong><code>BANDWIDTH -</code></strong> an amount of data-flow volume <strong>per unit of time</strong> <code>[GB/s]</code></li> </ul> <h2>Marketing papers are fabulous in mystifications with <code>LATENCY</code> <em>figures</em></h2> <p>The term latency can be confusing, if not taking carefully into account this <strong>whole context of the transaction life-cycle</strong>: participating line-segments { amplification | retiming | switching | MUX/MAP-ing | routing | EnDec-processing (not speaking about cryptography ) | statistical-(de)compressing }, data-flow duration and framing / line-code-protective add-ons / ( opt. protocol, if present, encapsulation and re-framing ) additional surplus overheads, <strong>that continually increase <kbd>latency</kbd> but <em>also</em> increase data-<code>VOLUME</code></strong>.</p> <p><a href="https://i.stack.imgur.com/hY9hs.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hY9hs.png" alt="enter image description here"></a> Just as an example, <strong>take any GPU-engine marketing.</strong> The huge numbers that are presented about GigaBytes of <strong><code>DDR5</code></strong> and <strong><code>GHz</code></strong> timing thereof silently are communicated in bold, what they omit to tell you is, that with all that zillions of things, each of your <code>SIMT</code> many-cores, yes, all the cores, have to pay a cruel <kbd><strong>latency</strong></kbd>-<strong>penalty</strong> and <strong>wait</strong> for more than <strong><code>+400-800</code></strong> <code>[GPU-clk]</code>s just to receive the first byte from GPU-over-hyped-GigaHertz-Fast-DDRx-ECC-protected bank of memory.<br></p> <p><strong>Yes, your Super-Engine's <code>GFLOPs/TFLOPs</code> <em>have</em> to wait!</strong> ... because of (hidden) <strong><code>LATENCY</code></strong></p> <p>And you wait with all the full parallel-<strong>circus</strong> ... because of <strong><code>LATENCY</code></strong></p> <p>( ... and any marketing bell or whistle cannot help, believe or not ( forget about cache promises too, these do not know, what the hell there would be in the far / late / distant memory cell, so cannot feed you a single bit copy of such latency-"far" enigma from their shallow local-pockets ) )</p> <hr> <h2><code>LATENCY</code> ( and taxes ) cannot be avoided</h2> <p>Highly professional <strong><code>HPC</code></strong>-designs only <strong>help to pay less</strong> penalty, while <strong>still cannot avoid <code>LATENCY</code></strong> (as taxes) <strong>penalty</strong> beyond some smart re-arrangements principles.</p> <pre><code> CUDA Device:0_ has &lt;_compute capability_&gt; == 2.0.
CUDA Device:0_ has [ Tesla M2050] .name CUDA Device:0_ has [ 14] .multiProcessorCount [ Number of multiprocessors on device ] CUDA Device:0_ has [ 2817982464] .totalGlobalMem [ __global__ memory available on device in Bytes [B] ] CUDA Device:0_ has [ 65536] .totalConstMem [ __constant__ memory available on device in Bytes [B] ] CUDA Device:0_ has [ 1147000] .clockRate [ GPU_CLK frequency in kilohertz [kHz] ] CUDA Device:0_ has [ 32] .warpSize [ GPU WARP size in threads ] CUDA Device:0_ has [ 1546000] .memoryClockRate [ GPU_DDR Peak memory clock frequency in kilohertz [kHz] ] CUDA Device:0_ has [ 384] .memoryBusWidth [ GPU_DDR Global memory bus width in bits [b] ] CUDA Device:0_ has [ 1024] .maxThreadsPerBlock [ MAX Threads per Block ] CUDA Device:0_ has [ 32768] .regsPerBlock [ MAX number of 32-bit Registers available per Block ] CUDA Device:0_ has [ 1536] .maxThreadsPerMultiProcessor [ MAX resident Threads per multiprocessor ] CUDA Device:0_ has [ 786432] .l2CacheSize CUDA Device:0_ has [ 49152] .sharedMemPerBlock [ __shared__ memory available per Block in Bytes [B] ] CUDA Device:0_ has [ 2] .asyncEngineCount [ a number of asynchronous engines ] </code></pre> <h2>Yes, telephone!<br>Why not?<br><br> A cool point to remind<br>a 8kHz-8bit-sampling on a 64k circuit switching<br>used inside an E1/T1 TELCO hierarchy</h2> <p>A <strong><code>POTS</code></strong> telephone service used to be based on a <strong>synchronous</strong> <strong>fix-<code>latency</code></strong> switching ( late 70-ies have merged global, otherwise in-synchronise-able Plesiochronous Digital Hierarchy networks between Japanese-<code>PDH</code>-standard, Continental-<code>PDH</code>-<strong><code>E3</code></strong> inter-carrier standards and US-<code>PDH</code>-<strong><code>T3</code></strong> carrier services, which finally avoided many headaches with international carrier service jitter / slippage / (re)-synchronisation storms and drop-outs )</p> <p><strong><code>SDH</code></strong>/<code>SONET-STM1 / 4 / 16</code>, carried on 155 / 622 / 2488 <code>[Mb/s]</code> <strong><code>BANDWIDTH</code></strong> SyncMUX-circuits.</p> <p>The cool idea on <code>SDH</code> was the globally enforced fix structure of time-aligned framing, which was both deterministic and stable.</p> <p>This allowed to simply memory-map (cross-connect switch) lower-order container-datastream components to be copied from incoming STMx onto outgoing STMx/PDHy payloads on the SDH-cross-connects ( remember, that was as deep as in late 70-ies so the CPU performance and DRAMs were decades before handling <code>GHz</code> and sole <code>ns</code> ). 
Such a box-inside-a-box-inside-a-box payload mapping provided both low-switching overheads on the hardware and provided also some means for re-alignment in time-domain ( there were some bit-gaps between the box-in-box boundaries, so as to provide some elasticity, well under a standard given maximum skew in time )</p> <p>While it may be hard to explain the beauty of this concept in a few words, AT&amp;T and other major global operators enjoyed a lot the SDH-synchronicity and the beauty of the globally-synchronous SDH network and local side Add-Drop-MUX mappings.</p> <hr> <p>Having said this,<br> <strong>latency controlled design</strong><br> takes care of:<br> - <code>ACCESS-LATENCY :</code>how long time does it take to <strong>arrive</strong> for the first ever bit <code>: [s]</code><br> - <code>TRANSPORT-BANDWIDTH :</code>how many bits it can transfer/<strong>deliver</strong> each next unit of time<code>: [b/s]</code><br> - <code>VOLUME OF DATA :</code>how many bits of data are there in total to transport <code>: [b]</code><br> - <code>TRANSPORT DURATION :</code>how many units of time does it take<br> - <code>___________________ :</code>to move/<strong>deliver</strong> whole <code>VOLUME OF DATA</code>to who has asked<code>: [s]</code></p> <hr> <h2>Epilogue:</h2> <blockquote> <p>A very nice illustration of the principal independence of a <strong>THROUGHPUT</strong> ( <strong>BANDWIDTH <code>[GB/s]</code></strong> ) on <strong>LATENCY <code>[ns]</code></strong> is in <strong>Fig.4</strong> in a lovely <a href="https://arxiv.org/pdf/1512.05578v1.pdf" rel="noreferrer">ArXiv paper on <strong>Improving Latency</strong></a> from Ericsson, testing how the manycore RISC-processor Epiphany-64 architecture from Adapteva may help in driving LATENCY down in signal processing.<br><br>Understanding the <strong>Fig.4</strong>, extended in core-dimension,<br>can also show the possible scenarios<br><br>- how to increase <strong>BANDWIDTH <code>[GB/s]</code></strong> <br>by more-core(s) involved into accelerated / TDMux-ed <code>[Stage-C]</code>-processing ( interleaved in time )<br>and also<br>- that <strong>LATENCY <code>[ns]</code></strong><br>can never be shorter than a sum of principal <strong><code>SEQ</code>-process-durations</strong> <code>== [Stage-A]</code>+<code>[Stage-B]</code>+<code>[Stage-C]</code>, independently of the number of available ( single/many )-cores the architecture permits to use.<br><strong>Great thanks to Andreas Olofsson &amp; the Ericsson guys. KEEP WALKING, BRAVE MEN!</strong></p> </blockquote>
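<p>A tiny numeric sketch (my own made-up figures, purely illustrative) of how the four quantities above combine into a delivery time, and of why latency and bandwidth answer different questions:</p> <pre><code>def delivery_time(latency_s, bandwidth_bps, volume_b):
    # time until the first bit arrives + time to stream the whole volume
    return latency_s + volume_b / bandwidth_bps

volume = 8 * 10**9                                   # 1 GB expressed in bits
fat_pipe  = delivery_time(0.150, 10e9, volume)       # high latency, 10 Gb/s
thin_pipe = delivery_time(0.001,  1e9, volume)       # low latency,   1 Gb/s
print(round(fat_pipe, 3), round(thin_pipe, 3))       # 0.95 [s] vs 8.001 [s]
</code></pre>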
2015-08-27 05:42:58.550000+00:00
2017-01-12 16:38:17.513000+00:00
2017-01-12 16:38:17.513000+00:00
null
18,821,585
<p>What do you mean by low latency access of data?<br></p> <p>I am actually confused about the definition of the term <kbd>"<strong>LATENCY</strong>"</kbd>.<br></p> <p>Can anyone please elaborate the term "Latency".</p>
2013-09-16 06:26:04.333000+00:00
2017-01-12 16:38:17.513000+00:00
2015-08-27 05:47:33.127000+00:00
performance|memory|dataflow|low-latency|multiplexing
['https://i.stack.imgur.com/hY9hs.png', 'https://arxiv.org/pdf/1512.05578v1.pdf']
2
72,451,049
<p>I don't think it has a name, but it's described in this paper: <a href="https://arxiv.org/abs/2110.01111" rel="nofollow noreferrer">&quot;Is this the simplest (and most surprising) sorting algorithm ever?&quot;</a> Stanley P. Y. Fung</p>
2022-05-31 16:09:47.357000+00:00
2022-05-31 16:09:47.357000+00:00
null
null
72,442,434
<p>I have created a sorting function. Can I know the name of this algorithm? Is this bubble sort?</p> <p>I am new in C.</p> <pre><code>#include &lt;stdio.h&gt; void sort(int *, int); int main(void) { int arrayNum[] = {1, 12, 8, 4, 90, 11, 76}; int len = sizeof(arrayNum) / sizeof(arrayNum[0]); sort(arrayNum, len); for (int i = 0; i &lt; len; i++) { printf(&quot;%d, &quot;, arrayNum[i]); } return 0; } void sort(int *array, int length) { int temp; for (int i = 0; i &lt; length; i++) { for (int j = 0; j &lt; length; j++) { if (array[i] &lt; array[j]) { temp = array[i]; array[i] = array[j]; array[j] = temp; } } } } </code></pre>
2022-05-31 04:54:46.610000+00:00
2022-05-31 16:48:40.107000+00:00
2022-05-31 11:03:55.040000+00:00
c|algorithm|bubble-sort|selection-sort
['https://arxiv.org/abs/2110.01111']
1
61,131,033
<p>I guess you can go a lot of different ways and that it depends on your particular use case.</p> <p>Your randomisation of a gene doesn't seem particularly wrong to me. Although a more subtle approach could be to only add or subtract a pre-defined <code>n</code> (or a value from a range) instead of changing it to a completely new random number; a sketch of this follows below.</p> <p>In this paper they mutate like you suggest though, using a random generator: <a href="https://arxiv.org/abs/1308.4675" rel="nofollow noreferrer"><em>Genetic Algorithm for Solving Simple Mathematical Equality Problem</em></a>.</p>
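<p>A minimal sketch of the "perturb instead of redraw" mutation (the step size, rate and clipping bounds below are my own assumptions; the bounds match the [-512, 512] search space from the question):</p> <pre><code>import random

def mutate(genes, rate=0.1, step=25.0, low=-512.0, high=512.0):
    """Nudge each gene by a small random delta instead of redrawing it."""
    child = []
    for g in genes:
        if random.random() &lt; rate:
            g = g + random.uniform(-step, step)   # small perturbation
            g = min(max(g, low), high)            # stay inside the search space
        child.append(g)
    return child

print(mutate([19.0, 58.0, 21.0, 54.0]))
</code></pre>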
2020-04-09 22:09:25.407000+00:00
2020-04-09 22:09:25.407000+00:00
null
null
61,113,622
<p>I'm familiar with GA within the context of strings or text but not with numerical data.</p> <p>For strings, I understand how crossover and mutation would apply:</p> <pre><code>ParentA = abcdef ParentB = uvwxyz Using one-point crossover: ChildA = abwxyz (pivot after 2nd gene) ChildB = uvcdef Using random gene mutation (after crossover): ChildA = abwgyz (4th gene mutated) ChildB = uvcdef (no genes mutated) </code></pre> <p>For strings, I have a discrete alphabet to go off of, but how would these operators apply to continuous numerical data?</p> <p>For example, chromosomes represented as points in 4-space (each axes is a gene):</p> <pre><code>ParentA = [19, 58, 21, 54] ParentB = [65, 21, 59, 11] </code></pre> <p>Would it be appropriate to apply crossover by switching axes of both parents for offspring?</p> <pre><code>ChildA = [19, 58, 59, 11] (pivot after 2nd gene) ChildB = [65, 21, 21, 54] </code></pre> <p>I have a feeling this seems alright, but my naive notion of mutation, randomizing a gene, doesn't seem correct:</p> <pre><code>ChildA = [12, 58, 59, 11] (1st gene mutated) ChildB = [65, 89, 34, 54] (2nd and 3rd genes mutated) </code></pre> <p>I'm just unsure how genetic algorithms can be applied to numeric data like this. I know what I need for GA but not how to apply the operators. For example, consider the problem being minimizing the Rastrigin function in 4-dimensions: the search space is <code>[-512, 512]</code> in each dimension and the fitness function is the Rastrigin function. I don't know how the operators as I've described here can help find a more fit chromosome.</p> <p>For what it's worth, elite selection and population initialization seems straightforward, my only confusion comes with crossover and mutation operators.</p> <h3>Update for Bounty</h3> <p>I did an implementation of a GA for continuous numerical data using the mutation and crossover rates as I have described here. The optimization problem is the Styblinski-Tang function in two dimensions because it is easy to graph. I'm also using standard elite and tournament selection strategies.</p> <p>I find that the population best fitness does converge nicely to a solution, the average fitness doesn't really.</p> <p>Here I've plotted the search space over ten generations: a black dot is a candidate solution and the red 'x' is the global optimum:</p> <p><a href="https://i.stack.imgur.com/S52xa.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S52xa.gif" alt="enter image description here"></a></p> <p>The crossover operator as I've described seems to work well but the mutation operator (randomizing both, either, or neither of the x or y positions of a chromosome) seems to create crosshair and crosshatch patterns.</p> <p>I did a run in 50 dimensions to prolong convergence (since in two dimensions it converges in one generation) and plotted it:</p> <p><a href="https://i.stack.imgur.com/AeY3P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AeY3P.png" alt="enter image description here"></a></p> <p>Here the y-axis represents how close a solution was to global optimum (since the optimum is known), it's just a fraction <code>actual output / expected output</code>. It's a percentage. 
Green line is population best (approx 96-97% target), blue is population average (fluctuates 65-85% target).</p> <p>This verifies what I thought: the mutation operator doesn't really affect the population best but does mean the population average never converges and it fluctuates up and down.</p> <p>So my question for the bounty is what mutation operators can be used other than randomization of a gene?</p> <p><strong>Just to add:</strong> I ask this question because I'm interested in using GA to optimize neural network weights to train a network in lieu of backpropagation. If you know anything about that, any source detailing that would also answer my question.</p>
2020-04-09 03:45:51.273000+00:00
2020-04-12 01:18:29.060000+00:00
2020-04-11 21:09:48.060000+00:00
machine-learning|numeric|genetic-algorithm
['https://arxiv.org/abs/1308.4675']
1
43,621,903
<p>I might suggest you stick to hex or base64 instead of making your own formatting</p> <pre><code>dat = [1, 0, 11, 0, 4, 0, 106, 211, 169, 1, 0, 12, 0, 8, 0, 1, 26, 25, 32, 189, 77, 216, 1, 0, 1, 0, 4, 0, 0, 0, 0, 12, 15] </code></pre> <p>Hexadecimal</p> <pre><code>hex = dat.map { |x| sprintf('%02x', x) }.join # =&gt; 01000b0004006ad3a901000c000800011a1920bd4dd80100010004000000000c0f </code></pre> <p>Base64</p> <pre><code>require 'base64' base64 = Base64.encode64(dat.pack('c*')) # =&gt; AQALAAQAatOpAQAMAAgAARoZIL1N2AEAAQAEAAAAAAwP\n </code></pre> <hr> <p><strong>Proquints</strong></p> <p>What? <a href="https://arxiv.org/html/0901.4016" rel="nofollow noreferrer">Proquints</a> are pronounceable unique identifiers which makes them great for reading/communicating binary data. In your case, maybe not the best because you're dealing with 30+ bytes here, but they're very suitable for smaller byte strings</p> <pre><code># proquint.rb # adapted to ruby from https://github.com/deoxxa/proquint module Proquint C = %w(b d f g h j k l m n p r s t v z) V = %w(a i o u) def self.encode (bytes) bytes &lt;&lt; 0 if bytes.size &amp; 1 == 1 bytes.pack('c*').unpack('S*').reduce([]) do |acc, n| c1 = n &amp; 0x0f v1 = (n &gt;&gt; 4) &amp; 0x03 c2 = (n &gt;&gt; 6) &amp; 0x0f v2 = (n &gt;&gt; 10) &amp; 0x03 c3 = (n &gt;&gt; 12) &amp; 0x0f acc &lt;&lt; C[c1] + V[v1] + C[c2] + V[v2] + C[c3] end.join('-') end def decode str # learner's exercise # or see some proquint library (eg) https://github.com/deoxxa/proquint end end Proquint.encode dat # =&gt; dabab-rabab-habab-potat-nokab-babub-babob-bahab-pihod-bohur-tadot-dabab-dabab-habab-babab-babub-zabab </code></pre> <p>Of course the entire process is reversible too. You might not need it, so I'll leave it as an exercise for the learner</p> <p>It's particularly nice for things like IP address, or any other short binary blobs. You gain familiarity too as you see common byte strings in their proquint form</p> <pre><code>Proquint.encode [192, 168, 11, 51] # bagop-rasag Proquint.encode [192, 168, 11, 52] # bagop-rabig Proquint.encode [192, 168, 11, 66] # bagop-ramah Proquint.encode [192, 168, 22, 19] # bagop-kisad Proquint.encode [192, 168, 22, 20] # bagop-kibid </code></pre>
2017-04-25 22:15:33.980000+00:00
2017-04-25 22:49:39.030000+00:00
2017-04-25 22:49:39.030000+00:00
null
43,621,627
<p>I have an array of <code>[1, 0, 11, 0, 4, 0, 106, 211, 169, 1, 0, 12, 0, 8, 0, 1, 26, 25, 32, 189, 77, 216, 1, 0, 1, 0, 4, 0, 0, 0, 0, 12, 15]</code>.</p> <p>I would love to create a string version mostly for logging purposes. My end result would be <code>"01000B0004006AD3..."</code></p> <p>I could not find a simple way to take each array byte value and pack a string with an ASCII presentation of the byte value. </p> <p>My solution is cumbersome. I appreciate advice on making the solution slick.</p> <pre><code>array.each {|x| value = (x&gt;&gt;4)&amp;0x0f if( value&gt;9 ) then result_string.concat (value-0x0a + 'A'.ord).chr else result_string.concat (value + '0'.ord).chr end value = (x)&amp;0x0f if( value&gt;9 ) then result_string.concat (value-0x0a + 'A'.ord).chr else result_string.concat (value + '0'.ord).chr end } </code></pre>
2017-04-25 21:54:49.990000+00:00
2017-04-25 22:58:39.150000+00:00
2017-04-25 22:07:55.833000+00:00
ruby
['https://arxiv.org/html/0901.4016']
1
57,934,952
<p>One more approach is approximating the Gaussian curve with a step-wise function: <a href="https://arxiv.org/pdf/1107.4958.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1107.4958.pdf</a> (piece-wise linear functions can, of course, also be used).</p>
2019-09-14 11:16:48.363000+00:00
2019-09-14 11:16:48.363000+00:00
null
null
5,243,983
<p>Implementing convolution in a pixel shader is somewhat costly as to the very high number of texture fetches.</p> <p>A direct way of implementing a convolution filter is to make <em>N x N</em> lookups per fragment using two for cycles per fragment. A simple calculation says that a 1024x1024 image blurred with a 4x4 Gaussian kernel would need <code>1024 x 1024 x 4 x 4 = 16M</code> lookups.</p> <p>What can one do about this?</p> <ol> <li>Can one use some optimization that would need less lookups? I am not interested in kernel-specific optimizations like the ones for the Gaussian (or are they kernel specific?)</li> <li>Can one at least make these lookups faster by somehow exploiting the locality of the pixels one would work with?</li> </ol> <p>Thanks!</p>
2011-03-09 09:52:14.317000+00:00
2019-09-14 11:16:48.363000+00:00
null
optimization|opengl|glsl|shader
['https://arxiv.org/pdf/1107.4958.pdf']
1
5,557,359
<p>Assuming you mean O(1) memory (or depending on the model O(log n)) rather than <em>no extra memory</em>, a linear time in-place algorithm exists.</p> <p>This paper: <a href="http://arxiv.org/abs/0805.1598" rel="nofollow">http://arxiv.org/abs/0805.1598</a> has an algorithm for the case when you have</p> <p><code>a1 ... an b1 ... bn</code> and want to convert to </p> <p><code>b1 a1 b2 a2 ... bn an</code>.</p> <p>The paper also mentions that you can generalize this to other k-way shuffles. In your case, k = 3.</p> <p>The algorithm in the paper will give the following:</p> <p>Start with <code>a1 a2 ... an b1 b2 ... bn c1 c2 ... cn</code> and convert to</p> <p><code>c1 b1 a1 c2 b2 a2 ... cn bn an</code></p> <p>Another pass through this, and you can easily get <code>a1 b1 c1 a2 b2 c2 ... an bn cn</code>.</p> <p>Now to generalize the algorithm in the paper, we need to pick a prime p, such that k is a primitive root of p^2.</p> <p>For k = 3, p = 5 will do.</p> <p>Now to apply the algorithm, first you need to find the largest m &lt; n such that 3m+1 is a power of 5.</p> <p>Note: this will only happen when 3m+1 is an <em>even</em> power of 5. Thus you can actually work with powers of 25 when trying to find the m. (5^odd - 1 is not divisible by 3).</p> <p>Once you find m,</p> <p>you shuffle the array to be</p> <p><code>[a1 a2 ... am b1 b2 ... bm c1 c2 ... cm] [a(m+1) ... an b(m+1) ... bn c(m+1) ... cn]</code></p> <p>and then use the follow-the-cycle method (refer to the paper) for the first 3m elements, using the powers of 5 (including 1 = 5^0) as the starting points of the different cycles, and do a tail recursion for the rest.</p> <p>Now to convert <code>a1 a2 ... an b1 b2 ... bn c1 c2 ... cn</code></p> <p>to </p> <p><code>[a1 a2 ... am b1 b2 ... bm c1 c2 ... cm] [a(m+1) ... an b(m+1) ... bn c(m+1) ... cn]</code></p> <p>you first do a cyclic shift to get</p> <p><code>a1 a2 ... am [b1 b2 bm a(m+1) .. an] b(m+1) .. bn c1 c2 ... cn</code></p> <p>(the elements in the square brackets are the ones that were shifted)</p> <p>Then do a cyclic shift to get</p> <p><code>a1 a2 ... am b1 b2 bm a(m+1) .. an [c1 c2 ..cm b(m+1) .. bn ] c(m+1) ... cn</code></p> <p>And then a final shift to </p> <p><code>a1 a2 ... am b1 b2 bm [c1 c2 ..cm a(m+1) .. an ] b(m+1) .. bn c(m+1) ... cn</code></p> <p>Note that a cyclic shift can be done in O(n) time and O(1) space.</p> <p>So the whole algorithm is O(n) time and O(1) space.</p>
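<p>For the cyclic shifts used above, here is a small Python sketch (mine) of the standard triple-reversal trick, which runs in O(n) time and O(1) extra space; it is only the rotation primitive, not the full follow-the-cycle algorithm:</p> <pre><code>def reverse(a, lo, hi):
    # reverse a[lo:hi] in place
    hi -= 1
    while lo &lt; hi:
        a[lo], a[hi] = a[hi], a[lo]
        lo += 1
        hi -= 1

def rotate_left(a, lo, hi, d):
    """Cyclically shift the slice a[lo:hi] left by d positions, in place."""
    n = hi - lo
    d %= n
    reverse(a, lo, lo + d)
    reverse(a, lo + d, hi)
    reverse(a, lo, hi)

arr = list("aaabbbccc")
rotate_left(arr, 3, 9, 3)   # moves the c-block in front of the b-block
print("".join(arr))         # aaacccbbb
</code></pre>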
2011-04-05 19:27:04.440000+00:00
2011-04-19 15:15:15.207000+00:00
2011-04-19 15:15:15.207000+00:00
null
5,557,326
<p>Given an array </p> <pre><code>[a1 a2 a3 ... an b1 b2 b3 ... bn c1 c2 c3 ...cn] </code></pre> <p>without using extra memory how do you reorder into an array</p> <pre><code>[a1 b1 c1 a2 b2 c2 a3 b3 c3 ... an bn cn] </code></pre>
2011-04-05 19:23:25.593000+00:00
2011-04-19 15:15:15.207000+00:00
2011-04-05 19:30:19.123000+00:00
algorithm
['http://arxiv.org/abs/0805.1598']
1
51,767,462
<p>TLDR: Each LSTM cell at time t and level l has inputs x(t) and hidden state h(l,t). In the first layer, the input is the actual sequence input x(t) together with the previous hidden state h(l, t-1); in the next layers, the input is the hidden state of the corresponding cell in the previous layer, h(l-1, t).</p> <p>From <a href="https://arxiv.org/pdf/1710.02254.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1710.02254.pdf</a>:</p> <p>To increase the capacity of GRU networks (Hermans and Schrauwen 2013), recurrent layers can be stacked on top of each other. Since GRU does not have two output states, the same output hidden state h'2 is passed to the next vertical layer. In other words, the h1 of the next layer will be equal to h'2. This forces GRU to learn transformations that are useful along depth as well as time.</p>
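<p>A minimal Keras sketch of this wiring (unit counts and input shape are arbitrary): the first LSTM must return its hidden state at every timestep (<code>return_sequences=True</code>) so that the second layer receives h(l-1, t) as its input sequence.</p> <pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential([
    # layer 1: emits h(1, t) for every t, which becomes the input of layer 2
    LSTM(64, return_sequences=True, input_shape=(100, 8)),
    # layer 2: consumes the sequence of hidden states produced by layer 1
    LSTM(32),
    Dense(1),
])
model.summary()
</code></pre>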
2018-08-09 12:46:18.900000+00:00
2018-08-09 12:46:18.900000+00:00
null
null
47,466,715
<p>I'm trying to understand and implement multi-layer LSTM. The problem is I don't know how they connect. I have two thoughts in mind:</p> <ol> <li><p>At each timestep, the hidden state H of the first LSTM will become the input of the second LSTM.</p></li> <li><p>At each timestep, the hidden state H of the first LSTM will become the initial value for the hidden state of the second LSTM, and the input of the first LSTM will become the input for the second LSTM.</p></li> </ol> <p>Please help!</p>
2017-11-24 05:11:44.563000+00:00
2019-11-08 19:11:24.503000+00:00
2017-11-30 17:32:46.610000+00:00
neural-network|deep-learning|lstm
['https://arxiv.org/pdf/1710.02254.pdf']
1
48,048,345
<p>This is a great discussion. Thanks for starting this thread. The idea of including document length by @avip seems interesting. Will have to experiment and check on the results. In the meantime let me try asking the question a little differently. What are we trying to interpret when querying for TF-IDF relevance scores?</p> <ol> <li>Possibly trying to understand the word relevance at the document level </li> <li>Possibly trying to understand the word relevance per Class</li> <li><p>Possibly trying to understand the word relevance overall ( in the whole corpus )</p> <pre><code> # # features, corpus = 6 documents of length 3 counts = [[3, 0, 1], [2, 0, 0], [3, 0, 0], [4, 0, 0], [3, 2, 0], [3, 0, 2]] from sklearn.feature_extraction.text import TfidfTransformer transformer = TfidfTransformer(smooth_idf=False) tfidf = transformer.fit_transform(counts) print(tfidf.toarray()) # lambda for basic stat computation summarizer_default = lambda x: np.sum(x, axis=0) summarizer_mean = lambda x: np.mean(x, axis=0) print(summarizer_default(tfidf)) print(summarizer_mean(tfidf)) </code></pre></li> </ol> <p>Result:</p> <pre><code># Result post computing TF-IDF relevance scores array([[ 0.81940995, 0. , 0.57320793], [ 1. , 0. , 0. ], [ 1. , 0. , 0. ], [ 1. , 0. , 0. ], [ 0.47330339, 0.88089948, 0. ], [ 0.58149261, 0. , 0.81355169]]) # Result post aggregation (Sum, Mean) [[ 4.87420595 0.88089948 1.38675962]] [[ 0.81236766 0.14681658 0.2311266 ]] </code></pre> <p>If we observe closely, we realize that feature1, which occurred in all the documents, is not ignored completely, because the sklearn implementation uses idf = log [ n / df(d, t) ] + 1. The +1 is added so that an important word which just so happens to occur in every document is not ignored. E.g. the word 'bike' occurring very frequently in classifying a particular document as 'motorcycle' (20_newsgroup dataset).</p> <ol> <li><p>Now, in regards to the first 2 questions, one is trying to interpret and understand the top common features that might be occurring in the documents. In that case, aggregating in some form over all possible occurrences of the word in a doc does not take anything away, even mathematically. IMO such a query is very useful for exploring the dataset and helping to understand what the dataset is about. The logic might be applied to vectorizing using Hashing as well.</p> <p>relevance_score = mean(tf(t,d) * idf(t,d)) = mean( (bias + initial_wt * F(t,d) / max{F(t',d)}) * (log(N/df(d, t)) + 1) )</p></li> <li><p>Question 3 is very important, as it might well contribute to which features are selected for building a predictive model. Just using TF-IDF scores independently for feature selection might be misleading at multiple levels. Adopting a more theoretical statistical test such as 'chi2', coupled with TF-IDF relevance scores, might be a better approach (see the sketch below). Such a statistical test also evaluates the importance of the feature in relation to the respective target class.</p></li> </ol> <p>And of course, combining such interpretation with the model's learned feature weights would be very helpful in understanding the importance of text-derived features completely.</p> <p>** The problem is a little more complex to cover in detail here. But I hope the above helps. What do others think?</p> <p>Reference: <a href="https://arxiv.org/abs/1707.05261" rel="nofollow noreferrer">https://arxiv.org/abs/1707.05261</a></p>
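<p>As a rough sketch of the chi2 idea mentioned above (my own illustration; the class labels <code>y</code> are invented, since the toy corpus in this thread has no labels):</p> <pre><code>import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

counts = np.array([[3, 0, 1], [2, 0, 0], [3, 0, 0],
                   [4, 0, 0], [3, 2, 0], [3, 0, 2]])
y = [0, 0, 0, 1, 1, 1]           # hypothetical class labels, one per document

selector = SelectKBest(chi2, k=2).fit(counts, y)
print(selector.scores_)          # per-feature relevance with respect to the target class
print(selector.get_support())    # which features would be kept
</code></pre>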
2018-01-01 08:55:40.297000+00:00
2018-01-01 21:23:48.663000+00:00
2018-01-01 21:23:48.663000+00:00
null
42,269,313
<p>First let's extract the TF-IDF scores per term per document:</p> <pre><code>from gensim import corpora, models, similarities documents = ["Human machine interface for lab abc computer applications", "A survey of user opinion of computer system response time", "The EPS user interface management system", "System and human system engineering testing of EPS", "Relation of user perceived response time to error measurement", "The generation of random binary unordered trees", "The intersection graph of paths in trees", "Graph minors IV Widths of trees and well quasi ordering", "Graph minors A survey"] stoplist = set('for a of the and to in'.split()) texts = [[word for word in document.lower().split() if word not in stoplist] for document in documents] dictionary = corpora.Dictionary(texts) corpus = [dictionary.doc2bow(text) for text in texts] tfidf = models.TfidfModel(corpus) corpus_tfidf = tfidf[corpus] </code></pre> <p>Printing it out:</p> <pre><code>for doc in corpus_tfidf: print doc </code></pre> <p>[out]:</p> <pre><code>[(0, 0.4301019571350565), (1, 0.4301019571350565), (2, 0.4301019571350565), (3, 0.4301019571350565), (4, 0.2944198962221451), (5, 0.2944198962221451), (6, 0.2944198962221451)] [(4, 0.3726494271826947), (7, 0.27219160459794917), (8, 0.3726494271826947), (9, 0.27219160459794917), (10, 0.3726494271826947), (11, 0.5443832091958983), (12, 0.3726494271826947)] [(6, 0.438482464916089), (7, 0.32027755044706185), (9, 0.32027755044706185), (13, 0.6405551008941237), (14, 0.438482464916089)] [(5, 0.3449874408519962), (7, 0.5039733231394895), (14, 0.3449874408519962), (15, 0.5039733231394895), (16, 0.5039733231394895)] [(9, 0.21953536176370683), (10, 0.30055933182961736), (12, 0.30055933182961736), (17, 0.43907072352741366), (18, 0.43907072352741366), (19, 0.43907072352741366), (20, 0.43907072352741366)] [(21, 0.48507125007266594), (22, 0.48507125007266594), (23, 0.48507125007266594), (24, 0.48507125007266594), (25, 0.24253562503633297)] [(25, 0.31622776601683794), (26, 0.31622776601683794), (27, 0.6324555320336759), (28, 0.6324555320336759)] [(25, 0.20466057569885868), (26, 0.20466057569885868), (29, 0.2801947048062438), (30, 0.40932115139771735), (31, 0.40932115139771735), (32, 0.40932115139771735), (33, 0.40932115139771735), (34, 0.40932115139771735)] [(8, 0.6282580468670046), (26, 0.45889394536615247), (29, 0.6282580468670046)] </code></pre> <p>If we want to find the "saliency" or "importance" of the words within this corpus, <strong>can we simple do the sum of the tf-idf scores across all documents and divide it by the number of documents?</strong> I.e. </p> <pre><code>&gt;&gt;&gt; tfidf_saliency = Counter() &gt;&gt;&gt; for doc in corpus_tfidf: ... for word, score in doc: ... tfidf_saliency[word] += score / len(corpus_tfidf) ... 
&gt;&gt;&gt; tfidf_saliency Counter({7: 0.12182694202050007, 8: 0.11121194156107769, 26: 0.10886469856464989, 29: 0.10093919463036093, 9: 0.09022272408985754, 14: 0.08705221175200946, 25: 0.08482488519466996, 6: 0.08143359568202602, 10: 0.07480097322359022, 12: 0.07480097322359022, 4: 0.07411881371164887, 13: 0.07117278898823597, 5: 0.07104525967490458, 27: 0.07027283689263066, 28: 0.07027283689263066, 11: 0.060487023243988705, 15: 0.055997035904387725, 16: 0.055997035904387725, 21: 0.05389680556362955, 22: 0.05389680556362955, 23: 0.05389680556362955, 24: 0.05389680556362955, 17: 0.048785635947490406, 18: 0.048785635947490406, 19: 0.048785635947490406, 20: 0.048785635947490406, 0: 0.04778910634833961, 1: 0.04778910634833961, 2: 0.04778910634833961, 3: 0.04778910634833961, 30: 0.045480127933079706, 31: 0.045480127933079706, 32: 0.045480127933079706, 33: 0.045480127933079706, 34: 0.045480127933079706}) </code></pre> <p>Looking at the output, could we assume that the most "prominent" word in the corpus is:</p> <pre><code>&gt;&gt;&gt; dictionary[7] u'system' &gt;&gt;&gt; dictionary[8] u'survey' &gt;&gt;&gt; dictionary[26] u'graph' </code></pre> <p>If so, <strong>what is the mathematical interpretation of the sum of TF-IDF scores of words across documents?</strong></p>
2017-02-16 09:06:14.760000+00:00
2019-04-08 15:59:32.900000+00:00
null
python|statistics|nlp|tf-idf|gensim
['https://arxiv.org/abs/1707.05261']
1
64,174,932
<p>Following the jackknife method highlighted in this <a href="https://www.jmlr.org/papers/volume15/wager14a/wager14a.pdf" rel="nofollow noreferrer">paper</a> to obtain the standard error, you can use an implementation in the package <code>ranger</code> :</p> <pre><code>library(ranger) library(mlbench) data(BostonHousing) mdl = ranger(medv ~ .,data=BostonHousing[1:400,],keep.inbag = TRUE) pred = predict(mdl,BostonHousing[401:nrow(BostonHousing),],type=&quot;se&quot;) head(cbind(pred$predictions,pred$se )) [,1] [,2] [1,] 10.673356 1.107839 [2,] 11.390374 1.102217 [3,] 12.760511 1.126945 [4,] 10.458128 1.100246 [5,] 10.720076 1.084376 [6,] 9.914648 1.102000 </code></pre> <p>The confidence interval can be estimated as 1.96*se. There is also a new package forestError available that can work on randomForest objects:</p> <pre><code>library(randomForest) library(forestError) mdl = randomForest(medv ~ .,data=BostonHousing[1:400,],keep.inbag=TRUE) err = quantForestError(mdl,BostonHousing[1:400,],BostonHousing[401:nrow(BostonHousing),]) head(err$estimates) pred mspe bias lower_0.05 upper_0.05 1 10.649734 15.70943 -1.5336411 2.935949 12.59486 2 11.611078 15.16339 -1.4436056 3.897293 13.55621 3 12.603938 20.92701 -0.9590869 4.890153 22.32699 4 10.650549 12.42555 -1.4188440 3.941648 12.49029 5 10.414707 29.08155 -1.1438267 2.700922 31.42272 6 9.720305 19.63286 -1.3469671 2.006520 16.43220 </code></pre> <p>You can refer to this <a href="https://arxiv.org/pdf/1912.07435.pdf" rel="nofollow noreferrer">paper</a> for the actual method used,</p>
2020-10-02 16:15:14.360000+00:00
2020-10-02 16:28:05.390000+00:00
2020-10-02 16:28:05.390000+00:00
null
17,812,754
<p>I'm using the <code>randomForest</code> package in R for the purpose of predicting the distances between proteins (regression model in RF) &quot;for homology modeling purposes&quot;, and I obtained quite good results. However, I need a confidence level to rank my predicted values and filter out the bad models, so I wonder if there is any possibility to calculate such a confidence level, or any other way of measuring the certainty of the predictions? Any suggestions or recommendations are highly appreciated.</p>
2013-07-23 14:14:38.053000+00:00
2020-10-02 16:28:05.390000+00:00
2020-10-02 16:16:49.903000+00:00
r|regression|random-forest|confidence-interval|uncertainty
['https://www.jmlr.org/papers/volume15/wager14a/wager14a.pdf', 'https://arxiv.org/pdf/1912.07435.pdf']
2
57,211,063
<p>You have two distinct problems: you need to turn the geometric problem into a combinatoric problem, and then you need to solve the combinatoric problem. For the latter, you are looking at a <a href="https://en.wikipedia.org/wiki/Set_cover_problem" rel="nofollow noreferrer">minimum set cover problem</a>, and there should be plenty of literature on that. Personally I like Knuth's <a href="https://arxiv.org/abs/cs/0011047" rel="nofollow noreferrer">Dancing Links</a> approach to enumerate all solutions of a set cover, but I guess for a single minimal solution you can do better. A CPLEX formulation (to match your tag) would use a binary variable for each row, and a ≥1 constraint for each column.</p> <p>So now about turning geometry into combinatorics. All the lines of all your circles divide the plane into a bunch of areas. The areas are delimited by lines. Of particular relevance are the points where two or more circles meet. The exact shape of the line between these points is less relevant, and you might imagine pulling those arcs straight to come up with a more classical planar graph representation. So compute all the pair-wise intersections between all your circles. Order all intersections of a single circle <a href="https://stackoverflow.com/q/16542042/1468366">by angle</a> and connect them with graph edges in that order. Do so for all circles. Then you can do a kind of bucket fill to determine for each circle which graph faces are within and which are outside.</p> <p>Now you have your matrix for the set cover: every graph face which is inside the big circle is a column you need to cover. Every circle is a row and covers some of these faces, and you know which.</p>
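<p>A sketch of that set-cover formulation using PuLP instead of CPLEX (the cover matrix below is made-up toy data: <code>covers[i][j] = 1</code> if circle i covers inside-face j; computing that matrix is the geometric step described above):</p> <pre><code>import pulp

# covers[i][j] == 1 if circle i covers inside-face j (toy data for illustration)
covers = [
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
]
n_circles, n_faces = len(covers), len(covers[0])

prob = pulp.LpProblem("min_circle_cover", pulp.LpMinimize)
use = [pulp.LpVariable(f"use_{i}", cat="Binary") for i in range(n_circles)]

prob += pulp.lpSum(use)                       # use as few circles as possible
for j in range(n_faces):                      # every face covered at least once
    prob += pulp.lpSum(use[i] for i in range(n_circles) if covers[i][j]) &gt;= 1

prob.solve()
print([i for i in range(n_circles) if use[i].value() == 1])
</code></pre>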
2019-07-25 22:49:36.797000+00:00
2019-07-25 22:49:36.797000+00:00
null
null
57,209,471
<p>I'm looking for some solutions that, given a set <code>S</code> of circles with 2D-center points and radii, returns a minimal sub-set <code>M</code> in <code>S</code> that covers entirely a specific circle with 2d-center point and radius. This last circle is not in <code>S</code>.</p> <p>I've chosen circles, but it doesn't matter if we change them to squares, hexagons, etc.</p>
2019-07-25 20:07:47.687000+00:00
2019-07-25 22:49:36.797000+00:00
2019-07-25 22:17:44.307000+00:00
python|optimization|geometry|mathematical-optimization|cplex
['https://en.wikipedia.org/wiki/Set_cover_problem', 'https://arxiv.org/abs/cs/0011047', 'https://stackoverflow.com/q/16542042/1468366']
3
42,960,635
<p>There are a couple of papers addressing this issue. For example, in <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.pdf" rel="nofollow noreferrer">http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.pdf</a> some general principles are mentioned, like preserving information by not having too rapid changes in any cut of the graph separating the output from the input.</p> <p>Another paper is <a href="https://arxiv.org/pdf/1606.02228.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1606.02228.pdf</a>, where specific hyperparameter combinations are tried.</p> <p>The rest is just what you observe in practice, and it depends on your dataset and on your requirements. Maybe you have performance requirements because you want to deploy to mobile, or you need more than 90 % accuracy. Then you will have to choose your model accordingly.</p>
2017-03-22 19:09:15.080000+00:00
2017-03-22 19:09:15.080000+00:00
null
null
42,955,746
<p>Currently, I am working on a deep neural network for image detection and I found a model called YOLO, which is very powerful for object detection, but I have a question:</p> <ul> <li>How can we design and conceive our own model? Do we use brute force for that, for example "I use 2 convolutional layers, 1 pooling layer and 1 fully connected layer", and after that, if the result isn't good, I change the number of layers and change the parameters until I find the best model? If there is anyone who knows some information about that, please show me how.</li> </ul> <p>I use Tensorflow.</p> <p>Thanks, </p>
2017-03-22 15:16:12.803000+00:00
2017-03-22 19:09:15.080000+00:00
null
tensorflow|deep-learning|object-detection
['http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.pdf', 'https://arxiv.org/pdf/1606.02228.pdf']
2
46,043,209
<p>I was asking myself the exact same question, and wondering why it wouldn't change. By looking at the <a href="https://arxiv.org/pdf/1412.6980v8.pdf" rel="nofollow noreferrer">original paper</a> (page 2), one sees that the <code>self._lr</code> stepsize (denoted by <code>alpha</code> in the paper) is required by the algorithm, but never updated. We also see that there is an <code>alpha_t</code> that is updated for every <code>t</code> step, and should correspond to the <code>self._lr_t</code> attribute. But in fact, as you observe, <strong>evaluating the value for the <code>self._lr_t</code> tensor at any point during the training always returns the initial value, that is, <code>_lr</code></strong>.</p> <p>So your question, as I understood it, is <strong>how to get the <code>alpha_t</code> for TensorFlow's AdamOptimizer as described in section 2 of the paper and in the corresponding <a href="https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/train/AdamOptimizer" rel="nofollow noreferrer">TF v1.2 API page</a></strong>:</p> <blockquote> <p><code>alpha_t = alpha * sqrt(1-beta_2_t) / (1-beta_1_t)</code></p> </blockquote> <h3>BACKGROUND</h3> <p>As you observed, the <code>_lr_t</code> tensor doesn't change throughout the training, which may lead to the false conclusion that the optimizer doesn't adapt (this can be easily tested by switching to the <em>vanilla</em> <code>GradientDescentOptimizer</code> with the same <code>alpha</code>). And, in fact, other values do change: a quick look at the optimizer's <code>__dict__</code> shows the following keys: <code>['_epsilon_t', '_lr', '_beta1_t', '_lr_t', '_beta1', '_beta1_power', '_beta2', '_updated_lr', '_name', '_use_locking', '_beta2_t', '_beta2_power', '_epsilon', '_slots']</code>.</p> <p>By inspecting them through training, I noticed that <strong>only <code>_beta1_power</code>, <code>_beta2_power</code> and the <code>_slots</code> get updated</strong>.</p> <p>Further inspecting <a href="https://github.com/tensorflow/tensorflow/blob/927f811b0303e51126531c135f8b093383de2d6d/tensorflow/python/training/adam.py#L211" rel="nofollow noreferrer">the optimizer's code</a>, in line 211, we see the following update:</p> <pre><code>update_beta1 = self._beta1_power.assign( self._beta1_power * self._beta1_t, use_locking=self._use_locking) </code></pre> <p>This basically means that <code>_beta1_power</code>, which is <a href="https://github.com/tensorflow/tensorflow/blob/927f811b0303e51126531c135f8b093383de2d6d/tensorflow/python/training/adam.py#L120" rel="nofollow noreferrer">initialized with <code>_beta1</code></a>, will be multiplied by <code>_beta_1_t</code> after every iteration, which <a href="https://github.com/tensorflow/tensorflow/blob/927f811b0303e51126531c135f8b093383de2d6d/tensorflow/python/training/adam.py#L133" rel="nofollow noreferrer">is also initialized with <code>beta_1_t</code></a>.</p> <p>But here comes the confusing part: <strong><code>_beta1_t</code> and <code>_beta2_t</code> never get updated</strong>, so effectively they hold the initial values (<code>_beta1</code> and <code>_beta2</code>) through the whole training, contradicting the notation of the paper in a similar fashion as <code>_lr</code> and <code>lr_t</code> do. 
<em>I guess this is for a reason, but I personally don't know why; in any case, these are protected/private attributes of the implementation (they start with an underscore) and don't belong to the public interface (they may even change across TF versions).</em></p> <p>With this background, we can see that <code>_beta1_power</code> and <code>_beta2_power</code> are the original beta values raised to the power of the current training step, i.e. the equivalent of the variables referred to as <code>beta_t</code> in the paper. Going back to the definition of <code>alpha_t</code> in section 2 of the paper, it is now straightforward to implement:</p> <h3>SOLUTION</h3> <pre><code>optimizer = tf.train.AdamOptimizer() # rest of the graph... # ... somewhere in your session # note that a0 comes from a scalar, whereas bb1 and bb2 come from tensors and thus have to be evaluated a0, bb1, bb2 = optimizer._lr, optimizer._beta1_power.eval(), optimizer._beta2_power.eval() at = a0 * (1 - bb2)**0.5 / (1 - bb1) print(at) </code></pre> <p>The variable <code>at</code> holds the <code>alpha_t</code> for the current training step.</p> <h3>DISCLAIMER</h3> <p>I couldn't find a cleaner way of getting this value by just using the optimizer's interface, but please let me know if one exists! I suspect there is none, which actually calls into question the usefulness of plotting <code>alpha_t</code>, since <strong>it does not depend on the data</strong>.</p> <p>Also, to complete this information, section 2 of the paper gives the formula for the weight updates as well, which is much more telling but also more involved to plot. For a very nice implementation of that, you may want to take a look at <a href="https://stackoverflow.com/a/44688307/4511978">this nice answer</a> from the post that you linked.</p> <p>Hope it helps! Cheers,<br> Andres</p>
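<p>Since <code>_beta1_power</code> and <code>_beta2_power</code> are just the betas raised to the current step count, the same <code>alpha_t</code> schedule can also be reproduced outside the session entirely. A minimal sketch, assuming the default AdamOptimizer hyper-parameters (<code>learning_rate=1e-3</code>, <code>beta1=0.9</code>, <code>beta2=0.999</code>) and a step counter you track yourself:</p> <pre class="lang-py prettyprint-override"><code>
# alpha_t from section 2 of the Adam paper, computed without touching
# the optimizer's internals (default hyper-parameters assumed).
lr, beta1, beta2 = 1e-3, 0.9, 0.999

for t in range(1, 6):
    alpha_t = lr * (1.0 - beta2 ** t) ** 0.5 / (1.0 - beta1 ** t)
    print(t, alpha_t)
</code></pre> <p>This also makes the data-independence explicit: once the hyper-parameters are fixed, the whole <code>alpha_t</code> schedule is determined.</p>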
2017-09-04 19:42:15.590000+00:00
2017-09-12 23:53:44.050000+00:00
2017-09-12 23:53:44.050000+00:00
null
38,882,593
<p>I would like to see how the learning rate changes during training (print it out or create a summary and visualize it in TensorBoard).</p> <p>Here is a code snippet from what I have so far:</p> <pre class="lang-py prettyprint-override"><code>optimizer = tf.train.AdamOptimizer(1e-3) grads_and_vars = optimizer.compute_gradients(loss) train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step) sess.run(tf.initialize_all_variables()) for i in range(0, 10000): sess.run(train_op) print sess.run(optimizer._lr_t) </code></pre> <p>If I run this code I constantly get the initial learning rate (1e-3), i.e. I see no change.</p> <p>What is the correct way to get the learning rate at every step?</p> <p>I would like to add that <a href="https://stackoverflow.com/questions/36990476/getting-the-current-learning-rate-from-a-tf-train-adamoptimizer">this question</a> is really similar to mine. However, I cannot post my findings in the comment section there since I do not have enough rep.</p>
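<p>For the TensorBoard half of the question, one possible sketch (using the TF 1.x summary API) is to feed whatever learning-rate tensor you end up with into a scalar summary; <code>my_lr_tensor</code> and <code>logdir</code> are placeholders here, not part of the snippet above:</p> <pre class="lang-py prettyprint-override"><code>
# Log a scalar tensor so it shows up under "learning_rate" in TensorBoard.
lr_summary = tf.summary.scalar('learning_rate', my_lr_tensor)
writer = tf.summary.FileWriter(logdir, sess.graph)

for i in range(0, 10000):
    _, s = sess.run([train_op, lr_summary])
    writer.add_summary(s, global_step=i)
writer.close()
</code></pre>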
2016-08-10 20:06:48.183000+00:00
2017-09-12 23:53:44.050000+00:00
2017-05-23 12:17:15.330000+00:00
tensorflow
['https://arxiv.org/pdf/1412.6980v8.pdf', 'https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/train/AdamOptimizer', 'https://github.com/tensorflow/tensorflow/blob/927f811b0303e51126531c135f8b093383de2d6d/tensorflow/python/training/adam.py#L211', 'https://github.com/tensorflow/tensorflow/blob/927f811b0303e51126531c135f8b093383de2d6d/tensorflow/python/training/adam.py#L120', 'https://github.com/tensorflow/tensorflow/blob/927f811b0303e51126531c135f8b093383de2d6d/tensorflow/python/training/adam.py#L133', 'https://stackoverflow.com/a/44688307/4511978']
6