22
My data set contains a number of numeric attributes and one categorical. Say, NumericAttr1, NumericAttr2, ..., NumericAttrN, CategoricalAttr, where CategoricalAttr takes one of three possible values: CategoricalAttrValue1, CategoricalAttrValue2 or CategoricalAttrValue3. I'm using the default k-means clustering algorithm implementation in Octave. It works with numeric data only. So my question: is it correct to split the categorical attribute CategoricalAttr into three numeric (binary) variables, like IsCategoricalAttrValue1, IsCategoricalAttrValue2, IsCategoricalAttrValue3?
The standard k-means algorithm isn't directly applicable to categorical data, for various reasons. The sample space for categorical data is discrete, and doesn't have a natural origin. A Euclidean distance function on such a space isn't really meaningful. As someone put it, "The fact a snake possesses neither wheels nor legs allows us to say nothing about the relative value of wheels and legs." (from here ) There's a variation of k-means known as k-modes, introduced in this paper by Zhexue Huang, which is suitable for categorical data. Note that the solutions you get are sensitive to initial conditions, as discussed here (PDF), for instance. Huang's paper (linked above) also has a section on "k-prototypes" which applies to data with a mix of categorical and numeric features. It uses a distance measure which mixes the Hamming distance for categorical features and the Euclidean distance for numeric features. A Google search for "k-means mix of categorical data" turns up quite a few more recent papers on various algorithms for k-means-like clustering with a mix of categorical and numeric data. (I haven't yet read them, so I can't comment on their merits.) Actually, what you suggest (converting categorical attributes to binary values, and then doing k-means as if these were numeric values) is another approach that has been tried before (predating k-modes). (See Ralambondrainy, H. 1995. A conceptual version of the k-means algorithm. Pattern Recognition Letters, 16:1147–1157.) But I believe the k-modes approach is preferred for the reasons I indicated above.
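For concreteness, here is a minimal sketch (in Python rather than Octave, purely for illustration) of the one-hot encoding the question describes, fed into plain k-means; the attribute names and toy values are made up, and as noted above the k-modes/k-prototypes route is usually preferable for categorical or mixed data.

# Sketch of the one-hot ("dummy variable") approach from the question, in Python for illustration.
import pandas as pd
from sklearn.cluster import KMeans

df = pd.DataFrame({
    "NumericAttr1": [1.0, 2.5, 0.3, 4.1, 3.3],
    "NumericAttr2": [10.0, 8.2, 9.9, 7.5, 6.1],
    "CategoricalAttr": ["Value1", "Value2", "Value3", "Value1", "Value2"],
})

# CategoricalAttr becomes IsCategoricalAttrValue1/2/3 indicator columns.
encoded = pd.get_dummies(df, columns=["CategoricalAttr"])

# Plain k-means on the now fully numeric matrix (the approach Ralambondrainy describes).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(encoded.astype(float))
print(labels)

# For mixed data, the k-prototypes algorithm from Huang's paper is implemented in the
# third-party "kmodes" package (an assumption about your environment, not part of Octave).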
{ "source": [ "https://datascience.stackexchange.com/questions/22", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/97/" ] }
41
R has many libraries which are aimed at Data Analysis (e.g. JAGS, BUGS, ARULES etc.), and is mentioned in popular textbooks such as: J. Kruschke, "Doing Bayesian Data Analysis"; B. Lantz, "Machine Learning with R". I've seen a guideline of 5TB for a dataset to be considered as Big Data. My question is: Is R suitable for the amount of data typically seen in Big Data problems? Are there strategies to be employed when using R with this size of dataset?
Actually this is coming around. In the book R in a Nutshell there is even a section on using R with Hadoop for big data processing. There are some workarounds that need to be done because R does all its work in memory, so you are basically limited to the amount of RAM you have available to you. A mature project for R and Hadoop is RHadoop. RHadoop has been divided into several sub-projects: rhdfs, rhbase, rmr2, plyrmr, and quickcheck (wiki).
{ "source": [ "https://datascience.stackexchange.com/questions/41", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/136/" ] }
61
Logic often states that by overfitting a model, its capacity to generalize is limited, though this might only mean that overfitting stops a model from improving after a certain complexity. Does overfitting cause models to become worse regardless of the complexity of data, and if so, why is this the case? Related: Followup to the question above, " When is a Model Underfitted? "
Overfitting is empirically bad. Suppose you have a data set which you split in two, test and training. An overfitted model is one that performs much worse on the test dataset than on the training dataset. It is often observed that models like that also in general perform worse on additional (new) test datasets than models which are not overfitted. One way to understand that intuitively is that a model may use some relevant parts of the data (signal) and some irrelevant parts (noise). An overfitted model uses more of the noise, which increases its performance in the case of known noise (training data) and decreases its performance in the case of novel noise (test data). The difference in performance between training and test data indicates how much noise the model picks up; and picking up noise directly translates into worse performance on test data (including future data). Summary: overfitting is bad by definition; this does not have much to do with either complexity or ability to generalize, but rather with mistaking noise for signal. P.S. On the "ability to generalize" part of the question, it is very possible to have a model which has inherently limited ability to generalize due to the structure of the model (for example linear SVM, ...) but is still prone to overfitting. In a sense overfitting is just one way that generalization may fail.
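As a concrete illustration (a hedged sketch, not part of the original answer): fitting the same noisy synthetic data with a modest and with a very flexible model typically shows the train/test gap described above. Assumes scikit-learn; the data and the degrees are arbitrary.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.RandomState(0)
X = rng.uniform(0, 1, size=(60, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(scale=0.3, size=60)  # signal + noise

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

for degree in (3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(degree, model.score(X_train, y_train), model.score(X_test, y_test))

# The very flexible (degree 15) model fits noise: its train score stays high while its
# test score drops, i.e. the train/test gap is much larger than for the modest model.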
{ "source": [ "https://datascience.stackexchange.com/questions/61", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/158/" ] }
128
Latent Dirichlet Allocation (LDA) and Hierarchical Dirichlet Process (HDP) are both topic modeling processes. The major difference is LDA requires the specification of the number of topics, and HDP doesn't. Why is that so? And what are the differences, pros, and cons of both topic modelling methods?
HDP is an extension of LDA, designed to address the case where the number of mixture components (the number of "topics" in document-modeling terms) is not known a priori. So that's the reason why there's a difference. Using LDA for document modeling, one treats each "topic" as a distribution of words in some known vocabulary. For each document a mixture of topics is drawn from a Dirichlet distribution, and then each word in the document is an independent draw from that mixture (that is, selecting a topic and then using it to generate a word). For HDP (applied to document modeling), one also uses a Dirichlet process to capture the uncertainty in the number of topics. So a common base distribution is selected which represents the countably-infinite set of possible topics for the corpus, and then the finite distribution of topics for each document is sampled from this base distribution. As far as pros and cons, HDP has the advantage that the maximum number of topics can be unbounded and learned from the data rather than specified in advance. I suppose though it is more complicated to implement, and unnecessary in the case where a bounded number of topics is acceptable.
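To make the practical difference tangible, here is a minimal sketch using gensim (an assumed toolkit, not mentioned in the answer): LDA takes an explicit num_topics, while HDP infers the number of topics from the corpus. The toy documents are made up.

from gensim import corpora, models

texts = [["cat", "dog", "pet"], ["stock", "market", "trade"], ["dog", "bark", "pet"]]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2)   # number of topics fixed in advance
hdp = models.HdpModel(corpus, id2word=dictionary)                 # number of topics inferred from the data

print(lda.print_topics())
print(hdp.print_topics())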
{ "source": [ "https://datascience.stackexchange.com/questions/128", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/122/" ] }
130
From wikipedia: dimensionality reduction or dimension reduction is the process of reducing the number of random variables under consideration, and can be divided into feature selection and feature extraction. What is the difference between feature selection and feature extraction? What is an example of dimensionality reduction in a Natural Language Processing task?
Simply put: feature selection: you select a subset of the original feature set; while feature extraction: you build a new set of features from the original feature set. Examples of feature extraction: extraction of contours in images, extraction of digrams from a text, extraction of phonemes from recording of spoken text, etc. Feature extraction involves a transformation of the features, which often is not reversible because some information is lost in the process of dimensionality reduction.
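A small scikit-learn sketch (an assumed toolkit) of both ideas on text data, which also doubles as an NLP example of dimensionality reduction: selection keeps a subset of the original TF-IDF terms, extraction builds new latent features from them. The documents, labels, k and n_components are placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.decomposition import TruncatedSVD

docs = ["the cat sat on the mat", "dogs and cats are pets",
        "stock markets fell today", "traders sold stocks"]
labels = [0, 0, 1, 1]

X = TfidfVectorizer().fit_transform(docs)

X_selected = SelectKBest(chi2, k=5).fit_transform(X, labels)    # feature selection: 5 original terms kept
X_extracted = TruncatedSVD(n_components=2).fit_transform(X)     # feature extraction: 2 new latent features
print(X_selected.shape, X_extracted.shape)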
{ "source": [ "https://datascience.stackexchange.com/questions/130", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/122/" ] }
155
One of the common problems in data science is gathering data from various sources in a somehow cleaned (semi-structured) format and combining metrics from various sources for making a higher level analysis. Looking at other people's effort, especially other questions on this site, it appears that many people in this field are doing somewhat repetitive work. For example, analyzing tweets, facebook posts, Wikipedia articles etc. is a part of a lot of big data problems. Some of these data sets are accessible using public APIs provided by the provider site, but usually some valuable information or metrics are missing from these APIs and everyone has to do the same analyses again and again. For example, although clustering users may depend on different use cases and selection of features, having a base clustering of Twitter/Facebook users would be useful in many Big Data applications, yet it is neither provided by the API nor available publicly in independent data sets. Is there any index or publicly available data set hosting site containing valuable data sets that can be reused in solving other big data problems? I mean something like GitHub (or a group of sites/public datasets or at least a comprehensive listing) for data science. If not, what are the reasons for not having such a platform for data science? The commercial value of data, the need to frequently update data sets, ...? Can we not have an open-source model for sharing data sets devised for data scientists?
There is, in fact, a very reasonable list of publicly-available datasets, supported by different enterprises/sources. Some of them are below: Public Datasets on Amazon Web Services; Frequent Itemset Mining Implementation Repository; UCI Machine Learning Repository; KDnuggets -- a big list of lots of public repositories. Now, two considerations on your question. The first one regards policies of database sharing. From personal experience, there are some databases that can't be made publicly available, either because they involve privacy constraints (as with some social network information) or because they concern government information (like health system databases). Another point concerns the usage/application of the dataset. Although some bases can be reprocessed to suit the needs of the application, it would be great to have some nice organization of the datasets by purpose. The taxonomy should involve social graph analysis, itemset mining, classification, and the many other research areas out there.
{ "source": [ "https://datascience.stackexchange.com/questions/155", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/227/" ] }
199
LDA has two hyperparameters; tuning them changes the induced topics. What do the alpha and beta hyperparameters contribute to LDA? How do the topics change if one or the other hyperparameter increases or decreases? Why are they hyperparameters and not just parameters?
The Dirichlet distribution is a multivariate distribution. Its density can be written as $\frac{1}{B(\alpha)} \prod\limits_{i} x_i^{\alpha_i - 1}$, where $\alpha$ is the vector of size $K$ of parameters and $\sum_i x_i = 1$. Now, LDA uses some constructs like: a document can have multiple topics (because of this multiplicity, we need the Dirichlet distribution), and there is a Dirichlet distribution which models this relation; words can also belong to multiple topics, when you consider them outside of a document, so here we need another Dirichlet to model this. The previous two are distributions which you do not really see from data; this is why they are called latent, or hidden. Now, in Bayesian inference you use the Bayes rule to infer the posterior probability. For simplicity, let's say you have data $x$ and you have a model for this data governed by some parameters $\theta$. In order to infer values for these parameters, in full Bayesian inference you will infer the posterior probability of these parameters using Bayes' rule: $$p(\theta|x) = \frac{p(x|\theta)p(\theta|\alpha)}{p(x|\alpha)} \iff \text{posterior probability} = \frac{\text{likelihood}\times \text{prior probability}}{\text{marginal likelihood}}$$ Note that here an $\alpha$ appears. This is your initial belief about this distribution, and it is the parameter of the prior distribution. Usually this is chosen in such a way that you have a conjugate prior (so the distribution of the posterior is the same as the distribution of the prior), often to encode some knowledge if you have it, or to have maximum entropy if you know nothing. The parameters of the prior are called hyperparameters. So, in LDA, both topic distributions, over documents and over words, also have corresponding priors, which are usually denoted alpha and beta, and because they are the parameters of the prior distributions they are called hyperparameters. Now about choosing priors. If you plot some Dirichlet distributions you will note that if the individual parameters $\alpha_k$ all have the same value, the pdf is symmetric in the simplex defined by the $x$ values, and the minimum or maximum of the pdf is at the center. If all the $\alpha_k$ have values lower than 1, the maximum is found at the corners; if all values $\alpha_k$ are the same and greater than 1, the maximum is found at the center. It is easy to see that if the values of $\alpha_k$ are not equal, the symmetry is broken and the maximum will be found near the bigger values. Additionally, note that values of the prior parameters near 1 produce smoother pdfs of the distribution. So if you have great confidence that something is clearly distributed in a way you know, with a high degree of confidence, then values far from 1 should be used; if you do not have that kind of knowledge, then values near 1 encode this lack of knowledge. It is easy to see why 1 plays such a role in the Dirichlet distribution from the formula of the distribution itself. Another way to understand this is to see that the prior encodes prior knowledge. At the same time you might think that the prior encodes some previously seen data. This data was not seen by the algorithm itself; it was seen by you, you learned something, and you can model the prior according to what you know (learned).
So in the prior parameters (hyperparameters) you also encode how big the data set you saw a priori was, because the sum of the $\alpha_k$ can be interpreted as the size of this more or less imaginary data set. So the bigger the prior data set, the bigger the confidence, the bigger the values of $\alpha_k$ you can choose, and the sharper the surface near the maximum value, which also means fewer doubts. Hope it helped.
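A quick numerical illustration of the sparsity effect described above (a sketch with numpy; the choice of 5 topics and of the alpha values is arbitrary):

import numpy as np

rng = np.random.RandomState(0)
for alpha in (0.1, 1.0, 10.0):
    samples = rng.dirichlet([alpha] * 5, size=1000)   # symmetric Dirichlet over 5 "topics"
    print(alpha, samples.max(axis=1).mean())          # average weight of the dominant topic

# With alpha well below 1 most of the mass sits on one topic (sparse, corner-heavy mixtures);
# with alpha well above 1 the mixtures concentrate near the center of the simplex.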
{ "source": [ "https://datascience.stackexchange.com/questions/199", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/122/" ] }
253
An aspiring data scientist here. I don't know anything about Hadoop, but as I have been reading about Data Science and Big Data, I see a lot of talk about Hadoop. Is it absolutely necessary to learn Hadoop to be a Data Scientist?
Different people use different tools for different things. Terms like Data Science are generic for a reason. A data scientist could spend an entire career without having to learn a particular tool like Hadoop. Hadoop is widely used, but it is not the only platform that is capable of managing and manipulating data, even large scale data. I would say that a data scientist should be familiar with concepts like MapReduce, distributed systems, distributed file systems, and the like, but I wouldn't judge someone for not knowing about such things. It's a big field. There is a sea of knowledge and most people are capable of learning and being an expert in a single drop. The key to being a scientist is having the desire to learn and the motivation to know that which you don't already know. As an example: I could hand the right person a hundred structured CSV files containing information about classroom performance in one particular class over a decade. A data scientist would be able to spend a year gleaning insights from the data without ever needing to spread computation across multiple machines. You could apply machine learning algorithms, analyze it using visualizations, combine it with external data about the region, ethnic makeup, changes to environment over time, political information, weather patterns, etc. All of that would be "data science" in my opinion. It might take something like Hadoop to test and apply anything you learned to data comprising an entire country of students rather than just a classroom, but that final step doesn't necessarily make someone a data scientist. And not taking that final step doesn't necessarily disqualify someone from being a data scientist.
{ "source": [ "https://datascience.stackexchange.com/questions/253", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/456/" ] }
262
What are the advantages of HDF compared to alternative formats? What are the main data science tasks where HDF is really suitable and useful?
Perhaps a good way to paraphrase the question is: what are the advantages compared to alternative formats? The main alternatives are, I think: a database, text files, or another packed/binary format. The database options to consider are probably a columnar store or NoSQL, or for small self-contained datasets SQLite. The main advantage of the database is the ability to work with data much larger than memory, to have random or indexed access, and to add/append/modify data quickly. The main disadvantage is that it is much slower than HDF, for problems in which the entire dataset needs to be read in and processed. Another disadvantage is that, with the exception of embedded-style databases like SQLite, a database is a system (requiring administration, setup, maintenance, etc.) rather than a simple self-contained data store. The text file format options are XML/JSON/CSV. They are cross-platform/language/toolkit, and are a good archival format due to the ability to be self-describing (or obvious :). If uncompressed, they are huge (10x-100x HDF), but if compressed, they can be fairly space-efficient (compressed XML is about the same as HDF). The main disadvantage here is again speed: parsing text is much, much slower than HDF. The other binary formats (npy/npz numpy files, blz blaze files, protocol buffers, Avro, ...) have very similar properties to HDF, except they are less widely supported (may be limited to just one platform: numpy) and may have specific other limitations. They typically do not offer a compelling advantage. HDF is a good complement to databases; it may make sense to run a query to produce a roughly memory-sized dataset and then cache it in HDF if the same data would be used more than once. If you have a dataset which is fixed, and usually processed as a whole, storing it as a collection of appropriately sized HDF files is not a bad option. If you have a dataset which is updated often, staging some of it as HDF files periodically might still be helpful. To summarize, HDF is a good format for data which is read (or written) typically as a whole; it is the lingua franca or common/preferred interchange format for many applications due to wide support and compatibility, decent as an archival format, and very fast. P.S. To give this some practical context: in my most recent experience comparing HDF to alternatives, a certain small (much less than memory-sized) dataset took 2 seconds to read as HDF (and most of this is probably overhead from Pandas), ~1 minute to read from JSON, and 1 hour to write to database. Certainly the database write could be sped up, but you'd better have a good DBA! This is how it works out of the box.
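A minimal sketch of the typical whole-dataset HDF workflow with pandas (assumes the PyTables package is installed for HDF5 support; the file names and sizes are arbitrary):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(100000, 4), columns=list("abcd"))

df.to_hdf("data.h5", key="df", mode="w")      # fast, self-contained binary store
df.to_csv("data.csv", index=False)            # text alternative: larger on disk and slower to parse

df_h5 = pd.read_hdf("data.h5", "df")          # whole-dataset read, the typical HDF access pattern
print(df_h5.shape)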
{ "source": [ "https://datascience.stackexchange.com/questions/262", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/97/" ] }
326
I'm just starting to develop a machine learning application for academic purposes. I'm currently using R and training myself in it. However, in a lot of places, I have seen people using Python . What are people using in academia and industry, and what is the recommendation?
Some real important differences to consider when you are choosing R or Python over one another: Machine Learning has 2 phases: the model building phase and the prediction phase. Typically, model building is performed as a batch process and predictions are done in realtime. The model building process is a compute intensive process while the prediction happens in a jiffy. Therefore, the performance of an algorithm in Python or R doesn't really affect the turn-around time of the user. Python 1, R 1. Production: The real difference between Python and R comes in being production ready. Python, as such, is a full fledged programming language and many organisations use it in their production systems. R is statistical programming software favoured by much of academia, and due to the rise of data science, the availability of libraries and being open source, the industry has started using R as well. Many of these organisations have their production systems in Java, C++, C#, Python etc. So, ideally they would like to have the prediction system in the same language to reduce latency and maintenance issues. Python 2, R 1. Libraries: Both languages have enormous and reliable libraries. R has over 5000 libraries catering to many domains while Python has some incredible packages like Pandas, NumPy, SciPy, Scikit Learn, Matplotlib. Python 3, R 2. Development: Both languages are interpreted languages. Many say that Python is easy to learn, it's almost like reading English (to put it on a lighter note), but R requires more initial studying effort. Also, both of them have good IDEs (Spyder etc. for Python and RStudio for R). Python 4, R 2. Speed: R initially had problems with large computations (say, like n x n matrix multiplications). But this issue has been addressed with the R distribution from Revolution Analytics. They have re-written computation intensive operations in C, which is blazingly fast. Python, being a high level language, is relatively slow. Python 4, R 3. Visualizations: In data science, we frequently tend to plot data to showcase patterns to users. Therefore, visualisations become an important criterion in choosing software, and R completely kills Python in this regard. Thanks to Hadley Wickham for an incredible ggplot2 package. R wins hands down. Python 4, R 4. Dealing with Big Data: One of the constraints of R is that it stores data in system memory (RAM). So, RAM capacity becomes a constraint when you are handling Big Data. Python does well, but I would say, as both R and Python have HDFS connectors, leveraging Hadoop infrastructure would give substantial performance improvement. So, Python 5, R 5. So, both languages are equally good. Therefore, depending upon your domain and the place you work, you have to smartly choose the right language. The technology world usually prefers using a single language. Business users (marketing analytics, retail analytics) usually go with statistical programming languages like R, since they frequently do quick prototyping and build visualisations (which is faster done in R than in Python).
{ "source": [ "https://datascience.stackexchange.com/questions/326", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/721/" ] }
361
Logic often states that by underfitting a model, its capacity to generalize is increased. That said, clearly at some point underfitting causes models to become worse regardless of the complexity of the data. How do you know when your model has struck the right balance and is not underfitting the data it seeks to model? Note: This is a followup to my question, "Why Is Overfitting Bad?"
A model underfits when it is too simple with regards to the data it is trying to model. One way to detect such a situation is to use the bias–variance approach, which can be represented like this: Your model is underfitted when you have a high bias. To know whether you have too high a bias or too high a variance, you view the phenomenon in terms of training and test errors: High bias: the learning curve shows high error on both the training and test sets, so the algorithm is suffering from high bias. High variance: the learning curve shows a large gap between training and test set errors, so the algorithm is suffering from high variance. If an algorithm is suffering from high variance: more data will probably help; otherwise reduce the model complexity. If an algorithm is suffering from high bias: increase the model complexity. I would advise watching Coursera's Machine Learning course, section "10: Advice for applying Machine Learning", from which I took the above graphs.
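A hedged scikit-learn sketch of the diagnostic described above (the dataset and the classifier are arbitrary placeholders):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)
sizes, train_scores, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y, cv=5,
    train_sizes=np.linspace(0.1, 1.0, 5))

plt.plot(sizes, 1 - train_scores.mean(axis=1), label="training error")
plt.plot(sizes, 1 - test_scores.mean(axis=1), label="validation error")
plt.xlabel("training set size")
plt.ylabel("error")
plt.legend()
plt.show()

# Both curves high and close together -> high bias; a large gap between them -> high variance.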
{ "source": [ "https://datascience.stackexchange.com/questions/361", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/158/" ] }
410
I'm currently working on implementing Stochastic Gradient Descent, SGD, for neural nets using back-propagation, and while I understand its purpose I have some questions about how to choose values for the learning rate. Is the learning rate related to the shape of the error gradient, as it dictates the rate of descent? If so, how do you use this information to inform your decision about a value? If not, what sort of values should I choose, and how should I choose them? It seems like you would want small values to avoid overshooting, but how do you choose one such that you don't get stuck in local minima or take too long to descend? Does it make sense to have a constant learning rate, or should I use some metric to alter its value as I get nearer a minimum in the gradient? In short: How do I choose the learning rate for SGD?
Is the learning rate related to the shape of the error gradient, as it dictates the rate of descent? In plain SGD, the answer is no. A global learning rate is used which is indifferent to the error gradient. However, the intuition you are getting at has inspired various modifications of the SGD update rule. If so, how do you use this information to inform your decision about a value? Adagrad is the most widely known of these and scales a global learning rate $\eta$ on each dimension based on the $l_2$ norm of the history of the error gradient $g_t$ on that dimension. Adadelta is another such training algorithm which uses both the error gradient history like Adagrad and the weight update history, and has the advantage of not having to set a learning rate at all. If it's not, what sort of values should I choose, and how should I choose them? Setting learning rates for plain SGD in neural nets is usually a process of starting with a sane value such as 0.01 and then doing cross-validation to find an optimal value. Typical values range over a few orders of magnitude, from 0.0001 up to 1. It seems like you would want small values to avoid overshooting, but how do you choose one such that you don't get stuck in local minima or take too long to descend? Does it make sense to have a constant learning rate, or should I use some metric to alter its value as I get nearer a minimum in the gradient? Usually, the value that's best is near the highest stable learning rate, and learning rate decay/annealing (either linear or exponential) is used over the course of training. The reason behind this is that early on there is a clear learning signal, so aggressive updates encourage exploration, while later on the smaller learning rates allow for more delicate exploitation of the local error surface.
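A toy numpy sketch of the advice above (start from a sane rate and anneal it) on a simple least-squares problem; the data, the starting rate and the decay constant are made up:

import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(1000, 5)
true_w = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = X @ true_w + rng.randn(1000) * 0.1

w = np.zeros(5)
eta0, decay = 0.01, 0.999                   # sane starting rate, exponentially annealed each step
for step in range(5000):
    i = rng.randint(len(X))
    grad = 2 * (X[i] @ w - y[i]) * X[i]     # gradient of the squared error on one sample
    eta = eta0 * decay ** step
    w -= eta * grad

print(w)                                    # should move close to true_w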
{ "source": [ "https://datascience.stackexchange.com/questions/410", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/890/" ] }
422
As an extension to our great list of publicly available datasets , I'd like to know if there is any list of publicly available social network datasets/crawling APIs. It would be very nice if alongside with a link to the dataset/API, characteristics of the data available were added. Such information should be, and is not limited to: the name of the social network; what kind of user information it provides (posts, profile, friendship network, ...); whether it allows for crawling its contents via an API (and rate: 10/min, 1k/month, ...); whether it simply provides a snapshot of the whole dataset. Any suggestions and further characteristics to be added are very welcome.
A couple of words about social network APIs. About a year ago I wrote a review of popular social networks' APIs for researchers. Unfortunately, it is in Russian. Here is a summary: Twitter ( https://dev.twitter.com/docs/api/1.1 ): almost all data about tweets/texts and users is available; lack of sociodemographic data; great streaming API, useful for real time text processing; a lot of wrappers for programming languages; getting network structure (connections) is possible, but time-expensive (1 request per 1 minute). Facebook ( https://developers.facebook.com/docs/reference/api/ ): rate limits of about 1 request per second; well documented, sandbox present; FQL (SQL-like) and «regular REST» Graph API; friendship data and sociodemographic features present; a lot of data is beyond the event horizon: only friends' and friends-of-friends' data is more or less complete, and almost nothing can be investigated about a random user; some strange API bugs, and it looks like nobody cares about them (e.g., some features are available through FQL, but not through their Graph API synonym). Instagram ( http://instagram.com/developer/ ): rate limits of 5000 requests per hour; real-time API (like the Streaming API for Twitter, but with photos), though connecting to it is a little bit tricky since callbacks are used; lack of sociodemographic data; photos and filters data available; unexpected imperfections (e.g., it's possible to collect only 150 comments per post/photo). Foursquare ( https://developer.foursquare.com/overview/ ): rate limits of 5000 requests per hour; the kingdom of geosocial data :) but quite closed to researchers because of privacy issues; to collect check-in data one needs to build a composite parser working with the 4sq, bit.ly, and Twitter APIs at once; again, lack of sociodemographic data. Google+ ( https://developers.google.com/+/api/latest/ ): about 5 requests per second (try to verify); main methods: activities and people; as on Facebook, a lot of personal data for a random user is hidden; lack of user connections data. And out-of-competition: I reviewed social networks for Russian readers, and the #1 network here is vk.com. It's translated into many languages, but popular only in Russia and other CIS countries. API docs link: http://vk.com/dev/ . From my point of view, it's the best choice for homebrew social media research, at least in Russia. That's why: rate limits of 3 requests per second; public text and media data available; sociodemographic data available: for a random user the availability level is about 60-70%; connections between users are also available: almost all friendship data for a random user is available; some special methods: e.g., there is a method to get online/offline status for an exact user in realtime, and one could build a schedule for their audience.
{ "source": [ "https://datascience.stackexchange.com/questions/422", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/84/" ] }
441
With Hadoop 2.0 and YARN, Hadoop is supposedly no longer tied only to map-reduce solutions. With that advancement, what are the use cases for Apache Spark vs Hadoop, considering both sit atop HDFS? I've read through the introduction documentation for Spark, but I'm curious if anyone has encountered a problem that was more efficient and easier to solve with Spark compared to Hadoop.
Hadoop means HDFS, YARN, MapReduce, and a lot of other things. Do you mean Spark vs MapReduce ? Because Spark runs on/with Hadoop, which is rather the point. The primary reason to use Spark is for speed, and this comes from the fact that its execution can keep data in memory between stages rather than always persist back to HDFS after a Map or Reduce. This advantage is very pronounced for iterative computations, which have tens of stages each of which is touching the same data. This is where things might be "100x" faster. For simple, one-pass ETL-like jobs for which MapReduce was designed, it's not in general faster. Another reason to use Spark is its nicer high-level language compared to MapReduce. It provides a functional programming-like view that mimics Scala, which is far nicer than writing MapReduce code. (Although you have to either use Scala, or adopt the slightly-less-developed Java or Python APIs for Spark). Crunch and Cascading already provide a similar abstraction on top of MapReduce, but this is still an area where Spark is nice. Finally Spark has as-yet-young but promising subprojects for ML, graph analysis, and streaming, which expose a similar, coherent API. With MapReduce, you would have to turn to several different other projects for this (Mahout, Giraph, Storm). It's nice to have it in one package, albeit not yet 'baked'. Why would you not use Spark? paraphrasing myself: Spark is primarily Scala, with ported Java APIs; MapReduce might be friendlier and more native for Java-based developers There is more MapReduce expertise out there now than Spark For the data-parallel, one-pass, ETL-like jobs MapReduce was designed for, MapReduce is lighter-weight compared to the Spark equivalent Spark is fairly mature, and so is YARN now, but Spark-on-YARN is still pretty new. The two may not be optimally integrated yet. For example until recently I don't think Spark could ask YARN for allocations based on number of cores? That is: MapReduce might be easier to understand, manage and tune
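A small PySpark sketch of the in-memory point above: cache() keeps the parsed data in memory, so an iterative job does not re-read HDFS on every pass. The file path and the computation inside the loop are placeholders, and a running Spark/HDFS setup is assumed.

from pyspark import SparkContext

sc = SparkContext(appName="iterative-example")

points = sc.textFile("hdfs:///data/points.csv") \
           .map(lambda line: [float(v) for v in line.split(",")]) \
           .cache()                      # keep the parsed data in memory between iterations

total = None
for _ in range(10):                      # each pass reuses the cached RDD instead of re-reading HDFS
    total = points.map(lambda p: sum(p)).reduce(lambda a, b: a + b)

print(total)
sc.stop()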
{ "source": [ "https://datascience.stackexchange.com/questions/441", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/426/" ] }
473
The usual definition of regression (as far as I am aware) is predicting a continuous output variable from a given set of input variables . Logistic regression is a binary classification algorithm, so it produces a categorical output. Is it really a regression algorithm? If so, why?
Logistic regression is regression, first and foremost. It becomes a classifier by adding a decision rule. I will give an example that goes backwards. That is, instead of taking data and fitting a model, I'm going to start with the model in order to show how this is truly a regression problem. In logistic regression, we are modeling the log odds, or logit, that an event occurs, which is a continuous quantity. If the probability that event $A$ occurs is $P(A)$, the odds are: $$\frac{P(A)}{1 - P(A)}$$ The log odds, then, are: $$\log \left( \frac{P(A)}{1 - P(A)}\right)$$ As in linear regression, we model this with a linear combination of coefficients and predictors: $$\operatorname{logit} = b_0 + b_1x_1 + b_2x_2 + \cdots$$ Imagine we are given a model of whether a person has gray hair. Our model uses age as the only predictor. Here, our event A = a person has gray hair: log odds of gray hair = -10 + 0.25 * age ...Regression! Here is some Python code and a plot:

%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

x = np.linspace(0, 100, 100)

def log_odds(x):
    return -10 + .25 * x

plt.plot(x, log_odds(x))
plt.xlabel("age")
plt.ylabel("log odds of gray hair")

Now, let's make it a classifier. First, we need to transform the log odds to get out our probability $P(A)$. We can use the sigmoid function: $$P(A) = \frac{1}{1 + \exp(-\text{log odds})}$$ Here's the code:

plt.plot(x, 1 / (1 + np.exp(-log_odds(x))))
plt.xlabel("age")
plt.ylabel("probability of gray hair")

The last thing we need to make this a classifier is to add a decision rule. One very common rule is to classify a success whenever $P(A) > 0.5$. We will adopt that rule, which implies that our classifier will predict gray hair whenever a person is older than 40 and will predict non-gray hair whenever a person is under 40. Logistic regression works great as a classifier in more realistic examples too, but before it can be a classifier, it must be a regression technique!
{ "source": [ "https://datascience.stackexchange.com/questions/473", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/922/" ] }
488
I thought that generalized linear model (GLM) would be considered a statistical model, but a friend told me that some papers classify it as a machine learning technique. Which one is true (or more precise)? Any explanation would be appreciated.
A GLM is absolutely a statistical model, but statistical models and machine learning techniques are not mutually exclusive. In general, statistics is more concerned with inferring parameters, whereas in machine learning, prediction is the ultimate goal.
{ "source": [ "https://datascience.stackexchange.com/questions/488", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/1021/" ] }
678
When I say "document", I have in mind web pages like Wikipedia articles and news stories. I prefer answers giving either vanilla lexical distance metrics or state-of-the-art semantic distance metrics, with stronger preference for the latter.
There are a number of different ways of going about this depending on exactly how much semantic information you want to retain and how easy your documents are to tokenize (HTML documents would probably be pretty difficult to tokenize, but you could conceivably do something with tags and context). Some of them have been mentioned by ffriend, and the paragraph vectors by user1133029 is a really solid one, but I just figured I would go into some more depth about the pluses and minuses of different approaches. Cosine Distance - Tried and true, cosine distance is probably the most common distance metric used generically across multiple domains. With that said, there's very little information in cosine distance that can actually be mapped back to anything semantic, which seems to be non-ideal for this situation. Levenshtein Distance - Also known as edit distance, this is usually just used on the individual token level (words, bigrams, etc...). In general I wouldn't recommend this metric as it not only discards any semantic information, but also tends to treat very different word alterations very similarly, but it is an extremely common metric for this kind of thing. LSA - Is part of a large arsenal of techniques for evaluating document similarity called topic modeling. LSA has gone out of fashion pretty recently, and in my experience, it's not quite the strongest topic modeling approach, but it is relatively straightforward to implement and has a few open source implementations. LDA - Is also a technique used for topic modeling, but it's different from LSA in that it actually learns internal representations that tend to be more smooth and intuitive. In general, the results you get from LDA are better for modeling document similarity than LSA, but not quite as good for learning how to discriminate strongly between topics. Pachinko Allocation - Is a really neat extension on top of LDA. In general, this is just a significantly improved version of LDA, with the only downside being that it takes a bit longer to train and open-source implementations are a little harder to come by. word2vec - Google has been working on a series of techniques for intelligently reducing words and documents to more reasonable vectors than the sparse vectors yielded by techniques such as Count Vectorizers and TF-IDF. Word2vec is great because it has a number of open source implementations. Once you have the vector, any other similarity metric (like cosine distance) can be used on top of it with significantly more efficacy. doc2vec - Also known as paragraph vectors, this is the latest and greatest in a series of papers by Google, looking into dense vector representations of documents. The gensim library in Python has an implementation of word2vec that is straightforward enough that it can pretty reasonably be leveraged to build doc2vec, but make sure to keep the license in mind if you want to go down this route. Hope that helps, let me know if you've got any questions.
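For the vanilla lexical baseline mentioned first, here is a minimal scikit-learn sketch (an assumed toolkit; the toy documents are made up). The gensim library mentioned above would be the natural place to go for the denser word2vec/doc2vec options.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The economy is showing signs of recovery.",
    "Growth in the economy beat expectations.",
    "The striker scored twice in the final match.",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
print(cosine_similarity(tfidf))   # 3x3 matrix; the two economy documents share terms, the sports story does not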
{ "source": [ "https://datascience.stackexchange.com/questions/678", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/1097/" ] }
679
I made a similar question asking about distance between "documents" (Wikipedia articles, news stories, etc.). I made this a separate question because search queries are considerably smaller than documents and are considerably noisier. I hence don't know (and doubt) if the same distance metrics would be used here. Either vanilla lexical distance metrics or state-of-the-art semantic distance metrics are preferred, with stronger preference for the latter.
There are a number of different ways of going about this depending on exactly how much semantic information you want to retain and how easy your documents are to tokenize (HTML documents would probably be pretty difficult to tokenize, but you could conceivably do something with tags and context). Some of them have been mentioned by ffriend, and the paragraph vectors by user1133029 is a really solid one, but I just figured I would go into some more depth about the pluses and minuses of different approaches. Cosine Distance - Tried and true, cosine distance is probably the most common distance metric used generically across multiple domains. With that said, there's very little information in cosine distance that can actually be mapped back to anything semantic, which seems to be non-ideal for this situation. Levenshtein Distance - Also known as edit distance, this is usually just used on the individual token level (words, bigrams, etc...). In general I wouldn't recommend this metric as it not only discards any semantic information, but also tends to treat very different word alterations very similarly, but it is an extremely common metric for this kind of thing. LSA - Is part of a large arsenal of techniques for evaluating document similarity called topic modeling. LSA has gone out of fashion pretty recently, and in my experience, it's not quite the strongest topic modeling approach, but it is relatively straightforward to implement and has a few open source implementations. LDA - Is also a technique used for topic modeling, but it's different from LSA in that it actually learns internal representations that tend to be more smooth and intuitive. In general, the results you get from LDA are better for modeling document similarity than LSA, but not quite as good for learning how to discriminate strongly between topics. Pachinko Allocation - Is a really neat extension on top of LDA. In general, this is just a significantly improved version of LDA, with the only downside being that it takes a bit longer to train and open-source implementations are a little harder to come by. word2vec - Google has been working on a series of techniques for intelligently reducing words and documents to more reasonable vectors than the sparse vectors yielded by techniques such as Count Vectorizers and TF-IDF. Word2vec is great because it has a number of open source implementations. Once you have the vector, any other similarity metric (like cosine distance) can be used on top of it with significantly more efficacy. doc2vec - Also known as paragraph vectors, this is the latest and greatest in a series of papers by Google, looking into dense vector representations of documents. The gensim library in Python has an implementation of word2vec that is straightforward enough that it can pretty reasonably be leveraged to build doc2vec, but make sure to keep the license in mind if you want to go down this route. Hope that helps, let me know if you've got any questions.
{ "source": [ "https://datascience.stackexchange.com/questions/679", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/1097/" ] }
694
I'm using Neural Networks to solve different Machine learning problems. I'm using Python and pybrain but this library is almost discontinued. Are there other good alternatives in Python?
UPDATE: the landscape has changed quite a bit since I answered this question in July '14, and some new players have entered the space. In particular, I would recommend checking out: TensorFlow, Blocks, Lasagne, Keras, Deepy, Nolearn, NeuPy. They each have their strengths and weaknesses, so give them all a go and see which best suits your use case. Although I would have recommended using PyLearn2 a year ago, the community is no longer active so I would recommend looking elsewhere. My original answer is included below, but it is largely irrelevant at this point. PyLearn2 is generally considered the library of choice for neural networks and deep learning in Python. It's designed for easy scientific experimentation rather than ease of use, so the learning curve is rather steep, but if you take your time and follow the tutorials I think you'll be happy with the functionality it provides. Everything from standard Multilayer Perceptrons to Restricted Boltzmann Machines to Convolutional Nets to Autoencoders is provided. There's great GPU support and everything is built on top of Theano, so performance is typically quite good. The source for PyLearn2 is available on github. Be aware that PyLearn2 has the opposite problem of PyBrain at the moment -- rather than being abandoned, PyLearn2 is under active development and is subject to frequent changes.
{ "source": [ "https://datascience.stackexchange.com/questions/694", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/989/" ] }
711
This question is in response to a comment I saw on another question. The comment was regarding the Machine Learning course syllabus on Coursera, and was along the lines of "SVMs are not used so much nowadays". I have only just finished the relevant lectures myself, and my understanding of SVMs is that they are a robust and efficient learning algorithm for classification, and that when using a kernel, they have a "niche" covering a number of features of perhaps 10 to 1000 and a number of training samples of perhaps 100 to 10,000. The limit on training samples is because the core algorithm revolves around optimising results generated from a square matrix with dimensions based on the number of training samples, not the number of original features. So does the comment I saw refer to some real change since the course was made, and if so, what is that change: a new algorithm that covers SVM's "sweet spot" just as well, or better CPUs meaning SVM's computational advantages are not worth as much? Or is it perhaps the opinion or personal experience of the commenter? I tried a search for e.g. "are support vector machines out of fashion" and found nothing to imply they were being dropped in favour of anything else. And Wikipedia has this: http://en.wikipedia.org/wiki/Support_vector_machine#Issues . . . the main sticking point appears to be difficulty of interpreting the model. Which makes SVM fine for a black-box predicting engine, but not so good for generating insights. I don't see that as a major issue, just another minor thing to take into account when picking the right tool for the job (along with the nature of the training data and learning task etc.).
SVM is a powerful classifier. It has some nice advantages (which I guess were responsible for its popularity). These are: Efficiency: only the support vectors play a role in determining the classification boundary. All other points from the training set needn't be stored in memory. The so-called power of kernels: with appropriate kernels you can transform the feature space into a higher dimension so that it becomes linearly separable. The notion of kernels works with arbitrary objects on which you can define some notion of similarity with the help of inner products... and hence SVMs can classify arbitrary objects such as trees, graphs etc. There are some significant disadvantages as well. Parameter sensitivity: the performance is highly sensitive to the choice of the regularization parameter C, which allows some variance in the model. Extra parameter for the Gaussian kernel: the radius of the Gaussian kernel can have a significant impact on classifier accuracy. Typically a grid search has to be conducted to find optimal parameters. LibSVM has support for grid search. SVMs generally belong to the class of "Sparse Kernel Machines". The sparse vectors in the case of SVM are the support vectors, which are chosen from the maximum margin criterion. Other sparse vector machines such as the Relevance Vector Machine (RVM) perform better than SVM. The following figure shows a comparative performance of the two. In the figure, the x-axis shows one dimensional data from two classes y={0,1}. The mixture model is defined as P(x|y=0)=Unif(0,1) and P(x|y=1)=Unif(.5,1.5) (Unif denotes uniform distribution). 1000 points were sampled from this mixture and an SVM and an RVM were used to estimate the posterior. The problem of SVM is that the predicted values are far off from the true log odds. A very effective classifier, which is very popular nowadays, is the Random Forest. The main advantages are: only one parameter to tune (i.e. the number of trees in the forest); not overly sensitive to that parameter; can easily be extended to multiple classes; based on probabilistic principles (maximizing mutual information gain with the help of decision trees).
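The grid search over C and the RBF kernel width mentioned above, as a hedged scikit-learn sketch (the dataset and the grid values are placeholders):

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_, search.best_score_)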
{ "source": [ "https://datascience.stackexchange.com/questions/711", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/836/" ] }
731
When I started with artificial neural networks (NN) I thought I'd have to fight overfitting as the main problem. But in practice I can't even get my NN to pass the 20% error rate barrier. I can't even beat my score on random forest! I'm seeking some very general or not so general advice on what one should do to make a NN start capturing trends in data. For implementing the NN I use the Theano Stacked Auto Encoder with the code from the tutorial, which works great (less than 5% error rate) for classifying the MNIST dataset. It is a multilayer perceptron, with a softmax layer on top and each hidden layer being pre-trained as an autoencoder (fully described in the tutorial, chapter 8). There are ~50 input features and ~10 output classes. The NN has sigmoid neurons and all data are normalized to [0,1]. I tried lots of different configurations: number of hidden layers and neurons in them (100->100->100, 60->60->60, 60->30->15, etc.), different learning and pre-train rates, etc. And the best thing I can get is a 20% error rate on the validation set and a 40% error rate on the test set. On the other hand, when I try to use Random Forest (from scikit-learn) I easily get a 12% error rate on the validation set and 25%(!) on the test set. How can it be that my deep NN with pre-training behaves so badly? What should I try?
The problem with deep networks is that they have lots of hyperparameters to tune, and only a small region of that space yields good solutions. Thus, finding good ones is more of an art than an engineering task. I would start with a working example from the tutorial and play around with its parameters to see how the results change; this gives a good intuition (though not a formal explanation) about the dependencies between parameters and results (both final and intermediate). Also I found the following papers very useful: Visually Debugging Restricted Boltzmann Machine Training with a 3D Example; A Practical Guide to Training Restricted Boltzmann Machines. They both describe RBMs, but contain some insights on deep networks in general. For example, one of the key points is that networks need to be debugged layer-wise: if the previous layer doesn't provide a good representation of features, further layers have almost no chance to fix it.
{ "source": [ "https://datascience.stackexchange.com/questions/731", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/2471/" ] }
739
I am an MSc student at the University of Edinburgh, specialized in machine learning and natural language processing. I had some practical courses focused on data mining, and others dealing with machine learning, Bayesian statistics and graphical models. My background is a BSc in Computer Science. I did some software engineering and I learnt the basic concepts, such as design patterns, but I have never been involved in a large software development project. However, I had a data mining project in my MSc. My question is: if I want to go for a career as a Data Scientist, should I apply for a graduate data scientist position first, or should I get a position as a graduate software engineer first, maybe something related to data science, such as big data infrastructure or machine learning software development? My concern is that I might need good software engineering skills for data science, and I am not sure if these can be obtained by working as a graduate data scientist directly. Moreover, at the moment I like data mining, but what if I want to change my career to software engineering in the future? It might be difficult if I have specialised so much in data science. I have not been employed yet, so my knowledge is still limited. Any clarification or advice is welcome, as I am about to finish my MSc and I want to start applying for graduate positions in early October.
1) I think that there's no need to question whether your background is adequate for a career in data science. A CS degree IMHO is more than enough for a data scientist from a software engineering point of view. Having said that, theoretical knowledge is not very helpful without matching practical experience, so I would definitely try to enrich my experience through participating in additional school projects, internships or open source projects (maybe ones focused on data science / machine learning / artificial intelligence). 2) I believe your concern about focusing on data science too early is unfounded, as long as you will be practicing software engineering either as a part of your data science job, or additionally in your spare time. 3) I find the following definition of a data scientist rather accurate and hope it will be helpful in your future career success: a data scientist is someone who is better at statistics than any software engineer and better at software engineering than any statistician. P.S. Today's enormous number of various resources on data science topics is mind-blowing, but this open source curriculum for learning data science might fill some gaps between your BSc/MSc respective curricula and the reality of a data science career (or, at least, provide some direction for further research and maybe answer some of your concerns): http://datasciencemasters.org , or on GitHub: https://github.com/datasciencemasters/go .
{ "source": [ "https://datascience.stackexchange.com/questions/739", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/2489/" ] }
744
It looks like the cosine similarity of two features is just their dot product scaled by the product of their magnitudes. When does cosine similarity make a better distance metric than the dot product? I.e. do the dot product and cosine similarity have different strengths or weaknesses in different situations?
Think geometrically. Cosine similarity only cares about angle difference, while dot product cares about angle and magnitude. If you normalize your data to have the same magnitude, the two are indistinguishable. Sometimes it is desirable to ignore the magnitude, hence cosine similarity is nice, but if magnitude plays a role, dot product would be better as a similarity measure. Note that neither of them is a "distance metric".
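A tiny numpy illustration of the point about magnitude and normalization (the vectors are arbitrary):

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = 10 * a                                   # same direction, ten times the magnitude

dot = a @ b
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(dot, cos)                              # the dot product grows with magnitude, cosine stays 1.0

a_unit, b_unit = a / np.linalg.norm(a), b / np.linalg.norm(b)
print(a_unit @ b_unit)                       # after normalization the two measures coincide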
{ "source": [ "https://datascience.stackexchange.com/questions/744", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/2507/" ] }
758
I am working on a data science project using Python. The project has several stages. Each stage comprises taking a data set, using Python scripts, auxiliary data, configuration and parameters, and creating another data set. I store the code in git, so that part is covered. I would like to hear about: Tools for data version control. Tools enabling the reproduction of stages and experiments. Protocol and suggested directory structure for such a project. Automated build/run tools.
The topic of reproducible research (RR) is very popular today and, consequently, is huge, but I hope that my answer will be comprehensive enough as an answer and will provide enough information for further research, should you decide to do so. While Python-specific tools for RR certainly exist out there, I think it makes more sense to focus on more universal tools (you never know for sure what programming languages and computing environments you will be working with in the future). Having said that, let's take a look at what tools are available per your list. 1) Tools for data version control. Unless you plan to work with (very) big data, I guess it would make sense to use the same git which you use for source code version control. The infrastructure is already there. Even if your files are binary and big, this advice might be helpful: https://stackoverflow.com/questions/540535/managing-large-binary-files-with-git . 2) Tools for managing RR workflows and experiments. Here's a list of the most popular tools in this category, to the best of my knowledge (in descending order of popularity): Taverna Workflow Management System ( http://www.taverna.org.uk ) - a very solid, if a little too complex, set of tools. The major tool is Java-based desktop software. However, it is compatible with the online workflow repository portal myExperiment ( http://www.myexperiment.org ), where users can store and share their RR workflows. A web-based RR portal, fully compatible with Taverna, is called Taverna Online, but it is being developed and maintained by a totally different organization in Russia (referred to there as OnlineHPC: http://onlinehpc.com ). The Kepler Project ( https://kepler-project.org ). VisTrails ( http://vistrails.org ). Madagascar ( http://www.reproducibility.org ). EXAMPLE. Here's an interesting article on scientific workflows with an example of real workflow design and data analysis, based on using the Kepler and myExperiment projects: http://f1000research.com/articles/3-110/v1 . There are many RR tools that implement the literate programming paradigm, exemplified by the LaTeX software family. Tools that help in report generation and presentation are also a large category, where Sweave and knitr are probably the most well-known ones. Sweave is a tool focused on R, but it can be integrated with Python-based projects, albeit with some additional effort ( https://stackoverflow.com/questions/2161152/sweave-for-python ). I think that knitr might be a better option, as it's modern, has extensive support by popular tools (such as RStudio) and is language-neutral ( http://yihui.name/knitr/demo/engines ). 3) Protocol and suggested directory structure. If I understood correctly what you implied by using the term protocol (workflow), generally I think that a standard RR data analysis workflow consists of the following sequential phases: data collection => data preparation (cleaning, transformation, merging, sampling) => data analysis => presentation of results (generating reports and/or presentations). Nevertheless, every workflow is project-specific and, thus, some specific tasks might require adding additional steps. For a sample directory structure, you may take a look at the documentation for the R package ProjectTemplate ( http://projecttemplate.net ), which is an attempt to automate data analysis workflows and projects. 4) Automated build/run tools. Since my answer is focused on universal (language-neutral) RR tools, the most popular tool is make.
Read the following article for some reasons to use make as the preferred RR workflow automation tool: http://bost.ocks.org/mike/make . Certainly, there are other similar tools, which either improve some aspects of make , or add some additional features. For example: ant (officially, Apache Ant: http://ant.apache.org ), Maven ("next generation ant ": http://maven.apache.org ), rake ( https://github.com/ruby/rake ), Makepp ( http://makepp.sourceforge.net ). For a comprehensive list of such tools, see Wikipedia: http://en.wikipedia.org/wiki/List_of_build_automation_software .
{ "source": [ "https://datascience.stackexchange.com/questions/758", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/895/" ] }
761
What is the right approach and clustering algorithm for geolocation clustering? I'm using the following code to cluster geolocation coordinates: import numpy as np import matplotlib.pyplot as plt from scipy.cluster.vq import kmeans2, whiten coordinates= np.array([ [lat, long], [lat, long], ... [lat, long] ]) x, y = kmeans2(whiten(coordinates), 3, iter = 20) plt.scatter(coordinates[:,0], coordinates[:,1], c=y); plt.show() Is it right to use K-means for geolocation clustering, as it uses Euclidean distance, and not Haversine formula as a distance function?
K-means is not the most appropriate algorithm here. The reason is that k-means is designed to minimize variance . This is, of course, appealing from a statistical and signal processing point of view, but your data is not "linear". Since your data is in latitude, longitude format, you should use an algorithm that can handle arbitrary distance functions, in particular geodetic distance functions. Hierarchical clustering, PAM, CLARA, and DBSCAN are popular examples of this; OPTICS clustering is also worth considering. The problems of k-means are easy to see when you consider points close to the +-180 degrees wrap-around. Even if you hacked k-means to use Haversine distance, in the update step when it recomputes the mean the result will be badly screwed. In the worst case, k-means will never converge!
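As a rough illustration (not part of the original answer), here is a minimal scikit-learn sketch using DBSCAN with the haversine metric; the coordinates and the neighbourhood radius are made-up assumptions:
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical latitude/longitude pairs in degrees
coords_deg = np.array([
    [52.52, 13.40],   # Berlin
    [52.50, 13.42],
    [48.85, 2.35],    # Paris
    [48.86, 2.33],
])

# The haversine metric in scikit-learn expects radians and returns
# distances on the unit sphere, so scale eps by the Earth's radius.
earth_radius_km = 6371.0
eps_km = 50.0  # assumed neighbourhood radius

db = DBSCAN(
    eps=eps_km / earth_radius_km,
    min_samples=2,
    metric="haversine",
    algorithm="ball_tree",
)
labels = db.fit_predict(np.radians(coords_deg))
print(labels)  # points in the same city should share a cluster label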
{ "source": [ "https://datascience.stackexchange.com/questions/761", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/2533/" ] }
793
How can NoSQL databases like MongoDB be used for data analysis? What are the features in them that can make data analysis faster and powerful?
To be perfectly honest, most NoSQL databases are not very well suited to applications in big data. For the vast majority of all big data applications, the performance of MongoDB compared to a relational database like MySQL is poor enough to warrant staying away from something like MongoDB entirely. With that said, there are a couple of really useful properties of NoSQL databases that certainly work in your favor when you're working with large data sets, though the chance of those benefits outweighing the generally poor performance of NoSQL compared to SQL for read-intensive operations (most similar to typical big data use cases) is low. No Schema - If you're working with a lot of unstructured data, it might be hard to actually decide on and rigidly apply a schema. NoSQL databases in general are very supportive of this, and will allow you to insert schema-less documents on the fly, which is certainly not something an SQL database will support. JSON - If you happen to be working with JSON-style documents instead of with CSV files, then you'll see a lot of advantage in using something like MongoDB for a database-layer. Generally the workflow savings don't outweigh the increased query-times though. Ease of Use - I'm not saying that SQL databases are always hard to use, or that Cassandra is the easiest thing in the world to set up, but in general NoSQL databases are easier to set up and use than SQL databases. MongoDB is a particularly strong example of this, known for being one of the easiest database layers to use (outside of SQLite ). SQL also deals with a lot of normalization and there's a large legacy of SQL best practices that just generally bogs down the development process. Personally I might suggest you also check out graph databases such as Neo4j that show really good performance for certain types of queries if you're looking into picking out a backend for your data science applications.
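To illustrate the schema-less/JSON point, here is a minimal PyMongo sketch; the database name, collection name and documents are all hypothetical, and a MongoDB server is assumed to be running locally:
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
db = client["analytics_demo"]          # hypothetical database
events = db["events"]                  # hypothetical collection

# Documents with different shapes can live in the same collection:
events.insert_one({"user": "alice", "action": "click", "page": "/home"})
events.insert_one({"user": "bob", "action": "purchase",
                   "items": [{"sku": "A1", "qty": 2}], "total": 19.99})

# A simple aggregation: count events per action type.
pipeline = [{"$group": {"_id": "$action", "n": {"$sum": 1}}}]
for row in events.aggregate(pipeline):
    print(row)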
{ "source": [ "https://datascience.stackexchange.com/questions/793", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/2643/" ] }
806
I was starting to look into the area under the curve (AUC) and am a little confused about its usefulness. When first explained to me, AUC seemed to be a great measure of performance, but in my research I've found that some claim its advantage is mostly marginal in that it is best for catching 'lucky' models with high standard accuracy measurements and low AUC. So should I avoid relying on AUC for validating models, or would a combination be best? Thanks for all your help.
Really great question, and one that I find that most people don't really understand on an intuitive level. AUC is in fact often preferred over accuracy for binary classification for a number of different reasons. First though, let's talk about exactly what AUC is. Honestly, for being one of the most widely used efficacy metrics, it's surprisingly obtuse to figure out exactly how AUC works. AUC stands for Area Under the Curve , and which curve, you ask? Well, that would be the ROC curve. ROC stands for Receiver Operating Characteristic , which is actually slightly non-intuitive. The implicit goal of AUC is to deal with situations where you have a very skewed sample distribution, and don't want to overfit to a single class. A great example is in spam detection. Generally, spam datasets are STRONGLY biased towards ham, or not-spam. If your data set is 90% ham, you can get a pretty damn good accuracy by just saying that every single email is ham, which is obviously something that indicates a non-ideal classifier. Let's start with a couple of metrics that are a little more useful for us, specifically the true positive rate ( TPR = TP / (TP + FN) ) and the false positive rate ( FPR = FP / (FP + TN) ). TPR is the ratio of true positives to all actual positives, and FPR is the ratio of false positives to all actual negatives. (Keep in mind, this is only for binary classification.) The ROC curve lives in the unit square with FPR on the x-axis and TPR on the y-axis. On such a graph, it should be pretty straightforward to figure out that a prediction of all 0's or all 1's will result in the points (0,0) and (1,1) respectively. If you draw a line through these points you get the diagonal, and by some easy geometry, you can see that the AUC of such a model would be 0.5 (height and base are both 1). Similarly, if you predict a random assortment of 0's and 1's, let's say 90% 1's, you could get the point (0.9, 0.9) , which again falls along that diagonal line. Now comes the interesting part. What if we weren't only predicting 0's and 1's? What if instead we set a cutoff, above which every result is a 1, and below which every result is a 0? This would mean that at the extremes you get the original situation where you have all 0's and all 1's (at a cutoff of 1 and 0 respectively), but also a series of intermediate points that fall within the 1x1 square that contains your ROC curve; tracing out those points gives the curve, and the area under it is the AUC. So basically, what you're actually getting when you use AUC over accuracy is something that will strongly discourage people from going for models that are representative, but not discriminative, as this will only actually select for models that achieve false positive and true positive rates that are significantly above random chance, which is not guaranteed for accuracy.
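A small scikit-learn sketch of the accuracy-vs-AUC contrast on an artificially imbalanced dataset; the 90/10 class split and the choice of logistic regression are just assumptions for illustration:
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

# Imbalanced binary problem: roughly 90% negatives, 10% positives.
X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

# A useless "always predict the majority class" baseline.
baseline = np.zeros_like(y_te)

print("baseline accuracy:", accuracy_score(y_te, baseline))              # already ~0.9
print("model accuracy:   ", accuracy_score(y_te, (proba > 0.5).astype(int)))
print("model AUC:        ", roc_auc_score(y_te, proba))                  # needs real ranking skill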
{ "source": [ "https://datascience.stackexchange.com/questions/806", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/2653/" ] }
810
My 'machine learning' task is separating benign Internet traffic from malicious traffic. In the real world scenario, most (say 90% or more) of Internet traffic is benign. Thus I felt that I should choose a similar data setup for training my models as well. But I came across a research paper or two (in my area of work) which have used a "class balancing" data approach to training the models, implying an equal number of instances of benign and malicious traffic. In general, if I am building machine learning models, should I go for a dataset which is representative of the real world problem, or is a balanced dataset better suited for building the models (since certain classifiers do not behave well with class imbalance, or due to other reasons not known to me)? Can someone shed more light on the pros and cons of both choices and how to decide which one to choose?
I would say the answer depends on your use case. Based on my experience: If you're trying to build a representative model -- one that describes the data rather than necessarily predicts -- then I would suggest using a representative sample of your data. If you want to build a predictive model, particularly one that performs well by measure of AUC or rank-order and plan to use a basic ML framework (i.e. Decision Tree, SVM, Naive Bayes, etc), then I would suggest you feed the framework a balanced dataset. Much of the literature on class imbalance finds that random undersampling (down sampling the majority class to the size of the minority class) can drive performance gains. If you're building a predictive model, but are using a more advanced framework (i.e. something that determines sampling parameters via wrapper or a modification of a bagging framework that samples to class equivalence), then I would suggest again feeding the representative sample and letting the algorithm take care of balancing the data for training.
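As a rough illustration of the random-undersampling idea mentioned above, a hedged pandas/NumPy sketch; the DataFrame and its label column are assumed to exist:
import numpy as np
import pandas as pd

def undersample_majority(df: pd.DataFrame, label_col: str, seed: int = 0) -> pd.DataFrame:
    """Randomly downsample the majority class to the minority class size."""
    rng = np.random.default_rng(seed)
    counts = df[label_col].value_counts()
    minority_label = counts.idxmin()
    n_minority = counts.min()

    parts = []
    for label, group in df.groupby(label_col):
        if label == minority_label:
            parts.append(group)
        else:
            idx = rng.choice(group.index.to_numpy(), size=n_minority, replace=False)
            parts.append(group.loc[idx])
    return pd.concat(parts).sample(frac=1, random_state=seed)  # shuffle rows

# Hypothetical usage:
# balanced = undersample_majority(traffic_df, label_col="is_malicious")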
{ "source": [ "https://datascience.stackexchange.com/questions/810", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/2661/" ] }
812
What is the best tool to use to visualize (draw the vertices and edges) a graph with 1000000 vertices? There are about 50000 edges in the graph. And I can compute the location of individual vertices and edges. I am thinking about writing a program to generate a svg. Any other suggestions?
I also suggest Gephi software ( https://gephi.github.io ), which seems to be quite powerful. Some additional information on using Gephi with large networks can be found here and, more generally, here . Cytoscape ( http://www.cytoscape.org ) is an alternative to Gephi , being another popular platform for complex network analysis and visualization. If you'd like to work with networks programmatically (including visualization) in R, Python or C/C++, you can check the igraph collection of libraries. Speaking of R, you may find the following blog posts interesting: on using R with Cytoscape ( http://www.vesnam.com/Rblog/viznets1 ) and on using R with Gephi ( http://www.vesnam.com/Rblog/viznets2 ). For extensive lists of network analysis and visualization software , including some comparisons and reviews, you might want to check the following pages: 1) http://wiki.cytoscape.org/Network_analysis_links ; 2) http://www.kdnuggets.com/software/social-network-analysis.html ; 3) http://www.activatenetworks.net/social-network-analysis-sna-software-review .
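If you go the programmatic route, a minimal NetworkX sketch like the following can build a graph of roughly that size and export it to GraphML, which both Gephi and Cytoscape can import; the random-graph parameters are placeholders for your real data, and writing a file this large can take a while:
import networkx as nx

# Placeholder graph roughly matching the question: many nodes, sparse edges.
n_nodes, n_edges = 1_000_000, 50_000
G = nx.gnm_random_graph(n_nodes, n_edges, seed=42)

# Export for Gephi / Cytoscape; both read GraphML.
nx.write_graphml(G, "big_graph.graphml")

# For a quick static picture of a (much smaller) subgraph:
# import matplotlib.pyplot as plt
# sub = G.subgraph(list(G.nodes)[:2000])
# nx.draw(sub, node_size=2, width=0.2)
# plt.savefig("subgraph.png", dpi=300)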
{ "source": [ "https://datascience.stackexchange.com/questions/812", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/192/" ] }
842
I don't know if this is a right place to ask this question, but a community dedicated to Data Science should be the most appropriate place in my opinion. I have just started with Data Science and Machine learning. I am looking for long term project ideas which I can work on for like 8 months. A mix of Data Science and Machine learning would be great. A project big enough to help me understand the core concepts and also implement them at the same time would be very beneficial.
I would try to analyze and solve one or more of the problems published on Kaggle Competitions . Note that the competitions are grouped by their expected complexity , from 101 (bottom of the list) to Research and Featured (top of the list). A color-coded vertical band is a visual guideline for grouping. You can estimate the time you might spend on a project from the expected length of the corresponding competition, adjusted for your skills and experience . A number of data science project ideas can be found by browsing the Coursolve webpage. If you have the skills and desire to work on a real data science project focused on social impact , visit the DataKind projects page. More projects with a social impact focus can be found at the Data Science for Social Good webpage. The Science Project Ideas page at the My NASA Data site looks like another place to visit for inspiration. If you would like to use open data , this long list of applications on Data.gov can provide you with some interesting data science project ideas.
{ "source": [ "https://datascience.stackexchange.com/questions/842", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/2725/" ] }
937
I am working on a problem with too many features and training my models takes way too long. I implemented a forward selection algorithm to choose features. However, I was wondering does scikit-learn have a forward selection/stepwise regression algorithm?
No, scikit-learn does not seem to have a forward selection algorithm. However, it does provide recursive feature elimination, which is a greedy feature elimination algorithm similar to sequential backward selection. See the documentation here
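A short sketch of RFE on synthetic data; note, as an aside, that more recent scikit-learn releases (0.24+) did add a true forward option via SequentialFeatureSelector, shown in the comment:
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=500, n_features=30, n_informative=5, random_state=0)

# Recursive feature elimination: greedy backward elimination down to 5 features.
selector = RFE(estimator=LinearRegression(), n_features_to_select=5)
selector.fit(X, y)
print("kept feature indices:", [i for i, keep in enumerate(selector.support_) if keep])

# In scikit-learn >= 0.24 there is also forward selection:
# from sklearn.feature_selection import SequentialFeatureSelector
# sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=5,
#                                 direction="forward").fit(X, y)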
{ "source": [ "https://datascience.stackexchange.com/questions/937", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/2854/" ] }
962
I know the difference between clustering and classification in machine learning, but I don't understand the difference between text classification and topic modeling for documents. Can I use topic modeling over documents to identify a topic? Can I use classification methods to classify the text inside these documents?
Text Classification I give you a bunch of documents, each of which has a label attached. I ask you to learn why you think the contents of the documents have been given these labels based on their words. Then I give you new documents and ask what you think the label for each one should be. The labels have meaning to me, not to you necessarily. Topic Modeling I give you a bunch of documents, without labels. I ask you to explain why the documents have the words they do by identifying some topics that each is "about". You tell me the topics, by telling me how much of each is in each document, and I decide what the topics "mean" if anything. You'd have to clarify what you mean by "identify a topic" or "classify the text".
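To make the contrast concrete, a rough scikit-learn sketch of both tasks on a handful of invented documents and labels:
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.naive_bayes import MultinomialNB

docs = ["the team won the football match",
        "the election results were announced",
        "the striker scored two goals",
        "parliament passed the new budget"]
labels = ["sports", "politics", "sports", "politics"]   # supervised: labels are given

# Text classification: learn a mapping from documents to labels.
tfidf = TfidfVectorizer()
clf = MultinomialNB().fit(tfidf.fit_transform(docs), labels)
print(clf.predict(tfidf.transform(["the goalkeeper saved a penalty"])))

# Topic modeling: no labels, just discover topic mixtures per document.
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
print(lda.fit_transform(counts))   # rows: documents, columns: topic proportions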
{ "source": [ "https://datascience.stackexchange.com/questions/962", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/2916/" ] }
989
I am trying to run SVR using scikit-learn (python) on a training dataset that has 595605 rows and 5 columns (features) while the test dataset has 397070 rows. The data has been pre-processed and regularized. I am able to successfully run the test examples, but on executing using my dataset and letting it run for over an hour, I could still not see any output or termination of the program. I tried executing using a different IDE and even from the terminal, but that does not seem to be the issue. I also tried changing the 'C' parameter value from 1 to 1e3. I am facing similar issues with all SVM implementations using scikit. Am I not waiting long enough for it to complete? How much time should this execution take? From my experience, it should not require more than a few minutes. Here is my system configuration: Ubuntu 14.04, 8GB RAM, lots of free memory, 4th gen i7 processor
Kernelized SVMs require the computation of a distance function between each point in the dataset, which is the dominating cost of $\mathcal{O}(n_\text{features} \times n_\text{observations}^2)$. The storage of the distances is a burden on memory, so they're recomputed on the fly. Thankfully, only the points nearest the decision boundary are needed most of the time. Frequently computed distances are stored in a cache. If the cache is getting thrashed then the running time blows up to $\mathcal{O}(n_\text{features} \times n_\text{observations}^3)$. You can increase this cache by invoking SVR as model = SVR(cache_size=7000) In general, this is not going to work. But all is not lost. You can subsample the data and use the rest as a validation set, or you can pick a different model. Above the 200,000 observation range, it's wise to choose linear learners. Kernel SVM can be approximated, by approximating the kernel matrix and feeding it to a linear SVM. This allows you to trade off between accuracy and performance in linear time. A popular means of achieving this is to use 100 or so cluster centers found by kmeans/kmeans++ as the basis of your kernel function. The new derived features are then fed into a linear model. This works very well in practice. Tools like sophia-ml and vowpal wabbit are how Google, Yahoo and Microsoft do this. Input/output becomes the dominating cost for simple linear learners. In the abundance of data, nonparametric models perform roughly the same for most problems. The exceptions being structured inputs, like text, images, time series, audio. Further reading How to implement this. How to train an ngram neural network with dropout that scales linearly Kernel Approximations A formal paper on using kmeans to approximate kernel machines
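A hedged sketch of the kernel-approximation route with scikit-learn (Nystroem features plus a linear SGD regressor); the synthetic data and hyper-parameters are placeholders for tuning on your own set:
from sklearn.datasets import make_regression
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for a large (n_samples x 5) dataset like the one in the question.
X, y = make_regression(n_samples=100_000, n_features=5, noise=5.0, random_state=0)

model = make_pipeline(
    StandardScaler(),
    Nystroem(kernel="rbf", n_components=300, random_state=0),  # approximate kernel features
    SGDRegressor(max_iter=20, tol=1e-3),                        # linear learner on top
)
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))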
{ "source": [ "https://datascience.stackexchange.com/questions/989", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/793/" ] }
1,028
I have been reading around about Random Forests but I cannot really find a definitive answer about the problem of overfitting. According to the original paper of Breiman, they should not overfit when increasing the number of trees in the forest, but it seems that there is no consensus about this. This is causing me quite some confusion about the issue. Maybe someone more expert than me can give me a more concrete answer or point me in the right direction to better understand the problem.
Every ML algorithm with high complexity can overfit. However, the OP is asking whether an RF will not overfit when increasing the number of trees in the forest. In general, ensemble methods reduce the prediction variance to almost nothing, improving the accuracy of the ensemble. If we define the variance of the expected generalization error of an individual randomized model as $\sigma^2(x)$, then the variance of the expected generalization error of an ensemble of $M$ such models corresponds to $\rho(x)\,\sigma^2(x) + \frac{1-\rho(x)}{M}\,\sigma^2(x)$, where $\rho(x)$ is the Pearson's correlation coefficient between the predictions of two randomized models trained on the same data from two independent seeds. If we increase the number of DT's in the RF (larger $M$), the variance of the ensemble decreases when $\rho(x)<1$. Therefore, the variance of an ensemble is strictly smaller than the variance of an individual model. In a nutshell, increasing the number of individual randomized models in an ensemble will never increase the generalization error.
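If you want to check this empirically, a small sketch along these lines (synthetic data, arbitrary tree counts) tends to show test accuracy flattening out rather than degrading as trees are added:
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for n_trees in (1, 10, 50, 200, 1000):
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0, n_jobs=-1)
    rf.fit(X_tr, y_tr)
    print(n_trees, "trees -> test accuracy:", round(rf.score(X_te, y_te), 3))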
{ "source": [ "https://datascience.stackexchange.com/questions/1028", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/3054/" ] }
1,107
I have a classification problem with approximately 1000 positive and 10000 negative samples in training set. So this data set is quite unbalanced. Plain random forest is just trying to mark all test samples as a majority class. Some good answers about sub-sampling and weighted random forest are given here: What are the implications for training a Tree Ensemble with highly biased datasets? Which classification methods besides RF can handle the problem in the best way?
Max Kuhn covers this well in Ch16 of Applied Predictive Modeling . As mentioned in the linked thread, imbalanced data is essentially a cost sensitive training problem. Thus any cost sensitive approach is applicable to imbalanced data. There are a large number of such approaches. Not all implemented in R: C50, weighted SVMs are options. Jous-boost. Rusboost I think is only available as Matlab code. I don't use Weka, but believe it has a large number of cost sensitive classifiers. Handling imbalanced datasets: A review : Sotiris Kotsiantis, Dimitris Kanellopoulos, Panayiotis Pintelas' On the Class Imbalance Problem : Xinjian Guo, Yilong Yin, Cailing Dong, Gongping Yang, Guangtong Zhou
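In scikit-learn the weighted-SVM idea comes down to the class_weight argument; a minimal hedged sketch on imbalanced synthetic data:
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = SVC().fit(X_tr, y_tr)
weighted = SVC(class_weight="balanced").fit(X_tr, y_tr)   # cost-sensitive variant

print(classification_report(y_te, plain.predict(X_te)))
print(classification_report(y_te, weighted.predict(X_te)))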
{ "source": [ "https://datascience.stackexchange.com/questions/1107", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/97/" ] }
1,159
I have a large set of data (about 8GB). I would like to use machine learning to analyze it. So, I think that I should use SVD then PCA to reduce the data dimensionality for efficiency. However, MATLAB and Octave cannot load such a large dataset. What tools can I use to do SVD with such a large amount of data?
First of all, dimensionality reduction is used when you have many covariated dimensions and want to reduce problem size by rotating data points into new orthogonal basis and taking only axes with largest variance. With 8 variables (columns) your space is already low-dimensional, reducing number of variables further is unlikely to solve technical issues with memory size, but may affect dataset quality a lot. In your concrete case it's more promising to take a look at online learning methods. Roughly speaking, instead of working with the whole dataset, these methods take a little part of them (often referred to as "mini-batches") at a time and build a model incrementally. (I personally like to interpret word "online" as a reference to some infinitely long source of data from Internet like a Twitter feed, where you just can't load the whole dataset at once). But what if you really wanted to apply dimensionality reduction technique like PCA to a dataset that doesn't fit into a memory? Normally a dataset is represented as a data matrix X of size n x m , where n is number of observations (rows) and m is a number of variables (columns). Typically problems with memory come from only one of these two numbers. Too many observations (n >> m) When you have too many observations , but the number of variables is from small to moderate, you can build the covariance matrix incrementally . Indeed, typical PCA consists of constructing a covariance matrix of size m x m and applying singular value decomposition to it. With m =1000 variables of type float64, a covariance matrix has size 1000*1000*8 ~ 8Mb, which easily fits into memory and may be used with SVD. So you need only to build the covariance matrix without loading entire dataset into memory - pretty tractable task . Alternatively, you can select a small representative sample from your dataset and approximate the covariance matrix . This matrix will have all the same properties as normal, just a little bit less accurate. Too many variables (n << m) On another hand, sometimes, when you have too many variables , the covariance matrix itself will not fit into memory. E.g. if you work with 640x480 images, every observation has 640*480=307200 variables, which results in a 703Gb covariance matrix! That's definitely not what you would like to keep in memory of your computer, or even in memory of your cluster. So we need to reduce dimensions without building a covariance matrix at all. My favourite method for doing it is Random Projection . In short, if you have dataset X of size n x m , you can multiply it by some sparse random matrix R of size m x k (with k << m ) and obtain new matrix X' of a much smaller size n x k with approximately the same properties as the original one. Why does it work? Well, you should know that PCA aims to find set of orthogonal axes (principal components) and project your data onto first k of them. It turns out that sparse random vectors are nearly orthogonal and thus may also be used as a new basis. And, of course, you don't have to multiply the whole dataset X by R - you can translate every observation x into the new basis separately or in mini-batches. There's also somewhat similar algorithm called Random SVD . I don't have any real experience with it, but you can find example code with explanations here . As a bottom line, here's a short check list for dimensionality reduction of big datasets: If you have not that many dimensions (variables), simply use online learning algorithms. 
If there are many observations but a moderate number of variables (the covariance matrix fits into memory), construct the matrix incrementally and use normal SVD. If the number of variables is too high, use random projection or randomized SVD as described above.
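A hedged scikit-learn sketch of two of the options above, IncrementalPCA for the n >> m case and sparse random projection for the n << m case; the shapes and chunking loop are placeholders for reading real data from disk:
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.random_projection import SparseRandomProjection

# Case n >> m: stream the data through IncrementalPCA in mini-batches.
ipca = IncrementalPCA(n_components=3)
for _ in range(100):                      # stand-in for reading chunks from disk
    chunk = np.random.rand(10_000, 8)     # 8 variables, as in the question
    ipca.partial_fit(chunk)
reduced = ipca.transform(np.random.rand(1000, 8))

# Case n << m: random projection avoids building the covariance matrix at all.
wide = np.random.rand(500, 3072)          # scaled-down stand-in for image-like data
rp = SparseRandomProjection(n_components=100, random_state=0)
narrow = rp.fit_transform(wide)
print(reduced.shape, narrow.shape)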
{ "source": [ "https://datascience.stackexchange.com/questions/1159", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/3167/" ] }
2,368
What are the common/best practices for handling time data in machine learning applications? For example, if the data set contains a column with the timestamp of an event, such as "2014-05-05", how can you extract useful features from this column, if any? Thanks in advance!
I would start by graphing the time variable vs other variables and looking for trends. For example, you might see a periodic weekly trend and a long-term upwards trend, in which case you would want to encode two time variables: day_of_week absolute_time In general There are several common time frames that trends occur over: absolute_time day_of_year day_of_week month_of_year hour_of_day minute_of_hour Look for trends in all of these. Weird trends Look for weird trends too. For example you may see rare but persistent time-based trends: is_easter is_superbowl is_national_emergency etc. These often require that you cross-reference your data against some external source that maps events to time. Why graph? There are two reasons that I think graphing is so important. Weird trends While the general trends can be automated pretty easily (just add them every time), weird trends will often require a human eye and knowledge of the world to find. This is one reason that graphing is so important. Data errors All too often data has serious errors in it. For example, you may find that the dates were encoded in two formats and only one of them has been correctly loaded into your program. There are a myriad of such problems and they are surprisingly common. This is the other reason I think graphing is important, not just for time series, but for any data.
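A short pandas sketch of encoding those time frames; the timestamp column name and values are assumptions:
import pandas as pd

df = pd.DataFrame({"timestamp": ["2014-05-05 13:45:00",
                                 "2014-12-25 08:10:00"]})   # hypothetical column
ts = pd.to_datetime(df["timestamp"])

df["absolute_time"] = (ts - pd.Timestamp("1970-01-01")) // pd.Timedelta("1s")  # seconds since epoch
df["day_of_week"]   = ts.dt.dayofweek
df["day_of_year"]   = ts.dt.dayofyear
df["month_of_year"] = ts.dt.month
df["hour_of_day"]   = ts.dt.hour
df["is_christmas"]  = (ts.dt.month == 12) & (ts.dt.day == 25)  # a "weird trend" flag
print(df)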
{ "source": [ "https://datascience.stackexchange.com/questions/2368", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/88/" ] }
2,403
I had a conversation with someone recently and mentioned my interest in data analysis and how I intended to learn the necessary skills and tools. They suggested to me that while it is great to learn the tools and build the skills, there is little point in doing so unless I have specialized knowledge in a specific field. They basically summed it up as: I'd just be like a builder with a pile of tools who could build a few wooden boxes and maybe build better things (cabins, cupboards etc), but without knowledge in a specific field I'd never be a builder people would come to for a specific product. Has anyone found this to be the case, or have any input on what to make of it? It would seem, if it were true, that one would have to learn the data science aspects of things and then learn a new field just to become specialized.
Drew Conway published the Data Science Venn Diagram , with which I heartily agree: On the one hand, you should really read his post. On the other hand, I can offer my own experience: my subject matter expertise (which I like better as a term than "Substantive Expertise", because you should really also have "Substantive Expertise" in math/stats and hacking) is in the retail business, my math/stats are forecasting and inferential statistics, and my hacking skills lie in R. From this vantage point, I can talk to and understand retailers, and someone who doesn't have at least a passing knowledge of this field will have to face a steep learning curve in a project with retailers. As a side gig, I do statistics in psychology, and it's exactly the same there. And even with quite some knowledge of the hacking/math/stats part of the diagram, I would have a hard time getting up to speed in, say, credit scoring or some other new subject field. Once you have a certain amount of math/stats and hacking skills, it is much better to acquire a grounding in one or more subjects than in adding yet another programming language to your hacking skills, or yet another machine learning algorithm to your math/stats portfolio. After all, once you have a solid math/stats/hacking grounding, you could if need be learn such new tools from the web or from textbooks in a relative short time period. But the subject matter expertise, on the other hand, you will likely not be able to learn from scratch if you start from zero. And clients will rather work with some data scientist A who understands their specific field than with another data scientist B who first needs to learn the basics - even if B is better in math/stats/hacking. Of course, all this will also mean that you will never become an expert in either of the three fields. But that's fine, because you are a data scientist, not a programmer or a statistician or a subject matter expert. There will always be people in the three separate circles who you can learn from. Which is part of what I like about data science. EDIT: A little while and a few thoughts later, I'd like to update this post with a new version of the diagram. I still think that Hacking Skills, Math & Statistics Knowledge and Substantive Expertise (shortened to "Programming", "Statistics" and "Business" for legibility) are important... but I think that the role of Communication is important, too. All the insights you derive by leveraging your hacking, stats and business expertise won't make a bit of a difference unless you can communicate them to people who may not have that unique blend of knowledge. You may need to explain your statistical insights to a business manager who needs to be convinced to spend money or change processes. Or to a programmer who doesn't think statistically. Caulcutt (2021, Significance ) is a short article that says much the same, but gives more detail than I do. So here is the new data science Venn diagram, which also includes communication as one indispensable ingredient. I have labeled the areas in ways that should guarantee maximum flaming, while being easy to remember. Comment away. R code: draw.ellipse <- function(center,angle,semimajor,semiminor,radius,h,s,v,...) { shape <- rbind(c(cos(angle),-sin(angle)),c(sin(angle),cos(angle))) %*% diag(c(semimajor,semiminor)) tt <- seq(0,2*pi,length.out=1000) foo <- matrix(center,nrow=2,ncol=length(tt),byrow=FALSE) + shape%*%(radius*rbind(cos(tt),sin(tt))) polygon(foo[1,],foo[2,],col=hsv(h,s,v,alpha=0.5),border="black",...) 
} name <- function(x,y,label,cex=1.2,...) text(x,y,label,cex=cex,...) png("Venn.png",width=600,height=600) opar <- par(mai=c(0,0,0,0),lwd=3,font=2) plot(c(0,100),c(0,90),type="n",bty="n",xaxt="n",yaxt="n",xlab="",ylab="") draw.ellipse(center=c(30,30),angle=0.75*pi,semimajor=2,semiminor=1,radius=20,h=60/360,s=.068,v=.976) draw.ellipse(center=c(70,30),angle=0.25*pi,semimajor=2,semiminor=1,radius=20,h=83/360,s=.482,v=.894) draw.ellipse(center=c(48,40),angle=0.7*pi,semimajor=2,semiminor=1,radius=20,h=174/360,s=.397,v=.8) draw.ellipse(center=c(52,40),angle=0.3*pi,semimajor=2,semiminor=1,radius=20,h=200/360,s=.774,v=.745) name(50,90,"The Data Scientist Venn Diagram",pos=1,cex=2) name(8,62,"Communi-\ncation",cex=1.5,pos=3) name(30,78,"Statistics",cex=1.5) name(70,78,"Programming",cex=1.5) name(92,62,"Business",cex=1.5,pos=3) name(10,45,"Hot\nAir") name(90,45,"The\nAccountant") name(33,65,"The\nData\nNerd") name(67,65,"The\nHacker") name(27,50,"The\nStats\nProf") name(73,50,"The\nIT\nGuy") name(50,55,"R\nCore\nTeam") name(38,38,"The\nGood\nConsultant") name(62,38,"Drew\nConway's\nData\nScientist") name(50,24,"The\nperfect\nData\nScientist!") name(31,18,"Comp\nSci\nProf") name(69,18,"The\nNumber\nCruncher") name(42,11,"Head\nof IT") name(58,11,"Ana-\nlyst") name(50,5,"The\nSalesperson") par(opar) dev.off()
{ "source": [ "https://datascience.stackexchange.com/questions/2403", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/4836/" ] }
2,504
I have a big data problem with a large dataset (take for example 50 million rows and 200 columns). The dataset consists of about 100 numerical columns and 100 categorical columns and a response column that represents a binary class problem. The cardinality of each of the categorical columns is less than 50. I want to know a priori whether I should go for deep learning methods or ensemble tree based methods (for example gradient boosting, adaboost, or random forests). Are there some exploratory data analysis or some other techniques that can help me decide for one method over the other?
Why restrict yourself to those two approaches? Because they're cool? I would always start with a simple linear classifier \ regressor. So in this case a Linear SVM or Logistic Regression, preferably with an algorithm implementation that can take advantage of sparsity due to the size of the data. It will take a long time to run a DL algorithm on that dataset, and I would only normally try deep learning on specialist problems where there's some hierarchical structure in the data, such as images or text. It's overkill for a lot of simpler learning problems, takes a lot of time and expertise to learn, and DL algorithms are very slow to train. Additionally, just because you have 50M rows doesn't mean you need to use the entire dataset to get good results. Depending on the data, you may get good results with a sample of a few hundred thousand rows or a few million. I would start simple, with a small sample and a linear classifier, and get more complicated from there if the results are not satisfactory. At least that way you'll get a baseline. We've often found simple linear models to outperform more sophisticated models on most tasks, so you always want to start there.
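A hedged sketch of that "start simple" baseline: sample the rows, one-hot the categoricals sparsely, fit a linear model. The file name, label column and sampling size are placeholders, not part of the original answer:
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical: read a manageable sample instead of all 50M rows.
df = pd.read_csv("big_dataset.csv", nrows=500_000)       # placeholder file name
y = df.pop("target")                                     # placeholder label column

num_cols = df.select_dtypes(include="number").columns.tolist()
cat_cols = df.select_dtypes(exclude="number").columns.tolist()

pre = ColumnTransformer([
    ("num", StandardScaler(), num_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), cat_cols),  # sparse one-hot output
])
baseline = make_pipeline(pre, LogisticRegression(max_iter=1000, solver="saga"))
baseline.fit(df, y)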
{ "source": [ "https://datascience.stackexchange.com/questions/2504", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/847/" ] }
2,646
Are there any articles or discussions about extracting the part of a text that holds the most information about the current document? For example, I have a large corpus of documents from the same domain. There are parts of the text that hold the key information about what a single document talks about. I want to extract some of those parts and use them as a kind of summary of the text. Is there any useful documentation about how to achieve something like this? It would be really helpful if someone could point me in the right direction as to what I should search for or read to get some insight into work that might have already been done in this field of Natural Language Processing.
What you're describing is often achieved using a simple combination of TF-IDF and extractive summarization . In a nutshell, TF-IDF tells you the relative importance of each word in each document, in comparison to the rest of your corpus. At this point, you have a score for each word in each document approximating its "importance." Then you can use these individual word scores to compute a composite score for each sentence by summing the scores of each word in each sentence. Finally, simply take the top-N scoring sentences from each document as its summary. Earlier this year, I put together an iPython Notebook that culminates with an implementation of this in Python using NLTK and Scikit-learn: A Smattering of NLP in Python .
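A rough sketch of that sentence-scoring idea with NLTK and scikit-learn. This is a simplified variant that treats each sentence as its own document for the TF-IDF step; with a real corpus you would fit the vectorizer on the whole document collection as described above. The example text is invented:
import nltk
from sklearn.feature_extraction.text import TfidfVectorizer

document = ("Deep learning has transformed image recognition. "
            "The weather was pleasant on Tuesday. "
            "Convolutional networks learn hierarchical visual features. "
            "Large labeled datasets made this progress possible.")

nltk.download("punkt", quiet=True)
sentences = nltk.sent_tokenize(document)

# Score each sentence by the sum of its word weights, then keep the top N.
vec = TfidfVectorizer(stop_words="english")
weights = vec.fit_transform(sentences)
scores = weights.sum(axis=1).A1          # one composite score per sentence

top_n = 2
summary = [s for _, s in sorted(zip(scores, sentences), reverse=True)[:top_n]]
print(summary)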
{ "source": [ "https://datascience.stackexchange.com/questions/2646", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/2750/" ] }
2,651
I am looking for a paper detailing the very basics of deep learning. Ideally like the Andrew Ng course for deep learning. Do you know where I can find this ?
This link contains an amazing amount of deep learning literature. Summarizing it here(going in the order a beginner ideally should)- NOTE: All these resources mainly use python. 1) First of all, a basic knowledge of machine learning is required. I found Caltech's Learning from data to be ideal of all the machine learning courses available on the net. Andrew Ng's Coursera course is pretty good too. 2) For Neural networks, nobody explains it better than Dr.Patrick Winston . The assignments should be tried out for better understanding. They are in python. 3) For a better understanding of Neural Networks, Michael Nielsen 's course should be done(as suggested by Alexey). It is pretty basic but it works. 4) For deep neural networks, and implementing them faster on GPUs, there are multiple frameworks available, such as Theano , Caffe , Pybrain , Torch ,etc. Out of these Theano provides a better low level functionality that allows its user to create custom NNs. It is a python library, so being able to use numpy,scikit-learn, matplotlib, scipy along with it is a big plus. The deep learning tutorial written by Lisa Lab should be tried out for a better understanding of theano. 5) For Convolutional Neural Networks, follow andrej karpathy's tutorial . 6) For unsupervised learning, follow here and here . 7) For an intersection of deep learning and NLP, follow Richard Socher's class . 8) For LSTMs, read Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780 and Graves, Alex. Supervised sequence labelling with recurrent neural networks. Vol. 385. Springer, 2012 . Here is LSTM's Theano code .
{ "source": [ "https://datascience.stackexchange.com/questions/2651", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/5342/" ] }
5,224
I would like to use a neural network for image classification. I'll start with pre-trained CaffeNet and train it for my application. How should I prepare the input images? In this case, all the images are of the same object but with variations (think: quality control). They are at somewhat different scales/resolutions/distances/lighting conditions (and in many cases I don't know the scale). Also, in each image there is an area (known) around the object of interest that should be ignored by the network. I could (for example) crop the center of each image, which is guaranteed to contain a portion of the object of interest and none of the ignored area; but that seems like it would throw away information, and also the results wouldn't be really the same scale (maybe 1.5x variation). Dataset augmentation I've heard of creating more training data by random crop/mirror/etc, is there a standard method for this? Any results on how much improvement it produces to classifier accuracy?
The idea with Neural Networks is that they need little pre-processing since the heavy lifting is done by the algorithm, which is the one in charge of learning the features. The winners of the Data Science Bowl 2015 have a great write-up regarding their approach, so most of this answer's content was taken from: Classifying plankton with deep neural networks . I suggest you read it, especially the part about Pre-processing and data augmentation . - Resize Images As for different sizes, resolutions or distances you can do the following. You can simply rescale the largest side of each image to a fixed length. One option is to use openCV; this will resize the image to have 100 cols (width) and 50 rows (height): resized_image = cv2.resize(image, (100, 50)) Yet another option is to use the scipy module: small = scipy.misc.imresize(image, 0.5) - Data Augmentation Data augmentation always improves performance, though the amount depends on the dataset. If you want to augment the data to artificially increase the size of the dataset, you can do the following if the case applies (it wouldn't apply if, for example, the images were of houses or people, where rotating them 180 degrees would lose all information, but not if you flip them like a mirror does): rotation: random with angle between 0° and 360° (uniform) translation: random with shift between -10 and 10 pixels (uniform) rescaling: random with scale factor between 1/1.6 and 1.6 (log-uniform) flipping: yes or no (bernoulli) shearing: random with angle between -20° and 20° (uniform) stretching: random with stretch factor between 1/1.3 and 1.3 (log-uniform) You can see the results on the Data Science Bowl images (pre-processed images and augmented versions of the same images). - Other techniques These will deal with other image properties like lighting and are more like a simple pre-processing step related to the main algorithm. Check the full list on: UFLDL Tutorial
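For the augmentation list above, a bare-bones NumPy/SciPy sketch of a few of those random transforms; the parameter ranges are copied from the list, a 2-D grayscale image is assumed, and a real pipeline would add shearing/stretching as well:
import numpy as np
from scipy import ndimage

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply a random rotation, shift, rescale and optional flip to one 2-D image."""
    out = ndimage.rotate(image, angle=rng.uniform(0, 360), reshape=False, mode="nearest")
    out = ndimage.shift(out, shift=rng.uniform(-10, 10, size=2), mode="nearest")
    out = ndimage.zoom(out, zoom=np.exp(rng.uniform(np.log(1 / 1.6), np.log(1.6))),
                       mode="nearest")
    if rng.integers(2):                   # flip with probability 0.5
        out = np.fliplr(out)
    return out

rng = np.random.default_rng(0)
fake_image = np.zeros((64, 64))           # placeholder grayscale image
print(augment(fake_image, rng).shape)     # shape varies because of the random rescale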
{ "source": [ "https://datascience.stackexchange.com/questions/5224", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/26/" ] }
5,226
I am doing some problems on an application of decision tree/random forest. I am trying to fit a problem which has numbers as well as strings (such as country name) as features. Now the library, scikit-learn takes only numbers as parameters, but I want to inject the strings as well as they carry a significant amount of knowledge. How do I handle such a scenario? I can convert a string to numbers by some mechanism such as hashing in Python. But I would like to know the best practice on how strings are handled in decision tree problems.
In most of the well-established machine learning systems, categorical variables are handled naturally. For example in R you would use factors, in WEKA you would use nominal variables. This is not the case in scikit-learn. The decision trees implemented in scikit-learn use only numerical features and these features are always interpreted as continuous numeric variables . Thus, simply replacing the strings with a hash code should be avoided, because, being treated as a continuous numerical feature, any coding you use will induce an order which simply does not exist in your data. For example, coding ['red','green','blue'] as [1,2,3] would produce weird things like 'red' is lower than 'blue', and if you average a 'red' and a 'blue' you will get a 'green'. Another more subtle example might happen when you code ['low', 'medium', 'high'] with [1,2,3]. In the latter case there might happen to be an ordering which makes sense; however, some subtle inconsistencies might appear when 'medium' is not in the middle of 'low' and 'high'. Finally, the answer to your question lies in coding the categorical feature into multiple binary features . For example, you might code ['red','green','blue'] with 3 columns, one for each category, having 1 when the category matches and 0 otherwise. This is called one-hot-encoding , binary encoding, one-of-k-encoding or whatever. You can check the documentation here for encoding categorical features and feature extraction - hashing and dicts . Obviously one-hot-encoding will expand your space requirements and sometimes it hurts the performance as well.
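A quick sketch of the coding step with pandas and scikit-learn; the toy frame and labels are invented for illustration:
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

df = pd.DataFrame({"color": ["red", "green", "blue", "green"],
                   "size_cm": [10.0, 12.5, 9.0, 11.0]})
y = [0, 1, 0, 1]

# Option 1: pandas one-liner.
dummies = pd.get_dummies(df, columns=["color"])

# Option 2: scikit-learn encoder (handy inside pipelines and for unseen categories).
enc = OneHotEncoder(handle_unknown="ignore")
color_matrix = enc.fit_transform(df[["color"]])

DecisionTreeClassifier().fit(dummies, y)   # the tree now sees 0/1 columns, no fake ordering
print(dummies.columns.tolist())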
{ "source": [ "https://datascience.stackexchange.com/questions/5226", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/8409/" ] }
5,345
I use RStudio for R programming. I remember about solid IDE-s from other technology stacks, like Visual Studio or Eclipse. I have two questions: What other IDE-s than RStudio are used (please consider providing some brief description on them). Does any of them have noticeable advantages over RStudio? I mostly mean debug/build/deploy features, besides coding itself (so text editors are probably not a solution).
RIDE - R-Brain IDE (RIDE) for R & Python, Other Data Science R IDEs, Other Data Science Python IDEs. Flexible layout. Multiple language support. Jupyter notebook - The Jupyter Notebook App is a server-client application that allows editing and running notebook documents via a web browser. The Jupyter Notebook App can be executed on a local desktop Jupyter lab - An extensible environment for interactive and reproducible computing, based on the Jupyter Notebook and Architecture. Radiant – Open-source platform-independent browser-based interface for business analytics in R, based on the Shiny package and can be run locally or on a server. R Tools for Visual Studio (RTVS) - A free, open-source extension for Visual Studio 2017, RTVS is presently supported only in Visual Studio on Windows and not Visual Studio for Mac. Architect - Architect is an integrated development environment (IDE) that focuses specifically on the needs of the data scientist. All data science tasks from analyzing data to writing reports can be performed in a single environment with a common logic. displayr - Simple and powerful. Automation by menu or code. Elegant visualizations. Instant publishing. Collaboration. Reproducibility. Auto-updating. Secure cloud platform. Rbox - This package is a collection of several packages to run R via Atom editor. Use below for more IDEs: RKWard - an easy to use and easily extensible IDE/GUI for R Tinn-R - Tinn-R Editor - GUI for R Language and Environment R AnalyticFlow - data analysis software that utilizes the R environment for statistical computing. Rgedit - a text-editor plugin. Nvim-R - Vim plugin for editing R code. Rattle - A Graphical User Interface for Data Mining using R. How to Turn Vim Into an IDE for R
{ "source": [ "https://datascience.stackexchange.com/questions/5345", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/97/" ] }
5,357
I'm an R language programmer. I'm also in the group of people who are considered Data Scientists but who come from academic disciplines other than CS. This works out well in my role as a Data Scientist, however, by starting my career in R and only having basic knowledge of other scripting/web languages, I've felt somewhat inadequate in 2 key areas: Lack of a solid knowledge of programming theory. Lack of a competitive level of skill in faster and more widely used languages like C , C++ and Java , which could be utilized to increase the speed of the pipeline and Big Data computations as well as to create DS/data products which can be more readily developed into fast back-end scripts or standalone applications. The solution is simple of course -- go learn about programming, which is what I've been doing by enrolling in some classes (currently C programming). However, now that I'm starting to address problems #1 and #2 above, I'm left asking myself " Just how viable are languages like C and C++ for Data Science? ". For instance, I can move data around very quickly and interact with users just fine, but what about advanced regression, Machine Learning, text mining and other more advanced statistical operations? So. can C do the job -- what tools are available for advanced statistics, ML, AI, and other areas of Data Science? Or must I lose most of the efficiency gained by programming in C by calling on R scripts or other languages? The best resource I've found thus far in C is a library called Shark , which gives C / C++ the ability to use Support Vector Machines, linear regression (not non-linear and other advanced regression like multinomial probit, etc) and a shortlist of other (great but) statistical functions.
Or must I lose most of the efficiency gained by programming in C by calling on R scripts or other languages? Do the opposite: learn C/C++ to write R extensions. Use C/C++ only for the performance-critical sections of your new algorithms, use R to build your analysis, import data, make plots, etc. If you want to go beyond R, I'd recommend learning Python. There are many libraries available such as scikit-learn for machine learning algorithms or PyBrain for building Neural Networks etc. (and use pylab/ matplotlib for plotting and iPython notebooks to develop your analyses). Again, C/C++ is useful to implement time critical algorithms as Python extensions.
{ "source": [ "https://datascience.stackexchange.com/questions/5357", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/2723/" ] }
5,427
Generally, a machine learning model is built on datasets. I'd like to know if there is any way to generate a synthetic dataset using such a trained machine learning model while preserving the original dataset's characteristics. [original data --> build machine learning model --> use ml model to generate synthetic data....!!!] Is it possible? Please point me to related resources if possible.
The general approach is to do traditional statistical analysis on your data set to define a multidimensional random process that will generate data with the same statistical characteristics. The virtue of this approach is that your synthetic data is independent of your ML model, but statistically "close" to your data. (see below for discussion of your alternative) In essence, you are estimating the multivariate probability distribution associated with the process. Once you have estimated the distribution, you can generate synthetic data through the Monte Carlo method or similar repeated sampling methods. If your data resembles some parametric distribution (e.g. lognormal) then this approach is straightforward and reliable. The tricky part is to estimate the dependence between variables. See: https://www.encyclopediaofmath.org/index.php/Multi-dimensional_statistical_analysis . If your data is irregular, then non-parametric methods are easier and probably more robust. Multivariate kernel density estimation is a method that is accessible and appealing to people with an ML background. For a general introduction and links to specific methods, see: https://en.wikipedia.org/wiki/Nonparametric_statistics . To validate that this process worked for you, you go through the machine learning process again with the synthesized data, and you should end up with a model that is fairly close to your original. Likewise, if you put the synthesized data into your ML model, you should get outputs that have a similar distribution to your original outputs. In contrast, you are proposing this: [original data --> build machine learning model --> use ml model to generate synthetic data....!!!] This accomplishes something different from the method I just described. This would solve the inverse problem : "what inputs could generate any given set of model outputs". Unless your ML model is over-fitted to your original data, this synthesized data will not look like your original data in every respect, or even most. Consider a linear regression model. The same linear regression model can have identical fit to data that have very different characteristics. A famous demonstration of this is through Anscombe's quartet . Though I don't have references, I believe this problem can also arise in logistic regression, generalized linear models, SVM, and K-means clustering. There are some ML model types (e.g. decision tree) where it's possible to invert them to generate synthetic data, though it takes some work. See: Generating Synthetic Data to Match Data Mining Patterns .
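A minimal sketch of the non-parametric route with a Gaussian KDE; the "original" data here is invented, and with real data you would also validate the marginals and correlations of the synthetic sample:
import numpy as np
from scipy.stats import gaussian_kde

# Stand-in for the original dataset: 2 correlated variables, 1000 rows.
rng = np.random.default_rng(0)
original = rng.multivariate_normal(mean=[0, 5],
                                   cov=[[1.0, 0.8], [0.8, 2.0]],
                                   size=1000)

# gaussian_kde expects variables in rows, observations in columns.
kde = gaussian_kde(original.T)
synthetic = kde.resample(1000).T

# Quick sanity check: the synthetic data should have similar moments.
print("original means:  ", original.mean(axis=0))
print("synthetic means: ", synthetic.mean(axis=0))
print("original cov:\n", np.cov(original.T))
print("synthetic cov:\n", np.cov(synthetic.T))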
{ "source": [ "https://datascience.stackexchange.com/questions/5427", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/6465/" ] }
5,443
I would consider myself a journeyman data scientist. Like most (I think), I made my first charts and did my first aggregations in high school and college, using Excel. As I went through college, grad school and ~7 years of work experience, I quickly picked up what I consider to be more advanced tools, like SQL, R, Python, Hadoop, LaTeX, etc. We are interviewing for a data scientist position and one candidate advertises himself as a "senior data scientist" (a very buzzy term these days) with 15+ years experience. When asked what his preferred toolset was, he responded that it was Excel. I took this as evidence that he was not as experienced as his resume would claim, but wasn't sure. After all, just because it's not my preferred tool, doesn't mean it's not other people's. Do experienced data scientists use Excel? Can you assume a lack of experience from someone who does primarily use Excel?
Most non-technical people often use Excel as a database replacement. I think that's wrong but tolerable. However, someone who is supposedly experienced in data analysis simply can not use Excel as his main tool (excluding the obvious task of looking at the data for the first time). That's because Excel was never intended for that kind of analysis and as a consequence of this, it is incredibly easy to make mistakes in Excel (that's not to say that it is not incredibly easy to make another type of mistakes when using other tools, but Excel aggravates the situation even more.) To summarize what Excel doesn't have and is a must for any analysis: Reproducibility. A data analysis needs to be reproducible. Version control. Good for collaboration and also good for reproducibility. Instead of using xls, use csv (still very complex and has lots of edge cases, but csv parsers are fairly good nowadays.) Testing. If you don't have tests, your code is broken. If your code is broken, your analysis is worse than useless. Maintainability. Accuracy. Numerical accuracy, accurate date parsing, among others are really lacking in Excel. More resources: European Spreadsheet Risks Interest Group - Horror Stories You shouldn’t use a spreadsheet for important work (I mean it) Microsoft's Excel Might Be The Most Dangerous Software On The Planet Destroy Your Data Using Excel With This One Weird Trick! Excel spreadsheets are hard to get right
{ "source": [ "https://datascience.stackexchange.com/questions/5443", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/8944/" ] }
5,458
I have around 1,000 job ads in the field of IT (in an Excel file). I want to find the skills which are mentioned in each of the ads, and then find similar jobs based on skills. My method: I created 12 categories such as programming skills, testing skills, communication skills, network skills, ... . Each advertisement may belong to 3-4 categories. In this case, some have said that multivariate classification or multi-label classification is useful. But I don't know how to do this kind of classification in RapidMiner. 1- Does anyone know how to do multivariate classification or multi-label classification in RapidMiner? or is there another way? 2- Do you recommend "classification" in order to analyze required job skills? or another technique? 3- Is there any better way to classify the skills which are stated in job ads? I'm new to the field of text mining. Please let me know if you have any ideas. Thanks
{ "source": [ "https://datascience.stackexchange.com/questions/5458", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/8969/" ] }
5,694
A commonly heard sentence in unsupervised Machine learning is High dimensional inputs typically live on or near a low dimensional manifold What is a dimension? What is a manifold? What is the difference? Can you give an example to describe both? Manifold from Wikipedia: In mathematics, a manifold is a topological space that resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighbourhood that is homeomorphic to the Euclidean space of dimension n. Dimension from Wikipedia: In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it. What does the Wikipedia even mean in layman terms? It sounds like some bizarre definition like most machine learning definition? They are both spaces, so what's the difference between a Euclidean space (i.e. Manifold) and a dimension space (i.e. feature-based)?
What is a dimension? To put it simply, if you have a tabular data set with m rows and n columns, then the dimensionality of your data is n: What is a manifold? The simplest example is our planet Earth. For us it looks flat, but it really is a sphere. So it's sort of a 2d manifold embedded in the 3d space. What is the difference? To answer this question, consider another example of a manifold: This is the so-called "swiss roll". The data points are in 3d, but they all lie on a 2d manifold, so the dimensionality of the manifold is 2, while the dimensionality of the input space is 3. There are many techniques to "unwrap" these manifolds. One of them is called Locally Linear Embedding, and this is how it would do that: Here's a scikit-learn snippet for doing that (assuming data holds the 3d points, color their labels for plotting, and k the number of neighbours):
import matplotlib.pyplot as plt
from sklearn.manifold import LocallyLinearEmbedding

lle = LocallyLinearEmbedding(n_neighbors=k, n_components=2)
X_lle = lle.fit_transform(data)

plt.scatter(X_lle[:, 0], X_lle[:, 1], c=color)
plt.show()
{ "source": [ "https://datascience.stackexchange.com/questions/5694", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/122/" ] }
5,695
Could you tell me, are there any techniques for building neural networks with non-negative weights?
{ "source": [ "https://datascience.stackexchange.com/questions/5695", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/9215/" ] }
5,706
Referring to the Stanford course notes on Convolutional Neural Networks for Visual Recognition , a paragraph says: "Unfortunately, ReLU units can be fragile during training and can "die". For example, a large gradient flowing through a ReLU neuron could cause the weights to update in such a way that the neuron will never activate on any datapoint again. If this happens, then the gradient flowing through the unit will forever be zero from that point on. That is, the ReLU units can irreversibly die during training since they can get knocked off the data manifold. For example, you may find that as much as 40% of your network can be "dead" (i.e. neurons that never activate across the entire training dataset) if the learning rate is set too high. With a proper setting of the learning rate this is less frequently an issue." What does dying of neurons here mean? Could you please provide an intuitive explanation in simpler terms.
A "dead" ReLU always outputs the same value (zero as it happens, but that is not important) for any input. Probably this is arrived at by learning a large negative bias term for its weights. In turn, that means that it takes no role in discriminating between inputs. For classification, you could visualise this as a decision plane outside of all possible input data. Once a ReLU ends up in this state, it is unlikely to recover, because the function gradient at 0 is also 0, so gradient descent learning will not alter the weights. "Leaky" ReLUs with a small positive gradient for negative inputs ( y=0.01x when x < 0 say) are one attempt to address this issue and give a chance to recover. The sigmoid and tanh neurons can suffer from similar problems as their values saturate, but there is always at least a small gradient allowing them to recover in the long term.
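For illustration, here is a minimal NumPy sketch (the inputs and the large negative bias are made-up values) of why a dead unit stops learning and how a leaky ReLU keeps a small gradient:
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_grad(z):
    return (z > 0).astype(float)        # gradient is exactly 0 for z <= 0

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

def leaky_relu_grad(z, alpha=0.01):
    return np.where(z > 0, 1.0, alpha)  # small but nonzero gradient for z <= 0

# A "dead" unit: a large negative bias pushes the pre-activation below zero
# for every input, so the ReLU output and its gradient are always 0.
x = np.array([0.5, 1.0, 2.0, 3.0])      # hypothetical inputs
w, b = 1.0, -10.0                       # weights learned into a bad region
z = w * x + b
print(relu(z))              # [0. 0. 0. 0.] -> the unit never activates
print(relu_grad(z))         # [0. 0. 0. 0.] -> gradient descent cannot move w or b
print(leaky_relu(z))        # small negative outputs instead of a hard 0
print(leaky_relu_grad(z))   # [0.01 0.01 0.01 0.01] -> a chance to recover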
{ "source": [ "https://datascience.stackexchange.com/questions/5706", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/793/" ] }
5,885
I'm using Brain to train a neural network on a feature set that includes both positive and negative values. But Brain requires input values between 0 and 1. What's the best way to normalize my data?
This is called unity-based normalization. If you have a vector $X$, you can obtain a normalized version of it, say $Z$, by doing: $$Z = \frac{X - \min(X)}{\max(X) - \min(X)}$$
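A minimal NumPy sketch of this formula (the feature values are made up; note that it breaks down when max(X) equals min(X)):
import numpy as np

def unity_normalize(x):
    """Rescale a vector to the [0, 1] range (unity-based normalization)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())   # undefined if all values are equal

features = np.array([-3.0, -1.0, 0.0, 2.0, 5.0])  # mixes negative and positive values
print(unity_normalize(features))                  # [0.    0.25  0.375 0.625 1.   ]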
{ "source": [ "https://datascience.stackexchange.com/questions/5885", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/9802/" ] }
5,990
I have an 'hour' field as my attribute, but it takes cyclic values. How could I transform the feature to preserve information like the fact that hour '23' and hour '0' are close, not far apart? One way I can think of is to do the transformation min(h, 23-h): Input: [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23] Output: [0 1 2 3 4 5 6 7 8 9 10 11 11 10 9 8 7 6 5 4 3 2 1] Is there any standard way to handle such attributes? Update: I will be using supervised learning, to train a random forest classifier!
The most logical way to transform hour is into two variables that swing back and forth out of sync. Imagine the position of the end of the hour hand of a 24-hour clock. The x position swings back and forth out of sync with the y position. For a 24-hour clock you can accomplish this with x=sin(2pi*hour/24) , y=cos(2pi*hour/24) . You need both variables or the proper movement through time is lost. This is due to the fact that the derivative of either sin or cos changes in time where as the (x,y) position varies smoothly as it travels around the unit circle. Finally, consider whether it is worthwhile to add a third feature to trace linear time, which can be constructed my hours (or minutes or seconds) from the start of the first record or a Unix time stamp or something similar. These three features then provide proxies for both the cyclic and linear progression of time e.g. you can pull out cyclic phenomenon like sleep cycles in people's movement and also linear growth like population vs. time. Hope this helps! Adding some relevant example code that I generated for another answer: Example of if being accomplished: # Enable inline plotting %matplotlib inline #Import everything I need... import numpy as np import matplotlib as mp import matplotlib.pyplot as plt import pandas as pd # Grab some random times from here: https://www.random.org/clock-times/ # put them into a csv. from pandas import DataFrame, read_csv df = read_csv('/Users/angus/Machine_Learning/ipython_notebooks/times.csv',delimiter=':') df['hourfloat']=df.hour+df.minute/60.0 df['x']=np.sin(2.*np.pi*df.hourfloat/24.) df['y']=np.cos(2.*np.pi*df.hourfloat/24.) df def kmeansshow(k,X): from sklearn import cluster from matplotlib import pyplot import numpy as np kmeans = cluster.KMeans(n_clusters=k) kmeans.fit(X) labels = kmeans.labels_ centroids = kmeans.cluster_centers_ #print centroids for i in range(k): # select only data observations with cluster label == i ds = X[np.where(labels==i)] # plot the data observations pyplot.plot(ds[:,0],ds[:,1],'o') # plot the centroids lines = pyplot.plot(centroids[i,0],centroids[i,1],'kx') # make the centroid x's bigger pyplot.setp(lines,ms=15.0) pyplot.setp(lines,mew=2.0) pyplot.show() return centroids Now lets try it out: kmeansshow(6,df[['x', 'y']].values) You can just barely see that there are some after midnight times included with the before midnight green cluster. Now lets reduce the number of clusters and show that before and after midnight can be connected in a single cluster in more detail: kmeansshow(3,df[['x', 'y']].values) See how the blue cluster contains times that are from before and after midnight that are clustered together in the same cluster... QED!
{ "source": [ "https://datascience.stackexchange.com/questions/5990", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/8338/" ] }
6,048
I am working on a classification problem. I have a dataset containing equal numbers of categorical variables and continuous variables. How do I decide which technique to use, between a decision tree and logistic regression? Is it right to assume that logistic regression will be more suitable for continuous variables and that decision trees will be more suitable for both continuous and categorical variables?
Long story short: do what @untitledprogrammer said, try both models and cross-validate to help pick one. Both decision trees (depending on the implementation, e.g. C4.5) and logistic regression should be able to handle continuous and categorical data just fine. For logistic regression, you'll want to dummy code your categorical variables. As @untitledprogrammer mentioned, it's difficult to know a priori which technique will be better based simply on the types of features you have, continuous or otherwise. It really depends on your specific problem and the data you have. (See the No Free Lunch Theorem.) You'll want to keep in mind though that a logistic regression model is searching for a single linear decision boundary in your feature space, whereas a decision tree is essentially partitioning your feature space into half-spaces using axis-aligned linear decision boundaries. The net effect is that you have a non-linear decision boundary, possibly more than one. This is nice when your data points aren't easily separated by a single hyperplane, but on the other hand, decision trees are so flexible that they can be prone to overfitting. To combat this, you can try pruning. Logistic regression tends to be less susceptible (but not immune!) to overfitting. Lastly, another thing to consider is that decision trees can automatically take into account interactions between variables, e.g. $xy$ if you have two independent features $x$ and $y$. With logistic regression, you'll have to manually add those interaction terms yourself. So you have to ask yourself: What kind of decision boundary makes more sense in your particular problem? How do you want to balance bias and variance? Are there interactions between your features? Of course, it's always a good idea to just try both models and do cross-validation. This will help you find out which one is more likely to have better generalization error.
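As a minimal scikit-learn sketch of the "try both models and cross-validate" advice (the dataset here is synthetic and the settings are placeholders; with real categorical columns you would dummy code them first, as noted above):
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Toy data standing in for your own numeric + dummy-coded features.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=5, random_state=0),  # max_depth as a crude form of pruning
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(name, scores.mean(), scores.std())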
{ "source": [ "https://datascience.stackexchange.com/questions/6048", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/9793/" ] }
6,107
I recently read Fully Convolutional Networks for Semantic Segmentation by Jonathan Long, Evan Shelhamer, Trevor Darrell. I don't understand what "deconvolutional layers" do / how they work. The relevant part is 3.3. Upsampling is backwards strided convolution Another way to connect coarse outputs to dense pixels is interpolation. For instance, simple bilinear interpolation computes each output $y_{ij}$ from the nearest four inputs by a linear map that depends only on the relative positions of the input and output cells. In a sense, upsampling with factor $f$ is convolution with a fractional input stride of 1/f. So long as $f$ is integral, a natural way to upsample is therefore backwards convolution (sometimes called deconvolution) with an output stride of $f$. Such an operation is trivial to implement, since it simply reverses the forward and backward passes of convolution. Thus upsampling is performed in-network for end-to-end learning by backpropagation from the pixelwise loss. Note that the deconvolution filter in such a layer need not be fixed (e.g., to bilinear upsampling), but can be learned. A stack of deconvolution layers and activation functions can even learn a nonlinear upsampling. In our experiments, we find that in-network upsampling is fast and effective for learning dense prediction. Our best segmentation architecture uses these layers to learn to upsample for refined prediction in Section 4.2. I don't think I really understood how convolutional layers are trained. What I think I've understood is that convolutional layers with a kernel size $k$ learn filters of size $k \times k$. The output of a convolutional layer with kernel size $k$, stride $s \in \mathbb{N}$ and $n$ filters is of dimension $\frac{\text{Input dim}}{s^2} \cdot n$. However, I don't know how the learning of convolutional layers works. (I understand how simple MLPs learn with gradient descent, if that helps). So if my understanding of convolutional layers is correct, I have no clue how this can be reversed. Could anybody please help me to understand deconvolutional layers?
Deconvolution layer is a very unfortunate name and should rather be called a transposed convolutional layer . Visually, for a transposed convolution with stride one and no padding, we just pad the original input (blue entries) with zeroes (white entries) (Figure 1). In case of stride two and padding, the transposed convolution would look like this (Figure 2): All credits for the great visualisations go to Vincent Dumoulin, Francesco Visin - A guide to convolution arithmetic for deep learning You can find more visualisations of convolutional arithmetics here .
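A rough NumPy/SciPy sketch of the zero-insertion picture for stride 2 and no padding (the input and kernel values are arbitrary; with this symmetric all-ones kernel the convolution/cross-correlation distinction does not matter):
import numpy as np
from scipy.signal import correlate2d

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])    # 2x2 input (the "blue" entries)
k = np.ones((3, 3))           # arbitrary 3x3 kernel

# Stride-2 transposed convolution seen as: insert zeros between the input
# entries, then run an ordinary convolution over the dilated input.
dilated = np.zeros((3, 3))
dilated[::2, ::2] = x         # 2x2 -> 3x3 with zeros in between

out = correlate2d(dilated, k, mode="full")
print(out.shape)              # (5, 5), i.e. (2 - 1) * 2 + 3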
{ "source": [ "https://datascience.stackexchange.com/questions/6107", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/8820/" ] }
6,546
I am very new to Scala and Spark, and am working on some self-made exercises using baseball statistics. I am using a case class create a RDD and assign a schema to the data, and am then turning it into a DataFrame so I can use SparkSQL to select groups of players via their stats that meet certain criteria. Once I have the subset of players I am interested in looking at further, I would like to find the mean of a column; eg Batting Average or RBIs. From there I would like to break all the players into percentile groups based on their average performance compared to all players; the top 10%, bottom 10%, 40-50% I've been able to use the DataFrame.describe() function to return a summary of a desired column (mean, stddev, count, min, and max) all as strings though. Is there a better way to get just the mean and stddev as Doubles, and what is the best way of breaking the players into groups of 10-percentiles? So far my thoughts are to find the values that bookend the percentile ranges and writing a function that groups players via comparators, but that feels like it is bordering on reinventing the wheel. I have the following imports currently: import org.apache.spark.rdd.RDD import org.apache.spark.sql.SQLContext import org.apache.spark.{SparkConf, SparkContext} import org.joda.time.format.DateTimeFormat
This is the import you need, and how to get the mean for a column named "RBIs": import org.apache.spark.sql.functions._ df.select(avg($"RBIs")).show() For the standard deviation, see scala - Calculate the standard deviation of grouped data in a Spark DataFrame - Stack Overflow For grouping by percentiles, I suggest defining a new column via a user-defined function (UDF), and using groupBy on that column. See Spark SQL and DataFrames - Spark 1.5.1 Documentation - udf registration
{ "source": [ "https://datascience.stackexchange.com/questions/6546", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/10832/" ] }
6,676
A way to train a Logistic Regression is by using stochastic gradient descent, which scikit-learn offers an interface to. What I would like to do is take a scikit-learn's SGDClassifier and have it score the same as a Logistic Regression here . However, I must be missing some machine learning enhancements, since my scores are not equivalent. This is my current code. What am I missing on the SGDClassifier which would have it produce the same results as a Logistic Regression? from sklearn import datasets from sklearn.linear_model import LogisticRegression from sklearn.linear_model import SGDClassifier import numpy as np import pandas as pd from sklearn.cross_validation import KFold from sklearn.metrics import accuracy_score # Note that the iris dataset is available in sklearn by default. # This data is also conveniently preprocessed. iris = datasets.load_iris() X = iris["data"] Y = iris["target"] numFolds = 10 kf = KFold(len(X), numFolds, shuffle=True) # These are "Class objects". For each Class, find the AUC through # 10 fold cross validation. Models = [LogisticRegression, SGDClassifier] params = [{}, {"loss": "log", "penalty": "l2"}] for param, Model in zip(params, Models): total = 0 for train_indices, test_indices in kf: train_X = X[train_indices, :]; train_Y = Y[train_indices] test_X = X[test_indices, :]; test_Y = Y[test_indices] reg = Model(**param) reg.fit(train_X, train_Y) predictions = reg.predict(test_X) total += accuracy_score(test_Y, predictions) accuracy = total / numFolds print "Accuracy score of {0}: {1}".format(Model.__name__, accuracy) My output: Accuracy score of LogisticRegression: 0.946666666667 Accuracy score of SGDClassifier: 0.76
The comments about iteration number are spot on. The default SGDClassifier n_iter is 5 meaning you do 5 * num_rows steps in weight space. The sklearn rule of thumb is ~ 1 million steps for typical data. For your example, just set it to 1000 and it might reach tolerance first. Your accuracy is lower with SGDClassifier because it's hitting iteration limit before tolerance so you are "early stopping" Modifying your code quick and dirty I get: # Added n_iter here params = [{}, {"loss": "log", "penalty": "l2", 'n_iter':1000}] for param, Model in zip(params, Models): total = 0 for train_indices, test_indices in kf: train_X = X[train_indices, :]; train_Y = Y[train_indices] test_X = X[test_indices, :]; test_Y = Y[test_indices] reg = Model(**param) reg.fit(train_X, train_Y) predictions = reg.predict(test_X) total += accuracy_score(test_Y, predictions) accuracy = total / numFolds print "Accuracy score of {0}: {1}".format(Model.__name__, accuracy) Accuracy score of LogisticRegression: 0.96 Accuracy score of SGDClassifier: 0.96
{ "source": [ "https://datascience.stackexchange.com/questions/6676", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/8774/" ] }
6,715
Is it necessary to standardize your data before cluster? In the example from scikit learn about DBSCAN, here they do this in the line: X = StandardScaler().fit_transform(X) But I do not understand why it is necessary. After all, clustering does not assume any particular distribution of data - it is an unsupervised learning method so its objective is to explore the data. Why would it be necessary to transform the data?
Normalization is not always required, but it rarely hurts. Some examples: K-means : K-means clustering is "isotropic" in all directions of space and therefore tends to produce more or less round (rather than elongated) clusters. In this situation leaving variances unequal is equivalent to putting more weight on variables with smaller variance. Example in Matlab: X = [randn(100,2)+ones(100,2);... randn(100,2)-ones(100,2)]; % Introduce denormalization % X(:, 2) = X(:, 2) * 1000 + 500; opts = statset('Display','final'); [idx,ctrs] = kmeans(X,2,... 'Distance','city',... 'Replicates',5,... 'Options',opts); plot(X(idx==1,1),X(idx==1,2),'r.','MarkerSize',12) hold on plot(X(idx==2,1),X(idx==2,2),'b.','MarkerSize',12) plot(ctrs(:,1),ctrs(:,2),'kx',... 'MarkerSize',12,'LineWidth',2) plot(ctrs(:,1),ctrs(:,2),'ko',... 'MarkerSize',12,'LineWidth',2) legend('Cluster 1','Cluster 2','Centroids',... 'Location','NW') title('K-means with normalization') (FYI: How can I detect if my dataset is clustered or unclustered (i.e. forming one single cluster ) Distributed clustering : The comparative analysis shows that the distributed clustering results depend on the type of normalization procedure. Artificial neural network (inputs) : If the input variables are combined linearly, as in an MLP, then it is rarely strictly necessary to standardize the inputs, at least in theory. The reason is that any rescaling of an input vector can be effectively undone by changing the corresponding weights and biases, leaving you with the exact same outputs as you had before. However, there are a variety of practical reasons why standardizing the inputs can make training faster and reduce the chances of getting stuck in local optima. Also, weight decay and Bayesian estimation can be done more conveniently with standardized inputs. Artificial neural network (inputs/outputs) Should you do any of these things to your data? The answer is, it depends. Standardizing either input or target variables tends to make the training process better behaved by improving the numerical condition (see ftp://ftp.sas.com/pub/neural/illcond/illcond.html ) of the optimization problem and ensuring that various default values involved in initialization and termination are appropriate. Standardizing targets can also affect the objective function. Standardization of cases should be approached with caution because it discards information. If that information is irrelevant, then standardizing cases can be quite helpful. If that information is important, then standardizing cases can be disastrous. Interestingly, changing the measurement units may even lead one to see a very different clustering structure: Kaufman, Leonard, and Peter J. Rousseeuw.. "Finding groups in data: An introduction to cluster analysis." (2005). In some applications, changing the measurement units may even lead one to see a very different clustering structure. For example, the age (in years) and height (in centimeters) of four imaginary people are given in Table 3 and plotted in Figure 3. It appears that {A, B ) and { C, 0) are two well-separated clusters. On the other hand, when height is expressed in feet one obtains Table 4 and Figure 4, where the obvious clusters are now {A, C} and { B, D}. This partition is completely different from the first because each subject has received another companion. (Figure 4 would have been flattened even more if age had been measured in days.) To avoid this dependence on the choice of measurement units, one has the option of standardizing the data. 
This converts the original measurements to unitless variables. Kaufman et al. continues with some interesting considerations (page 11): From a philosophical point of view, standardization does not really solve the problem. Indeed, the choice of measurement units gives rise to relative weights of the variables. Expressing a variable in smaller units will lead to a larger range for that variable, which will then have a large effect on the resulting structure. On the other hand, by standardizing one attempts to give all variables an equal weight, in the hope of achieving objectivity. As such, it may be used by a practitioner who possesses no prior knowledge. However, it may well be that some variables are intrinsically more important than others in a particular application, and then the assignment of weights should be based on subject-matter knowledge (see, e.g., Abrahamowicz, 1985). On the other hand, there have been attempts to devise clustering techniques that are independent of the scale of the variables (Friedman and Rubin, 1967). The proposal of Hardy and Rasson (1982) is to search for a partition that minimizes the total volume of the convex hulls of the clusters. In principle such a method is invariant with respect to linear transformations of the data, but unfortunately no algorithm exists for its implementation (except for an approximation that is restricted to two dimensions). Therefore, the dilemma of standardization appears unavoidable at present and the programs described in this book leave the choice up to the user.
{ "source": [ "https://datascience.stackexchange.com/questions/6715", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/10512/" ] }
6,838
When would one use Random Forest over SVM and vice versa? I understand that cross-validation and model comparison is an important aspect of choosing a model, but here I would like to learn more about rules of thumb and heuristics of the two methods. Can someone please explain the subtleties, strengths, and weaknesses of the classifiers as well as problems, which are best suited to each of them?
I would say, the choice depends very much on what data you have and what your purpose is. A few "rules of thumb". Random Forest is intrinsically suited for multiclass problems, while SVM is intrinsically two-class. For a multiclass problem you will need to reduce it to multiple binary classification problems. Random Forest works well with a mixture of numerical and categorical features. When features are on various scales, it is also fine. Roughly speaking, with Random Forest you can use data as they are. SVM maximizes the "margin" and thus relies on the concept of "distance" between different points. It is up to you to decide if "distance" is meaningful. As a consequence, one-hot encoding for categorical features is a must-do. Further, min-max or other scaling is highly recommended as a preprocessing step. If you have data with $n$ points and $m$ features, an intermediate step in SVM is constructing an $n\times n$ matrix (think about memory requirements for storage) by calculating $n^2$ dot products (computational complexity). Therefore, as a rule of thumb, SVM is hardly scalable beyond 10^5 points. A large number of features (homogeneous features with meaningful distance; pixels of an image would be a perfect example) is generally not a problem. For a classification problem Random Forest gives you the probability of belonging to each class. SVM gives you the distance to the boundary; you still need to convert it to a probability somehow if you need one. For those problems where SVM applies, it generally performs better than Random Forest. SVM gives you "support vectors", that is, the points in each class closest to the boundary between classes. They may be of interest by themselves for interpretation.
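A minimal scikit-learn sketch of the SVM preprocessing points above (one-hot encoding plus min-max scaling); the column names and values are made up:
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.svm import SVC

# Hypothetical mixed-type data.
df = pd.DataFrame({
    "age": [25, 47, 31, 52],
    "income": [40e3, 90e3, 55e3, 120e3],
    "city": ["paris", "tokyo", "paris", "berlin"],
})
y = [0, 1, 0, 1]

preprocess = ColumnTransformer([
    ("num", MinMaxScaler(), ["age", "income"]),                 # put numeric features on one scale
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),  # one-hot encode the categorical one
])

model = Pipeline([("prep", preprocess), ("svm", SVC(kernel="rbf"))])
model.fit(df, y)
print(model.predict(df))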
{ "source": [ "https://datascience.stackexchange.com/questions/6838", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/12350/" ] }
6,840
I have a dataset of which apps users downloaded, and I am trying to estimate these users' gender from those downloads using a machine learning algorithm. However, what kind of app features should I focus on? As far as I know, app category plays a big role. Do Google Play and the Apple App Store estimate users' gender based on what apps they download?
{ "source": [ "https://datascience.stackexchange.com/questions/6840", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/12355/" ] }
8,244
What are some of the advantages of columnar data-stores which make them more suitable for data science and analytics?
A column-oriented database (=columnar data-store) stores the data of a table column by column on the disk, while a row-oriented database stores the data of a table row by row. There are two main advantages of using a column-oriented database in comparison with a row-oriented database. The first advantage relates to the amount of data one needs to read in case we perform an operation on just a few features. Consider a simple query: SELECT correlation(feature2, feature5) FROM records A traditional executor would read the entire table (i.e. all the features): Instead, using our column-based approach we just have to read the columns we are interested in: The second advantage, which is also very important for large databases, is that column-based storage allows better compression, since the data in one specific column is indeed more homogeneous than the data across all the columns. The main drawback of a column-oriented approach is that manipulating (lookup, update or delete) an entire given row is inefficient. However, the situation should occur rarely in databases for analytics (“warehousing”), which means most operations are read-only, rarely read many attributes from the same table, and writes are only appends. Some RDBMSs offer a column-oriented storage engine option. For example, PostgreSQL natively has no option to store tables in a column-based fashion, but Greenplum has created a closed-source one (DBMS2, 2009). Interestingly, Greenplum is also behind the open-source library for scalable in-database analytics, MADlib (Hellerstein et al., 2012), which is no coincidence. More recently, CitusDB, a startup working on a high-speed analytic database, released their own open-source columnar store extension for PostgreSQL, CSTORE (Miller, 2014). Google’s system for large-scale machine learning, Sibyl, also uses a column-oriented data format (Chandra et al., 2010). This trend reflects the growing interest around column-oriented storage for large-scale analytics. Stonebraker et al. (2005) further discuss the advantages of column-oriented DBMS. Two concrete use cases: How are most datasets for large-scale machine learning stored? (most of the answer comes from Appendix C of: BeatDB: An end-to-end approach to unveil saliencies from massive signal data sets. Franck Dernoncourt, S.M, thesis, MIT Dept of EECS )
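As a small illustration of the column-oriented idea using a columnar file format (this assumes pandas with a Parquet engine such as pyarrow installed; the file and column names are made up):
import numpy as np
import pandas as pd

# A wide table: an analytic query often touches only a couple of columns.
df = pd.DataFrame(np.random.rand(100000, 10),
                  columns=["feature%d" % i for i in range(1, 11)])

df.to_parquet("records.parquet")   # Parquet lays the data out column by column

# Only the two columns of interest are read from disk, not the whole table.
subset = pd.read_parquet("records.parquet", columns=["feature2", "feature5"])
print(subset.corr())               # analogue of SELECT correlation(feature2, feature5) FROM records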
{ "source": [ "https://datascience.stackexchange.com/questions/8244", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/11097/" ] }
8,860
Bagging is the generation of multiple predictors that work as an ensemble, like a single predictor. Dropout is a technique that teaches a neural network to average all possible subnetworks. Looking at the most important Kaggle competitions, it seems that these two techniques are used together very often. I can't see any theoretical difference besides the actual implementation. Can anyone explain why we should use both of them in any real application, and why performance improves when we use both?
Bagging and dropout do not achieve quite the same thing, though both are types of model averaging. Bagging is an operation across your entire dataset which trains models on a subset of the training data. Thus some training examples are not shown to a given model. Dropout , by contrast, is applied to features within each training example. It is true that the result is functionally equivalent to training exponentially many networks (with shared weights!) and then equally weighting their outputs. But dropout works on the feature space, causing certain features to be unavailable to the network, not full examples. Because each neuron cannot completely rely on one input, representations in these networks tend to be more distributed and the network is less likely to overfit.
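A toy NumPy sketch of that distinction (the shapes, keep probability and seed are arbitrary): bagging resamples whole training examples, while dropout masks features/activations within each example:
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))            # 6 training examples, 4 features

# Bagging: each model in the ensemble sees a bootstrap sample of whole rows.
bootstrap_rows = rng.integers(0, len(X), size=len(X))
X_bag = X[bootstrap_rows]              # some examples repeated, some left out entirely

# Dropout: every example is seen, but a random subset of features/activations
# is zeroed out independently for each example during training.
keep_prob = 0.5
mask = rng.random(X.shape) < keep_prob
X_drop = X * mask / keep_prob          # "inverted dropout" scaling

print(X_bag.shape, X_drop.shape)       # both (6, 4), but sampled along different axes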
{ "source": [ "https://datascience.stackexchange.com/questions/8860", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/10938/" ] }
8,922
I have a dataset like the one below. I would like to remove all characters after the character ©. How can I do that in R? data_clean_phrase <- c("Copyright © The Society of Geomagnetism and Earth", "© 2013 Chinese National Committee ") data_clean_df <- as.data.frame(data_clean_phrase)
For instance: rs<-c("copyright @ The Society of mo","I want you to meet me @ the coffeshop") s<-gsub("@.*","",rs) s [1] "copyright " "I want you to meet me " Or, if you want to keep the @ character: s<-gsub("(@).*","\\1",rs) s [1] "copyright @" "I want you to meet me @" EDIT: If what you want is to remove everything from the last @ on you just have to follow this previous example with the appropriate regex. Example: rs<-c("copyright @ The Society of mo located @ my house","I want you to meet me @ the coffeshop") s<-gsub("(.*)@.*","\\1",rs) s [1] "copyright @ The Society of mo located " "I want you to meet me " Given the matching we are looking for, both sub and gsub will give you the same answer.
{ "source": [ "https://datascience.stackexchange.com/questions/8922", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/3151/" ] }
8,926
I have a corpus of text documents, some of which are labelled by analysts with label L. I am using this data to train an SVM for predicting if a new document should have label L. So far it's straight-forward, but there is an issue: The analysts have not evaluated all documents in the training set, so there are in fact three classes of documents: Documents labeled L Documents the analysts have looked at, and chosen not label L (so you could say they're labelled not-L) Documents the analysts have not looked at. Unfortunately, at training time, I have no way to separate documents in 2. and 3, or not-L and unlabelled documents. I believe this is a problem, because a not-L label gives information to the SVM, but an unlabelled document is more "neutral". How can I estimate the impact of this issue on predicting if a new document should have label L?
{ "source": [ "https://datascience.stackexchange.com/questions/8926", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/13596/" ] }
9,262
I am rather new to this and can't say I have a complete understanding of the theoretical concepts behind this. I am trying to calculate the KL Divergence between several lists of points in Python. I am using this to try and do this. The problem that I'm running into is that the value returned is the same for any 2 lists of numbers (its 1.3862943611198906). I have a feeling that I'm making some sort of theoretical mistake here but can't spot it. values1 = [1.346112,1.337432,1.246655] values2 = [1.033836,1.082015,1.117323] metrics.mutual_info_score(values1,values2) That is an example of what I'm running - just that I'm getting the same output for any 2 input. Any advice/help would be appreciated!
First of all, sklearn.metrics.mutual_info_score implements mutual information for evaluating clustering results, not pure Kullback-Leibler divergence! This is equal to the Kullback-Leibler divergence of the joint distribution with the product distribution of the marginals. KL divergence (and any other such measure) expects the input data to have a sum of 1 . Otherwise, they are not proper probability distributions . If your data does not have a sum of 1, most likely it is usually not proper to use KL divergence! (In some cases, it may be admissible to have a sum of less than 1, e.g. in the case of missing data.) Also note that it is common to use base 2 logarithms. This only yields a constant scaling factor in difference, but base 2 logarithms are easier to interpret and have a more intuitive scale (0 to 1 instead of 0 to log2=0.69314..., measuring the information in bits instead of nats). > sklearn.metrics.mutual_info_score([0,1],[1,0]) 0.69314718055994529 as we can clearly see, the MI result of sklearn is scaled using natural logarithms instead of log2. This is an unfortunate choice, as explained above. Kullback-Leibler divergence is fragile, unfortunately. On above example it is not well-defined: KL([0,1],[1,0]) causes a division by zero, and tends to infinity. It is also asymmetric .
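For reference, a minimal SciPy sketch of a proper KL divergence on the two lists from the question, after normalising them to sum to 1 (scipy.stats.entropy returns the KL divergence when a second distribution is passed):
import numpy as np
from scipy.stats import entropy

values1 = np.array([1.346112, 1.337432, 1.246655])
values2 = np.array([1.033836, 1.082015, 1.117323])

# KL divergence expects probability distributions, so normalise to sum to 1.
p = values1 / values1.sum()
q = values2 / values2.sum()

print(entropy(p, q))           # KL(p || q) in nats
print(entropy(p, q, base=2))   # the same quantity in bits
print(entropy(q, p, base=2))   # note the asymmetry: KL(q || p) differs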
{ "source": [ "https://datascience.stackexchange.com/questions/9262", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/8206/" ] }
9,302
In the MNIST For ML Beginners they define cross-entropy as $$H_{y'} (y) := - \sum_{i} y_{i}' \log (y_i)$$ $y_i$ is the predicted probability value for class $i$ and $y_i'$ is the true probability for that class. Question 1 Isn't it a problem that $y_i$ (in $\log(y_i)$) could be 0? This would mean that we have a really bad classifier, of course. But think of an error in our dataset, e.g. an "obvious" 1 labeled as 3 . Would it simply crash? Does the model we chose (softmax activation at the end) basically never give the probability 0 for the correct class? Question 2 I've learned that cross-entropy is defined as $$H_{y'}(y) := - \sum_{i} ({y_i' \log(y_i) + (1-y_i') \log (1-y_i)})$$ What is correct? Do you have any textbook references for either version? How do those functions differ in their properties (as error functions for neural networks)?
One way to interpret cross-entropy is to see it as a (minus) log-likelihood for the data $y_i'$ , under a model $y_i$ . Namely, suppose that you have some fixed model (a.k.a. "hypothesis"), which predicts for $n$ classes $\{1,2,\dots, n\}$ their hypothetical occurrence probabilities $y_1, y_2,\dots, y_n$ . Suppose that you now observe (in reality) $k_1$ instances of class $1$ , $k_2$ instances of class $2$ , $k_n$ instances of class $n$ , etc. According to your model the likelihood of this happening is: $$ P[data|model] := y_1^{k_1}y_2^{k_2}\dots y_n^{k_n}. $$ Taking the logarithm and changing the sign: $$ -\log P[data|model] = -k_1\log y_1 -k_2\log y_2 - \dots -k_n\log y_n = -\sum_i k_i \log y_i $$ If you now divide the right-hand sum by the number of observations $N = k_1+k_2+\dots+k_n$ , and denote the empirical probabilities as $y_i'=k_i/N$ , you'll get the cross-entropy: $$ -\frac{1}{N} \log P[data|model] = -\frac{1}{N}\sum_i k_i \log y_i = -\sum_i y_i'\log y_i =: H(y', y) $$ Furthermore, the log-likelihood of a dataset given a model can be interpreted as a measure of "encoding length" - the number of bits you expect to spend to encode this information if your encoding scheme would be based on your hypothesis. This follows from the observation that an independent event with probability $y_i$ requires at least $-\log_2 y_i$ bits to encode it (assuming efficient coding), and consequently the expression $$-\sum_i y_i'\log_2 y_i,$$ is literally the expected length of the encoding, where the encoding lengths for the events are computed using the "hypothesized" distribution, while the expectation is taken over the actual one. Finally, instead of saying "measure of expected encoding length" I really like to use the informal term "measure of surprise". If you need a lot of bits to encode an expected event from a distribution, the distribution is "really surprising" for you. With those intuitions in mind, the answers to your questions can be seen as follows: Question 1 . Yes. It is a problem whenever the corresponding $y_i'$ is nonzero at the same time . It corresponds to the situation where your model believes that some class has zero probability of occurrence, and yet the class pops up in reality. As a result, the "surprise" of your model is infinitely great: your model did not account for that event and now needs infinitely many bits to encode it. That is why you get infinity as your cross-entropy. To avoid this problem you need to make sure that your model does not make rash assumptions about something being impossible while it can happen. In reality, people tend to use sigmoid or "softmax" functions as their hypothesis models, which are conservative enough to leave at least some chance for every option. If you use some other hypothesis model, it is up to you to regularize (aka "smooth") it so that it would not hypothesize zeros where it should not. Question 2 . In this formula, one usually assumes $y_i'$ to be either $0$ or $1$ , while $y_i$ is the model's probability hypothesis for the corresponding input. If you look closely, you will see that it is simply a $-\log P[data|model]$ for binary data, an equivalent of the second equation in this answer. Hence, strictly speaking, although it is still a log-likelihood, this is not syntactically equivalent to cross-entropy. 
What some people mean when referring to such an expression as cross-entropy is that it is, in fact, a sum over binary cross-entropies for individual points in the dataset: $$ \sum_i H(y_i', y_i), $$ where $y_i'$ and $y_i$ have to be interpreted as the corresponding binary distributions $(y_i', 1-y_i')$ and $(y_i, 1-y_i)$ .
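A tiny NumPy check of these formulas with made-up numbers (the clipping at the end is just one common workaround for the zero-probability issue from Question 1):
import numpy as np

# Empirical (true) distribution and a model's predicted probabilities.
y_true = np.array([0.0, 1.0, 0.0])   # one-hot: the true class is the second one
y_pred = np.array([0.1, 0.7, 0.2])   # softmax-style output, sums to 1

cross_entropy = -np.sum(y_true * np.log(y_pred))
print(cross_entropy)                 # -log(0.7), about 0.357 nats

# The second form from the question sums per-label binary cross-entropies instead.
binary_ce = -np.sum(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
print(binary_ce)

# A predicted probability of exactly 0 for the true class blows up (Question 1),
# which is why outputs usually come from softmax/sigmoid or are clipped:
eps = 1e-12
print(-np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0))))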
{ "source": [ "https://datascience.stackexchange.com/questions/9302", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/8820/" ] }
9,364
XGBoost have been doing a great job, when it comes to dealing with both categorical and continuous dependant variables. But, how do I select the optimized parameters for an XGBoost problem? This is how I applied the parameters for a recent Kaggle problem: param <- list( objective = "reg:linear", booster = "gbtree", eta = 0.02, # 0.06, #0.01, max_depth = 10, #changed from default of 8 subsample = 0.5, # 0.7 colsample_bytree = 0.7, # 0.7 num_parallel_tree = 5 # alpha = 0.0001, # lambda = 1 ) clf <- xgb.train( params = param, data = dtrain, nrounds = 3000, #300, #280, #125, #250, # changed from 300 verbose = 0, early.stop.round = 100, watchlist = watchlist, maximize = FALSE, feval=RMPSE ) All I do to experiment is randomly select (with intuition) another set of parameters for improving on the result. Is there anyway I automate the selection of optimized(best) set of parameters? (Answers can be in any language. I'm just looking for the technique)
Whenever I work with xgboost I often make my own homebrew parameter search but you can do it with the caret package as well like KrisP just mentioned. Caret See this answer on Cross Validated for a thorough explanation on how to use the caret package for hyperparameter search on xgboost. How to tune hyperparameters of xgboost trees? Custom Grid Search I often begin with a few assumptions based on Owen Zhang 's slides on tips for data science P. 14 Here you can see that you'll mostly need to tune row sampling, column sampling and maybe maximum tree depth. This is how I do a custom row sampling and column sampling search for a problem I am working on at the moment: searchGridSubCol <- expand.grid(subsample = c(0.5, 0.75, 1), colsample_bytree = c(0.6, 0.8, 1)) ntrees <- 100 #Build a xgb.DMatrix object DMMatrixTrain <- xgb.DMatrix(data = yourMatrix, label = yourTarget) rmseErrorsHyperparameters <- apply(searchGridSubCol, 1, function(parameterList){ #Extract Parameters to test currentSubsampleRate <- parameterList[["subsample"]] currentColsampleRate <- parameterList[["colsample_bytree"]] xgboostModelCV <- xgb.cv(data = DMMatrixTrain, nrounds = ntrees, nfold = 5, showsd = TRUE, metrics = "rmse", verbose = TRUE, "eval_metric" = "rmse", "objective" = "reg:linear", "max.depth" = 15, "eta" = 2/ntrees, "subsample" = currentSubsampleRate, "colsample_bytree" = currentColsampleRate) xvalidationScores <- as.data.frame(xgboostModelCV) #Save rmse of the last iteration rmse <- tail(xvalidationScores$test.rmse.mean, 1) return(c(rmse, currentSubsampleRate, currentColsampleRate)) }) And combined with some ggplot2 magic using the results of that apply function you can plot a graphical representation of the search. In this plot lighter colors represent lower error and each block represents a unique combination of column sampling and row sampling. So if you want to perform an additional search of say eta (or tree depth) you will end up with one of these plots for each eta parameters tested. I see you have a different evaluation metric (RMPSE), just plug that in the cross validation function and you'll get the desired result. Besides that I wouldn't worry too much about fine tuning the other parameters because doing so won't improve performance too much, at least not so much compared to spending more time engineering features or cleaning the data. Others Random search and Bayesian parameter selection are also possible but I haven't made/found an implementation of them yet. Here is a good primer on bayesian Optimization of hyperparameters by Max Kuhn creator of caret. http://blog.revolutionanalytics.com/2016/06/bayesian-optimization-of-machine-learning-models.html
{ "source": [ "https://datascience.stackexchange.com/questions/9364", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/11097/" ] }
9,443
I have been building models with categorical data for a while now and when in this situation I basically default to using scikit-learn's LabelEncoder function to transform this data prior to building a model. I understand the difference between OHE , LabelEncoder and DictVectorizor in terms of what they are doing to the data, but what is not clear to me is when you might choose to employ one technique over another. Are there certain algorithms or situations in which one has advantages/disadvantages with respect to the others?
There are some cases where LabelEncoder or DictVectorizor are useful, but these are quite limited in my opinion due to ordinality. LabelEncoder can turn [dog,cat,dog,mouse,cat] into [1,2,1,3,2] , but then the imposed ordinality means that the average of dog and mouse is cat. Still there are algorithms like decision trees and random forests that can work with categorical variables just fine and LabelEncoder can be used to store values using less disk space. One-Hot-Encoding has the advantage that the result is binary rather than ordinal and that everything sits in an orthogonal vector space. The disadvantage is that for high cardinality, the feature space can really blow up quickly and you start fighting with the curse of dimensionality. In these cases, I typically employ one-hot-encoding followed by PCA for dimensionality reduction. I find that the judicious combination of one-hot plus PCA can seldom be beat by other encoding schemes. PCA finds the linear overlap, so will naturally tend to group similar features into the same feature.
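A minimal sketch of the one-hot-then-PCA combination (the toy data has only three categories, so PCA is not actually needed here; it just shows the mechanics, and pandas get_dummies is used to sidestep encoder API differences between scikit-learn versions):
import pandas as pd
from sklearn.decomposition import PCA

animals = pd.Series(["dog", "cat", "dog", "mouse", "cat"])

# One-hot encoding: binary, orthogonal columns with no artificial ordering.
one_hot = pd.get_dummies(animals)
print(one_hot.shape)               # (5, 3) -- one column per category

# For high-cardinality features the one-hot space blows up, so follow with PCA.
pca = PCA(n_components=2)
reduced = pca.fit_transform(one_hot.to_numpy(dtype=float))
print(reduced.shape)               # (5, 2)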
{ "source": [ "https://datascience.stackexchange.com/questions/9443", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/10462/" ] }
9,818
Neural networks get top results in Computer Vision tasks (see MNIST , ILSVRC , Kaggle Galaxy Challenge ). They seem to outperform every other approach in Computer Vision. But there are also other tasks: Kaggle Molecular Activity Challenge Regression: Kaggle Rain prediction , also the 2nd place Grasp and Lift 2nd also third place - Identify hand motions from EEG recordings I'm not too sure about ASR (automatic speech recognition) and machine translation, but I think I've also heard that (recurrent) neural networks (start to) outperform other approaches. I am currently learning about Bayesian Networks and I wonder in which cases those models are usually applied. So my question is: Is there any challenge / (Kaggle) competition, where the state of the art are Bayesian Networks or at least very similar models? (Side note: I've also seen decision trees , 2 , 3 , 4 , 5 , 6 , 7 win in several recent Kaggle challenges)
One of the areas where Bayesian approaches are often used, is where one needs interpretability of the prediction system. You don't want to give doctors a Neural net and say that it's 95% accurate. You rather want to explain the assumptions your method makes, as well as the decision process the method uses. Similar area is when you have a strong prior domain knowledge and want to use it in the system.
{ "source": [ "https://datascience.stackexchange.com/questions/9818", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/8820/" ] }
9,832
It seems to me that the $V$ function can be easily expressed by the $Q$ function and thus the $V$ function seems to be superfluous to me. However, I'm new to reinforcement learning so I guess I got something wrong. Definitions Q- and V-learning are in the context of Markov Decision Processes . A MDP is a 5-tuple $(S, A, P, R, \gamma)$ with $S$ is a set of states (typically finite) $A$ is a set of actions (typically finite) $P(s, s', a) = P(s_{t+1} = s' | s_t = s, a_t = a)$ is the probability to get from state $s$ to state $s'$ with action $a$. $R(s, s', a) \in \mathbb{R}$ is the immediate reward after going from state $s$ to state $s'$ with action $a$. (It seems to me that usually only $s'$ matters). $\gamma \in [0, 1]$ is called discount factor and determines if one focuses on immediate rewards ($\gamma = 0$), the total reward ($\gamma = 1$) or some trade-off. A policy $\pi$ , according to Reinforcement Learning: An Introduction by Sutton and Barto is a function $\pi: S \rightarrow A$ (this could be probabilistic). According to Mario Martins slides , the $V$ function is $$V^\pi(s) = E_\pi \{R_t | s_t = s\} = E_\pi \{\sum_{k=0}^\infty \gamma^k r_{t+k+1} | s_t = s\}$$ and the Q function is $$Q^\pi(s, a) = E_\pi \{R_t | s_t = s, a_t = a\} = E_\pi \{\sum_{k=0}^\infty \gamma^k r_{t+k+1} | s_t = s, a_t=a\}$$ My thoughts The $V$ function states what the expected overall value (not reward!) of a state $s$ under the policy $\pi$ is. The $Q$ function states what the value of a state $s$ and an action $a$ under the policy $\pi$ is. This means, $$Q^\pi(s, \pi(s)) = V^\pi(s)$$ Right? So why do we have the value function at all? (I guess I mixed up something)
Q-values are a great way to make actions explicit so you can deal with problems where the transition function is not available (model-free). However, when your action space is large, things are not so nice and Q-values are not so convenient. Think of a huge number of actions or even continuous action spaces. From a sampling perspective, the dimensionality of $Q(s, a)$ is higher than that of $V(s)$, so it might get harder to get enough $(s, a)$ samples in comparison with $(s)$. If you have access to the transition function, sometimes $V$ is good. There are also other uses where both are combined. For instance, the advantage function, where $A(s, a) = Q(s, a) - V(s)$. If you are interested, you can find a recent example using advantage functions here: Dueling Network Architectures for Deep Reinforcement Learning by Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot and Nando de Freitas.
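A tiny NumPy illustration of how the two relate (the action values and policy probabilities are made up): $V$ is the policy-weighted average of $Q$, and the advantage is their difference:
import numpy as np

# One state, three actions: made-up action values Q(s, a).
q_s = np.array([1.0, 2.0, 4.0])

# A stochastic policy pi(a | s) over the three actions.
pi_s = np.array([0.2, 0.3, 0.5])

v_s = np.dot(pi_s, q_s)     # V(s) = sum_a pi(a | s) * Q(s, a)
advantage = q_s - v_s       # A(s, a) = Q(s, a) - V(s)

print(v_s)                  # 2.8
print(advantage)            # [-1.8 -0.8  1.2]

# For a deterministic policy a = pi(s), V(s) is just Q(s, pi(s)),
# which is the identity Q^pi(s, pi(s)) = V^pi(s) from the question.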
{ "source": [ "https://datascience.stackexchange.com/questions/9832", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/8820/" ] }
9,850
I am using TensorFlow for experiments mainly with neural networks. Although I have done quite some experiments (XOR-Problem, MNIST, some Regression stuff, ...) now, I struggle with choosing the "correct" cost function for specific problems because overall I could be considered a beginner. Before coming to TensorFlow I coded some fully-connected MLPs and some recurrent networks on my own with Python and NumPy but mostly I had problems where a simple squared error and a simple gradient descient was sufficient. However, since TensorFlow offers quite a lot of cost functions itself as well as building custom cost functions, I would like to know if there is some kind of tutorial maybe specifically for cost functions on neural networks? (I've already done like half of the official TensorFlow tutorials but they're not really explaining why specific cost functions or learners are used for specific problems - at least not for beginners) To give some examples: cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y_output, y_train)) I guess it applies the softmax function on both inputs so that the sum of one vector equals 1. But what exactly is cross entropy with logits? I thought it sums up the values and calculates the cross entropy...so some metric measurement?! Wouldn't this be very much the same if I normalize the output, sum it up and take the squared error? Additionally, why is this used e.g. for MNIST (or even much harder problems)? When I want to classify like 10 or maybe even 1000 classes, doesn't summing up the values completely destroy any information about which class actually was the output? cost = tf.nn.l2_loss(vector) What is this for? I thought l2 loss is pretty much the squared error but TensorFlow's API tells that it's input is just one tensor. Doesn't get the idea at all?! Besides I saw this for cross entropy pretty often: cross_entropy = -tf.reduce_sum(y_train * tf.log(y_output)) ...but why is this used? Isn't the loss in cross entropy mathematically this: -1/n * sum(y_train * log(y_output) + (1 - y_train) * log(1 - y_output)) Where is the (1 - y_train) * log(1 - y_output) part in most TensorFlow examples? Isn't it missing? Answers: I know this question is quite open, but I do not expect to get like 10 pages with every single problem/cost function listed in detail. I just need a short summary about when to use which cost function (in general or in TensorFlow, doesn't matter much to me) and some explanation about this topic. And/or some source(s) for beginners ;)
This answer is on the general side of cost functions, not related to TensorFlow, and will mostly address the "some explanation about this topic" part of your question. In most examples/tutorial I followed, the cost function used was somewhat arbitrary. The point was more to introduce the reader to a specific method, not to the cost function specifically. It should not stop you to follow the tutorial to be familiar with the tools, but my answer should help you on how to choose the cost function for your own problems. If you want answers regarding Cross-Entropy, Logit, L2 norms, or anything specific, I advise you to post multiple, more specific questions. This will increase the probability that someone with specific knowledge will see your question. Choosing the right cost function for achieving the desired result is a critical point of machine learning problems. The basic approach, if you do not know exactly what you want out of your method, is to use Mean Square Error (Wikipedia) for regression problems and Percentage of error for classification problems. However, if you want good results out of your method, you need to define good , and thus define the adequate cost function. This comes from both domain knowledge (what is your data, what are you trying to achieve), and knowledge of the tools at your disposal. I do not believe I can guide you through the cost functions already implemented in TensorFlow, as I have very little knowledge of the tool, but I can give you an example on how to write and assess different cost functions. To illustrate the various differences between cost functions, let us use the example of the binary classification problem, where we want, for each sample $x_n$ , the class $f(x_n) \in \{0,1\}$ . Starting with computational properties ; how two functions measuring the "same thing" could lead to different results. Take the following, simple cost function; the percentage of error. If you have $N$ samples, $f(y_n)$ is the predicted class and $y_n$ the true class, you want to minimize $\frac{1}{N} \sum_n \left\{ \begin{array}{ll} 1 & \text{ if } f(x_n) \not= y_n\\ 0 & \text{ otherwise}\\ \end{array} \right. = \sum_n y_n[1-f(x_n)] + [1-y_n]f(x_n)$ . This cost function has the benefit of being easily interpretable. However, it is not smooth; if you have only two samples, the function "jumps" from 0, to 0.5, to 1. This will lead to inconsistencies if you try to use gradient descent on this function. One way to avoid it is to change the cost function to use probabilities of assignment; $p(y_n = 1 | x_n)$ . The function becomes $\frac{1}{N} \sum_n y_n p(y_n = 0 | x_n) + (1 - y_n) p(y_n = 1 | x_n)$ . This function is smoother, and will work better with a gradient descent approach. You will get a 'finer' model. However, it has other problem; if you have a sample that is ambiguous, let say that you do not have enough information to say anything better than $p(y_n = 1 | x_n) = 0.5$ . Then, using gradient descent on this cost function will lead to a model which increases this probability as much as possible, and thus, maybe, overfit. Another problem of this function is that if $p(y_n = 1 | x_n) = 1$ while $y_n = 0$ , you are certain to be right, but you are wrong. In order to avoid this issue, you can take the log of the probability, $\log p(y_n | x_n)$ . As $\log(0) = \infty$ and $\log(1) = 0$ , the following function does not have the problem described in the previous paragraph: $\frac{1}{N} \sum_n y_n \log p(y_n = 0 | x_n) + (1 - y_n) \log p(y_n = 1 | x_n)$ . 
This should illustrate that, in order to optimize the same thing (the percentage of error), different definitions might yield different results, because some are easier to work with computationally. It is possible for cost functions $A$ and $B$ to measure the same concept , but $A$ might lead your method to better results than $B$ . Now let's see how different cost functions can measure different concepts. In the context of information retrieval, as in Google search (if we ignore ranking), we want the returned results to have high precision (not return irrelevant information) and to have high recall (return as many relevant results as possible). Precision and Recall (Wikipedia) Note that if your algorithm returns everything , it will return every relevant result possible, and thus have high recall, but have very poor precision. On the other hand, if it returns only one element, the one it is most certain is relevant, it will have high precision but low recall. In order to judge such algorithms, the common cost function is the $F$ -score (Wikipedia) . The common case is the $F_1$ -score, which gives equal weight to precision and recall, but the general case is the $F_\beta$ -score, and you can tweak $\beta$ to get higher recall (if you use $\beta > 1$) or higher precision (if you use $\beta < 1$). In such a scenario, choosing the cost function is choosing what trade-off your algorithm should make. Another example that is often brought up is the case of medical diagnosis: you can choose a cost function that punishes false negatives or false positives more heavily, depending on what is preferable: more healthy people being classified as sick (but then, we might treat healthy people, which is costly and might hurt them if they are actually not sick), or more sick people being classified as healthy (but then, they might die without treatment). In conclusion, defining the cost function is defining the goal of your algorithm. The algorithm defines how to get there. Side note: Some cost functions have nice algorithmic ways to reach their goals. For example, a nice way to reach the minimum of the Hinge loss (Wikipedia) exists: solving the dual problem in SVM (Wikipedia)
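To make these trade-offs concrete, here is a minimal sketch (my own illustration, not part of the original answer) that evaluates the same toy predictions under the percentage of error, the cross-entropy/log loss, and the $F_\beta$-score; the label and probability arrays are made up:

import numpy as np
from sklearn.metrics import fbeta_score, log_loss

y_true = np.array([1, 0, 1, 1, 0])            # hypothetical true labels
p_pred = np.array([0.9, 0.2, 0.6, 0.4, 0.1])  # hypothetical predicted P(y=1)
y_pred = (p_pred >= 0.5).astype(int)

error_rate = np.mean(y_pred != y_true)               # percentage of error: interpretable but not smooth
ce = log_loss(y_true, p_pred)                        # log loss: smooth, punishes confident mistakes
f_recall = fbeta_score(y_true, y_pred, beta=2.0)     # beta > 1 favours recall
f_precision = fbeta_score(y_true, y_pred, beta=0.5)  # beta < 1 favours precision
print(error_rate, ce, f_recall, f_precision)

Changing which of these numbers you optimize changes which model you end up preferring, which is exactly the point made above.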
{ "source": [ "https://datascience.stackexchange.com/questions/9850", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/13809/" ] }
10,188
I'm just getting started with some machine learning, and until now I have been dealing with linear regression over one variable. I have learnt that there is a hypothesis, which is: $h_\theta(x)=\theta_0+\theta_1x$ To find out good values for the parameters $\theta_0$ and $\theta_1$ we want to minimize the difference between the calculated result and the actual result of our test data. So we subtract $h_\theta(x^{(i)})-y^{(i)}$ for all $i$ from $1$ to $m$. Hence we calculate the sum over this difference and then calculate the average by multiplying the sum by $\frac{1}{m}$. So far, so good. This would result in: $\frac{1}{m}\sum_{i=1}^m\left(h_\theta(x^{(i)})-y^{(i)}\right)$ But this is not what has been suggested. Instead the course suggests to take the square value of the difference, and to multiply by $\frac{1}{2m}$. So the formula is: $\frac{1}{2m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})^2$ Why is that? Why do we use the square function here, and why do we multiply by $\frac{1}{2m}$ instead of $\frac{1}{m}$?
Your loss function would not work because it incentivizes setting $\theta_1$ to any finite value and $\theta_0$ to $-\infty$ . Let's call $r(x,y)=\frac{1}{m}\sum_{i=1}^m \left(h_\theta\left(x^{(i)}\right) -y^{(i)}\right)$ the residual for $h$ . Your goal is to make $r$ as close to zero as possible, not just minimize it. A high negative value is just as bad as a high positive value. EDIT: You can counter this by artificially limiting the parameter space $\mathbf{\Theta} $ (e.g. you want $|\theta_0| < 10$ ). In this case, the optimal parameters would lie on certain points on the boundary of the parameter space. See https://math.stackexchange.com/q/896388/12467 . This is not what you want. Why do we use the square loss? The squared error forces $h(x)$ and $y$ to match. Write $u = h(x)$ and $v = y$ : then the squared error $(u-v)^2$ is minimized at $u=v$ , if possible, and is always $\ge 0$ , because it is the square of the real number $u-v$ . $|u-v|$ would also work for the above purpose, as would $(u-v)^{2n}$ , with $n$ some positive integer. The first of these is actually used (it's called the $\ell_1$ loss; you might also come across the $\ell_2$ loss, which is another name for squared error). So, why is the squared loss better than these? This is a deep question related to the link between Frequentist and Bayesian inference. In short, the squared error relates to Gaussian Noise . If your data does not fit all points exactly, i.e. $h(x)-y$ is not zero for some point no matter what $\theta$ you choose (as will always happen in practice), that might be because of noise . In any complex system there will be many small independent causes for the difference between your model $h$ and reality $y$ : measurement error, environmental factors etc. By the Central Limit Theorem (CLT), the total noise would be distributed Normally , i.e. according to the Gaussian distribution . We want to pick the best fit $\theta$ taking this noise distribution into account. Assume $R = h(X)-Y$ , the part of $\mathbf{y}$ that your model cannot explain, follows the Gaussian distribution $\mathcal{N}(\mu,\sigma)$ . We're using capitals because we're talking about random variables now. The Gaussian distribution has two parameters, mean $\mu = \mathbb{E}[R] = \frac{1}{m} \sum_i \left(h_\theta(X^{(i)})-Y^{(i)}\right)$ and variance $\sigma^2 = \mathbb{E}[R^2] = \frac{1}{m} \sum_i \left(h_\theta(X^{(i)})-Y^{(i)}\right)^2$ . See here to understand these terms better. Consider $\mu$ : it is the systematic error of our measurements. Use $h'(x) = h(x) - \mu$ to correct for systematic error, so that $\mu' = \mathbb{E}[R']=0$ (exercise for the reader). Nothing else to do here. $\sigma$ represents the random error , also called noise . Once we've taken care of the systematic noise component as in the previous point, the best predictor is obtained when $\sigma^2 = \frac{1}{m} \sum_i \left(h_\theta(X^{(i)})-Y^{(i)}\right)^2$ is minimized. Put another way, the best predictor is the one with the tightest distribution around the predicted value, i.e. the smallest variance. Minimizing the least squares loss is the same thing as minimizing the variance! That explains why the least squares loss works for a wide range of problems. The underlying noise is very often Gaussian, because of the CLT, and minimizing the squared error turns out to be the right thing to do! To simultaneously take both the mean and variance into account, we include a bias term in our classifier (to handle systematic error $\mu$ ), then minimize the square loss. Followup questions: Least squares loss = Gaussian error.
Does every other loss function also correspond to some noise distribution? Yes. For example, the $\ell_1$ loss (minimizing absolute value instead of squared error) corresponds to the Laplace distribution (Look at the formula for the PDF in the infobox -- it's just the Gaussian with $|x-\mu|$ instead of $(x-\mu)^2$ ). A popular loss for probability distributions is the KL-divergence . -The Gaussian distribution is very well motivated because of the Central Limit Theorem , which we discussed earlier. When is the Laplace distribution the right noise model? There are some circumstances where it comes about naturally, but it's more commonly as a regularizer to enforce sparsity : the $\ell_1$ loss is the least convex among all convex losses. As Jan mentions in the comments, the minimizer of squared deviations is the mean and the minimizer of the sum of absolute deviations is the median . Why would we want to find the median of the residuals instead of the mean? Unlike the mean, the median isn't thrown off by one very large outlier. So, the $\ell_1$ loss is used for increased robustness. Sometimes a combination of the two is used. Are there situations where we minimize both the Mean and Variance? Yes. Look up Bias-Variance Trade-off . Here, we are looking at a set of classifiers $h_\theta \in H$ and asking which among them is best. If we ask which set of classifiers is the best for a problem, minimizing both the bias and variance becomes important. It turns out that there is always a trade-off between them and we use regularization to achieve a compromise. Regarding the $\frac{1}{2}$ term The 1/2 does not matter and actually, neither does the $m$ - they're both constants. The optimal value of $\theta$ would remain the same in both cases. The expression for the gradient becomes prettier with the $\frac{1}{2}$ , because the 2 from the square term cancels out. When writing code or algorithms, we're usually concerned more with the gradient, so it helps to keep it concise. You can check progress just by checking the norm of the gradient. The loss function itself is sometimes omitted from code because it is used only for validation of the final answer. The $m$ is useful if you solve this problem with gradient descent. Then your gradient becomes the average of $m$ terms instead of a sum, so its' scale does not change when you add more data points. I've run into this problem before: I test code with a small number of points and it works fine, but when you test it with the entire dataset there is loss of precision and sometimes over/under-flows, i.e. your gradient becomes nan or inf . To avoid that, just normalize w.r.t. number of data points. These aesthetic decisions are used here to maintain consistency with future equations where you'll add regularization terms. If you include the $m$ , the regularization parameter $\lambda$ will not depend on the dataset size $m$ and it will be more interpretable across problems.
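A small numerical check of the mean-versus-median point made above; the data values are arbitrary and the grid search is only for illustration:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])      # one large outlier
grid = np.linspace(0.0, 110.0, 2201)

l2 = [np.sum((x - c) ** 2) for c in grid]       # sum of squared deviations
l1 = [np.sum(np.abs(x - c)) for c in grid]      # sum of absolute deviations

print(grid[np.argmin(l2)], x.mean())            # ~22.0, the mean, pulled by the outlier
print(grid[np.argmin(l1)], np.median(x))        # ~3.0, the median, robust to the outlier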
{ "source": [ "https://datascience.stackexchange.com/questions/10188", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/16148/" ] }
10,228
Can someone practically explain the rationale behind Gini impurity vs Information gain (based on Entropy)? Which metric is better to use in different scenarios while using decision trees?
Gini impurity and information-gain entropy are pretty much the same, and people do use the values interchangeably. Below are the formulae of both: $\textit{Gini}: \mathit{Gini}(E) = 1 - \sum_{j=1}^{c}p_j^2$ $\textit{Entropy}: H(E) = -\sum_{j=1}^{c}p_j\log p_j$ Given a choice, I would use the Gini impurity, as it doesn't require me to compute logarithmic functions, which are computationally intensive. A closed-form solution for it can also be found. Which metric is better to use in different scenarios while using decision trees? The Gini impurity, for the reasons stated above. So, they are pretty much the same when it comes to CART analytics. Helpful reference for computational comparison of the two methods
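A short sketch (my own, not from the answer) showing that the two criteria rank node purity the same way:

import numpy as np

def gini(p):
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                     # avoid log(0)
    return -np.sum(p * np.log2(p))

for p in ([0.5, 0.5], [0.9, 0.1], [1.0, 0.0]):
    print(p, round(gini(p), 3), round(entropy(p), 3))
# Both are maximal for a 50/50 node and zero for a pure node,
# so candidate splits are usually ranked very similarly by either criterion.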
{ "source": [ "https://datascience.stackexchange.com/questions/10228", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/16236/" ] }
10,234
I am looking for datasets that have the same columns and their content (rows) are different. When I run a decision tree (classification) I'll get different models (trees) for each of them. It can be data of customers of different bank or insurance companies. Just for clarification the datasets should have the following criteria: 1. Have same columns 2. Categorized target column (i.e. I can build decision tree) 3. Each dataset can have enough data (over 1000 tuples) 4. The decision trees that I create (2) from each dataset is different
{ "source": [ "https://datascience.stackexchange.com/questions/10234", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/16251/" ] }
10,459
I have a pandas data frame with several entries, and I want to calculate the correlation between the income of some type of stores. There are a number of stores with income data, classification of area of activity (theater, cloth stores, food ...) and other data. I tried to create a new data frame and insert a column with the income of all kinds of stores that belong to the same category, and the returning data frame has only the first column filled and the rest is full of NaN's. The code that I tired: corr = pd.DataFrame() for at in activity: stores.loc[stores['Activity']==at]['income'] I want to do so, so I can use .corr() to gave the correlation matrix between the category of stores. After that, I would like to know how I can plot the matrix values (-1 to 1, since I want to use Pearson's correlation) with matplolib.
I suggest some sort of play on the following: Using the UCI Abalone data for this example... import matplotlib import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Read file into a Pandas dataframe from pandas import DataFrame, read_csv f = 'https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data' df = read_csv(f) df=df[0:10] df Correlation matrix plotting function: # Correlation matric plotting function def correlation_matrix(df): from matplotlib import pyplot as plt from matplotlib import cm as cm fig = plt.figure() ax1 = fig.add_subplot(111) cmap = cm.get_cmap('jet', 30) cax = ax1.imshow(df.corr(), interpolation="nearest", cmap=cmap) ax1.grid(True) plt.title('Abalone Feature Correlation') labels=['Sex','Length','Diam','Height','Whole','Shucked','Viscera','Shell','Rings',] ax1.set_xticklabels(labels,fontsize=6) ax1.set_yticklabels(labels,fontsize=6) # Add colorbar, make sure to specify tick locations to match desired ticklabels fig.colorbar(cax, ticks=[.75,.8,.85,.90,.95,1]) plt.show() correlation_matrix(df) Hope this helps!
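To get back to the original question of correlating income across store categories, here is a hedged sketch of how the per-category frame could be built before plotting; 'store_id' is an assumed identifier column, while 'Activity' and 'income' come from the question's own code:

# Hypothetical: one row per store, one column per activity category
incomes = stores.pivot_table(index='store_id', columns='Activity', values='income')
corr = incomes.corr(method='pearson')     # Pearson correlation between categories
print(corr)
# The helper above computes .corr() internally and hard-codes the abalone labels,
# so either adapt its label list to your categories or simply plot with:
# plt.matshow(corr); plt.colorbar(); plt.show()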
{ "source": [ "https://datascience.stackexchange.com/questions/10459", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/16096/" ] }
11,356
I have 10 data frames pyspark.sql.dataframe.DataFrame , obtained from randomSplit as (td1, td2, td3, td4, td5, td6, td7, td8, td9, td10) = td.randomSplit([.1, .1, .1, .1, .1, .1, .1, .1, .1, .1], seed = 100) Now I want to join 9 td 's into a single data frame, how should I do that? I have already tried with unionAll , but this function accepts only two arguments. td1_2 = td1.unionAll(td2) # this is working fine td1_2_3 = td1.unionAll(td2, td3) # error TypeError: unionAll() takes exactly 2 arguments (3 given) Is there any way to combine more than two data frames row-wise? The purpose of doing this is that I am doing 10-fold Cross Validation manually without using PySpark CrossValidator method, So taking 9 into training and 1 into test data and then I will repeat it for other combinations.
Stolen from: https://stackoverflow.com/questions/33743978/spark-union-of-multiple-rdds Outside of chaining unions this is the only way to do it for DataFrames. from functools import reduce # For Python 3.x from pyspark.sql import DataFrame def unionAll(*dfs): return reduce(DataFrame.unionAll, dfs) unionAll(td2, td3, td4, td5, td6, td7, td8, td9, td10) What happens is that it takes all the objects that you passed as parameters and reduces them using unionAll (this reduce is from Python, not the Spark reduce although they work similarly) which eventually reduces it to one DataFrame. If instead of DataFrames they are normal RDDs you can pass a list of them to the union function of your SparkContext EDIT: For your purpose I propose a different method, since you would have to repeat this whole union 10 times for your different folds for crossvalidation, I would add labels for which fold a row belongs to and just filter your DataFrame for every fold based on the label
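A hedged sketch of that fold-label idea, assuming td is the original DataFrame from the question; the rand-based assignment only approximates equal fold sizes:

from pyspark.sql import functions as F

td_folds = td.withColumn('fold', (F.rand(seed=100) * 10).cast('int'))

for k in range(10):
    train = td_folds.filter(F.col('fold') != k)   # 9 folds for training
    test = td_folds.filter(F.col('fold') == k)    # 1 fold held out
    # fit your model on `train` and evaluate it on `test` here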
{ "source": [ "https://datascience.stackexchange.com/questions/11356", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/17116/" ] }
11,619
I've been thinking about the Recurrent Neural Networks (RNN) and their varieties and Convolutional Neural Networks (CNN) and their varieties. Would these two points be fair to say: Use CNNs to break a component (such as an image) into subcomponents (such as an object in an image, such as the outline of the object in the image, etc.) Use RNNs to create combinations of subcomponents (image captioning, text generation, language translation, etc.) I would appreciate if anyone wants to point out any inaccuracies in these statements. My goal here is to get a more clearer foundation on the uses of CNNs and RNNs.
A CNN will learn to recognize patterns across space. So, as you say, a CNN will learn to recognize components of an image (e.g., lines, curves, etc.) and then learn to combine these components to recognize larger structures (e.g., faces, objects, etc.). You could say, in a very general way, that a RNN will similarly learn to recognize patterns across time. So a RNN that is trained to translate text might learn that "dog" should be translated differently if preceded by the word "hot". The mechanism by which the two kinds of NNs represent these patterns is different, however. In the case of a CNN, you are looking for the same patterns on all the different subfields of the image. In the case of a RNN you are (in the simplest case) feeding the hidden layers from the previous step as an additional input into the next step. While the RNN builds up memory in this process, it is not looking for the same patterns over different slices of time in the same way that a CNN is looking for the same patterns over different regions of space. I should also note that when I say "time" and "space" here, it shouldn't be taken too literally. You could run a RNN on a single image for image captioning, for instance, and the meaning of "time" would simply be the order in which different parts of the image are processed. So objects initially processed will inform the captioning of later objects processed.
{ "source": [ "https://datascience.stackexchange.com/questions/11619", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/16244/" ] }
11,699
This is a small conceptual question that's been nagging me for a while: How can we back-propagate through a max-pooling layer in a neural network? I came across max-pooling layers while going through this tutorial for Torch 7's nn library. The library abstracts the gradient calculation and forward passes for each layer of a deep network. I don't understand how the gradient calculation is done for a max-pooling layer. I know that if you have an input ${z_i}^l$ going into neuron $i$ of layer $l$, then ${\delta_i}^l$ (defined as ${\delta_i}^l = \frac{\partial E}{\partial {z_i}^l}$) is given by: $$ {\delta_i}^l = \theta^{'}({z_i}^l) \sum_{j} {\delta_j}^{l+1} w_{i,j}^{l,l+1} $$ So, a max-pooling layer would receive the ${\delta_j}^{l+1}$'s of the next layer as usual; but since the activation function for the max-pooling neurons takes in a vector of values (over which it maxes) as input, ${\delta_i}^{l}$ isn't a single number anymore, but a vector ($\theta^{'}({z_j}^l)$ would have to be replaced by $\nabla \theta(\left\{{z_j}^l\right\})$). Furthermore, $\theta$, being the max function, isn't differentiable with respect to it's inputs. So....how should it work out exactly?
There is no gradient with respect to non-maximum values, since changing them slightly does not affect the output. Furthermore, the max is locally linear with slope 1 with respect to the input that actually achieves the max. Thus, the gradient from the next layer is passed back only to the neuron which achieved the max. All other neurons get zero gradient. So in your example, $\delta_i^l$ would be a vector of all zeros, except that the $i^{*^{th}}$ location will get the value $\left\{\delta_j^{l+1}\right\}$ where $i^* = argmax_{i} (z_i^l)$
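A tiny NumPy sketch (my own illustration) of that routing rule for a single 2x2 pooling window:

import numpy as np

z = np.array([[1.0, 3.0],
              [2.0, 0.5]])        # inputs to one max-pooling window
grad_out = 4.0                    # delta arriving from the layer above

grad_in = np.zeros_like(z)
idx = np.unravel_index(np.argmax(z), z.shape)
grad_in[idx] = grad_out           # only the argmax position receives gradient

print(grad_in)
# [[0. 4.]
#  [0. 0.]]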
{ "source": [ "https://datascience.stackexchange.com/questions/11699", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/18632/" ] }
11,928
I got ValueError when predicting test data using a RandomForest model. My code: clf = RandomForestClassifier(n_estimators=10, max_depth=6, n_jobs=1, verbose=2) clf.fit(X_fit, y_fit) df_test.fillna(df_test.mean()) X_test = df_test.values y_pred = clf.predict(X_test) The error: ValueError: Input contains NaN, infinity or a value too large for dtype('float32'). How do I find the bad values in the test dataset? Also, I do not want to drop these records, can I just replace them with the mean or median? Thanks.
With np.isnan(X) you get a boolean mask back with True for positions containing NaN s. With np.where(np.isnan(X)) you get back a tuple with i, j coordinates of NaN s. Finally, with np.nan_to_num(X) you "replace nan with zero and inf with finite numbers". Alternatively, you can use: sklearn.impute.SimpleImputer for mean / median imputation of missing values, or pandas' pd.DataFrame(X).fillna() , if you need something other than filling it with zeros.
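Putting those pieces together, a minimal sketch for this case; note also that the question's df_test.fillna(df_test.mean()) call never assigns its result back, so the NaNs survive into X_test:

import numpy as np
from sklearn.impute import SimpleImputer

rows, cols = np.where(np.isnan(X_test))   # locate the offending cells
print(sorted(set(cols)))                  # which columns contain NaNs
# np.isinf(X_test) finds infinities the same way

imputer = SimpleImputer(strategy='median')     # or strategy='mean'
X_test_clean = imputer.fit_transform(X_test)   # ideally: fit on train, transform test
y_pred = clf.predict(X_test_clean)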
{ "source": [ "https://datascience.stackexchange.com/questions/11928", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/17310/" ] }
12,318
I ran a xgboost model. I don't exactly know how to interpret the output of xgb.importance . What is the meaning of Gain, Cover, and Frequency and how do we interpret them? Also, what does Split, RealCover, and RealCover% mean? I have some extra parameters here Are there any other parameters that can tell me more about feature importances? From the R documentation, I have some understanding that Gain is something similar to Information gain and Frequency is number of times a feature is used across all the trees. I have no idea what Cover is. I ran the example code given in the link (and also tried doing the same on the problem that I am working on), but the split definition given there did not match with the numbers that I calculated. importance_matrix Output: Feature Gain Cover Frequence 1: xxx 2.276101e-01 0.0618490331 1.913283e-02 2: xxxx 2.047495e-01 0.1337406946 1.373710e-01 3: xxxx 1.239551e-01 0.1032614896 1.319798e-01 4: xxxx 6.269780e-02 0.0431682707 1.098646e-01 5: xxxxx 6.004842e-02 0.0305611830 1.709108e-02 214: xxxxxxxxxx 4.599139e-06 0.0001551098 1.147052e-05 215: xxxxxxxxxx 4.500927e-06 0.0001665320 1.147052e-05 216: xxxxxxxxxxxx 3.899363e-06 0.0001536857 1.147052e-05 217: xxxxxxxxxxxxxx 3.619348e-06 0.0001808504 1.147052e-05 218: xxxxxxxxxxxxx 3.429679e-06 0.0001792233 1.147052e-05
From your question, I'm assuming that you're using xgboost to fit boosted trees for binary classification. The importance matrix is actually a data.table object with the first column listing the names of all the features actually used in the boosted trees. The meaning of the importance data table is as follows: The Gain implies the relative contribution of the corresponding feature to the model calculated by taking each feature's contribution for each tree in the model. A higher value of this metric when compared to another feature implies it is more important for generating a prediction. The Cover metric means the relative number of observations related to this feature. For example, if you have 100 observations, 4 features and 3 trees, and suppose feature1 is used to decide the leaf node for 10, 5, and 2 observations in tree1, tree2 and tree3 respectively; then the metric will count cover for this feature as 10+5+2 = 17 observations. This will be calculated for all the 4 features and the cover will be 17 expressed as a percentage for all features' cover metrics. The Frequency (/'Frequence') is the percentage representing the relative number of times a particular feature occurs in the trees of the model. In the above example, if feature1 occurred in 2 splits, 1 split and 3 splits in each of tree1, tree2 and tree3; then the weightage for feature1 will be 2+1+3 = 6. The frequency for feature1 is calculated as its percentage weight over weights of all features. The Gain is the most relevant attribute to interpret the relative importance of each feature. The measures are all relative and hence all sum up to one, an example from a fitted xgboost model in R is: > sum(importance$Frequence) [1] 1 > sum(importance$Cover) [1] 1 > sum(importance$Gain) [1] 1
{ "source": [ "https://datascience.stackexchange.com/questions/12318", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/-1/" ] }
12,321
I do not understand the difference between the fit and fit_transform methods in scikit-learn. Can anybody explain simply why we might need to transform data? What does it mean, fitting a model on training data and transforming to test data? Does it mean, for example, converting categorical variables into numbers in training and transforming the new feature set onto test data?
To center the data (make it have zero mean and unit standard deviation), you subtract the mean and then divide the result by the standard deviation: $$x' = \frac{x-\mu}{\sigma}$$ You do that on the training set of the data. But then you have to apply the same transformation to your test set (e.g. in cross-validation), or to newly obtained examples before forecasting. But you have to use the exact same two parameters $\mu$ and $\sigma$ (values) that you used for centering the training set. Hence, every scikit-learn transformer's fit() just calculates the parameters (e.g. $\mu$ and $\sigma$ in case of StandardScaler ) and saves them as an internal object's state. Afterwards, you can call its transform() method to apply the transformation to any particular set of examples. fit_transform() joins these two steps and is used for the initial fitting of parameters on the training set $x$ , while also returning the transformed $x'$ . Internally, the transformer object just calls first fit() and then transform() on the same data.
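A minimal sketch with StandardScaler, assuming X_train and X_test are already-defined numeric arrays:

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)  # learns mu and sigma from the training data
X_test_std = scaler.transform(X_test)        # reuses the same mu and sigma, no refitting

print(scaler.mean_, scaler.scale_)           # the parameters stored by fit()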
{ "source": [ "https://datascience.stackexchange.com/questions/12321", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/15064/" ] }
12,532
I am about to train a big LSTM network with 2-3 million articles and am struggling with Memory Errors (I use AWS EC2 g2x2large). I found out that one solution is to reduce the batch_size . However, I am not sure if this parameter is only related to memory efficiency issues or if it will effect my results. As a matter of fact, I also noticed that batch_size used in examples is usually as a power of two, which I don't understand either. I don't mind if my network takes longer to train, but I would like to know if reducing the batch_size will decrease the quality of my predictions. Thanks.
After one and a half years, I have come back to my answer because my previous answer was wrong. Batch size impacts learning significantly. What happens when you put a batch through your network is that you average the gradients. The concept is that if your batch size is big enough, this will provide a stable enough estimate of what the gradient of the full dataset would be. By taking samples from your dataset, you estimate the gradient while reducing computational cost significantly. The lower you go, the less accurate your estimate will be; however, in some cases these noisy gradients can actually help escape local minima. When it is too low, your network weights can just jump around if your data is noisy, and it might be unable to learn, or it converges very slowly, thus negatively impacting total computation time. Another advantage of batching is for GPU computation: GPUs are very good at parallelizing the calculations that happen in neural networks if part of the computation is the same (for example, repeated matrix multiplication over the same weight matrix of your network). This means that a batch size of 16 will take less than twice the amount of time of a batch size of 8. In the case that you do need bigger batch sizes but they will not fit on your GPU, you can feed a small batch, save the gradient estimates, feed one or more further batches, and then do a weight update. This way you get a more stable gradient because you increased your virtual batch size.
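A framework-agnostic NumPy sketch of that "virtual batch size" trick, using a linear least-squares model; all sizes and learning rates here are arbitrary:

import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(1024, 12)
w_true = rng.randn(12)
y = X @ w_true + 0.1 * rng.randn(1024)

w = np.zeros(12)
lr, micro, accum_steps = 0.1, 64, 4        # effective (virtual) batch size = 256

for step in range(200):
    grad = np.zeros_like(w)
    for _ in range(accum_steps):           # accumulate gradients over micro-batches
        idx = rng.choice(len(X), micro, replace=False)
        err = X[idx] @ w - y[idx]
        grad += X[idx].T @ err / micro
    w -= lr * grad / accum_steps           # one update with the averaged gradient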
{ "source": [ "https://datascience.stackexchange.com/questions/12532", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/17484/" ] }
12,554
I'm currently using XGBoost on a data-set with 21 features (selected from list of some 150 features), then one-hot coded them to obtain ~98 features. A few of these 98 features are somewhat redundant, for example: a variable (feature) $A$ also appears as $\frac{B}{A}$ and $\frac{C}{A}$. My questions are : How ( If? ) do Boosted Decision Trees handle multicollinearity? How would the existence of multicollinearity affect prediction if it is not handled? From what I understand, the model is learning more than one tree and the final prediction is based on something like a "weighted sum" of the individual predictions. So if this is correct, then Boosted Decision Trees should be able to handle co-dependence between variables. Also, on a related note - how does the variable importance object in XGBoost work?
Decision trees are by nature immune to multi-collinearity. For example, if you have 2 features which are 99% correlated, when deciding upon a split the tree will choose only one of them. Other models such as Logistic regression would use both the features. Since boosted trees use individual decision trees, they also are unaffected by multi-collinearity. However, its a good practice to remove any redundant features from any dataset used for training, irrespective of the model's algorithm. In your case since you're deriving new features, you could use this approach, evaluate each feature's importance and retain only the best features for your final model. The importance matrix of an xgboost model is actually a data.table object with the first column listing the names of all the features actually used in the boosted trees. The second column is the Gain metric which implies the relative contribution of the corresponding feature to the model calculated by taking each feature's contribution for each tree in the model. A higher value of this metric when compared to another feature implies it is more important for generating a prediction.
{ "source": [ "https://datascience.stackexchange.com/questions/12554", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/13450/" ] }
12,564
I'm falling in love with data science and I'm spending a lot of time studying it. It seems that a common data science workflow is: Frame the problem Collect the data Clean the data Work on the data Report the results I'm struggling to connect the dots when comes to work on the data. I'm aware that step 4 is where the fun happens, but I don't know where to begin. What are the steps taken when you work on the data? Example: do I need to find the central tendency or the standard deviation? Is machine learning needed? Ps: I know these are broad questions, so please answer it within your own domain expertise.
{ "source": [ "https://datascience.stackexchange.com/questions/12564", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/18919/" ] }
12,645
How can I get the number of missing value in each row in Pandas dataframe. I would like to split dataframe to different dataframes which have same number of missing values in each row. Any suggestion?
You can apply a count over the rows like this: test_df.apply(lambda x: x.count(), axis=1) test_df: A B C 0: 1 1 3 1: 2 nan nan 2: nan nan nan output: 0: 3 1: 1 2: 0 You can add the result as a column like this: test_df['full_count'] = test_df.apply(lambda x: x.count(), axis=1) Result: A B C full_count 0: 1 1 3 3 1: 2 nan nan 1 2: nan nan nan 0
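Building on that, a short sketch of the split the question asks for (grouping rows by their missing-value count); test_df is the frame from the example above:

test_df['n_missing'] = test_df.isnull().sum(axis=1)   # equivalent per-row NaN count

frames = {k: g.drop('n_missing', axis=1)
          for k, g in test_df.groupby('n_missing')}
# frames[0], frames[1], ... are DataFrames whose rows share the same number of NaNs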
{ "source": [ "https://datascience.stackexchange.com/questions/12645", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/15064/" ] }
12,761
So, I have not been able to find any literature on this subject but it seems like something worth giving a thought: What are the best practices in model training and optimization if new observations are available? Is there any way to determine the period/frequency of re-training a model before the predictions begin to degrade? Is it over-fitting if the parameters are re-optimised for the aggregated data? Note that the learning may not necessarily be online. One may wish to upgrade an existing model after observing significant variance in more recent predictions.
Once a model is trained and you get new data which can be used for training, you can load the previous model and train on top of it. For example, you can save your model as a .pickle file, load it, and train it further when new data is available. Do note that for the model to predict correctly, the new training data should have a similar distribution as the past data . Predictions tend to degrade based on the dataset you are using. For example, if you are trying to train using Twitter data and you have collected data regarding a product which is widely tweeted about that day, but you then use tweets from some days later when that product is not even discussed, the model might be biased. The frequency will be dependent on the dataset and there is no specific time to state as such. If you observe that your new incoming data is deviating vastly, then it is a good practice to retrain the model . Optimizing parameters on the aggregated data is not overfitting. Large data doesn't imply overfitting. Use cross validation to check for over-fitting.
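A hedged sketch of the pickle-and-continue idea, using an estimator that supports incremental updates; X_old, y_old, X_new and y_new are hypothetical arrays:

import pickle
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier()
clf.partial_fit(X_old, y_old, classes=[0, 1])   # classes must be given on the first call

with open('model.pickle', 'wb') as f:           # persist the trained model
    pickle.dump(clf, f)

# ... later, when new observations with a similar distribution arrive ...
with open('model.pickle', 'rb') as f:
    clf = pickle.load(f)
clf.partial_fit(X_new, y_new)                   # continue training without starting over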
{ "source": [ "https://datascience.stackexchange.com/questions/12761", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/13450/" ] }
12,830
I recently read Yan LeCuns comment on 1x1 convolutions : In Convolutional Nets, there is no such thing as "fully-connected layers". There are only convolution layers with 1x1 convolution kernels and a full connection table. It's a too-rarely-understood fact that ConvNets don't need to have a fixed-size input. You can train them on inputs that happen to produce a single output vector (with no spatial extent), and then apply them to larger images. Instead of a single output vector, you then get a spatial map of output vectors. Each vector sees input windows at different locations on the input. In that scenario, the "fully connected layers" really act as 1x1 convolutions. I would like to see a simple example for this. Example Assume you have a fully connected network. It has only an input layer and an output layer. The input layer has 3 nodes, the output layer has 2 nodes. This network has $3 \cdot 2 = 6$ parameters. To make it even more concrete, lets say you have a ReLU activation function in the output layer and the weight matrix $$ \begin{align} W &= \begin{pmatrix} 0 & 1 & 1\\ 2 & 3 & 5\\ \end{pmatrix} \in \mathbb{R}^{2 \times 3}\\ b &= \begin{pmatrix}8\\ 13\end{pmatrix} \in \mathbb{R}^2 \end{align} $$ So the network is $f(x) = ReLU(W \cdot x + b)$ with $x \in \mathbb{R}^3$ . How would the convolutional layer have to look like to be the same? What does LeCun mean with "full connection table"? I guess to get an equivalent CNN it would have to have exactly the same number of parameters. The MLP from above has $2 \cdot 3 + 2 = 8$ parameters.
Your Example In your example we have 3 input and 2 output units. To apply convolutions, think of those units having shape: [1,1,3] and [1,1,2] , respectively. In CNN terms, we have 3 input and 2 output feature maps, each having spatial dimensions 1 x 1 . Applying an n x n convolution to a layer with k feature maps, requires you to have a kernel of shape [n,n,k] . Hence the kernel of your 1x1 convolutions have shape [1, 1, 3] . You need 2 of those kernels (or filters) to produce the 2 output feature maps. Please Note: $1 \times 1$ convolutions really are $1 \times 1 \times \text{number of channels of the input}$ convolutions. The last one is only rarely mentioned. Indeed if you choose as kernels and bias: $$ \begin{align} w_1 &= \begin{pmatrix} 0 & 1 & 1\\ \end{pmatrix} \in \mathbb{R}^{3}\\ w_2 &= \begin{pmatrix} 2 & 3 & 5\\ \end{pmatrix} \in \mathbb{R}^{3}\\ b &= \begin{pmatrix}8\\ 13\end{pmatrix} \in \mathbb{R}^2 \end{align} $$ The conv-layer will then compute $f(x) = ReLU\left(\begin{pmatrix}w_1 \cdot x\\ w_2 \cdot x\end{pmatrix} + \begin{pmatrix}b_1\\ b_2\end{pmatrix}\right)$ with $x \in \mathbb{R}^3$ . Transformation in real Code For a real-life example, also have a look at my vgg-fcn implementation. The Code provided in this file takes the VGG weights, but transforms every fully-connected layer into a convolutional layers. The resulting network yields the same output as vgg when applied to input image of shape [244,244,3] . (When applying both networks without padding). The transformed convolutional layers are introduced in the function _fc_layer (line 145). They have kernel size 7x7 for FC6 (which is maximal, as pool5 of VGG outputs a feature map of shape [7,7, 512] . Layer FC7 and FC8 are implemented as 1x1 convolution. "Full Connection Table" He might refer to a filter/kernel which has the same dimension as the input feature map. In both cases (Code and your Example) the spatial dimensions are maximal in the sense, that the spatial dimension of the filter is the same as the spatial dimension as the input.
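A quick NumPy check (my own, not part of the answer) that a 1x1 convolution with these kernels reproduces the fully-connected layer from the question:

import numpy as np

W = np.array([[0., 1., 1.],
              [2., 3., 5.]])               # fully-connected weights from the question
b = np.array([8., 13.])
x = np.array([1., 2., 3.])                 # an arbitrary input

fc = np.maximum(W @ x + b, 0)              # ReLU(Wx + b)

fmap = x.reshape(1, 1, 3)                  # the same input seen as a [1, 1, 3] feature map
kernels = W.reshape(2, 1, 1, 3)            # two 1x1 kernels, each of shape [1, 1, 3]
conv = np.maximum(np.einsum('khwc,hwc->k', kernels, fmap) + b, 0)

print(np.allclose(fc, conv))               # True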
{ "source": [ "https://datascience.stackexchange.com/questions/12830", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/8820/" ] }
12,839
I am trying to create a association using apriori algorithm.The data contains around 33000 records.Below is the sample of the data id code 1 19 1 58 1 111 2 19 2 111 2 167 3 12 3 79 3 85 4 96 5 19 6 58 7 12 7 18 7 40 7 48 7 85 7 86 7 135 In R this data is: structure(list(id = c(1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 5, 6, 7, 7, 7, 7, 7, 7, 7), code = c(19, 58, 111, 19, 111, 167, 12, 79, 85, 96, 19, 58, 12, 18, 40, 48, 85, 86, 135)), .Names = c("id", "code"), row.names = c(NA, -19L), class = "data.frame") Using the following code I tried to build the association # creating a string of codes based on the id library(dplyr) Asso2 = Asso %>% group_by(patient_id) %>% summarise(hi = toString(hcc_id)) # converting into transactions library(arules) fact <- data.frame(lapply(Asso2,as.factor)) trans <- as(fact, 'transactions') # Applying aprior rules = apriori(trans, parameter = list(supp = 0.001, conf = 0.001,target = "rules")) rules inspect(rules) I am getting totally 96 rules like below with empty lhs and I trying to understand whether we cannot make any rules from this data or am I missing anything here. Since I am novice in this, I would like to get some help. # lhs rhs support confidence lift # 1 {} => {hi=19, 96, 108} 0.001021696 0.001021696 1 # 2 {} => {hi=176} 0.001021696 0.001021696 1 # 3 {} => {hi=88, 108} 0.001021696 0.001021696 1 # 4 {} => {hi=72} 0.001051746 0.001051746 1 # 5 {} => {hi=88, 96} 0.001051746 0.001051746 1 # 6 {} => {hi=108, 112} 0.001081796 0.001081796 1 # 7 {} => {hi=84} 0.001081796 0.001081796 1 # 8 {} => {hi=100, 103} 0.001111846 0.001111846 1 # 9 {} => {hi=18, 108, 111} 0.001111846 0.001111846 1
{ "source": [ "https://datascience.stackexchange.com/questions/12839", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/18013/" ] }
12,850
I have a question about a career as a data scientist. I'm pursuing a degree in business analytics with a minor in computer science. Could I still become a data scientist with a bachelor's degree in business analytics, or would I have to pursue a master's degree in data science in order to become a data scientist? (I have a good programming background in data structures and algorithms in Java, plus web development and Python.)
{ "source": [ "https://datascience.stackexchange.com/questions/12850", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/21573/" ] }
12,851
When writing a paper / making a presentation about a topic which is about neural networks, one usually visualizes the networks architecture. What are good / simple ways to visualize common architectures automatically?
Tensorflow, Keras, MXNet, PyTorch If the neural network is given as a Tensorflow graph, then you can visualize this graph with TensorBoard . Here is how the MNIST CNN looks like: You can add names / scopes (like "dropout", "softmax", "fc1", "conv1", "conv2") yourself. Interpretation The following is only about the left graph. I ignore the 4 small graphs on the right half. Each box is a layer with parameters that can be learned. For inference, information flows from bottom to the top. Ellipses are layers which do not contain learned parameters. The color of the boxes does not have a meaning. I'm not sure of the value of the dashed small boxes ("gradients", "Adam", "save").
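If the model is built in Keras rather than raw TensorFlow, its built-in utilities give a quick static alternative to TensorBoard; this sketch assumes model is an already-constructed Keras model and that pydot/graphviz are installed:

from tensorflow.keras.utils import plot_model

model.summary()                                    # plain-text layer listing
plot_model(model, to_file='model.png',
           show_shapes=True, show_layer_names=True)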
{ "source": [ "https://datascience.stackexchange.com/questions/12851", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/8820/" ] }
13,178
I have a data set with 20000 samples, each has 12 different features. Each sample is either in category 0 or 1. I want to train a neural network and a decision forest to categorize the samples so that I can compare the results and both techniques. The first thing I stumbled upon is the proper normalization of the data. One feature is in the range $[0,10^6]$, another one in $[30,40]$ and there is one feature that mostly takes the value 8 and sometimes 7. So as I read in different sources, proper normalization of the input data is crucial for neural networks. As I found out, there are many possible ways to normalize the data, for example: Min-Max Normalization : The input range is linearly transformed to the interval $[0,1]$ (or alternatively $[-1,1]$, does that matter?) Z-Score Normalization : The data is transformed to have zero mean and unit variance: $$y_{new}=\frac{y_{old}-\text{mean}}{\sqrt{\text{Var}}}$$ Which normalization should I choose? Is normalization also needed for decision forests? With Z-Score normalization, the different features of my test data do not lie in the same range. Could this be a problem? Should every feature normalized with the same algorithm, so that I decide either to use Min-Max for all features or Z-Score for all features? Are there combinations where the data is mapped to $[-1,1]$ and also has zero mean (which would imply a non-linear transformation of the data and hence a change in the variance and other features of the input data). I feel a bit lost because I can't find references which answer these questions.
I disagree with the other comments. First of all, I see no need to normalize data for decision trees . Decision trees work by calculating a score (usually entropy) for each different division of the data $(X\leq x_i,X>x_i)$. Applying a transformation to the data that does not change the order of the data makes no difference. Random forests are just a bunch of decision trees, so it doesn't change this rationale. Neural networks are a different story. First of all, in terms of prediction, it makes no difference. The neural network can easily counter your normalization since it just scales the weights and changes the bias. The big problem is in the training. If you use an algorithm like resilient backpropagation to estimate the weights of the neural network, then it makes no difference. The reason is because it uses the sign of the gradient, not its magnitude, when changing the weights in the direction of whatever minimizes your error. This is the default algorithm for the neuralnet package in R, by the way. When does it make a difference? When you are using traditional backpropagation with sigmoid activation functions, it can saturate the sigmoid derivative. Consider the sigmoid function (green) and its derivative (blue): What happens if you do not normalize your data is that your data is multiplied by the random weights and you get things like $s'(9999)=0$. The derivative of the sigmoid is (approximately) zero and the training process does not move along. The neural network that you end up with is just a neural network with random weights (there is no training). Does this help us to know what the best normalization function is? But of course! First of all, it is crucial to use a normalization that centers your data because most implementation initialize bias at zero. I would normalize between -0.5 and 0.5, $\frac{X-\min{X}}{\max{X}-\min{X}}-0.5$. But standard score is also good. The actual normalization is not very crucial because it only influences the initial iterations of the optimization process. As long as it is centered and most of your data is below 1, then it might mean you have to use slightly less or more iterations to get the same result. But the result will be the same, as long as you avoid the saturation problem I mentioned. There is something not here discussed which is regularization . If you use regularization in your objective function, the way you normalize your data will affect the resulting model. I'm assuming your are already familiar with this. If you know that one variable is more prone to cause overfitting, your normalization of the data should take this into account. This is of course completely independent of neural networks being used.
{ "source": [ "https://datascience.stackexchange.com/questions/13178", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/22080/" ] }
13,216
I read about NCE (a form of candidate sampling) from these two sources: Tensorflow writeup Original Paper Can someone help me with the following: A simple explanation of how NCE works (I found the above difficult to parse and get an understanding of, so something intuitive that leads to the math presented there would be great) After point 1 above, a naturally intuitive description of how this is different from Negative Sampling. I can see that there's a slight change in the formula but could not understand the math. I do have an intuitive understanding of negative sampling in the context of word2vec - we randomly choose some samples from the vocabulary V and update only those because |V| is large and this offers a speedup. Please correct if wrong. When to use which one and how is that decided? It would be great if you could include examples(possibly easy to understand applications) Is NCE better than Negative Sampling? Better in what manner?
Taken from this post: https://stats.stackexchange.com/a/245452/154812 The issue There are some issues with learning the word vectors using an "standard" neural network. In this way, the word vectors are learned while the network learns to predict the next word given a window of words (the input of the network). Predicting the next word is like predicting the class. That is, such a network is just a "standard" multinomial (multi-class) classifier. And this network must have as many output neurons as classes there are. When classes are actual words, the number of neurons is, well, huge. A "standard" neural network is usually trained with a cross-entropy cost function which requires the values of the output neurons to represent probabilities - which means that the output "scores" computed by the network for each class have to be normalized, converted into actual probabilities for each class. This normalization step is achieved by means of the softmax function. Softmax is very costly when applied to a huge output layer. The (a) solution In order to deal with this issue, that is, the expensive computation of the softmax, Word2Vec uses a technique called noise-contrastive estimation. This technique was introduced by [A] (reformulated by [B]) then used in [C], [D], [E] to learn word embeddings from unlabelled natural language text. The basic idea is to convert a multinomial classification problem (as it is the problem of predicting the next word) to a binary classification problem. That is, instead of using softmax to estimate a true probability distribution of the output word, a binary logistic regression (binary classification) is used instead. For each training sample, the enhanced (optimized) classifier is fed a true pair (a center word and another word that appears in its context) and a number of kk randomly corrupted pairs (consisting of the center word and a randomly chosen word from the vocabulary). By learning to distinguish the true pairs from corrupted ones, the classifier will ultimately learn the word vectors. This is important: instead of predicting the next word (the "standard" training technique), the optimized classifier simply predicts whether a pair of words is good or bad. Word2Vec slightly customizes the process and calls it negative sampling. In Word2Vec, the words for the negative samples (used for the corrupted pairs) are drawn from a specially designed distribution, which favours less frequent words to be drawn more often. References [A] (2005) - Contrastive estimation: Training log-linear models on unlabeled data [B] (2010) - Noise-contrastive estimation: A new estimation principle for unnormalized statistical models [C] (2008) - A unified architecture for natural language processing: Deep neural networks with multitask learning [D] (2012) - A fast and simple algorithm for training neural probabilistic language models . [E] (2013) - Learning word embeddings efficiently with noise-contrastive estimation .
{ "source": [ "https://datascience.stackexchange.com/questions/13216", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/793/" ] }
13,227
I am using the forward feature selection algorithm from MATLAB. The code is as follows: X=combine_6_non; y=target; c = cvpartition(y,'k',10); opts = statset('display','iter'); [fs,history] = sequentialfs(@fun,X,y,'cv',c,'options',opts) The function fun is as follows: function err = fun(XT,yT,Xt,yt) model = svmtrain(XT,yT, 'Kernel_Function', 'rbf', 'boxconstraint', 1); err = sum(svmclassify(model, Xt) ~= yt); end Now for different runs of the selection algorithm I am getting different feature sets. How should I zero down to the best feature set?
{ "source": [ "https://datascience.stackexchange.com/questions/13227", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/8013/" ] }
13490
I know that this is possible in Keras with the class_weight parameter dictionary at fitting time, but I couldn't find any example. Would somebody be so kind as to provide one? By the way, in this case is the appropriate practice simply to weight up the minority class proportionally to its underrepresentation?
If you are talking about the regular case, where your network produces only one output, then your assumption is correct. In order to force your algorithm to treat every instance of class 1 as 50 instances of class 0, you have to:
Define a dictionary with your labels and their associated weights:
    class_weight = {0: 1., 1: 50., 2: 2.}
Feed the dictionary as a parameter:
    model.fit(X_train, Y_train, nb_epoch=5, batch_size=32, class_weight=class_weight)
EDIT: "treat every instance of class 1 as 50 instances of class 0 " means that in your loss function you assign a higher value to these instances. Hence, the loss becomes a weighted average, where the weight of each sample is specified by class_weight and its corresponding class. From the Keras docs: class_weight : Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only).
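If you would rather derive the weights from the data than hand-pick them, one common approach is scikit-learn's compute_class_weight helper. This is a hedged sketch that assumes y_train holds integer class labels and that model, X_train and Y_train are defined as in the answer above; epochs is the newer Keras name for nb_epoch.
    import numpy as np
    from sklearn.utils.class_weight import compute_class_weight

    classes = np.unique(y_train)                       # e.g. array([0, 1, 2])
    weights = compute_class_weight(class_weight='balanced', classes=classes, y=y_train)
    class_weight = dict(zip(classes, weights))         # rarer classes get larger weights
    model.fit(X_train, Y_train, epochs=5, batch_size=32, class_weight=class_weight)
The 'balanced' mode weights each class inversely proportionally to its frequency, which is exactly the "weight up the minority class proportionally to its underrepresentation" idea from the question.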
{ "source": [ "https://datascience.stackexchange.com/questions/13490", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/21560/" ] }
13513
I often see job descriptions for data scientists asking for Python/Java experience and disregarding R. Below is a personal email I received from the chief data scientist of a company I applied to through LinkedIn: X, Thanks for connecting and expressing interest. You do have good Analytics Skills. However, all our data scientists must have good programming skills in Java/Python as we are a internet/mobile organisation and everything we do is online. While I respect the decision of the chief data scientist, I am unable to get a clear picture of what tasks Python can do that R cannot. Would anyone care to elaborate? I am actually keen to learn Python/Java, provided I get a bit more detail. Edit: I found an interesting discussion on Quora: Why is Python a language of choice for data scientists? Edit 2: Blog from Udacity on Languages and Libraries for Machine Learning
So you can integrate with the rest of the code base. It seems your company uses a mix of Java and Python. What are you going to do if a little corner of the site needs machine learning; pass the data around via a database or a cache, drop into R, and so on? Why not just do it all in the same language? It's faster, cleaner, and easier to maintain. Know any online companies that run solely on R? Neither do I... All that said, Java is the last language I'd do data science in.
{ "source": [ "https://datascience.stackexchange.com/questions/13513", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/13100/" ] }
14187
I have noticed that terms such as model hyperparameter and model parameter are used interchangeably on the web without prior clarification. I think this is incorrect and needs explanation. Consider a machine learning model, say an SVM/NN/NB-based classifier or an image recognizer, just anything that first springs to mind. What are the hyperparameters and parameters of the model? Please give examples.
Hyperparameters and parameters are often used interchangeably, but there is a difference between them. You can call something a 'hyperparameter' if it cannot be learned within the estimator directly. However, 'parameters' is a more general term. When you say 'passing the parameters to the model', it generally means a combination of hyperparameters along with some other parameters that are not directly related to your estimator but are required for your model. For example, suppose you are building an SVM classifier in sklearn:
    from sklearn import svm

    X = [[0, 0], [1, 1]]
    y = [0, 1]
    clf = svm.SVC(C=0.01, kernel='rbf', random_state=33)
    clf.fit(X, y)
In the above code the instance of SVC is your estimator for your model, and its hyperparameters, in this case, are C and kernel . But your model has another parameter which is not a hyperparameter, and that is random_state .
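One way to make the distinction concrete: hyperparameters are set from the outside, typically tuned by something like a cross-validated grid search, while the model's own parameters (for an SVM, the support vectors and dual coefficients) are learned by fit. Here is a minimal, hedged sketch along those lines; the toy data and grid values are arbitrary and only for illustration.
    from sklearn import svm
    from sklearn.model_selection import GridSearchCV

    X = [[0, 0], [1, 1], [1, 0], [0, 1]]
    y = [0, 1, 1, 0]

    # hyperparameters: chosen before fitting and tuned by searching over candidates
    grid = GridSearchCV(svm.SVC(), param_grid={'C': [0.01, 1, 100], 'kernel': ['rbf', 'linear']}, cv=2)
    grid.fit(X, y)
    print(grid.best_params_)

    # parameters: quantities the estimator learned from the data during fit
    print(grid.best_estimator_.support_vectors_)
    print(grid.best_estimator_.dual_coef_)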
{ "source": [ "https://datascience.stackexchange.com/questions/14187", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/22012/" ] }
14352
It is said in Wikipedia and by deeplearning4j that deep-learning NNs (DLNNs) are NNs that have more than one hidden layer. These kinds of NNs were standard at university for me, while DLNNs are very hyped right now. Been there, done that - what's the big deal? I have also heard that stacked NNs are considered deep learning. How is deep learning really defined? My background in NNs is mostly from university, not from jobs:
- studied applications of NNs in industry
- had about 5 courses on artificial intelligence and machine learning, though maybe 2 of them on NNs
- used NNs for a small, simple image recognition project (a 3-layer feed-forward NN)
- did not do real research (as in a doctoral thesis) on them
You are right in that the basic concept of a deep NN hasn't changed since 2012. But there have been a variety of improvements to the ways in which deep NNs are trained that have made them qualitatively more powerful. There is also a wider variety of architectures available today. I've listed some developments since 2012, grouped by training improvements and architecture improvements (a short sketch combining several of the training ingredients appears at the end of this answer).

Improvements to training deep NNs

Hardware: The most obvious change is just the inexorable progression of Moore's law. There is more computing power available today. Cloud computing also makes it easy for people to train large NNs without needing to buy a huge rig.

Software: The open source software for deep learning has improved enormously since 2012. Back in 2012 there was Theano, and maybe Caffe as well. I'm sure there were some others, too. But today we also have TensorFlow, Torch, Paddle, and CNTK, all of which are supported by large tech companies. This is closely related to the hardware bullet point since many of these platforms make it easy to train on GPUs, which drastically speeds up training.

Activation functions: The use of ReLU activation functions is probably more widespread these days, which makes training very deep networks easier. On the research side, a wider variety of activation functions is being studied, including leaky ReLU, parametric ReLU, and maxout units.

Optimization algorithms: There are more optimization algorithms around today. Adagrad and Adadelta had only just been introduced, in 2011 and 2012 respectively. But we now also have the Adam optimizer, and it has become a very popular choice.

Dropout: In the past few years, dropout has become a standard tool for regularization when training neural networks. Dropout is a computationally inexpensive form of ensembling for NNs. In general, a set of models trained on random samples of the dataset will outperform a single model trained on the entire dataset. This is difficult to do explicitly for NNs because they are so expensive to train. But a similar effect can be approximated just by randomly "turning off" neurons on each step. Different subgraphs in the NN end up getting trained on different data sets, and thereby learn different things. Like ensembling, this tends to make the overall NN more robust to overfitting. Dropout is a simple technique that seems to improve performance in almost every case, so its use is now de rigueur.

Batch normalization: It's been known for a while that NNs train best on data that is normalized, i.e., with zero mean and unit variance. In a very deep network, as the data passes through each layer, the inputs will be transformed and will generally drift to a distribution that lacks this nice, normalized property. This makes learning in these deeper layers more difficult because, from their perspective, their inputs do not have zero mean and unit variance. The mean could be very large and the variance could be very small. Batch normalization addresses this by transforming the inputs to a layer to have zero mean and unit variance. This seems to be enormously effective in training very deep NNs.

Theory: Up until very recently, it was thought that the reason deep NNs are hard to train is that the optimization algorithms get stuck in local minima and have trouble getting out and finding global minima. In the last four years there have been a number of studies that seem to indicate that this intuition was wrong (e.g., Goodfellow et al. 2014).
In the very high dimensional parameter space of a deep NN, local minima tend not to be that much worse than global minima. The problem is actually that, when training, the NN can find itself on a long, wide plateau. Furthermore, these plateaus can end abruptly in a steep cliff. If the NN takes small steps, it takes a very long time to learn. But if the steps are too large, it meets a huge gradient when it runs into the cliff, which undoes all the earlier work. (This can be avoided with gradient clipping, another post-2012 innovation.)

New architectures

Residual networks: Researchers have been able to train incredibly deep networks (more than 1000 layers!) using residual networks. The idea here is that each layer receives not only the output from the previous layer, but also the original input as well. If trained properly, this encourages each layer to learn something different from the previous layers, so that each additional layer adds information.

Wide and deep networks: Wide, shallow networks have a tendency to simply memorize the mapping between their inputs and their outputs. Deep networks generalize much better. Usually you want good generalization, but there are some situations, like recommendation systems, in which simple memorization without generalization is important, too. In these cases you want to provide good, substantive solutions when a user makes a general query, but very precise solutions when the user makes a very specific query. Wide and deep networks are able to fulfill this task nicely.

Neural Turing machines: A shortcoming of traditional recurrent NNs (whether the standard RNN or something more sophisticated like an LSTM) is that their memory is somewhat "intuitive". They manage to remember past inputs by saving the hidden layer activations they produce into the future. However, sometimes it makes more sense to explicitly store some data. (This might be the difference between writing a phone number down on a piece of paper vs. remembering that the number had around 7 digits, that there were a couple of 3s in there, and that maybe there was a dash somewhere in the middle.) The neural Turing machine is a way to try to address this issue. The idea is that the network can learn to explicitly commit certain facts to a memory bank. This is not straightforward to do because backprop algorithms require differentiable functions, but committing a datum to a memory address is an inherently discrete operation. Consequently, neural Turing machines get around this by committing a little bit of data to a distribution of different memory addresses. These architectures don't seem to work super well yet, but the idea is very important. Some variant of these will probably become widespread in the future.

Generative adversarial networks: GANs are a very exciting idea that is already seeing a lot of practical use. The idea here is to train two NNs simultaneously: one that tries to generate samples from the underlying probability distribution (a generator), and one that tries to distinguish between real data points and the fake data points generated by the generator (a discriminator). So, for example, if your dataset is a collection of pictures of bedrooms, the generator will try to make its own pictures of bedrooms, and the discriminator will try to figure out if it's looking at real pictures of bedrooms or fake pictures of bedrooms.
In the end, you have two very useful NNs: one that is really good at classifying images as being bedrooms or not bedrooms, and one that is really good at generating realistic images of bedrooms.
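As promised above, here is a short, hedged Keras sketch that puts several of the training-side ingredients together (ReLU activations, batch normalization, dropout, and the Adam optimizer). The layer sizes, input shape, and number of classes are arbitrary placeholders for illustration, not anything prescribed by the work cited above.
    from tensorflow.keras import layers, models, optimizers

    model = models.Sequential([
        layers.Dense(256, activation='relu', input_shape=(784,)),  # ReLU activations
        layers.BatchNormalization(),                               # batch normalization
        layers.Dropout(0.5),                                       # dropout regularization
        layers.Dense(256, activation='relu'),
        layers.BatchNormalization(),
        layers.Dropout(0.5),
        layers.Dense(10, activation='softmax'),                    # 10-way classifier head
    ])
    model.compile(optimizer=optimizers.Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
None of these pieces changes the basic multi-layer idea; they just make training such a stack far more reliable than it was before 2012.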
{ "source": [ "https://datascience.stackexchange.com/questions/14352", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/14909/" ] }
14581
The key difference between a GRU and an LSTM is that a GRU has two gates ( reset and update gates) whereas an LSTM has three gates (namely input , output and forget gates). Why do we use a GRU when we clearly have more control over the network with the LSTM model (as we have three gates)? In which scenarios is a GRU preferred over an LSTM?
GRUs and LSTMs utilize different approaches to gating information in order to prevent the vanishing gradient problem. Here are the main points comparing the two: The GRU unit controls the flow of information like the LSTM unit, but without having to use a memory unit . It just exposes the full hidden content without any control. GRUs are relatively new, and in my experience, their performance is on par with LSTMs, but they are computationally more efficient ( as pointed out, they have a less complex structure ). For that reason, we are seeing them used more and more. For a detailed description, you can explore this research paper on arXiv . The paper explains all this brilliantly. You can also explore these blogs for a better idea: WildML and Colah (GitHub). Hope that helps!
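To see the "less complex structure" point concretely, here is a small, hedged Keras sketch that builds one layer of each type on the same (arbitrary) input shape and compares their parameter counts; a GRU ends up with roughly three gate-sized weight blocks where an LSTM has four.
    from tensorflow.keras import layers, models

    timesteps, features, units = 20, 128, 256           # arbitrary illustrative sizes
    lstm_model = models.Sequential([layers.LSTM(units, input_shape=(timesteps, features))])
    gru_model = models.Sequential([layers.GRU(units, input_shape=(timesteps, features))])

    print(lstm_model.count_params())  # 4 blocks: input, forget, output gates + cell candidate
    print(gru_model.count_params())   # 3 blocks: reset, update gates + candidate state
Fewer weights per unit means less computation and memory per training step, which is the practical reason GRUs are often tried first when the two perform comparably.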
{ "source": [ "https://datascience.stackexchange.com/questions/14581", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/23710/" ] }
14583
I've used Hyperopt in Python, but I'm looking for a package with similar capabilities in R. Does a package like this exist?
{ "source": [ "https://datascience.stackexchange.com/questions/14583", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/21908/" ] }
14645
I have a dataframe that, among other things, contains a column with the number of milliseconds elapsed since 1970-1-1. I need to convert this column of ints to timestamp data, so I can then ultimately convert it to a column of datetime data by adding the timestamp column series to a series that consists entirely of datetime values for 1970-1-1. I know how to convert a series of strings to datetime data (pandas.to_datetime), but I can't find or come up with any solution to convert the entire column of ints to datetime data OR to timestamp data.
You can specify the unit of a pandas.to_datetime call. Stolen from here :
    # assuming `df` is your data frame and `date` is your column of timestamps
    df['date'] = pandas.to_datetime(df['date'], unit='s')
This should work with integer datatypes, which makes sense if the unit is seconds since the epoch.
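Since the question describes milliseconds rather than seconds since the epoch, the same call with unit='ms' should do the whole conversion in one step. A small hedged sketch with a made-up column name and made-up values:
    import pandas as pd

    # hypothetical frame holding milliseconds since 1970-01-01
    df = pd.DataFrame({'ms_since_epoch': [0, 86_400_000, 1_489_411_200_000]})
    df['date'] = pd.to_datetime(df['ms_since_epoch'], unit='ms')
    print(df['date'])
This avoids the intermediate step of manually adding an offset series anchored at 1970-1-1.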
{ "source": [ "https://datascience.stackexchange.com/questions/14645", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/13165/" ] }