Does Julia have any hope of sticking in the statistical community?
The following probably does not deserve to be an answer, but it is too important to be buried as a comment to someone else's response... I have not heard much said about memory consumption, just speed. R's pass-by-value semantics can be painful, and this has been one criticism of the language (which is a separate issue from how many great packages already exist). Good memory management is important, as is having ways of dealing with out-of-core processing (e.g. numpy's memory-mapped arrays, PyTables, or Revolution Analytics' xdf format). While PyPy's JIT compiler allows for some striking Python benchmarks, memory consumption can be quite high. So, does anyone have experience with Julia and memory usage yet? It sounds like there are memory leaks in the Windows "alpha" version that will no doubt be addressed, and I am still waiting on access to a Linux box to play with the language myself.
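The out-of-core idea mentioned above (memory-mapped arrays, PyTables, xdf) reduces to one pattern: process the data in bounded-size chunks so peak memory stays flat no matter how large the file is. A minimal sketch in plain Python, assuming a hypothetical file of one number per line (the file name and chunk size are made up for illustration):

```python
from itertools import islice
import os
import tempfile

def streaming_mean(path, chunk_size=10_000):
    """Mean of one float per line, holding at most chunk_size lines in memory."""
    total, count = 0.0, 0
    with open(path) as fh:
        while True:
            chunk = list(islice(fh, chunk_size))  # next chunk_size lines (or fewer)
            if not chunk:
                break
            total += sum(float(line) for line in chunk)
            count += len(chunk)
    return total / count

# Tiny demo: in practice the file would be far larger than RAM.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("\n".join(str(i) for i in range(1, 101)))
    path = f.name

result = streaming_mean(path, chunk_size=7)
print(result)  # mean of 1..100, i.e. 50.5
os.remove(path)
```

Libraries like numpy's memmap or PyTables do the same thing more efficiently below the language level; the question for Julia is whether its abstractions make this pattern cheap without dropping to C.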
I think it's unlikely that Julia will ever replace R, for a lot of the reasons previously mentioned. Julia is a Matlab replacement, not an R replacement; they have different goals. Even after Julia has a fully fleshed-out statistics library, no one would ever teach an Intro to Statistics class in it. However, an area in which it could be incredible is as a speed-optimized programming language that's less painful than C/C++. If it were seamlessly linked to R (in the style of Rcpp), then it would see a ton of use for writing speed-critical segments of code. Unfortunately, no such link currently exists: https://stackoverflow.com/questions/9965747/linking-r-and-julia
I am a Julia newbie, and am R competent. The reasons I find Julia interesting so far are performance- and compatibility-oriented.

GPU tools. I'd like to use CUSPARSE for a statistical application. CRAN results indicate there's not much out there. Julia has bindings available which seem to work smoothly so far:

    using CUSPARSE
    N = 1000
    M = 1000
    hA = sprand(N, M, .01)
    hA = hA' * hA
    dA = CudaSparseMatrixCSR(hA)
    dC = CUSPARSE.csric02(dA, 'O')  # incomplete Cholesky decomposition
    hC = CUSPARSE.to_host(dC)

HPC tools. One can use a cluster interactively with multiple compute nodes:

    nnodes = 2
    ncores = 12  # ask for all cores on the nodes we control
    procs = addprocs(SlurmManager(nnodes*ncores), partition="tesla", nodes=nnodes)
    for worker in procs
        println(remotecall_fetch(readall, worker, `hostname`))
    end

Python compatibility. There's access to the Python ecosystem. For example, it was straightforward to find out how to read brain imaging data:

    using PyCall
    @pyimport nibabel
    fp = "foo_BOLD.nii.gz"
    res = nibabel.load(fp)
    data = res[:get_data]();

C compatibility. The following generates a random integer using the C standard library:

    ccall((:rand, "libc"), Int32, ())

Speed. I thought I would see how the Distributions.jl package performed against R's rnorm, which I assume is optimised:

    julia> F = Normal(3,1)
    Distributions.Normal(μ=3.0, σ=1.0)

    julia> @elapsed rand(F, 1000000)
    0.03422067

In R:

    > system.time(rnorm(1000000, mean=3, sd=1))
       user  system elapsed
      0.262   0.003   0.266
Julia 1.0 has just come out with a very usable IDE (Juno). It came out a bit late to the party, as Python has already dominated machine learning, while R continues to dominate every other kind of statistical analysis. That being said, Julia is already rising to prominence in finance and trading algorithms, where fast development time AND fast execution are a must. In my opinion, unless another language comes along that is distinctly better, Julia's rise to prominence will probably look something like this:

(1) It starts to eat MATLAB's lunch. MATLAB users like the MATLAB syntax but hate pretty much everything else: the slowness, the expensive licenses, the very limited ways to deal with complex data structures that are not matrices. I remember one quote saying that "if Julia replaces MATLAB, it will be a huge service to humanity". MATLAB users can become proficient in Julia very quickly and will be impressed by how easy it is to write quality code that does so much more than what MATLAB can do (structs that are fast, that you can put in arrays and quickly iterate over?). Not only this, researchers can build serious toolboxes in Julia (a small team of Ph.D. students wrote a world-class differential equations package) that would have been impossible with MATLAB.

(2) It starts taking over research in numerical methods and simulation. MIT is throwing its weight behind Julia, and the research community listens to MIT. Numerical simulations and new numerical methods are ill-defined problems that have no libraries. This is where Julia as a language shines: if no libraries are available, it is much easier to write fast, quality code in Julia than in any other language. It will be a numerical/simulation language written by mathematicians for mathematicians (sound similar to R yet?).

(3) Another breakthrough in machine learning happens that gives Julia the edge. This is a bit of a wildcard which might not happen. TensorFlow is great, but it is extremely hard to hack. Python has already started showing cracks, and TensorFlow has started adopting Swift (with Julia getting an honorable mention). If another machine learning breakthrough happens, it will be much easier to implement and hack in a Julia package like Flux.jl.

(4) Julia starts slowly catching up to R, which will take a while. Doing stats in MATLAB is painful, but Julia is already way ahead of MATLAB with Distributions.jl. The fact is, R workflows can be easily translated to Julia. The only real advantage R has is that there are so many packages written by statisticians for statisticians. Writing such packages, however, is also easy to do in Julia. The difference is that Julia is fast all the way down, and you don't have to use another language for performance (the more "serious" R packages are written in languages like C). The problem with R is that packages written in R are too slow to handle large sets of data; the only alternative is to translate the packages into another language, making development in R a slower process than in Julia. If too many R packages need translating to handle larger datasets, R may start playing catch-up with Julia in these areas.
I am interested in the promise of better speed and easy parallelisation on different architectures. For that reason I will certainly watch Julia's development, but I am unlikely to use it until it can handle generalised linear mixed models, has a good generic bootstrap package, a simple model formula language for building design matrices, capabilities equivalent to ggplot2, and a wide range of machine learning algorithms. No statistician can afford to have a fundamentalist attitude to the choice of tools. We will use whatever enables us to get the job done most efficiently. My guess is I will be sticking with R for a few years yet, but it would be nice to be pleasantly surprised.
The luxury of NA's in R does not come without performance penalties. If Julia supports NA's with a smaller performance penalty, then it becomes interesting to a segment of the stats community, but NA's also impose considerable extra work when using compiled code with R. Many of the packages in R rely on routines written in legacy languages (C, Fortran, or C++). In some cases the compiled routines were developed outside R and later used as the basis for R library packages. In others, the routines were first implemented in R and then critical segments were translated to a compiled language when performance was found lacking. Julia will be attractive if it can be used to implement equivalent routines. There is an opportunity to design low-level support for NA's in a way that simplifies NA handling over what we have now when using R with compiled code. The massive number of R libraries represents the efforts of many, many users. This was possible because R provided capabilities that weren't otherwise available/affordable. If Julia is to become widely used, it needs a group of users who find that it does what they need so much better than the alternatives that it is worth the effort needed to re-supply very basic things (e.g., graphics, date classes, NA's) already available from existing languages.
I will be up front: I have no experience with R, but I work with plenty of people who think it is an excellent tool for statistical analysis. My background is in data warehousing, and due to Julia's easily distributed but more standard programming model, I think it could be a very interesting substitute for the transform portion of traditional ETL tools, which generally do that job very poorly; most have no way of easily creating a standardized transform, or of re-using the results of a transform already performed on a prior dataset. The support for tightly defined and typed tuples stands out: if I want to build an OLAP cube that basically needs to build more detailed tuples (fact tables) out of tuples already calculated, today's ETL tools have no 'building blocks' to speak of that can help. The industry has worked around this issue through various means in the past, but there are trade-offs. Traditional programming languages can help by providing centrally defined transformations, and Julia could potentially simplify the non-standard aggregations and distributions common in more complex data warehouse systems.
You can also use Julia and R together. There is a Julia-to-R interface. With this package you can play with Julia while calling R whenever it has a library you need.
Julia has, without doubt, every chance of becoming a statistics power-user's dream come true. Take SAS, for example: its power lies in the numerous PROCs written in C. What Julia can do is give you the PROCs with the source code, with matrices as a built-in data type, dispensing with SAS/IML. I have no doubt that statisticians will flock to Julia once they get a handle on just what this puppy can do.
Oh yes, Julia will overtake R quite quickly. And the primary reasons will be macros, the fact that 95% of the language is implemented in Julia itself, and its noise-free, parsimonious syntax. If you don't have experience with Lisp-type languages you might not understand it yet, but you will see pretty quickly how R's formula interface will become an obsolete and ugly mechanism, replaced by specialized modeling micro-languages akin to the CL loop macro. Access to low-level references of an object is also a big plus. I think R still hasn't grasped that hiding internals from the user actually complicates rather than simplifies things. As I see it now (with years of heavy use of R behind me, and having just finished reading the Julia manual), Julia's main drawback with respect to R is the lack of support for structural inheritance (this was intentional). Julia's type system is less ambitious than S4; it also supports multiple dispatch and multiple inheritance, but with a catch: there is only one level of concrete classes. On the other hand, I rarely see class hierarchies in R deeper than 3 levels. Time will tell, but it will be sooner than most R users think :)
Julia's first target use cases are numerical problems. Basically, you can break analysis and computational science into data science (data-driven) and simulation science (model-driven). Julia is dealing with the simulation science use cases first. They are also dealing with the data science cases, but more slowly. R will never be very useful for simulation science, but Julia will be very useful for both in a couple of years.
It needs to be able to apply any function to large datasets that don't fit in memory, transparently for the user. That includes at least running mixed effects models, survival models, or MCMC on datasets that fit on disk but not in memory. And, if possible, on datasets distributed across several computers.
Gradient Boosting Tree vs Random Forest
$\text{error} = \text{bias} + \text{variance}$ Boosting is based on weak learners (high bias, low variance). In terms of decision trees, weak learners are shallow trees, sometimes even as small as decision stumps (trees with two leaves). Boosting reduces error mainly by reducing bias (and also to some extent variance, by aggregating the output from many models). On the other hand, Random Forest uses, as you said, fully grown decision trees (low bias, high variance). It tackles the error-reduction task in the opposite way: by reducing variance. The trees are made uncorrelated to maximize the decrease in variance, but the algorithm cannot reduce bias (which is slightly higher than the bias of an individual tree in the forest). Hence the need for large, unpruned trees, so that the bias is initially as low as possible. Please note that, unlike Boosting (which is sequential), RF grows trees in parallel. The term "iterative" that you used is thus inappropriate.
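The variance-reduction claim can be checked numerically: averaging B independent estimators leaves bias untouched but shrinks the standard deviation by roughly $1/\sqrt{B}$. A language-agnostic sketch in plain Python, where each "tree" is replaced by a deliberately biased, noisy estimator (an assumption purely for illustration, not a real decision tree):

```python
import random
import statistics

random.seed(0)
TRUTH, BIAS, NOISE_SD = 10.0, 2.0, 3.0  # each "tree" is off by +2 on average, sd 3

def one_estimator():
    # Stand-in for a single tree: a noisy, biased estimate of TRUTH.
    return TRUTH + BIAS + random.gauss(0.0, NOISE_SD)

def ensemble(B):
    # Stand-in for a forest: average B independent estimators (aggregation step).
    return sum(one_estimator() for _ in range(B)) / B

singles = [one_estimator() for _ in range(5000)]
forests = [ensemble(25) for _ in range(5000)]

print(round(statistics.mean(singles) - TRUTH, 2))  # bias of one tree:  ~ 2.0
print(round(statistics.mean(forests) - TRUTH, 2))  # bias of ensemble:  ~ 2.0 (unchanged)
print(round(statistics.stdev(singles), 2))         # sd of one tree:    ~ 3.0
print(round(statistics.stdev(forests), 2))         # sd of ensemble:    ~ 3.0/sqrt(25) = 0.6
```

Real forest trees are only partially decorrelated (bagging plus feature subsampling), so the variance reduction is smaller than this idealized $1/\sqrt{B}$, but the asymmetry stands: averaging attacks variance, not bias, which is exactly why RF needs low-bias, fully grown trees to start from.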
Gradient Boosting Tree vs Random Forest
This question is addressed in this very nice post. Please take a look at it and the references therein: http://fastml.com/what-is-better-gradient-boosted-trees-or-random-forest/

Notice that the article speaks about calibration, and links to another (nice) blog post about it. Still, I find that the paper Obtaining Calibrated Probabilities from Boosting gives you a better understanding of what calibration in the context of boosted classifiers is, and what the standard methods to perform it are.

And finally, one missing aspect (a bit more theoretical). Both RF and GBM are ensemble methods, meaning you build a classifier out of a large number of smaller classifiers. Now the fundamental difference lies in the method used:

RF uses decision trees, which are very prone to overfitting. In order to achieve higher accuracy, RF creates a large number of them based on bagging. The basic idea is to resample the data over and over and for each sample train a new classifier. Different classifiers overfit the data in different ways, and through voting those differences are averaged out.

GBM is a boosting method, which builds on weak classifiers. The idea is to add one classifier at a time, so that the next classifier is trained to improve the already-trained ensemble. Notice that in RF, by contrast, the classifier in each iteration is trained independently from the rest.
Gradient Boosting Tree vs Random Forest
Although the above answers are really great, I would like to explain the difference in very simple language.

Bagging, that is Bootstrap Aggregation, is where we build separate decision trees using bootstrapped sets of samples and average the resulting predictions. Each individual decision tree is grown deep without any pruning, so each of them has high variance and low bias, but averaging them reduces the overall variance. They result in improved accuracy over prediction with a single tree.

The bagging technique suffers from a disadvantage when one of the predictors is much stronger than the others: each bagged tree will look similar because most of them will use that strong predictor, and hence the predictions from the bagged trees will be highly correlated. Unfortunately, averaging many highly correlated quantities does not lead to as large a reduction in variance as averaging many uncorrelated quantities. Random Forest overcomes this problem by forcing each split to consider only a random subset of the predictors. The main difference between bagging and random forests is the choice of predictor subset size: if a random forest is built using all the predictors, then it is equal to bagging.

Boosting works in a similar way, except that the trees are grown sequentially: each tree is grown using information from previously grown trees. Boosting does not involve bootstrap sampling; instead, each tree is fit on a modified version of the original data set. Unlike in bagging, the construction of each tree depends strongly on the trees that have already been grown. Because the growth of a particular tree takes into account the other trees that have already been grown, smaller trees are typically sufficient. These small trees are often stumps, which have a single split.
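The claim that a random forest using all the predictors reduces to bagging can be made concrete in scikit-learn, where the predictor subset size is the `max_features` parameter. This is an illustrative sketch, not part of the original answer; the synthetic data and settings are arbitrary.

```python
# Bagging vs. random forest: the difference is the predictor subset size.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)

# A random forest allowed to consider ALL predictors at each split...
rf_all = RandomForestRegressor(n_estimators=100, max_features=None, random_state=0)
# ...is conceptually the same procedure as bagging unpruned trees:
bag = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100, random_state=0)
# A standard forest decorrelates the trees by restricting each split
# to a random subset of the predictors:
rf_sub = RandomForestRegressor(n_estimators=100, max_features=3, random_state=0)

for m in (rf_all, bag, rf_sub):
    m.fit(X, y)
    print(type(m).__name__, round(m.score(X, y), 3))
```

The two "all-predictors" ensembles differ only in implementation details (e.g. random draws), while the restricted forest is the variant that actually decorrelates the trees.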
How to reverse PCA and reconstruct original variables from several principal components?
PCA computes eigenvectors of the covariance matrix ("principal axes") and sorts them by their eigenvalues (amount of explained variance). The centered data can then be projected onto these principal axes to yield principal components ("scores"). For the purposes of dimensionality reduction, one can keep only a subset of principal components and discard the rest. (See here for a layman's introduction to PCA.) Let $\mathbf X_\text{raw}$ be the $n\times p$ data matrix with $n$ rows (data points) and $p$ columns (variables, or features). After subtracting the mean vector $\boldsymbol \mu$ from each row, we get the centered data matrix $\mathbf X$. Let $\mathbf V$ be the $p\times k$ matrix of some $k$ eigenvectors that we want to use; these would most often be the $k$ eigenvectors with the largest eigenvalues. Then the $n\times k$ matrix of PCA projections ("scores") will be simply given by $\mathbf Z=\mathbf {XV}$. This is illustrated on the figure below: the first subplot shows some centered data (the same data that I use in my animations in the linked thread) and its projections on the first principal axis. The second subplot shows only the values of this projection; the dimensionality has been reduced from two to one: In order to be able to reconstruct the original two variables from this one principal component, we can map it back to $p$ dimensions with $\mathbf V^\top$. Indeed, the values of each PC should be placed on the same vector as was used for projection; compare subplots 1 and 3. The result is then given by $\hat{\mathbf X} = \mathbf{ZV}^\top = \mathbf{XVV}^\top$. I am displaying it on the third subplot above. 
To get the final reconstruction $\hat{\mathbf X}_\text{raw}$, we need to add the mean vector $\boldsymbol \mu$ to that: $$\boxed{\text{PCA reconstruction} = \text{PC scores} \cdot \text{Eigenvectors}^\top + \text{Mean}}$$ Note that one can go directly from the first subplot to the third one by multiplying $\mathbf X$ with the $\mathbf {VV}^\top$ matrix; it is called a projection matrix. If all $p$ eigenvectors are used, then $\mathbf {VV}^\top$ is the identity matrix (no dimensionality reduction is performed, hence "reconstruction" is perfect). If only a subset of eigenvectors is used, it is not identity. This works for an arbitrary point $\mathbf z$ in the PC space; it can be mapped to the original space via $\hat{\mathbf x} = \mathbf{zV}^\top$. Discarding (removing) leading PCs Sometimes one wants to discard (to remove) one or few of the leading PCs and to keep the rest, instead of keeping the leading PCs and discarding the rest (as above). In this case all the formulas stay exactly the same, but $\mathbf V$ should consist of all principal axes except for the ones one wants to discard. In other words, $\mathbf V$ should always include all PCs that one wants to keep. Caveat about PCA on correlation When PCA is done on correlation matrix (and not on covariance matrix), the raw data $\mathbf X_\mathrm{raw}$ is not only centered by subtracting $\boldsymbol \mu$ but also scaled by dividing each column by its standard deviation $\sigma_i$. In this case, to reconstruct the original data, one needs to back-scale the columns of $\hat{\mathbf X}$ with $\sigma_i$ and only then to add back the mean vector $\boldsymbol \mu$. Image processing example This topic often comes up in the context of image processing. Consider Lenna -- one of the standard images in image processing literature (follow the links to find where it comes from). Below on the left, I display the grayscale variant of this $512\times 512$ image (file available here). 
We can treat this grayscale image as a $512\times 512$ data matrix $\mathbf X_\text{raw}$. I perform PCA on it and compute $\hat {\mathbf X}_\text{raw}$ using the first 50 principal components. The result is displayed on the right.

Reverting SVD

PCA is very closely related to singular value decomposition (SVD), see Relationship between SVD and PCA. How to use SVD to perform PCA? for more details. If a $n\times p$ matrix $\mathbf X$ is SVD-ed as $\mathbf X = \mathbf {USV}^\top$ and one selects a $k$-dimensional vector $\mathbf z$ that represents the point in the "reduced" $U$-space of $k$ dimensions, then to map it back to $p$ dimensions one needs to multiply it with $\mathbf S^\phantom\top_{1:k,1:k}\mathbf V^\top_{:,1:k}$.

Examples in R, Matlab, Python, and Stata

I will conduct PCA on the Fisher Iris data and then reconstruct it using the first two principal components. I am doing PCA on the covariance matrix, not on the correlation matrix, i.e. I am not scaling the variables here. But I still have to add the mean back. Some packages, like Stata, take care of that through the standard syntax. Thanks to @StasK and @Kodiologist for their help with the code.

We will check the reconstruction of the first datapoint, which is:

    5.1  3.5  1.4  0.2

Matlab

    load fisheriris
    X = meas;
    mu = mean(X);

    [eigenvectors, scores] = pca(X);

    nComp = 2;
    Xhat = scores(:,1:nComp) * eigenvectors(:,1:nComp)';
    Xhat = bsxfun(@plus, Xhat, mu);

    Xhat(1,:)

Output: 5.083 3.5174 1.4032 0.21353

R

    X = iris[,1:4]
    mu = colMeans(X)

    Xpca = prcomp(X)

    nComp = 2
    Xhat = Xpca$x[,1:nComp] %*% t(Xpca$rotation[,1:nComp])
    Xhat = scale(Xhat, center = -mu, scale = FALSE)

    Xhat[1,]

Output:

    Sepal.Length  Sepal.Width Petal.Length  Petal.Width
       5.0830390    3.5174139    1.4032137    0.2135317

For a worked-out R example of PCA reconstruction of images see also this answer.
Python

    import numpy as np
    import sklearn.datasets, sklearn.decomposition

    X = sklearn.datasets.load_iris().data
    mu = np.mean(X, axis=0)

    pca = sklearn.decomposition.PCA()
    pca.fit(X)

    nComp = 2
    Xhat = np.dot(pca.transform(X)[:,:nComp], pca.components_[:nComp,:])
    Xhat += mu

    print(Xhat[0,])

Output: [ 5.08718247  3.51315614  1.4020428   0.21105556]

Note that this differs slightly from the results in other languages. That is because Python's version of the Iris dataset contains mistakes.

Stata

    webuse iris, clear
    pca sep* pet*, components(2) covariance
    predict _seplen _sepwid _petlen _petwid, fit
    list in 1

      iris  seplen  sepwid  petlen  petwid   _seplen   _sepwid   _petlen   _petwid
    setosa     5.1     3.5     1.4     0.2  5.083039  3.517414  1.403214  .2135317
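The projection-matrix claim above (with all $p$ eigenvectors, $\mathbf{VV}^\top$ is the identity and the reconstruction is exact) can be checked with a numpy-only sketch. The toy data here are random draws standing in for a real dataset; only numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
X_raw = rng.normal(size=(100, 4))   # toy data matrix, n = 100, p = 4
mu = X_raw.mean(axis=0)
Xc = X_raw - mu                     # centered data

# Principal axes = eigenvectors of the covariance matrix,
# sorted by decreasing eigenvalue (np.linalg.eigh returns ascending).
evals, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(evals)[::-1]
V = V[:, order]

# With all p eigenvectors, V V^T is the identity: reconstruction is exact.
assert np.allclose(V @ V.T, np.eye(4))
Z = Xc @ V                          # PC scores
X_hat = Z @ V.T + mu                # scores . eigenvectors^T + mean
assert np.allclose(X_hat, X_raw)

# With k < p components the reconstruction is only an approximation.
k = 2
X_hat2 = Xc @ V[:, :k] @ V[:, :k].T + mu
```

The last line is exactly the boxed reconstruction formula above, with $\mathbf V$ truncated to the leading $k$ columns.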
How are the standard errors of coefficients calculated in a regression?
The linear model is written as $$ \left| \begin{array}{l} \mathbf{y} = \mathbf{X} \mathbf{\beta} + \mathbf{\epsilon} \\ \mathbf{\epsilon} \sim N(0, \sigma^2 \mathbf{I}), \end{array} \right.$$ where $\mathbf{y}$ denotes the vector of responses, $\mathbf{\beta}$ is the vector of fixed effects parameters, $\mathbf{X}$ is the corresponding design matrix whose columns are the values of the explanatory variables, and $\mathbf{\epsilon}$ is the vector of random errors. It is well known that an estimate of $\mathbf{\beta}$ is given by (refer, e.g., to the wikipedia article) $$\hat{\mathbf{\beta}} = (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{y}.$$ Hence $$ \textrm{Var}(\hat{\mathbf{\beta}}) = (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \;\sigma^2 \mathbf{I} \; \mathbf{X} (\mathbf{X}^{\prime} \mathbf{X})^{-1} = \sigma^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1} (\mathbf{X}^{\prime} \mathbf{X}) (\mathbf{X}^{\prime} \mathbf{X})^{-1} = \sigma^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1}, $$ [reminder: $\textrm{Var}(AX)=A\times \textrm{Var}(X) \times A′$, for some random vector $X$ and some non-random matrix $A$] so that $$ \widehat{\textrm{Var}}(\hat{\mathbf{\beta}}) = \hat{\sigma}^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1}, $$ where $\hat{\sigma}^2$ can be obtained by the Mean Square Error (MSE) in the ANOVA table. 
Example with a simple linear regression in R

    #------generate one data set with epsilon ~ N(0, 0.25)------
    seed <- 1152  #seed
    n <- 100      #nb of observations
    a <- 5        #intercept
    b <- 2.7      #slope
    set.seed(seed)
    epsilon <- rnorm(n, mean=0, sd=sqrt(0.25))
    x <- sample(x=c(0, 1), size=n, replace=TRUE)
    y <- a + b * x + epsilon
    #-----------------------------------------------------------

    #------using lm------
    mod <- lm(y ~ x)
    #--------------------

    #------using the explicit formulas------
    X <- cbind(1, x)
    betaHat <- solve(t(X) %*% X) %*% t(X) %*% y
    var_betaHat <- anova(mod)[[3]][2] * solve(t(X) %*% X)
    #---------------------------------------

    #------comparison------
    #estimate
    > mod$coef
    (Intercept)           x
       5.020261    2.755577

    > c(betaHat[1], betaHat[2])
    [1] 5.020261 2.755577

    #standard error
    > summary(mod)$coefficients[, 2]
    (Intercept)           x
     0.06596021  0.09725302

    > sqrt(diag(var_betaHat))
                          x
     0.06596021  0.09725302
    #----------------------

When there is a single explanatory variable, the model reduces to
$$y_i = a + bx_i + \epsilon_i, \qquad i = 1, \dotsc, n$$
and
$$\mathbf{X} = \left( \begin{array}{cc} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{array} \right), \qquad \mathbf{\beta} = \left( \begin{array}{c} a\\b \end{array} \right)$$
so that
$$(\mathbf{X}^{\prime} \mathbf{X})^{-1} = \frac{1}{n\sum x_i^2 - (\sum x_i)^2} \left( \begin{array}{cc} \sum x_i^2 & -\sum x_i \\ -\sum x_i & n \end{array} \right)$$
and the formulas become more transparent. For example, the standard error of the estimated slope is
$$\sqrt{\widehat{\textrm{Var}}(\hat{b})} = \sqrt{[\hat{\sigma}^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1}]_{22}} = \sqrt{\frac{n \hat{\sigma}^2}{n\sum x_i^2 - (\sum x_i)^2}}.$$

    > num <- n * anova(mod)[[3]][2]
    > denom <- n * sum(x^2) - sum(x)^2
    > sqrt(num / denom)
    [1] 0.09725302
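The same check translates directly to numpy. This is a sketch, not part of the original answer; the random draws differ from R's, so only the formulas, not the exact numbers, carry over.

```python
import numpy as np

# Simulate the same simple regression: y = a + b*x + eps, eps ~ N(0, 0.25)
rng = np.random.default_rng(1152)
n, a, b = 100, 5.0, 2.7
x = rng.integers(0, 2, size=n).astype(float)
eps = rng.normal(0.0, np.sqrt(0.25), size=n)
y = a + b * x + eps

# OLS via the explicit formulas
X = np.column_stack([np.ones(n), x])            # design matrix
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# sigma^2 estimated by the MSE with n - p degrees of freedom (p = 2 here)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - 2)
var_beta = sigma2_hat * np.linalg.inv(X.T @ X)  # Var-hat(beta) = sigma2 (X'X)^{-1}
se = np.sqrt(np.diag(var_beta))

# Closed-form SE of the slope from the simple-regression formula above
se_b = np.sqrt(n * sigma2_hat / (n * (x**2).sum() - x.sum()**2))
```

The matrix route (`se[1]`) and the closed-form expression (`se_b`) agree, which is the point of the derivation.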
How are the standard errors of coefficients calculated in a regression?
The formulae for these can be found in any intermediate text on statistics; in particular, you can find them in Sheather (2009, Chapter 5), from where the following exercise is also taken (page 138).

The following R code computes the coefficient estimates and their standard errors manually

    dfData <- as.data.frame(
      read.csv("https://gattonweb.uky.edu/sheather/book/docs/datasets/MichelinNY.csv",
               header=T))

    # using direct calculations
    vY <- as.matrix(dfData[, -2])[, 5]                        # dependent variable
    mX <- cbind(constant = 1, as.matrix(dfData[, -2])[, -5])  # design matrix
    vBeta <- solve(t(mX)%*%mX, t(mX)%*%vY)                    # coefficient estimates
    dSigmaSq <- sum((vY - mX%*%vBeta)^2)/(nrow(mX)-ncol(mX))  # estimate of sigma-squared
    mVarCovar <- dSigmaSq*chol2inv(chol(t(mX)%*%mX))          # variance covariance matrix
    vStdErr <- sqrt(diag(mVarCovar))                          # coeff. est. standard errors
    print(cbind(vBeta, vStdErr))                              # output

which produces the output

                               vStdErr
    constant   -57.6003854  9.2336793
    InMichelin   1.9931416  2.6357441
    Food         0.2006282  0.6682711
    Decor        2.2048571  0.3929987
    Service      3.0597698  0.5705031

Compare to the output from lm():

    # using lm()
    names(dfData)
    summary(lm(Price ~ InMichelin + Food + Decor + Service, data = dfData))

which produces the output:

    Call:
    lm(formula = Price ~ InMichelin + Food + Decor + Service, data = dfData)

    Residuals:
        Min      1Q  Median      3Q     Max
    -20.898  -5.835  -0.755   3.457 105.785

    Coefficients:
                Estimate Std. Error t value Pr(>|t|)
    (Intercept) -57.6004     9.2337  -6.238 3.84e-09 ***
    InMichelin    1.9931     2.6357   0.756    0.451
    Food          0.2006     0.6683   0.300    0.764
    Decor         2.2049     0.3930   5.610 8.76e-08 ***
    Service       3.0598     0.5705   5.363 2.84e-07 ***
    ---
    Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 13.55 on 159 degrees of freedom
    Multiple R-squared: 0.6344, Adjusted R-squared: 0.6252
    F-statistic: 68.98 on 4 and 159 DF,  p-value: < 2.2e-16
How are the standard errors of coefficients calculated in a regression?
The formulae for these can be found in any intermediate text on statistics, in particular, you can find them in Sheather (2009, Chapter 5), from where the following exercise is also taken (page 138).
How are the standard errors of coefficients calculated in a regression? The formulae for these can be found in any intermediate text on statistics, in particular, you can find them in Sheather (2009, Chapter 5), from where the following exercise is also taken (page 138). The following R code computes the coefficient estimates and their standard errors manually dfData <- as.data.frame( read.csv("https://gattonweb.uky.edu/sheather/book/docs/datasets/MichelinNY.csv", header=T)) # using direct calculations vY <- as.matrix(dfData[, -2])[, 5] # dependent variable mX <- cbind(constant = 1, as.matrix(dfData[, -2])[, -5]) # design matrix vBeta <- solve(t(mX)%*%mX, t(mX)%*%vY) # coefficient estimates dSigmaSq <- sum((vY - mX%*%vBeta)^2)/(nrow(mX)-ncol(mX)) # estimate of sigma-squared mVarCovar <- dSigmaSq*chol2inv(chol(t(mX)%*%mX)) # variance covariance matrix vStdErr <- sqrt(diag(mVarCovar)) # coeff. est. standard errors print(cbind(vBeta, vStdErr)) # output which produces the output vStdErr constant -57.6003854 9.2336793 InMichelin 1.9931416 2.6357441 Food 0.2006282 0.6682711 Decor 2.2048571 0.3929987 Service 3.0597698 0.5705031 Compare to the output from lm(): # using lm() names(dfData) summary(lm(Price ~ InMichelin + Food + Decor + Service, data = dfData)) which produces the output: Call: lm(formula = Price ~ InMichelin + Food + Decor + Service, data = dfData) Residuals: Min 1Q Median 3Q Max -20.898 -5.835 -0.755 3.457 105.785 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -57.6004 9.2337 -6.238 3.84e-09 *** InMichelin 1.9931 2.6357 0.756 0.451 Food 0.2006 0.6683 0.300 0.764 Decor 2.2049 0.3930 5.610 8.76e-08 *** Service 3.0598 0.5705 5.363 2.84e-07 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 13.55 on 159 degrees of freedom Multiple R-squared: 0.6344, Adjusted R-squared: 0.6252 F-statistic: 68.98 on 4 and 159 DF, p-value: < 2.2e-16
919
How are the standard errors of coefficients calculated in a regression?
Part of Ocram's answer is wrong. Actually:

$\hat{\mathbf{\beta}} = (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{y} - (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{\epsilon}.$

$E(\hat{\mathbf{\beta}}) = (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{y}.$

And the comment on the first answer shows that more explanation of the variance of the coefficients is needed:

$\textrm{Var}(\hat{\mathbf{\beta}}) = E(\hat{\mathbf{\beta}}-E(\hat{\mathbf{\beta}}))^2=\textrm{Var}(- (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{\epsilon}) =(\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \;\sigma^2 \mathbf{I} \; \mathbf{X} (\mathbf{X}^{\prime} \mathbf{X})^{-1} = \sigma^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1}$

Edit

Thanks, I $\mathbf{wrongly}$ ignored the hat on that beta. The deduction above is $\mathbf{wrong}$. The correct result is:

1. $\hat{\mathbf{\beta}} = (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{y}.$ (To get this equation, set the first-order derivative of $\mathbf{SSR}$ with respect to $\mathbf{\beta}$ equal to zero, which minimizes $\mathbf{SSR}$.)

2. $E(\hat{\mathbf{\beta}}|\mathbf{X}) = E((\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} (\mathbf{X}\mathbf{\beta}+\mathbf{\epsilon})|\mathbf{X}) = \mathbf{\beta} + ((\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime})E(\mathbf{\epsilon}|\mathbf{X}) = \mathbf{\beta}.$

3. $\textrm{Var}(\hat{\mathbf{\beta}}) = E(\hat{\mathbf{\beta}}-E(\hat{\mathbf{\beta}}|\mathbf{X}))^2=\textrm{Var}((\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{\epsilon}) =(\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \;\sigma^2 \mathbf{I} \; \mathbf{X} (\mathbf{X}^{\prime} \mathbf{X})^{-1} = \sigma^2 (\mathbf{X}^{\prime} \mathbf{X})^{-1}$

Hopefully it helps.
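A quick numerical check of the final result $\textrm{Var}(\hat{\mathbf{\beta}}) = \sigma^2 (\mathbf{X}^{\prime}\mathbf{X})^{-1}$ can be sketched in Python; the data below are synthetic, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: intercept plus two regressors
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(size=n)

# OLS estimate: beta_hat = (X'X)^{-1} X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Unbiased estimate of sigma^2 from the residuals
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - k)

# Var(beta_hat) = sigma^2 (X'X)^{-1}; standard errors are the sqrt of its diagonal
var_covar = sigma2_hat * np.linalg.inv(X.T @ X)
std_err = np.sqrt(np.diag(var_covar))

print(beta_hat)
print(std_err)
```

These are the same quantities that the coefficient table of R's lm() reports for a given design matrix.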
920
When is R squared negative? [duplicate]
$R^2$ compares the fit of the chosen model with that of a horizontal straight line (the null hypothesis). If the chosen model fits worse than a horizontal line, then $R^2$ is negative. Note that $R^2$ is not always the square of anything, so it can have a negative value without violating any rules of math. $R^2$ is negative only when the chosen model does not follow the trend of the data, so fits worse than a horizontal line.

Example: fit data to a linear regression model constrained so that the $Y$ intercept must equal $1500$. The model makes no sense at all given these data. It is clearly the wrong model, perhaps chosen by accident. The fit of the model (a straight line constrained to go through the point (0, 1500)) is worse than the fit of a horizontal line. Thus the sum-of-squares from the model $(SS_\text{res})$ is larger than the sum-of-squares from the horizontal line $(SS_\text{tot})$. $R^2$ is computed as $1 - \frac{SS_\text{res}}{SS_\text{tot}}$ (here, $SS_\text{res}$ is the residual sum of squares). When $SS_\text{res}$ is greater than $SS_\text{tot}$, the ratio exceeds 1 and that equation yields a negative value for $R^2$.

With linear regression with no constraints, $R^2$ must be positive (or zero) and equals the square of the correlation coefficient, $r$. A negative $R^2$ is only possible with linear regression when either the intercept or the slope are constrained so that the "best-fit" line (given the constraint) fits worse than a horizontal line. With nonlinear regression, the $R^2$ can be negative whenever the best-fit model (given the chosen equation, and its constraints, if any) fits the data worse than a horizontal line.

Bottom line: a negative $R^2$ is not a mathematical impossibility or the sign of a computer bug. It simply means that the chosen model (with its constraints) fits the data really poorly.
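The constrained-intercept example is easy to reproduce numerically; the data below are made up solely to illustrate the effect:

```python
import numpy as np

# Made-up data with a mild downward trend
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([10.0, 9.0, 7.0, 6.0, 3.0])

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)      # sum of squares from the model
    ss_tot = np.sum((y - y.mean()) ** 2)   # sum of squares from the horizontal line
    return 1 - ss_res / ss_tot

# Best-fit slope when the intercept is forced to 1500
# (least squares with a fixed intercept)
slope = np.sum(x * (y - 1500.0)) / np.sum(x * x)
r2_constrained = r_squared(y, 1500.0 + slope * x)

print(r2_constrained)  # hugely negative: the constrained line fits far worse than the mean
```

Even the best line through (0, 1500) misses these data by over a thousand units, so $SS_\text{res}$ dwarfs $SS_\text{tot}$ and $R^2$ is far below zero.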
921
When is R squared negative? [duplicate]
Have you forgotten to include an intercept in your regression? I'm not familiar with SPSS code, but on page 21 of Hayashi's Econometrics:

If the regressors do not include a constant but (as some regression software packages do) you nevertheless calculate $R^2$ by the formula
$$R^2=1-\frac{\sum_{i=1}^{n}e_i^2}{\sum_{i=1}^{n}(y_i-\bar{y})^2}$$
then the $R^2$ can be negative.

This is because, without the benefit of an intercept, the regression could do worse than the sample mean in terms of tracking the dependent variable (i.e., the numerator could be greater than the denominator). I'd check and make sure that SPSS is including an intercept in your regression.
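A minimal sketch of the same point (made-up numbers): fit a regression through the origin to data that really need an intercept, then apply the centered $R^2$ formula above:

```python
import numpy as np

# Data generated by roughly y = 10 - x, where the intercept clearly matters
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([9.0, 8.0, 7.0, 6.0, 5.0])

# Regression through the origin: slope = sum(x*y) / sum(x^2)
slope = np.sum(x * y) / np.sum(x * x)
y_hat = slope * x

# Centered R^2 formula, as some packages report it even without a constant
ss_res = np.sum((y - y_hat) ** 2)   # numerator: sum of squared residuals
ss_tot = np.sum((y - y.mean()) ** 2)  # denominator: variation around the mean
r2 = 1 - ss_res / ss_tot

print(r2)  # negative: the no-intercept fit tracks y worse than the sample mean does
```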
922
When is R squared negative? [duplicate]
This can happen if you have a time series that is i.i.d. normal noise and you construct an inappropriate ARIMA model of the form (0,1,0), which is a first-difference random-walk model with no drift: the variance (sum of squares, SSE) of the residuals will then be larger than the variance (sum of squares, SSO) of the original series. Thus the equation 1 - SSE/SSO will yield a negative number, as SSE exceeds SSO. We have seen this when users simply fit an assumed model or use inadequate procedures to identify/form an appropriate ARIMA structure. The larger message is that a model can distort your vision, much like a pair of bad glasses. Without having access to your data I would otherwise have a problem explaining your faulty results. Have you brought this to the attention of IBM? The idea of an assumed model being counter-productive has been echoed by Harvey Motulsky. Great post, Harvey!
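The effect is easy to reproduce: for white noise, first differencing roughly doubles the variance, so the residual sum of squares of an ARIMA(0,1,0) fit exceeds the total sum of squares of the series. A small numpy sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(42)

# An i.i.d. (white noise) series: the right model is just its mean
y = rng.normal(size=10_000)

# ARIMA(0,1,0) with no drift forecasts y[t] by y[t-1];
# its residuals are simply the first differences
resid = np.diff(y)

sse = np.sum(resid ** 2)             # residual sum of squares of the misspecified model
sso = np.sum((y - y.mean()) ** 2)    # sum of squares of the original series

print(1 - sse / sso)  # close to -1, since differencing white noise doubles the variance
```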
923
When is R squared negative? [duplicate]
Here's an explanation for those from the ML field: a negative $R^2$ means that the model is predicting worse than the mean of the target values ($\bar{y}$). In other words, the mean squared error (MSE) of the model is higher than the MSE of a dummy estimator using the mean of the target values as the prediction ($R^2 = 1-\frac{MSE(y,f)}{MSE(y,\bar{y})}$).

As a curiosity, a counter-intuitive situation can arise: there is a high correlation between $y$ (target values) and $f$ (predictions), but still a negative R-squared. I'll demonstrate this using Python below:

    from sklearn.metrics import r2_score
    from scipy.stats import pearsonr
    import numpy as np

    # True values
    y = np.array([10, 20, 30, 50, 90])
    # Predictions
    f = np.array([20, 40, 60, 75, 135])

    # Calculate r-squared
    r2 = r2_score(y, f)
    print('R-Squared:', r2)  # Output: -0.012

    corr = pearsonr(y, f)[0]
    print('Pearson Correlation:', corr)  # Output: 0.992

    # Here's a way to interpret: r2 is equivalent to the following
    mean_squared_error = lambda y, f: np.mean((y - f)**2)
    r2_eq = 1 - mean_squared_error(y, f) / mean_squared_error(y, [y.mean()]*len(y))
    print('R-Squared (from equation):', r2_eq)  # Output: -0.0125
    # Hence it is negative, as the MSE given f is higher than the MSE given y.mean() as f

Which prints:

    R-Squared: -0.012499999999999956
    Pearson Correlation: 0.9929674489269135
    R-Squared (from equation): -0.012499999999999956
924
The Sleeping Beauty Paradox
Strategy

I would like to apply rational decision theory to the analysis, because that is one well-established way to attain rigor in solving a statistical decision problem. In trying to do so, one difficulty emerges as special: the alteration of SB’s consciousness. Rational decision theory has no mechanism to handle altered mental states. In asking SB for her credence in the coin flip, we are simultaneously treating her in a somewhat self-referential manner both as subject (of the SB experiment) and experimenter (concerning the coin flip).

Let’s alter the experiment in an inessential way: instead of administering the memory-erasure drug, prepare a stable of Sleeping Beauty clones just before the experiment begins. (This is the key idea, because it helps us resist distracting--but ultimately irrelevant and misleading--philosophical issues.) The clones are like her in all respects, including memory and thought. SB is fully aware this will happen.

We can clone, in principle. E. T. Jaynes replaces the question "how can we build a mathematical model of human common sense"--something we need in order to think through the Sleeping Beauty problem--by "How could we build a machine which would carry out useful plausible reasoning, following clearly defined principles expressing an idealized common sense?" Thus, if you like, replace SB by Jaynes' thinking robot, and clone that.

(There have been, and still are, controversies about "thinking" machines. "They will never make a machine to replace the human mind—it does many things which no machine could ever do." "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!" --J. von Neumann, 1948. Quoted by E. T. Jaynes in Probability Theory: The Logic of Science, p. 4.)

[Illustration by Rube Goldberg]

The Sleeping Beauty experiment restated

Prepare $n \ge 2$ identical copies of SB (including SB herself) on Sunday evening. They all go to sleep at the same time, potentially for 100 years. Whenever you need to awaken SB during the experiment, randomly select a clone who has not yet been awakened. Any awakenings will occur on Monday and, if needed, on Tuesday.

I claim that this version of the experiment creates exactly the same set of possible results, right down to SB's mental states and awareness, with exactly the same probabilities. This potentially is one key point where philosophers might choose to attack my solution. I claim it's the last point at which they can attack it, because the remaining analysis is routine and rigorous.

Now we apply the usual statistical machinery. Let's begin with the sample space (of possible experimental outcomes). Let $M$ mean "awakens Monday" and $T$ mean "awakens Tuesday." Similarly, let $h$ mean "heads" and $t$ mean "tails". Subscript the clones with integers $1, 2, \ldots, n$. Then the possible experimental outcomes can be written (in what I hope is a transparent, self-evident notation) as the set

$$\eqalign{ \{&hM_1, hM_2, \ldots, hM_n, \\ &(tM_1, tT_2), (tM_1, tT_3), \ldots, (tM_1, tT_n), \\ &(tM_2, tT_1), (tM_2, tT_3), \ldots, (tM_2, tT_n), \\ &\cdots, \\ &(tM_n, tT_1), (tM_n, tT_2), \ldots, (tM_n, tT_{n-1}) \}. }$$

Monday probabilities

As one of the SB clones, you figure your chance of being awakened on Monday during a heads-up experiment is ($1/2$ chance of heads) times ($1/n$ chance I’m picked to be the clone who is awakened). In more technical terms:

The set of heads outcomes is $h = \{hM_j, j=1,2, \ldots,n\}$. There are $n$ of them.

The event where you are awakened with heads is $h(i) = \{hM_i\}$.

The chance of any particular SB clone $i$ being awakened with the coin showing heads equals

$$\Pr[h(i)] = \Pr[h] \times \Pr[h(i)|h] = \frac{1}{2} \times \frac{1}{n} = \frac{1}{2n}.$$

Tuesday probabilities

The set of tails outcomes is $t = \{(tM_j, tT_k): j \ne k\}$. There are $n(n-1)$ of them. All are equally likely, by design. You, clone $i$, are awakened in $(n-1) + (n-1) = 2(n-1)$ of these cases; namely, the $n-1$ ways you can be awakened on Monday (there are $n-1$ remaining clones to be awakened Tuesday) plus the $n-1$ ways you can be awakened on Tuesday (there are $n-1$ possible Monday clones). Call this event $t(i)$. Your chance of being awakened during a tails-up experiment equals

$$\Pr[t(i)] = \Pr[t] \times \Pr[t(i)|t] = \frac{1}{2} \times \frac{2(n-1)}{n(n-1)} = \frac{1}{n}.$$

Bayes' Theorem

Now that we have come this far, Bayes' Theorem--a mathematical tautology beyond dispute--finishes the work. Any clone's chance of heads is therefore

$$\Pr[h | t(i) \cup h(i)] = \frac{\Pr[h]\Pr[h(i)|h]}{\Pr[h]\Pr[h(i)|h] + \Pr[t]\Pr[t(i)|t]} = \frac{1/(2n)}{1/(2n) + 1/n} = \frac{1}{3}.$$

Because SB is indistinguishable from her clones--even to herself!--this is the answer she should give when asked for her degree of belief in heads.

Interpretations

The question "what is the probability of heads" has two reasonable interpretations for this experiment: it can ask for the chance a fair coin lands heads, which is $\Pr[h] = 1/2$ (the Halfer answer), or it can ask for the chance the coin lands heads, conditioned on the fact that you were the clone awakened. This is $\Pr[h|t(i) \cup h(i)] = 1/3$ (the Thirder answer).

In the situation in which SB (or rather any one of a set of identically prepared Jaynes thinking machines) finds herself, this analysis--which many others have performed (but I think less convincingly, because they did not so clearly remove the philosophical distractions in the experimental descriptions)--supports the Thirder answer. The Halfer answer is correct, but uninteresting, because it is not relevant to the situation in which SB finds herself. This resolves the paradox.

This solution is developed within the context of a single well-defined experimental setup. Clarifying the experiment clarifies the question. A clear question leads to a clear answer.

Comments

I guess that, following Elga (2000), you could legitimately characterize our conditional answer as "count[ing] your own temporal location as relevant to the truth of h," but that characterization adds no insight to the problem: it only detracts from the mathematical facts in evidence. To me it appears to be just an obscure way of asserting that the "clones" interpretation of the probability question is the correct one. This analysis suggests that the underlying philosophical issue is one of identity: What happens to the clones who are not awakened? What cognitive and noetic relationships hold among the clones?--but that discussion is not a matter of statistical analysis; it belongs on a different forum.
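For readers who prefer simulation to the clone argument, the Thirder answer can be checked with a short Monte Carlo sketch: run the experiment many times, count one awakening for heads and two for tails, and look at the fraction of awakenings at which the coin shows heads.

```python
import random

random.seed(1)

heads_awakenings = 0
total_awakenings = 0

for _ in range(100_000):
    heads = random.random() < 0.5    # the fair coin flip
    awakenings = 1 if heads else 2   # heads: Monday only; tails: Monday and Tuesday
    total_awakenings += awakenings
    if heads:
        heads_awakenings += 1        # the single heads awakening

print(heads_awakenings / total_awakenings)  # close to 1/3, the Thirder answer
```

This simulates the conditional interpretation of the question (the chance of heads given that an awakening occurs), which is exactly the event $t(i) \cup h(i)$ conditioned on above.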
925
The Sleeping Beauty Paradox
Thanks for this brilliant post (+1) and solution (+1). This paradox already gives me a headache. I just thought of the following situation which does not require fairies, miracles nor magic potions. Flip a fair coin on Monday noon. Upon 'Tails' send a mail to Alice and Bob (in a way that they don't know that the other has received a mail from you, and that they cannot communicate). Upon 'Heads', send a mail to one of them at random (with probability $1/2$). When Alice receives a mail, what is the probability that the coin landed on 'Heads'? The probability that she receives a letter is $1/2 \times 1/2 + 1/2 = 3/4$, and the probability that the coin landed on 'Heads' is $1/3$. Here there is no paradox because Alice does not receive a letter with probability $1/4$, in which case she knows the coin landed on 'Heads'. The fact that we don't ask her opinion in that case, does make this probability equal to 0. So, what is the difference? Why would Alice gain information by receiving a mail, and SB would learn nothing being awakened? Moving on to a more miraculous situation, we put 2 different SB to sleep. If the coin lands on 'Tails' we wake up both, if it lands on 'Heads' we wake up one of them at random. Here again, each of the SB should say that the probability of the coin landing on 'Heads' is $1/3$ and again there is no paradox because there is a $1/4$ chance that this SB would not be awakened. But this situation is very close to the original paradox because erasing the memory (or cloning) is equivalent to having two different SB. So, I am with @Douglas Zare here (+1). SB has learned something by being awakened. The fact that she cannot express her opinion on Tuesday when the coin is 'Heads' up because she is sleeping does not erase the information she has by being awakened. In my opinion the paradox lies in "she has learned absolutely nothing she did not know Sunday night" which is stated without justification. 
We have this impression because the situations in which she is awakened are identical, but this is just like Alice receiving a mail: it is the fact that she is asked her opinion that gives her information.

MAJOR EDIT: After giving it deep thought, I have changed my opinion: Sleeping Beauty has learned nothing, and the example I give above is not a good analogue of her situation. But here is an equivalent problem that is not paradoxical. I could play the following game with Alice and Bob: I toss a coin secretly and independently bet each of them 1\$ that they cannot guess it. But if the coin landed on 'Tails', the bet of either Alice or Bob is cancelled (money does not change hands). Given that they know the rules, what should they bet? 'Heads', obviously. If the coin lands on 'Heads', they gain 1\$; otherwise, they lose 0.5\$ on average. Does it mean that they believe that the coin has a 2/3 chance of landing on 'Heads'? Surely not. Simply, the protocol is such that they do not gain the same amount of money for each answer. I believe that Sleeping Beauty is in the same situation as Alice or Bob. The events give her no information about the toss, but if she is asked to bet, her odds are not 1:1 because of asymmetries in the gain. I believe that this is what @whuber means by "The Halfer answer is correct, but uninteresting, because it is not relevant to the situation in which SB finds herself." This resolves the paradox.
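The first (Alice/Bob mail) claim above is easy to check numerically. Here is a small Monte Carlo sketch (the function name, seed, and trial count are my own choices, not from the answer): it estimates P(Heads | Alice received a mail) and should come out near 1/3.

```python
import random

def simulate_alice(trials=100_000, seed=0):
    """Estimate P(coin == Heads | Alice received a mail).

    Tails: both Alice and Bob get a mail.
    Heads: one of the two, chosen at random, gets a mail.
    """
    rng = random.Random(seed)
    mails = 0          # runs in which Alice got a mail
    heads_mails = 0    # ... and the coin was Heads
    for _ in range(trials):
        heads = rng.random() < 0.5
        if heads:
            alice_gets_mail = rng.random() < 0.5  # one recipient at random
        else:
            alice_gets_mail = True                # Tails: both get a mail
        if alice_gets_mail:
            mails += 1
            heads_mails += heads
    return heads_mails / mails

print(simulate_alice())  # ≈ 1/3, while P(mail) itself is 3/4
```

The conditioning event "Alice got a mail" has probability 3/4, and the complementary no-mail case is exactly what makes receiving a mail informative.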
The Sleeping Beauty Paradox
The paradox lies in the perspective change between a single experiment and its limit point. If the number of experiments is taken into account, you can understand this even more precisely than the "either/or" of halvers and thirders:

Single experiment: halvers are right. If there is a single experiment, there are three outcomes, and you just have to figure the probabilities from the perspective of the awakened:

1. Heads was tossed: 50%
2. Tails was tossed and this is my first awakening: 25%
3. Tails was tossed and this is my second awakening: 25%

So, in a single experiment, at any wakeup event, you should assume 50/50 that you are in a state where Heads was tossed.

Two experiments: 42%ers are right. Now, try two experiments:

1. Heads was tossed twice: 25% (for both awakenings combined)
2. Tails was tossed twice: 25% (for all four awakenings combined)
3. Heads then Tails and this is my first awakening: 25%/3
4. Heads then Tails and this is my 2nd or 3rd awakening: 25%*2/3
5. Tails then Heads and this is my 1st or 2nd awakening: 25%*2/3
6. Tails then Heads and this is my 3rd awakening: 25%/3

So here, {1, 3, 6} are your Heads states, with a combined probability of (25 + 25/3 + 25/3)% = 41.66%, which is less than 50%. If two experiments are run, at any wakeup event, you should assume a 41.66% chance you are in a state where Heads was thrown.

Infinite experiments: thirders are right. I'm not going to do the math here, but if you look at the two-experiment options, you can see that #1 and #2 drive it toward halves, and the rest drive it toward thirds. As the number of experiments increases, the options driving toward halves (all Heads / all Tails) decrease in probability down to zero, leaving the "thirds" options to take over. If infinitely many experiments are run, at any wakeup event, you should assume a 1/3 chance you are in a state where Heads was thrown.

Preempting retorts:

But, gambling? Yes, in the single-experiment instance you should still "gamble" by the thirds. This is not an inconsistency; it's just because you may be placing the same bet multiple times given a certain outcome, and you know this in advance. (Or if you don't, the mafia does.)

Okay, how about two single experiments? Discrepancy much? No, because knowledge about whether you're on the first or 2nd experiment adds to your, erm, knowledge. Let's look at the "two experiments" options and filter them by the knowledge that you're on the first experiment:

1. Applicable for the first awakening (1/2)
2. Applicable for the first two awakenings (2/4)
3. Applicable
4. Never applicable
5. Applicable for the first awakening (1/2)
6. Not applicable

Okay, take the Heads ones (1, 3, 6) and multiply their odds by applicability: 25/2 + 25/3 + 0 = 125/6. Now take the Tails ones (2, 4, 5) and do the same: 25*(2/4) + 0 + 25*(2/3)/2 = 125/6. Voilà, they're the same. The added information about which experiment you're in does in fact adjust the odds of what you know.

But, the clones!! Simply put, contrary to the OP's answer's postulate that cloning creates an equivalent experiment: cloning plus random selection does change the knowledge of the experimentee, in the same way "multiple experiments" changes the experiment. If there are two clones, you can see that the probabilities for each clone correspond to the two-experiments probabilities. Infinite clones converge to the thirders. But it's not the same experiment, and it's not the same knowledge, as a single experiment with a single non-random subject.

You say "random one of infinite" and I say Axiom of Choice dependency. I don't know, my set theory isn't that great. But given that for N less than infinity you can establish a sequence that converges from a half to a third, the infinite case equaling a third will either be true or undecidable at worst, no matter which axioms you invoke.
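The 5/12 ≈ 41.66% figure for the two-experiment case can be sanity-checked by simulation. This sketch (function name, seed, and trial count are my own) runs the experiment twice, picks one of the resulting awakenings uniformly at random, and estimates how often the coin behind that awakening was Heads:

```python
import random

def simulate_two_experiments(trials=100_000, seed=1):
    """Run the SB experiment twice, pick one awakening uniformly at
    random, and estimate P(the coin for that awakening was Heads)."""
    rng = random.Random(seed)
    heads_picked = 0
    for _ in range(trials):
        awakenings = []
        for _ in range(2):  # two independent experiments
            heads = rng.random() < 0.5
            # Heads -> one awakening; Tails -> two awakenings
            awakenings += [heads] * (1 if heads else 2)
        heads_picked += rng.choice(awakenings)
    return heads_picked / trials

print(simulate_two_experiments())  # ≈ 5/12 ≈ 0.4167
```

Changing the inner `range(2)` to `range(1)` recovers the halver 1/2, and increasing it pushes the estimate toward the thirder 1/3, matching the limit argument above.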
The Sleeping Beauty Paradox
Let's vary the problem. If the coin comes up Heads, then SB is never awakened. If Tails, then SB is awakened once. Now the camps are Halfers and Zeroers. And clearly the Zeroers are correct. Or: Heads -> woken once; Tails -> woken a million times. Clearly, given she's awake, it's most likely tails. (P.S. On the subject of "new information" -- information may have been DESTROYED. So, another question is: has she lost information she once had?)
The Sleeping Beauty Paradox
"Whenever SB awakens, she has learned absolutely nothing she did not know Sunday night." This is wrong, as wrong as saying "Either I win the lottery or I don't, so the probability is $50\%$." She has learned that she has woken up. This is information. Now she should believe each possible awakening is equally likely, not each coin flip. If you are a doctor and a patient walks into your office, you have learned that the patient has walked into a doctor's office, which should change your assessment from the prior. If everyone goes to the doctor, but the sick half of the population goes $100$ times as often as the healthy half, then when the patient walks in you know the patient is probably sick. Here is another slight variation. Suppose whatever the outcome of the coin toss was, Sleeping Beauty will be woken up twice. However, if it is tails, she will be woken up nicely twice. If it is heads, she will be woken up nicely once, and will have a bucket of ice dumped on her once. If she wakes up in a pile of ice, she has information that the coin came up heads. If she wakes up nicely, she has information that the coin probably didn't come up heads. She can't have a nondegenerate test whose positive result (ice) tells her heads is more likely without the negative result (nice) indicating that heads is less likely.
The Sleeping Beauty Paradox
When you are awakened, to what degree should you believe that the outcome of the coin toss was Heads? What do you mean by "should"? What are the consequences of my beliefs? In such an experiment I wouldn't believe anything. This question is tagged as decision-theory, but, the way this experiment is conceived, I have no incentive to make a decision. We can modify the experiment in different ways so that I feel inclined to give an answer. For example, I might have a guess on whether I was awakened because of "Heads" or "Tails", and I'd earn a candy for each correct answer I give. In that case, obviously, I'd decide on "Tails", because, in repeated experiments, I'd earn one candy per experiment on average: In 50% of the cases, the toss would be "Tails", I'd be awakened twice, and I'd earn a candy both times. In the other 50% ("Heads") I'd earn nothing. Should I answer "Heads", I'd be earning only half a candy per experiment, because I'd get only one chance to answer and I'd be correct 50% of the time. If I myself tossed a fair coin for the answer, I'd be earning $3/4$ of a candy. Another possibility is to earn a candy for each experiment in which all my answers were correct. In that case, it doesn't matter which systematic answer I give, since, on average, I'll be earning half a candy per experiment: If I decide on answering "Heads" all of the time, I'd be correct in 50% of the cases, and the same holds for "Tails". Only if I toss a coin myself would I be earning $3/8$ of a candy: In 50% of the cases the researchers would toss "Heads", and in 50% thereof I'd toss "Heads" too, earning me $1/4$ of a candy. In the other 50% of the cases, when the researchers tossed "Tails", I'd have to toss "Tails" twice, which would happen only in $1/4$ of the cases, so that this would earn me only $1/8$ of a candy. how can this paradox be resolved in a statistically rigorous way? Is this even possible? Define "statistically rigorous way". 
The question about a belief is of no practical relevance. Only actions matter.
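The candy arithmetic above (1 candy for always-Tails, 1/2 for always-Heads, 3/4 for a private coin toss, under the per-answer payoff) can be verified by simulation. This sketch is my own; the strategy names and parameters are not from the answer:

```python
import random

def expected_candy(strategy, trials=100_000, seed=3):
    """Average candies per experiment under the per-correct-answer
    payoff: one guess per awakening (Heads: 1 awakening, Tails: 2)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        for _ in range(1 if heads else 2):
            if strategy == "random":
                guess_heads = rng.random() < 0.5  # private fair coin
            else:
                guess_heads = (strategy == "heads")
            total += (guess_heads == heads)       # candy for a correct guess
    return total / trials

for s in ("tails", "heads", "random"):
    print(s, expected_candy(s))  # ≈ 1.0, 0.5, 0.75
```

As the answer stresses, the asymmetric payoffs, not any belief about the coin, make "Tails" the profitable guess.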
The Sleeping Beauty Paradox
The question is ambiguous and so there only appears to be a paradox. The question is posed this way: When you are awakened, to what degree should you believe that the outcome of the coin toss was Heads? Which is confused with this question: When you are awakened, to what degree should you believe Heads was the reason you were awakened? In the first question the probability is 1/2. In the second question, 1/3. The problem is that the first question is stated, but second question is implied in the context of the experiment. Those who subconsciously accept the implication say it's 1/3. Those who read the question literally say it's 1/2. Those who are confused are not sure which question they're asking!
The Sleeping Beauty Paradox
"Whenever SB awakens, she has learned absolutely nothing she did not know Sunday night." This isn't correct, which is the error in the halfer argument. One thing that makes it hard to argue with,tho, is that the halfer argument which is based on this statement is seldom expressed with any more rigor than what I quoted. There are three problems. First, the argument does not define what "new information" means. It seems to mean "An event that originally had a non-zero probability cannot have occurred based on the evidence." Second, it never enumerates what is known on Sunday to see if it fits this definition; and it can, if you look at it properly. Finally, there is no theorem that says "if you have no new information of this kind, you can't update." If you do have it, Bayes Theorem will produce an update. But it is a fallacy to conclude, if you don't have this new information, that you can't update. Being a fallacy doesn't mean it isn't true, it means you can't make this conclusion based on this evidence alone. On Sunday Night, say SB rolls an imaginary six-sided die of her own. Since it is imaginary, she can't look at the result. But the purpose is to see if it matches the day she is awake: an even number means it matches Monday, and an odd number means Tuesday. But it can't match both, which effectively distinguishes the two days. SB can now (that is, on Sunday) calculate the probability for the eight possible combinations of {Heads/Tails, Monday/Tuesday, Match/No Match}. Each will be 1/8. But when she is awake, she knows that {Heads, Tuesday, Match} and {Heads, Tuesday, No Match} did not happen. This constitutes "new information" of the form the halfers argument says doesn’t exist, and it allows SB to update the probability that the researcher's coin landed on heads. It is 1/3 whether or not her imaginary coin matches the actual day. 
Since it is the same either way, it is 1/3 whether or not she knows if there is a match; and in fact, whether or not she rolls, or imagines rolling, the die. This extra die seems like a lot to go through to get a result. In fact, it isn't necessary, but you need a different definition of "new information" to see why. Updating can occur anytime the significant (i.e., independent and not zero-probability) events in the prior sample space differ from the significant events in the posterior sample space. That way, the denominator of the ratio in Bayes' Theorem is not 1. While this usually occurs when the evidence makes some of the events have zero probability, it can also occur when the evidence changes whether events are independent. This is a very unorthodox interpretation, but it works because Beauty is given more than one opportunity to observe an outcome. And the point of my imaginary die, which distinguished the days, was to render the system into one where the total probability was 1. On Sunday, SB knows P(Awake,Monday,Heads) = P(Awake,Monday,Tails) = P(Awake,Tuesday,Tails) = 1/2. These add up to more than 1 because the events are not independent based on the information SB has on Sunday. But they are independent when she is awake. The answer, according to Bayes' Theorem, is (1/2)/(1/2+1/2+1/2) = 1/3. There is nothing wrong with a denominator that is greater than 1; but the imaginary die argument was designed to accomplish the same thing without such a denominator.
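The imaginary-die argument is a finite enumeration, so it can be written out exactly rather than simulated. This sketch (my own encoding of the argument) assigns each of the eight {coin, day, match} combinations a prior of 1/8, removes the two combinations ruled out by being awake, and renormalizes:

```python
from itertools import product
from fractions import Fraction

# Each of the eight {coin, day, match} combinations gets prior 1/8,
# as in the imaginary-die argument above.
combos = list(product(["Heads", "Tails"], ["Monday", "Tuesday"], [True, False]))
prior = Fraction(1, 8)

# Being awake rules out {Heads, Tuesday, *}: SB sleeps through that day.
possible = [c for c in combos if not (c[0] == "Heads" and c[1] == "Tuesday")]

p_heads = (sum(prior for c in possible if c[0] == "Heads")
           / sum(prior for _ in possible))
print(p_heads)  # 1/3 exactly: (2/8) / (6/8)
```

Using `Fraction` keeps the arithmetic exact, so the 1/3 falls out with no floating-point slack; the denominator 6/8 is the "not 1" denominator the answer describes.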
The Sleeping Beauty Paradox
I just re-tripped across this. I've refined some of my thoughts since that last post, and thought I might find a receptive audience for them here. First off, on the philosophy of how to address such a controversy: Say arguments A and B exist. Each has a premise, a sequence of deductions, and a result; and the results differ. The best way to prove one argument incorrect is to invalidate one of its deductions. If that were possible here, there wouldn't be a controversy. Another is to disprove the premise, but you can't do that directly. You can argue for why you don't believe one, but that won't resolve anything unless you can convince others to stop believing it. To prove a premise wrong indirectly, you have to form an alternate sequence of deductions from it that leads to an absurdity or to a contradiction of the premise. The fallacious way is to argue that the opposing result violates your premise. That means that one is wrong, but it doesn't indicate which. +++++ The halfers' premise is "no new information." Their sequence of deductions is empty - none are needed. Pr(Heads|Awake) = Pr(Heads) = 1/2. The thirders (specifically, Elga) have two premises - that Pr(H1|Awake and Monday) = Pr(T1|Awake and Monday), and Pr(T1|Awake and Tails) = Pr(T2|Awake and Tails). An incontrovertible sequence of deductions then leads to Pr(Heads|Awake) = 1/3. Note that the thirders never assume there is new information - their premises are based on whatever information exists - "new" or not - when SB is awake. And I've never seen anyone argue for why a thirder premise is wrong, except that it violates the halfer result. So the halfers have provided none of the valid arguments I've listed. Just the fallacious one. But there are other deductions possible from "no new information," with a sequence of deductions that starts with Pr(Heads|Awake) = 1/2. One is that Pr(Heads|Awake and Monday) = 2/3 and Pr(Tails|Awake and Monday) = 1/3. 
This does contradict the thirder premise, but like I said, that doesn’t help the halfer cause since it still could be their premise that is wrong. Ironically, this result does prove something - that the halfer premise contradicts itself. On Sunday, SB says Pr(Heads|Monday) = Pr(Tails|Monday), so adding the information "Awake" has allowed her to update these probabilities. It is new information. So I have proven the halfer premise can't be right. That doesn't mean the thirders are right, but it does mean that halfers have not provided any contrary evidence. +++++ There is another argument I find more convincing. It isn't completely original, but I'm not sure if the proper viewpoint has been emphasized enough. Consider a variation of the experiment: SB is always wakened on both days; usually it is in a room that is painted blue, but on Tuesday after Heads it is in a room that is painted red. What should she say the probability of Heads is, if she finds herself awake in a blue room? I don’t think anybody would seriously argue that it is anything but 1/3. There are three situations that could correspond to her current one, all are equally likely, and only one includes Heads. The salient point is that there is no difference between this version, and the original. What she "knows" - her "new information" - is that it is not H2. It does not matter how, or IF, she would know it could be H2 if it could. Her capability to observe situations that she knows do not apply is irrelevant if she knows they do not apply. I can not believe the halfer premise. It is based on a fact - that she can't observe H2 - that cannot matter since she can, and does, observe that it isn't H2. So I hope that I have provided a convincing argument for why the halfer premise is invalid. Along the way, I know I have demonstrated that the thirder result must be correct.
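For readers who want to see what these premises say about long-run frequencies, here is a quick tally in Python (my own sketch, not part of the argument above; the H1/T1/T2 labels follow the notation here, assuming one awakening after Heads and two after Tails):

```python
import random

# Long-run tally of the experiment (a sketch).
# Each trial: flip a fair coin; Heads -> one awakening (H1),
# Tails -> two awakenings (T1 on Monday, T2 on Tuesday).
random.seed(0)
counts = {'H1': 0, 'T1': 0, 'T2': 0}
trials = 100_000
for _ in range(trials):
    if random.random() < 0.5:    # Heads
        counts['H1'] += 1
    else:                        # Tails
        counts['T1'] += 1
        counts['T2'] += 1

total_awakenings = sum(counts.values())
# Fraction of all awakenings that are Heads awakenings: close to 1/3
print(counts['H1'] / total_awakenings)
# Among Monday awakenings (H1 or T1), Heads and Tails occur about
# equally often, which is Elga's first premise
print(counts['H1'] / (counts['H1'] + counts['T1']))
```

Whether a per-awakening frequency is the right measure of SB's credence is, of course, exactly what halfers and thirders dispute.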
933
The Sleeping Beauty Paradox
Since Sleeping Beauty can't remember how many times she has woken up before, we are not looking at the probability of Heads given that she has woken up just this once, but the probability of Heads given that she has woken up at least once. So we have $P(Heads\mid x\geq1) = 1/2$ and not $P(Heads\mid x=1) = 1/3$. Thus the answer is 50% (the halfers are right), and there is no paradox. People seem to be making this far, far more complex than it really is!
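The distinction this answer draws - conditioning per experiment versus per awakening - can be checked numerically. In this sketch (variable names are mine), both numbers show up, depending on which reference class you count over:

```python
import random

# Per-experiment vs per-awakening counting (a sketch).
random.seed(1)
runs = 100_000
heads_runs = 0           # experiments in which the coin landed Heads
heads_awakenings = 0     # awakenings belonging to a Heads experiment
total_awakenings = 0
for _ in range(runs):
    heads = random.random() < 0.5
    wakings = 1 if heads else 2
    total_awakenings += wakings
    if heads:
        heads_runs += 1
        heads_awakenings += wakings

# "She woke at least once" is true in every run, so conditioning on it
# leaves the per-experiment frequency of Heads near 1/2 ...
print(heads_runs / runs)
# ... while Heads accounts for only about 1/3 of individual awakenings.
print(heads_awakenings / total_awakenings)
```

The halfer position amounts to saying the first ratio is the relevant one; thirders count the second.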
934
The Sleeping Beauty Paradox
A simple explanation for this would be that there are 3 ways in which Sleeping Beauty can wake up, two of which are from a Tails toss. So the probability has to be 1/3 for Heads every time she wakes up. I've outlined it in a blog post. The main argument against the "halfer" point of view is the following: in a Bayesian sense, SB is always looking to see what new information she has. In reality, the moment she has decided to take part in the experiment, she has the additional information that when she wakes up it could be on either of the days. Or, put in other words, the lack of information (wiping out the memory) is what is providing the evidence here, subtly though.
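The "3 ways to wake up" count can be written out exactly. This sketch (labels are mine) weights each awakening by the probability of the coin branch it lies on, which is the bookkeeping this answer uses:

```python
from fractions import Fraction

# Each coin outcome has probability 1/2; Tails contributes two
# awakenings, Heads one. Weight each awakening by its branch probability.
weights = {
    'Heads-day1': Fraction(1, 2),
    'Tails-day1': Fraction(1, 2),
    'Tails-day2': Fraction(1, 2),
}
total = sum(weights.values())
p_heads_per_awakening = weights['Heads-day1'] / total
print(p_heads_per_awakening)  # 1/3
```

Note that this treats the awakening, not the toss, as the unit of counting - which is the substantive assumption the halfers reject.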
935
The Sleeping Beauty Paradox
One third of possible wakings are Heads wakings, and two thirds of possible wakings are Tails wakings. However, one half of princesses (or whatever) are Heads princesses, and one half are Tails princesses. The Tails princesses, individually and in aggregate, experience twice as many wakings as the Heads princesses. From the perspective of the princess, on waking up, there are three possibilities. She is either a Heads princess awaking for the first (and only) time ($H1$), a Tails princess awaking for the first time ($T1$), or a tails princess awaking for a second time ($T2$). There seems no reason to assume that these three outcomes are equally likely. Rather $P[H1]=0.5$, $P[T1]=0.25$, and $P[T2]=0.25$. I haven't read Vineberg's reasoning, but I think I can see how she arrives at a fair bet of $\$1/3$. Suppose that every time a princess awakens, she makes a bet of $\$x$ that she is a Heads princess, receiving \$1 if she is indeed a Heads princess, and \$0 otherwise. Then a Heads princess will receive $\$(1-x)$, and a Tails princess will receive $\$(-x)$ each time she plays. Since the Tails princesses must play twice, and since half of princesses are Heads princesses, the expected return is $\$(1-3x)/2$, and the fair price is $\$1/3$. Normally this would be conclusive evidence that the probability is $1/3$, but the usual reasoning does not hold in this case: the princesses who are destined to lose the bet are obliged to play the game twice, whereas those who are destined to win will play only once! This imbalance uncouples the usual relationship between probabilities and fair bets. (On the other hand, a technician who was assigned to help with the waking process really would have only a one third chance of being assigned to a Heads princess.)
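The fair-bet calculation attributed to Vineberg can be written out symbolically. A sketch (the function name is mine), assuming a stake of $x$ per awakening paying \$1 on Heads:

```python
from fractions import Fraction

# Half of princesses are Heads princesses and play once, receiving 1 - x;
# the other half are Tails princesses and play twice, losing x each time.
def expected_return(x):
    return Fraction(1, 2) * (1 - x) + Fraction(1, 2) * (-2 * x)

# Expected return is (1 - 3x)/2, so the stake that makes the bet fair:
fair_x = Fraction(1, 3)
print(expected_return(fair_x))  # 0
```

This reproduces the \$1/3 fair price while leaving the answer's point intact: the losing princesses are forced to bet twice, so the fair price need not equal the credence.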
936
The Sleeping Beauty Paradox
Non-statistically

In all her genius, Sleeping Beauty can perform the hypothetical experiment in her sleep, which will shape her beliefs:

    import numpy as np

    # Take clones of our Sleeping Beauties.
    # One type of clone persistently guesses heads,
    # the other persistently guesses tails.

    # Keeping score for the heads-guessing Sleeping Beauty ...
    guessed_heads_right = 0
    # ... and also for the tails-guessing Sleeping Beauty
    guessed_tails_right = 0

    # Coding the toss outcomes
    HEADS = 0
    TAILS = 1

    # Guess of the heads-guessing Sleeping Beauty on waking up
    def heads_guesser_guesses_right(toss):
        return toss == HEADS

    # Guess of the tails-guessing Sleeping Beauty on waking up
    def tails_guesser_guesses_right(toss):
        return toss == TAILS

    # Repeating the tossing and awakenings many times
    for i in range(1000):

        # Toss fair coin, result is either HEADS or TAILS
        toss = np.random.randint(0, 2)

        # Wake the SBs up the first time and count successful guesses
        if heads_guesser_guesses_right(toss):
            guessed_heads_right += 1
        if tails_guesser_guesses_right(toss):
            guessed_tails_right += 1

        # If the toss was TAILS, wake the SBs up a second time ...
        if toss == TAILS:
            # ... and count successful guesses again
            if heads_guesser_guesses_right(toss):
                guessed_heads_right += 1
            if tails_guesser_guesses_right(toss):
                guessed_tails_right += 1

    # Print the raw statistics
    print('Guessed HEADS right: {}'.format(guessed_heads_right))
    print('Guessed TAILS right: {}'.format(guessed_tails_right))

Output:

    Guessed HEADS right: 498
    Guessed TAILS right: 1004

So our Sleeping Beauty will believe it is better to guess tails.

And statistically?

The above algorithm is not a statistically rigorous way to determine what to guess. However, it does make it clear that in the case of tails she gets to guess twice, so guessing tails is twice as likely to be the right guess. This follows from the operational procedure of the experiment.

Frequentist Probability

Frequentist probability is a concept of statistics based on the theories of Fisher, Neyman and (Egon) Pearson. A basic notion in frequentist probability is that the operations in experiments can be repeated, at least hypothetically, an infinite number of times. Crucially, an "experiment" here is the act of waking her up, not the act of throwing the coin. If $E_{n}$ denotes the number of times an outcome $E$ occurs in $n$ such repetitions, the frequentist probability of $E$ is defined as $$\Pr(E)\equiv\lim_{n\rightarrow\infty}\frac{E_{n}}{n}$$ This is exactly what Sleeping Beauty did in her head above: if $E$ is the event of being right while guessing HEADS, then $\Pr(E)$ converges to $\frac{1}{3}$.

And her beliefs?

So when she finally arrives here in her reasoning, she has statistically rigorous grounds to base her beliefs on. But how she will ultimately shape them really depends on her psyche.
937
The Sleeping Beauty Paradox
I really like this example, but I would argue that there is one point to make, confounded with a couple of nuisance distractions. To avoid nuisance distractions, one arguably should try to discern an abstract diagrammatic representation of the problem that is clearly beyond reasonable doubt (as an adequate representation) and can be verifiably manipulated (re-manipulated by qualified others) to demonstrate the claims.

As a simple example, think of an (abstract mathematical) rectangle and the claim that it can be made into two triangles. Draw a freehand rectangle as a representation of a mathematical rectangle (in your drawing the four angles will not add exactly to 360 degrees and the adjacent lines will not be exactly equal or straight, but there will be no real doubt that it represents a true rectangle). Now manipulate it by drawing a line from one opposite corner to another, which anyone else could do, and you get a representation of two triangles that no one would reasonably doubt. Any questioning of whether this can be so seems nonsense; it just is.

The point I am trying to make here is that if you get a beyond-a-reasonable-doubt representation of the SB problem as a joint probability distribution, and can condition on an event that happens in the experiment in this representation, then claims of whether anything is learned by that event can be demonstrated by verifiable manipulation and require no (philosophical) discussion or questioning. Now I had better present my attempt, and readers will need to discern if I have succeeded.

I will use a probability tree to represent joint probabilities for the day sleeping in the experiment (DSIE), the coin flip outcome on Monday (CFOM), and being woken given one was sleeping in the experiment (WGSIE). I will draw it out (actually just write it out here) in terms of p(DSIE)*p(CFOM|DSIE)*p(WGSIE|DSIE,CFOM). I would like to call DSIE and CFOM the possible unknowns and WGSIE the possible known; then p(DSIE,CFOM) is a prior and p(WGSIE|DSIE,CFOM) is a data model or likelihood, and Bayes' theorem applies. Without this labelling it's just conditional probability, which is logically the same thing.

Now we know p(DSIE=Mon) + p(DSIE=Tues) = 1 and p(DSIE=Tues) = 1/2 p(DSIE=Mon), so p(DSIE=Mon) = 2/3 and p(DSIE=Tues) = 1/3. Now P(CFOM=H|DSIE=Mon) = 1/2, P(CFOM=T|DSIE=Mon) = 1/2, P(CFOM=T|DSIE=Tues) = 1. P(WGSIE|DSIE=.,CFOM=.) is always equal to one.

The prior equals:
P(DSIE=Mon, CFOM=H) = 2/3 * 1/2 = 1/3
P(DSIE=Mon, CFOM=T) = 2/3 * 1/2 = 1/3
P(DSIE=Tues, CFOM=T) = 1/3 * 1 = 1/3

So the marginal prior for CFOM is 1/3 H and 2/3 T, and the posterior given you were woken while sleeping in the experiment will be the same (as no learning occurs) - so your posterior is 2/3 T, the same as the prior.

OK - where did I go wrong? Do I need to review my probability theory?
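The tree arithmetic above is mechanical enough to check by enumeration. A sketch (the dictionary keys mirror the DSIE/CFOM labels; WGSIE is omitted since it is identically one):

```python
from fractions import Fraction

# p(DSIE): 2/3 Monday, 1/3 Tuesday, per the answer's setup.
p_dsie = {'Mon': Fraction(2, 3), 'Tues': Fraction(1, 3)}
# p(CFOM | DSIE)
p_cfom_given_dsie = {
    ('Mon', 'H'): Fraction(1, 2),
    ('Mon', 'T'): Fraction(1, 2),
    ('Tues', 'H'): Fraction(0),
    ('Tues', 'T'): Fraction(1),
}

# Joint prior p(DSIE, CFOM) = p(DSIE) * p(CFOM | DSIE)
joint = {(d, c): p_dsie[d] * p for (d, c), p in p_cfom_given_dsie.items()}
p_heads = sum(p for (d, c), p in joint.items() if c == 'H')

print(joint[('Mon', 'H')], joint[('Mon', 'T')], joint[('Tues', 'T')])  # 1/3 1/3 1/3
print(p_heads)  # 1/3
```

Since P(WGSIE|DSIE,CFOM) = 1 everywhere, conditioning on being woken indeed leaves these numbers unchanged, which is the answer's "no learning occurs" step.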
938
The Sleeping Beauty Paradox
I'm going to solve this problem for the generic case where SB is woken $m$ times after 'Heads' and $n$ times after 'Tails', with $m\le n$. Specifically, if the coin is 'Heads', she will be awakened on...

day 1
day 2
$\cdots$
day $m$

...and if the coin is 'Tails', she will be awakened on...

day 1
day 2
$\cdots$
day $n$

For this specific question, $m=1$ and $n=2$. I'm not going to make assumptions; I will use only the given info that the coin is fair, thus before the awakening $$P(Heads)=P(Tails)=1/2.$$

When SB is woken she doesn't know what day it is or whether she was woken before. She only knows a fair coin was tossed with possible results 'Heads' and 'Tails'. She also knows the awakening is happening on 'day 1' or 'day 2' or $\ldots$ or 'day $n$'.

For the possible result 'Heads', there are $m$ possible results, which I'll name $D_1$, $D_2$, $\ldots$, $D_m$:

$D_1$: This awakening is happening on 'day 1'
$D_2$: This awakening is happening on 'day 2'
$\cdots$
$D_m$: This awakening is happening on 'day $m$'

For the possible result 'Tails', there are $n$ possible results, including the $m$ possible results stated above:

$D_1$: This awakening is happening on 'day 1'
$D_2$: This awakening is happening on 'day 2'
$\cdots$
$D_n$: This awakening is happening on 'day $n$'

So there are $m+n$ possible results. Now, given the coin has landed 'Heads', the events $D_1$, $D_2$, $\ldots$, $D_m$ are equally likely. Therefore $$P(D_1|H)=P(D_2|H)=\ldots=P(D_m|H) = \frac{1}{m}$$ Also, given the coin has landed 'Tails', the events $D_1$, $D_2$, $\ldots$, $D_n$ are equally likely. Therefore $$P(D_1|T)=P(D_2|T)=\ldots=P(D_n|T) = \frac{1}{n}$$

Now, for any possible event $D_i$ where $i$ is an integer and $1\le i\le m$, $$P(D_i\cap H)=P(H)\times P(D_i|H)=\frac{1}{2}\times \frac{1}{m} = \frac {1}{2m}$$ $$P(D_i\cap T)=P(T)\times P(D_i|T)=\frac{1}{2}\times \frac{1}{n} = \frac {1}{2n}$$ For $m<i\le n$, it is obviously $$P(D_i\cap H)=P(H)\times P(D_i|H)=\frac{1}{2}\times0=0$$ $$P(D_i\cap T)=P(T)\times P(D_i|T)=\frac{1}{2}\times \frac{1}{n} = \frac {1}{2n}$$

Now let's calculate the probabilities of the possible events $D_1$, $D_2$, $\ldots$, $D_n$. For $1\le i\le m$, $$P(D_i)=P(D_i\cap H)+P(D_i\cap T) = \frac {1}{2m}+\frac {1}{2n}$$ For $m<i\le n$, $$P(D_i)=P(D_i\cap H)+P(D_i\cap T)=0+\frac {1}{2n} = \frac {1}{2n}$$

Now we can calculate the probability of 'Heads' given SB is awake. As said above, the possible events upon awakening are $D_1$, $D_2$, $\ldots$, $D_n$. Therefore the probability is \begin{align}P(H|awake)&=P(H|(D_1\cup D_2\cup\ldots\cup D_n))\\ &\\ &=\frac {P(H\cap(D_1\cup D_2\cup\ldots\cup D_n))}{P(D_1\cup D_2\cup\ldots\cup D_n)}\\ &\\ &=\frac {P((H\cap D_1)\cup(H\cap D_2)\cup\ldots\cup(H\cap D_n))}{P(D_1\cup D_2\cup\ldots\cup D_n)}\\ &\\ &=\frac{P(H\cap D_1)+P(H\cap D_2)+\ldots+P(H\cap D_m)+\ldots+P(H\cap D_n)}{P(D_1)+P(D_2)+\ldots+P(D_m)+\ldots+P(D_n)}\\ &\\ &=\frac{\frac {1}{2m}\times m + 0\times(n-m)}{(\frac {1}{2m}+\frac {1}{2n})\times m + \frac {1}{2n}\times(n-m)}\\ &\\ &=\frac{\frac {1}{2}+0}{\frac {1}{2}+\frac{m}{2n}+\frac {1}{2}-\frac{m}{2n}}=\frac{\frac{1}{2}}{\frac{1}{2}+\frac {1}{2}}=\frac{\frac{1}{2}}{1}=\frac{1}{2} \end{align}

We already have the answer, but let's also calculate the probability of 'Heads' or 'Tails' given the awakening is happening on a certain day. For $1\le i\le m$, $$P(H|D_i)=\frac{P(H\cap D_i)}{P(D_i)}=\frac{\frac {1}{2m}}{\frac {1}{2m}+\frac {1}{2n}}=\frac{n}{m+n}$$ $$P(T|D_i)=\frac{P(T\cap D_i)}{P(D_i)}=\frac{\frac {1}{2n}}{\frac {1}{2m}+\frac {1}{2n}}=\frac{m}{m+n}$$ For $m<i\le n$, $$P(H|D_i)=\frac{P(H\cap D_i)}{P(D_i)}=\frac{0}{P(D_i)}=0$$ $$P(T|D_i)=\frac{P(T\cap D_i)}{P(D_i)}=\frac{\frac{1}{2n}}{\frac{1}{2n}}=1$$

I'm aware this is not an answer for those who believe the "1/3" answer. This is just a simple use of conditional probabilities. Thus, I don't believe this problem is ambiguous and therefore a paradox. It is, though, confusing for the reader, because it makes unclear which are the random experiments and which are the possible events of those experiments.
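The closed forms derived above ($P(H|awake)=1/2$, and $P(H|D_i)=n/(m+n)$ for $i\le m$) can be verified for arbitrary $m$ and $n$ with exact arithmetic. A sketch (the function name is mine):

```python
from fractions import Fraction

def analysis(m, n):
    # Joint weights P(D_i and H) and P(D_i and T) as defined above,
    # for i = 1..n (zero-based list index here).
    p_h = [Fraction(1, 2 * m) if i < m else Fraction(0) for i in range(n)]
    p_t = [Fraction(1, 2 * n) for i in range(n)]
    p_heads_awake = sum(p_h) / (sum(p_h) + sum(p_t))
    p_heads_given_d1 = p_h[0] / (p_h[0] + p_t[0])
    return p_heads_awake, p_heads_given_d1

# The SB case m=1, n=2: P(H|awake) = 1/2 and P(H|D_1) = 2/3
print(analysis(1, 2))
```

Any $m \le n$ gives $P(H|awake) = 1/2$ under this model, which is the answer's central claim.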
The Sleeping Beauty Paradox
I'm going to solve this problem for the generic case where SB is waken '$m$' times after 'Heads' and '$n$' times after 'Tails' with $m≤n$. Specifically, if coin is 'Heads', she will be awakened on...
The Sleeping Beauty Paradox I'm going to solve this problem for the generic case where SB is waken '$m$' times after 'Heads' and '$n$' times after 'Tails' with $m≤n$. Specifically, if coin is 'Heads', she will be awakened on... day 1 day 2 $\cdots$ $\cdots$ day $m$ ...and if coin is 'Tails', she will be awakened on... day 1 day 2 $\cdots$ $\cdots$ day $n$ $m≤n$ Then for this specific question, will be $m=1$ and $n=2$. I'm not going to make assumptions, will use only the given info that the coin is fair, thus before awakening it is $$P(Heads)=P(Tails)=1/2.$$ Upon SB is waken she doesn't know what day it is or whether she was waken before. She only knows a fair coin was tossed with possible results 'Heads' and 'Tails'. She also knows the awakening is happening on 'day 1' or 'day 2' or $\ldots$ , or 'day $n$'. For the possible result 'Heads', there are '$m$' possible results which I'll name $D_1$, $D_2$,$\ldots$, $D_m$. $D_1$: This awakening is happening on 'day 1' $D_2$: This awakening is happening on 'day 2' $D_3$: This awakening is happening on 'day 3' $\cdots$ $\cdots$ $D_m$: This awakening is happening on 'day $m$' For the possible result 'Tails', there are '$n$' possible results including the '$m$' possible results stated above. $D_1$: This awakening is happening on 'day 1' $D_2$: This awakening is happening on 'day 2' $D_3$: This awakening is happening on 'day 3' $\cdots$ $\cdots$ $D_n$: This awakening is happening on 'day $n$' So there are $m+n$ possible results. Now given the coin has landed 'Heads', the events $D_1$, $D_2$,$\ldots$, $D_m$ are equally likely. Therefore... $$P(D_1|H)=P(D_2|H)=\ldots=P(D_m|H) = \frac{1}{m}$$ Also, given the coin has landed 'Tails', the events $D_1$, $D_2$,$\ldots$, $D_n$ are equally likely. Therefore... 
$$P(D_1|T)=P(D_2|T)=\ldots=P(D_n|T) = \frac{1}{n}$$ Now, for any possible event $D_i$ where $i$ is integer and $1≤i≤m$ $$P(D_i\cap H)=P(H)\times P(D_i|H)=\frac{1}{2}\times \frac{1}{m} = \frac {1}{2m}$$ $$P(D_i∩T)=P(T)\times P(D_i|T)=\frac{1}{2}\times \frac{1}{n} = \frac {1}{2n}$$ for $m<i≤n$, it is obviously... $$P(D_i∩H)=P(H)\times P(D_i|H)=\frac{1}{2}\times0=0$$ $$P(D_i∩T)=P(T)\times P(D_i|T)=\frac{1}{2}\times \frac{1}{n} = \frac {1}{2n}$$ Now let's calculate the probabilities of possible events $D_1$, $D_2$,$\ldots$, $D_n$ for $1≤i≤m$ $$P(D_i)=P(D_i∩H)+P(D_i∩T) = \frac {1}{2m}+\frac {1}{2n}$$ for $m<i≤n$ $$P(D_i)=P(D_i∩H)+P(D_i∩T)=0+\frac {1}{2n} = \frac {1}{2n}$$ Now we can calculate the probability of 'Heads' given SB is awake. As said above, the possible events upon awakening are $D_1$, $D_2$,$\ldots$, $D_n$. Therefore the probability is... \begin{align}P(H|awake)&=P(H|(D_1∪D_2∪...∪D_n))\\ &\\ &=\frac {P(H∩(D_1∪D_2∪\ldots∪D_n))}{P(D_1∪D_2∪\ldots∪D_n)}\\ &\\ &=\frac {P((H∩D_1)∪(H∩D_2)∪\ldots∪(H∩D_n))}{P(D_1∪D_2∪\ldots∪D_n)}\\ &\\ &=\frac{P(H∩D_1)+P(H∩D_2)+\ldots+P(H∩D_n)}{P(D_1)+P(D_2)+\ldots+P(D_n)}\\ &\\ &=\frac{P(H∩D_1)+P(H∩D_2)+\ldots+P(H∩D_m)+\ldots+P(H∩D_n)}{P(D_1)+P(D_2)+\ldots+P(D_m)+\ldots+P(D_n)}\\ &\\ &=\frac{\frac {1}{2m}\times m + 0\times(n-m)}{(\frac {1}{2m}+\frac {1}{2n})\times m + \frac {1}{2n}\times(n-m)}\\ &\\ &=\frac{\frac {1}{2}+0}{\frac {1}{2}+\frac{m}{2n}+\frac {1}{2}-\frac{m}{2n}}=\frac{\frac{1}{2}}{\frac{1}{2}+\frac {1}{2}}=\frac{\frac{1}{2}}{1}=\frac{1}{2} \end{align} We already have the answer, but let's also calculate the probability of 'Heads' or 'Tails' given the awakening is happening on a certain day for $1≤i≤m$ $$P(H|D_i)=\frac{P(H∩D_i)}{P(D_i)}=\frac{\frac {1}{2m}}{\frac {1}{2m}+\frac {1}{2n}}=\frac{n}{m+n}$$ $$P(T|D_i)=\frac{P(T∩D_i)}{P(D_i)}=\frac{\frac {1}{2n}}{\frac {1}{2m}+\frac {1}{2n}}=\frac{m}{m+n}$$ for $m<i≤n$ $$P(H|D_i)=\frac{P(H∩D_i)}{P(D_i)}=\frac{0}{P(D_i)}=0$$ 
$$P(T|D_i)=\frac{P(T∩D_i)}{P(D_i)}=\frac{\frac{1}{2n}}{\frac{1}{2n}}=1$$ I'm aware this is not an answer for those who believe the "1/3" answer; it is just a straightforward use of conditional probabilities. Thus I don't believe this problem is ambiguous, and therefore it is not a paradox. It is, though, confusing for the reader, because it leaves unclear which are the random experiments and which are the possible events of those experiments.
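The algebra above can be spot-checked numerically. Below is a short Python sketch (my own illustration, not part of the original answer) that builds the joint table $P(\text{coin}, D_i)$ for $m=1$, $n=2$ under this answer's counting, and recovers both $P(H|awake)=1/2$ and $P(H|D_1)=n/(m+n)=2/3$ with exact fractions:

```python
from fractions import Fraction

def joint_table(m, n):
    # Joint probabilities P(coin, D_i) under the answer's counting:
    # P(H) = P(T) = 1/2; given H, the m awakening-days are equally likely;
    # given T, the n awakening-days are equally likely.
    half = Fraction(1, 2)
    joint = {}
    for i in range(1, n + 1):
        joint[('H', i)] = half * (Fraction(1, m) if i <= m else 0)
        joint[('T', i)] = half * Fraction(1, n)
    return joint

j = joint_table(1, 2)
p_awake = sum(j.values())                                  # total probability: 1
p_awake_h = sum(p for (c, _), p in j.items() if c == 'H')  # mass on heads
print(p_awake_h / p_awake)                                  # P(H | awake) = 1/2
print(j[('H', 1)] / (j[('H', 1)] + j[('T', 1)]))            # P(H | D_1) = 2/3
```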
The Sleeping Beauty Paradox
Just another approach to frame this paradox with slightly different rules. Sleeping Beauty is set to sleep. The scientists and Sleeping Beauty agree that the scientists will play the lottery (with a winning chance of one in a million). If they win, they will wake up Sleeping Beauty ten million times. If they don't win, they will wake her up just once. Now, Sleeping Beauty has been woken up. What should she think: did the scientists really win the lottery? Of course this is simply paraphrasing and does not resolve the paradox. In this situation, however, intuitively I would definitely believe that the scientists did not win the lottery, as the number of times that I am awakened does not affect the probability of winning the lottery. However, if money is involved and I win/lose a dollar each time I am awakened, I would bet that they have won the lottery, as the risk of losing a lot of money in this case is more important to cover than the small gain from guessing correctly if they did not win the lottery. Apparently, I am a halfer. If someone would bet in this situation that the scientists actually have won the lottery, I would be very interested in your comment.
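The betting intuition can be made concrete. This Python sketch (my own illustration; the one-dollar-per-awakening stake is an assumption, not stated in the answer) computes the expected profit per experiment for each guess:

```python
from fractions import Fraction

p_win = Fraction(1, 10**6)        # lottery odds, as stated in the answer
wakes_win, wakes_lose = 10**7, 1  # awakenings in each branch

def expected_profit(guess_won):
    # +1 per awakening if the guess is right, -1 if wrong (assumed stakes)
    s = 1 if guess_won else -1
    return p_win * wakes_win * s + (1 - p_win) * wakes_lose * (-s)

print(expected_profit(True))   # guessing "they won":  ~ +9 per experiment
print(expected_profit(False))  # guessing "they lost": ~ -9 per experiment
```

So even though the lottery is almost surely lost, the repeated-awakening payout structure makes betting "won" the profitable strategy, which is exactly the tension the answer describes.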
The Sleeping Beauty Paradox
I just thought of a new way to explain my point, and what is wrong with the 1/2 answer. Run two versions of the experiment at the same time, using the same coin flip. One version is just like the original. In the other, three (or four - it doesn’t matter) volunteers are needed; each is assigned a different combination of Heads-or-Tails and Monday-or-Tuesday, and is woken on her assigned day only if the coin lands on her assigned face (the Heads+Tuesday combination is omitted if you use only three volunteers). Label them HM, HT, TM, and TT, respectively (possibly omitting HT). If a volunteer in the second version is woken up this way, she knows she was equally likely to have been labeled HM, TM, or TT. In other words, the probability she was labeled HM, given that she is awake, is 1/3. Since the coin flip and day correspond to this assignment, she can trivially deduce that P(Heads|Awake)=1/3. The volunteer in the first version could be woken more than once. But since "today" is only one of those two possible days, when she is awake she has exactly the same information as the awake volunteer in the second version. She knows that her current circumstances can correspond to the label applied to one, AND ONLY ONE, of the other volunteers. That is, she can say to herself "either the volunteer labeled HM, or TM, or TT is also awake. Since each is equally likely, there is a 1/3 chance it is HM and so a 1/3 chance the coin landed heads." The reason people make a mistake is that they confuse "is awake sometime during the experiment" with "is awake now." The 1/2 answer comes from the original SB saying to herself "either HM is the only other awake volunteer NOW, or TM and TT are BOTH awake SOMETIME DURING THE EXPERIMENT. Since each situation is equally likely, there is a 1/2 chance it is HM and so a 1/2 chance the coin landed heads." It is a mistake because only one other volunteer is awake now.
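The second (three-volunteer) version is easy to simulate. In this Python sketch (my own, not the answer's), each run flips the coin and records which labeled volunteers get woken; across many runs, about a third of all woken volunteers carry the HM label:

```python
import random

random.seed(1)
awake = []  # one entry per (run, woken volunteer)
for _ in range(100_000):
    coin = random.choice(['H', 'T'])
    for label in ('HM', 'TM', 'TT'):  # three-volunteer version
        if label[0] == coin:          # woken only if the coin matches her label
            awake.append(label)

frac_hm = awake.count('HM') / len(awake)
print(round(frac_hm, 2))  # ≈ 1/3
```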
The Sleeping Beauty Paradox
As with many questions, it depends on the exact meaning of the question: When you are awakened, to what degree should you believe that the outcome of the coin toss was Heads? If you interpret it as "what are the odds that a tossed coin lands Heads", obviously the answer is one half. But what you are asking is not (in my interpretation) that, but rather "what is the chance that the current awakening was caused by a Heads?". In that case, obviously only a third of the awakenings are caused by a Heads, so the more probable answer is "Tails".
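The two readings give different frequencies, which a quick Python simulation (my own sketch, not part of the answer) makes explicit: per toss, heads comes up about half the time; per awakening, only about a third of awakenings follow a heads toss:

```python
import random

random.seed(0)
tosses = heads = awakenings = heads_awakenings = 0
for _ in range(100_000):
    coin = random.choice(['H', 'T'])
    tosses += 1
    heads += (coin == 'H')
    wakes = 1 if coin == 'H' else 2   # one awakening on heads, two on tails
    awakenings += wakes
    heads_awakenings += wakes * (coin == 'H')

print(round(heads / tosses, 2))                 # per toss:      ≈ 1/2
print(round(heads_awakenings / awakenings, 2))  # per awakening: ≈ 1/3
```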
The Sleeping Beauty Paradox
This is a very interesting question. I will give my answer as if I were Sleeping Beauty. I feel a key point to understand is that we 100% trust the experimenter. 1) On Sunday night, if you ask me the probability the coin is heads, I will tell you $\frac{1}{2}$. 2) Whenever you wake me up and ask me, I will tell you $\frac{1}{3}$. 3) When you tell me that this is the last time you are awakening me, I will immediately switch to telling you the probability is $\frac{1}{2}$. Clearly (1) follows from the fact the coin is fair. (2) follows from the fact that when you are woken, you are in one of 3 equally likely situations from your point of view, each arising from a coin-toss branch of probability $\frac{1}{2}$. Then (3) follows in the same manner, except that as soon as you are told this is the last time you are being awakened, the number of situations you can be in collapses to 2 (as now tails-and-this-being-the-first-awakening is impossible).
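Point (3) can be checked with a small conditional-probability computation. This Python sketch (my own, using exact fractions) starts from the three equally likely awakening situations of the thirder view, then conditions on the extra information "this is the last awakening", which rules out the tails-Monday situation:

```python
from fractions import Fraction

# The three equally likely awakening situations in the thirder view
situations = {('H', 'Mon'): Fraction(1, 3),
              ('T', 'Mon'): Fraction(1, 3),
              ('T', 'Tue'): Fraction(1, 3)}

def p_heads(keep):
    # condition on the situations satisfying `keep` and renormalize
    kept = {s: p for s, p in situations.items() if keep(s)}
    total = sum(kept.values())
    return sum(p for (c, _), p in kept.items() if c == 'H') / total

print(p_heads(lambda s: True))              # any awakening: 1/3
# "last awakening" rules out tails-Monday (another awakening follows it)
print(p_heads(lambda s: s != ('T', 'Mon')))  # 1/2
```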
The Sleeping Beauty Paradox
Rather than giving a statistically rigorous answer, I'd like to modify the question slightly in a way that might convince people whose intuition leads them to be halfers. Some researchers want to put you to sleep. Depending on the secret toss of a fair coin, they will awaken you either once (Heads) or nine hundred and ninety-nine times (Tails). After each awakening they will put you back to sleep with a drug that makes you forget that awakening. When you are awakened, what degree of belief should you have that the outcome of the coin toss was Heads? Following the same logic as before, there could be two camps:
Halfers - the coin toss was fair, and SB knows this, so she should believe there is a one-half chance of heads.
Thousanders - if the experiment was repeated many times, only one in a thousand awakenings would follow a heads toss, so she should believe that the chance of heads is one in a thousand.
I believe that some of the confusion from the question as originally worded arises simply because there isn't much difference between a half and a third. People naturally think of probabilities as somewhat fuzzy concepts (particularly when the probability is a degree-of-belief rather than a frequency) and it's difficult to intuit the difference between degrees of belief of a half and a third. However, the difference between a half and one in a thousand is much more visceral. I claim that it will be intuitively obvious to more people that the answer to this problem is one in a thousand, rather than a half. I would be interested to see a "halfer" defend their argument using this version of the problem instead.
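For anyone who wants to see the thousander count directly, here is a quick Python simulation (my own sketch of this modified experiment): only about one awakening in a thousand follows a heads toss:

```python
import random

random.seed(0)
heads_wakes = total_wakes = 0
for _ in range(20_000):
    if random.random() < 0.5:   # heads: a single awakening
        heads_wakes += 1
        total_wakes += 1
    else:                       # tails: 999 awakenings
        total_wakes += 999

print(heads_wakes / total_wakes)  # ≈ 0.001
```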
The Sleeping Beauty Paradox
I think the error is from the "thirders", and my reason is that the "awakenings" are not equally likely - if you are woken up, it is more likely to be "the first time" you were woken up - a 75% chance, in fact. This means you cannot count the "3 outcomes" (heads1, tails1, tails2) equally. I think this also appears to be a case of $AA=A$, where $A$ is the proposition that SB is woken up. Saying something is true twice is the same thing as saying it once. SB has not been provided with new data, because the prediction from the prior was $Pr(A|I)=1$. Other ways of putting it are $I\implies A$ and $IA=I$. This means $p(H|AI)=p(H|I)=0.5$. The maths are clearly shown in the answer given by @pit847, so I won't repeat them in mine. But consider betting $1$ dollar to guess the outcome at each awakening, where you are given $g$ dollars if you are correct. In this case, you should always guess tails, because this outcome is "weighted": if the coin was tails, then you will bet twice. So your expected profit (call this $U$) if you guess heads is $$E(U|h)=0.5\times (g-1) + 0.5\times (-2) = \frac{g - 3}{2}$$ and similarly for guessing tails $$E(U|t)=0.5\times (-1) + 0.5 \times (2g-2)=\frac{2g - 3}{2}$$ so you gain an extra $\frac{g}{2}$ on average from guessing tails. The "fair bet" amount is $g=\frac{3}{2}=1.5$. Now if we repeat the above but use a third instead of a half, we get $E(U|h)=\frac{g-5}{3}$ and $E(U|t)=\frac{4g-5}{3}$, so we still have that guessing tails is the better strategy. Also, the "fair bet" amount is now $g=\frac{5}{4}=1.25$. So we can say that "thirders" should take a bet where $g=1.4$, but the "halfers" would not take this bet. @Ytsen de Boer has a simulation we can test. We have $498$ heads and $502$ tails, so betting tails would give you $1004 \times 1.4 = 1405.6$ in won bets. But... you had to play $1502$ times to get this - which is a net loss of $96.4$ - so the "thirders" lose! Also note this is actually a slightly favourable outcome for betting tails.
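The arithmetic in the last paragraph is easy to verify. This Python sketch (my own check, using exact fractions) replays the cited tallies of 498 heads and 502 tails with the always-bet-tails strategy at $g=1.4$:

```python
from fractions import Fraction

g = Fraction(7, 5)             # the proposed payout g = 1.4
heads_tosses, tails_tosses = 498, 502

wins = 2 * tails_tosses        # a tails toss means two winning $1 bets
losses = heads_tosses          # a heads toss means one losing $1 bet
profit = wins * (g - 1) - losses
print(profit)                  # -482/5, i.e. a net loss of 96.4
```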
The Sleeping Beauty Paradox
The perception that there is a paradox here stems from the conflation of two different questions:
What is the probability of heads? (1/2)
What is the probability of observing heads upon waking, from the sleeper's perspective? (1/3)
However, these are both valid and different questions, and so there is no paradox to resolve.
The Sleeping Beauty Paradox
Beauty learns no new information, yet her credence for heads is 1/3. The 1/3 answer comes from two applications of the law of total probability (shown below), and the fact that B's credence for the current day being Monday, given she's told the coin landed tails, is 1/2. The decision quadruple defined below can then be modified to show that, depending on the experiment setup, her credence for heads changes. But from the perspective of the awakening B, all these experiments look identical - going to sleep and waking up - thus showing that B learns no new information during the course of the experiment which would allow her to update her belief about heads. Contrast this with the Monty Hall problem, where the host opens a particular door, and not merely some door, allowing the contestant to update her belief using Bayes's Theorem. Let $H$ be the event the coin landed heads, and $M$ the event that the current day is Monday. Then by the law of total probability: (1) $P(H) = P(H|M)P(M)+P(H|\sim M)P(\sim M)= P(H|M)P(M)+P(H|\sim M)(1-P(M))$ (2) $P(M) = P(M|H)P(H)+P(M|\sim H)P(\sim H) = P(M|H)P(H)+P(M|\sim H)(1-P(H))$ forming a linear equation system in $P(H)$ and $P(M)$. Define the decision quadruple $d:=(P(H|M),P(M|H),P(H|\sim M),P(M|\sim H))$ - giving Beauty's credence for different decisions related to certain events. This problem has $d=(1/2, 1, 0, 1/2)$; solving (1) and (2) gives $P(H)=1/3$ and $P(M)=2/3$. If $d=(2/3, 1, 0, 1/2)$, corresponding to flipping a second coin on tails to determine on which single day Beauty should be awoken (but not both), the solution is $P(H)=1/2$ and $P(M)=3/4$.
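The two-equation system above is linear in $P(H)$ and $P(M)$, so it can be solved mechanically for any decision quadruple. A small Python sketch of this (my own, using exact fractions):

```python
from fractions import Fraction as F

def solve(d):
    # d = (P(H|M), P(M|H), P(H|~M), P(M|~H)); unknowns x = P(H), y = P(M)
    a, b, c, e = d
    # The system is x = a*y + c*(1 - y)  and  y = b*x + e*(1 - x).
    # Substituting the second equation into the first and solving for x:
    k = (a - c) * (b - e)
    x = ((a - c) * e + c) / (1 - k)
    y = (b - e) * x + e
    return x, y

print(solve((F(1, 2), F(1), F(0), F(1, 2))))  # P(H) = 1/3, P(M) = 2/3
print(solve((F(2, 3), F(1), F(0), F(1, 2))))  # P(H) = 1/2, P(M) = 3/4
```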
The Sleeping Beauty Paradox
Ignore all the complex theory and just calculate the odds. There are 3 possible events: awake-from-heads, awake-from-tails (first awakening), and awake-from-tails (second awakening). Heads has one event, while tails has two events, so the odds are 1:2 against heads. Since flipping heads and tails are equally likely, we can just divide the number of events from heads by the total number of possible events to get the probability. The probability of heads is 1/3. If you really want 1/2 to be the correct answer, you'll need to find a coin that flips heads 2/3 of the time.
The Sleeping Beauty Paradox
If Sleeping Beauty had to say either heads or tails, she would minimise her expected 0-1 loss function (evaluated each day) by picking tails. If, however, the 0-1 loss function were only evaluated once each trial, then either heads or tails would be equally good.
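This can be written out as a tiny expected-loss computation. The Python sketch below (my own; the fixed-guess strategies and the one-awakening-on-heads, two-on-tails counts are those of the standard problem) compares the two evaluation schemes:

```python
from fractions import Fraction

half = Fraction(1, 2)

def expected_loss(guess, per_day):
    # 0-1 loss for a fixed guess; heads -> 1 awakening, tails -> 2
    loss_h = 0 if guess == 'H' else 1   # loss when the coin is heads
    loss_t = 1 - loss_h                 # loss when the coin is tails
    if per_day:                         # loss charged at every awakening
        return half * 1 * loss_h + half * 2 * loss_t
    return half * loss_h + half * loss_t  # loss charged once per trial

# Evaluated per day, guessing tails is strictly better:
print(expected_loss('H', True), expected_loss('T', True))    # 1 vs 1/2
# Evaluated per trial, the two guesses tie:
print(expected_loss('H', False), expected_loss('T', False))  # 1/2 vs 1/2
```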
The Sleeping Beauty Paradox
The thirders win. Instead of a coin, let's assume a fair die: on Friday, Sleeping Beauty will sleep;
if the die == 1, they will awake her on Saturday;
if the die == 2, they will awake her on Saturday and Sunday;
if the die == 3, they will awake her on Saturday, Sunday and Monday;
if the die == 4, they will awake her on Saturday, Sunday, Monday and Tuesday;
if the die == 5, they will awake her on Saturday, Sunday, Monday, Tuesday and Wednesday;
if the die == 6, they will awake her on Saturday, Sunday, Monday, Tuesday, Wednesday and Thursday.
Every time, they ask her 'to what degree should you believe that the outcome of the die was 1?'
The halfers will say the probability of die == 1 is 1/6.
The thirders will say the probability of die == 1 is 1/21.
But simulation clearly settles the question:

days <- c("saturday", "sunday", "monday", "tuesday", "wednesday", "thursday")
# she will answer that the die was 1 every time
# the trick here is that this is not absolutely random, because every day implies the days before it
number_of_correct_answer <- 0
number_of_days <- 0
for (i in 1:1000){
  dice <- sample(1:6, 1)
  for (item in days[1:dice]){
    number_of_correct_answer <- number_of_correct_answer + (dice == 1)
    number_of_days <- number_of_days + 1
  }
}
number_of_correct_answer / number_of_days  # equals ~1/21
# dividing by 1000 instead is incorrect, because every experiment has more than one day; that gives ~1/6
number_of_correct_answer / 1000

Also we can simulate the coin-toss problem:

days <- c("monday", "tuesday")
number_of_correct_answer <- 0
number_of_tosses <- 0
for (i in 1:1000){
  toss <- sample(1:2, 1)
  for (item in days[1:toss]){
    number_of_correct_answer <- number_of_correct_answer + (toss == 1)
    number_of_tosses <- number_of_tosses + 1
  }
}
number_of_correct_answer / number_of_tosses  # equals ~1/3
# dividing by 1000 instead is incorrect, because every experiment can have more than one awakening; that gives ~1/2
number_of_correct_answer / 1000
The Sleeping Beauty Paradox
The apparent paradox derives from the false premise that probabilities are absolute. In fact, probabilities are relative to the definition of the events being counted. This is an important point to understand for machine learning. We may wish to calculate the probability of something (e.g., a transcription being correct given a piece of audio) through its decomposition into factors (the probabilities of letters at various times, $P(Letter,Time|Audio)$) modeled by a model that looks not at the whole audio but at an instant of it (it calculates $P(Letter|Time,Audio)$). $P(Letter,Time)$ can be equal to $P(Letter|Time)$ because the P's are defined differently. Different P's cannot be put into the same equation, but careful analysis can allow us to convert between the two domains. Both P(Heads)=1/2 w.r.t. worlds (or births), and P(Heads)=1/3 w.r.t. instants (or awakenings) are true, but after being put to sleep Sleeping Beauty can only calculate probabilities with regard to instants, because she knows her memory gets erased. (Before sleeping, she would calculate them with regard to worlds.)
The Sleeping Beauty Paradox
Late to the party, I know. This question is very similar to the Monty Hall problem, where you are asked to guess behind which of 3 doors the prize is. Say you choose Door No. 1. Then the presenter (who knows where the prize is) removes Door No. 3 from the game, and asks if you'd like to switch your guess from Door No. 1 to Door No. 2, or stick with your initial guess. The story goes, you should always switch, because there's a higher probability of the prize being behind Door No. 2.

People usually get confused at this point and point out that the probability of the prize being behind either door is still 1/3. But that's not the point. The question isn't what the initial probability was; the real question is what the chances are that your first guess was correct, versus the chances that you got it wrong. In which case, you should switch, because the chances you got it wrong are 2/3.

As with the Monty Hall problem, things become much clearer if we turn 3 doors into a million doors. If there's a million doors, and you choose Door No. 1, and the presenter closes doors 3 to one million, leaving only Door No. 1 and Door No. 2 in play, would you switch? Of course you would! The chance of you having picked Door No. 1 correctly in the first place was 1 in a million. Chances are you didn't.

In other words, the error in reasoning comes from believing that the probability of performing an action is equal to the probability of an action having been performed, when the context between the two does not make them equivalent statements. Phrased differently, depending on the context and circumstances of the problem, the probability of 'choosing correctly' may not be the same as the probability of 'having chosen correctly'. Similarly with the sleeping beauty problem.
If you were awakened not 2 times in the case of tails, but 1 million times, it makes more sense for you to say "this current awakening I'm experiencing right now is far more likely to be one of those in the middle of a streak of a million awakenings from a Tails throw, than me having just happened to bump onto that single awakening that resulted from Heads".

The argument that it's a fair coin has nothing to do with anything here. The fair coin only tells you what the chances of 'throwing' Heads are, i.e. the probability of having to wake once versus a million times, when you first throw that coin. So if you ask SB before the experiment to choose whether she'll sleep once or a million times before each throw, her probability of 'choosing correctly' is indeed 50%. But from that point on, assuming consecutive experiments, and the fact that SB is not told which experiment she's currently in, at any point that she's woken up, the probability of having 'thrown' Heads is far less, since she's more likely to be woken up from one of the million awakenings than from a single one.

Note that this implies consecutive experiments, as per the phrasing of the problem. If SB is reassured from the beginning that there will only be a single experiment (i.e. only one coin toss), then her belief goes back to 50%, since at any point in time, the fact that she may have woken up many times before now becomes irrelevant. In other words, in this context, the probability of 'choosing correctly' and 'having chosen correctly' again become equivalent.

Note also that any rephrasings using 'betting' are different questions, changing the context entirely. E.g. even in a single experiment, if you were to gain money each time you guessed correctly, you'd obviously go for tails; but this is because the expected reward is higher, not because the probability of tails is any different from heads. Therefore any 'solutions' that introduce betting are only valid to the extent that they collapse the problem to a very particular interpretation.
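The Monty Hall half of the argument is easy to verify by simulation. A short Python sketch (the encoding below is my own minimal one, exploiting the fact that switching wins exactly when the first guess was wrong):

```python
import random

random.seed(1)

def play(switch, doors=3):
    prize = random.randrange(doors)
    guess = random.randrange(doors)
    # The presenter removes every losing door except one,
    # so switching wins exactly when the first guess was wrong.
    return guess != prize if switch else guess == prize

n = 100_000
wins_switch = sum(play(switch=True) for _ in range(n)) / n
wins_stay = sum(play(switch=False) for _ in range(n)) / n
print(wins_switch, wins_stay)  # close to 2/3 and 1/3
```

With doors=1_000_000 the switching win rate climbs to essentially 1, matching the million-door intuition above.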
The Sleeping Beauty Paradox
When Sleeping Beauty is awoken, she knows: a fair coin was tossed to give result $r$; if $r = \mathrm{H}$ then this is the sole subsequent awakening; and if $r = \mathrm{T}$ then this is one of two subsequent awakenings. Call this information $\mathcal{I}$. Nothing else is relevant to her question, which is: what is $\mathrm{prob}(r = \mathrm{H} | \mathcal{I})$?

This is a question of assigning probabilities, as opposed to inferring them. If $w$ is the number of the awakening, then $\mathcal{I}$ is equivalent to
$$ (r = \mathrm{H} \vee r = \mathrm{T}) \wedge (r = \mathrm{H} \implies w = 1) \wedge (r = \mathrm{T} \implies (w = 1 \vee w = 2)) $$
which is logically equivalent to
$$ (r = \mathrm{H} \wedge w = 1) \vee (r = \mathrm{T} \wedge w = 1) \vee (r = \mathrm{T} \wedge w = 2) $$
Sleeping Beauty has no further information. By the principle of insufficient reason, she is obliged to assign a probability of $\frac{1}{3}$ to each disjunct. Therefore, $\mathrm{prob}(r = \mathrm{H} | \mathcal{I}) = \frac{1}{3}$.

PS: On second thoughts, the preceding answer applies when "fair coin" is interpreted to mean merely that there are two possibilities for the coin flip result, $\mathrm{H}$ or $\mathrm{T}$. But probably a more faithful interpretation of the phrase "fair coin" is that it specifies directly that $\mathrm{prob}(r = \mathrm{H} | \mathcal{I}) = \frac{1}{2}$, whereupon the answer is given in the problem statement. In my view, however, statements of this sort are technically inadmissible, because a probability is something which must be worked out from the antecedent and consequent propositions. The phrase "the secret toss of a fair coin" raises the question: how does Sleeping Beauty know it's fair? What information does she have which establishes that? Normally the fairness of an ideal coin is worked out from the fact that there are two possibilities which are informationally equivalent. When the coin flip is mixed up with the wakening factor, we get three possibilities which are informationally equivalent. It's essentially a three-sided ideal coin, so we arrive at the solution above.
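The three-disjunct assignment can be written out explicitly. A small Python sketch (my own encoding of the answer's disjuncts) using exact arithmetic:

```python
from fractions import Fraction

# The three informationally equivalent possibilities (r, w):
outcomes = [("H", 1), ("T", 1), ("T", 2)]

# Principle of insufficient reason: equal weight on each disjunct.
weight = Fraction(1, len(outcomes))
p_heads = sum(weight for r, w in outcomes if r == "H")
print(p_heads)  # 1/3
```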
The Sleeping Beauty Paradox
Here's an argument that being a halfer can lead Sleeping Beauty astray.

When halfer Sleeping Beauty wakes up she believes $\mathbb{P}(heads) = \frac12$. Suppose she suddenly decides she can't take the uncertainty, overpowers the experimenter, and forces him to tell her whether this is the first or second time she has been woken.

Case A: the experimenter tells her this is the second time she has woken. SB must now believe $\mathbb{P}(heads) = 0$. Her subjective $\mathbb{P}(heads)$ has decreased.

Case B: the experimenter tells her this is the first time she has woken. Now her subjective $\mathbb{P}(heads)$ must increase (by the law of total probability). So she must now believe $\mathbb{P}(heads) > \frac12$! I don't see how this can be a sensible conclusion.

Another argument which I haven't seen before: imagine that if tails is flipped then SB is woken up by a bell once and by a drum once, and that if heads is flipped then SB is woken up once by either a bell or a drum with equal probability. If she finds herself woken by a bell, I think it's uncontroversial that Bayes' Theorem tells her that the probability that the coin was heads is 1/3. Likewise if she is woken by a drum. So she's a thirder regardless of which instrument wakes her, which means she should be a thirder even if she ignores the instruments completely. But ignoring the instruments makes this problem identical to the original problem, implying that she should be a thirder in the original problem.
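The bell-and-drum argument can also be checked by brute force. A hedged Python sketch (setup and names are mine) estimating P(heads) among the bell awakenings:

```python
import random

random.seed(2)

bell_heads = 0
bell_total = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    if heads:
        # heads: one awakening, by bell or drum with equal probability
        if random.random() < 0.5:
            bell_heads += 1
            bell_total += 1
    else:
        # tails: one bell awakening and one drum awakening;
        # only the bell one counts toward P(heads | bell)
        bell_total += 1

print(bell_heads / bell_total)  # close to 1/3
```

By symmetry the drum awakenings give the same estimate, which is the answer's point.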
Objective function, cost function, loss function: are they the same thing?
These are not very strict terms and they are highly related. However:

Loss function is usually a function defined on a data point, prediction and label, and measures the penalty. For example:

- Square loss: $l(f(x_i|\theta),y_i) = \left (f(x_i|\theta)-y_i \right )^2$, used in linear regression
- Hinge loss: $l(f(x_i|\theta), y_i) = \max(0, 1-f(x_i|\theta)y_i)$, used in SVM
- 0/1 loss: $l(f(x_i|\theta), y_i) = 1 \iff f(x_i|\theta) \neq y_i$, used in theoretical analysis and in the definition of accuracy

Cost function is usually more general. It might be a sum of loss functions over your training set plus some model complexity penalty (regularization). For example:

- Mean Squared Error: $MSE(\theta) = \frac{1}{N} \sum_{i=1}^N \left (f(x_i|\theta)-y_i \right )^2$
- SVM cost function: $SVM(\theta) = \|\theta\|^2 + C \sum_{i=1}^N \xi_i$ (there are additional constraints connecting $\xi_i$ with $C$ and with the training set)

Objective function is the most general term for any function that you optimize during training. For example, the probability of generating the training set in the maximum likelihood approach is a well-defined objective function, but it is not a loss function nor a cost function (however, you could define an equivalent cost function). For example:

- MLE is a type of objective function (which you maximize)
- Divergence between classes can be an objective function, but it is hardly a cost function, unless you define something artificial, like 1-Divergence, and name it a cost

Long story short, I would say that a loss function is a part of a cost function, which is a type of an objective function.

All that being said, these terms are far from strict, and depending on context, research group, or background, they can shift and be used with different meanings. The main (only?) common thing is that "loss" and "cost" functions are something one wants to minimise, while an objective function is something one wants to optimise (which can be either maximisation or minimisation).
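The distinction can be made concrete in a few lines of Python (a sketch under the answer's definitions; the function names are mine):

```python
# Per-example loss functions
def square_loss(pred, y):
    return (pred - y) ** 2

def hinge_loss(pred, y):  # labels y in {-1, +1}
    return max(0.0, 1.0 - pred * y)

# A cost function aggregates losses over the whole training set (here: MSE)
def mse(preds, ys):
    return sum((p - t) ** 2 for p, t in zip(preds, ys)) / len(ys)

print(square_loss(2.0, 3.0))        # 1.0
print(hinge_loss(0.5, 1))           # 0.5
print(mse([1.0, 2.0], [0.0, 4.0]))  # (1 + 4) / 2 = 2.5
```

A regularized cost in the same spirit would simply add a penalty term to the aggregated loss, e.g. the squared norm of the parameters.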
Objective function, cost function, loss function: are they the same thing?
Quoting from section 4.3 of the "Deep Learning" book by Ian Goodfellow, Yoshua Bengio, and Aaron Courville (emphasis in the original):

> The function we want to minimize or maximize is called the objective function, or criterion. When we are minimizing it, we may also call it the cost function, loss function, or error function. In this book, we use these terms interchangeably, though some machine learning publications assign special meaning to some of these terms.

In this book at least, loss and cost are the same.
Objective function, cost function, loss function: are they the same thing?
In Andrew Ng's words:

> "Finally, the loss function was defined with respect to a single training example. It measures how well you're doing on a single training example. I'm now going to define something called the cost function, which measures how well you're doing on the entire training set. So the cost function J, which is applied to your parameters W and B, is going to be the average, one over m of the sum, of the loss function applied to each of the training examples."
Objective function, cost function, loss function: are they the same thing?
According to Prof. Andrew Ng (see slides, page 11), the function h(X) represents your hypothesis. For fixed fitting parameters theta, it is a function of the features X. I'd say this can also be called the objective function.

The cost function J is a function of the fitting parameters theta: J = J(theta).

According to Hastie et al.'s textbook "Elements of Statistical Learning", p. 37: "We seek a function f(X) for predicting Y given values of the input X" [...], and the loss function L(Y, f(X)) is "a function for penalizing the errors in prediction". So it seems "loss function" is a slightly more general term than "cost function". If you search for "loss" in that PDF, I think they use "cost function" and "loss function" somewhat synonymously; indeed, p. 502: "The situation [in Clustering] is somewhat similar to the specification of a loss or cost function in prediction problems (supervised learning)".

Maybe these terms exist because they evolved independently in different academic communities. "Objective function" is an old term used in operations research and engineering mathematics. "Loss function" might be more in use among statisticians. But I'm speculating here.
Objective function, cost function, loss function: are they the same thing?
The loss function computes the error for a single training example, while the cost function is the average of the loss functions of the entire training set.
Objective function, cost function, loss function: are they the same thing?
Actually, to keep it simple: if you have m training examples (x(1),y(1)), (x(2),y(2)), ..., (x(m),y(m)), then we use the loss function L(ycap,y) to find the loss between ycap and y for a single training example, and if we want to find the loss between ycap and y over the whole training set, we use the cost function.

Note: ycap means the output from our model, and y means the expected output.

Note: credit goes to Andrew Ng (resource: Coursera, Neural Networks and Deep Learning).
Objective function, cost function, loss function: are they the same thing?
The terms cost and loss function are synonymous. Some people also call them the error function. The more general scenario is to define an objective function first, which we want to optimize. This objective function could be to:

- maximize the posterior probabilities (e.g., naive Bayes)
- maximize a fitness function (genetic programming)
- maximize the total reward/value function (reinforcement learning)
- maximize information gain/minimize child node impurities (CART decision tree classification)
- minimize a mean squared error cost (or loss) function (CART, decision tree regression, linear regression, adaptive linear neurons, …)
- maximize log-likelihood or minimize cross-entropy loss (or cost) function
- minimize hinge loss (support vector machine)
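The "maximize log-likelihood / minimize cross-entropy" pair in the list really is one optimization viewed two ways. A small Python sketch (with toy Bernoulli data of my own choosing) showing both searches land on the same parameter:

```python
import math

data = [1] * 7 + [0] * 3  # toy coin-flip data: 7 heads, 3 tails

def log_likelihood(p):
    # objective to maximize
    return sum(math.log(p) if x == 1 else math.log(1 - p) for x in data)

def cross_entropy(p):
    # cost to minimize: just the negated, averaged objective
    return -log_likelihood(p) / len(data)

grid = [i / 100 for i in range(1, 100)]
best_mle = max(grid, key=log_likelihood)
best_ce = min(grid, key=cross_entropy)
print(best_mle, best_ce)  # 0.7 0.7 -- the same optimum either way
```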
Objective function, cost function, loss function: are they the same thing?
To give you a short answer: in my view they are synonymous. However, the term cost function is used more in optimization problems, while loss function is used more in parameter estimation.
Objective function, cost function, loss function: are they the same thing?
How about the score function? It is not directly related to the question, but I wanted to add it here to give a complete reference to all these computational terms. In statistics, the score (or informant) is the gradient of the log-likelihood function with respect to the parameter vector. The term is used particularly with maximum likelihood estimation, e.g., in econometric modeling. So the score function can also be considered an objective function. Reference: https://en.wikipedia.org/wiki/Score_(statistics)
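A concrete sketch (the function names are mine): for Bernoulli data the score is $\sum x_i/p - (n-\sum x_i)/(1-p)$, and it vanishes exactly at the MLE $\hat p = \bar x$.

```python
def score(p, xs):
    """Derivative of the Bernoulli log-likelihood with respect to p."""
    s, n = sum(xs), len(xs)
    return s / p - (n - s) / (1 - p)

xs = [1, 0, 1, 1, 0, 1, 1, 0]        # 5 successes out of 8
p_hat = sum(xs) / len(xs)            # 0.625, the maximum likelihood estimate
print(score(p_hat, xs))              # 0.0: the score is zero at the MLE
print(score(0.5, xs))                # 4.0: positive, so the MLE lies above 0.5
```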
What's the difference between principal component analysis and multidimensional scaling?
Classic Torgerson's metric MDS is actually done by transforming distances into similarities and performing PCA (eigen-decomposition or singular-value decomposition) on those. [The other name of this procedure (distances between objects -> similarities between them -> PCA, whereby loadings are the sought-for coordinates) is Principal Coordinate Analysis, or PCoA.] So, PCA might be called the algorithm of the simplest MDS. Non-metric MDS is based on the iterative ALSCAL or PROXSCAL algorithm (or algorithms similar to them), which is a more versatile mapping technique than PCA and can be applied to metric MDS as well. While PCA retains m important dimensions for you, ALSCAL/PROXSCAL fits the configuration to m dimensions (you pre-define m) and reproduces dissimilarities on the map more directly and accurately than PCA usually can (see the Illustration section below). Thus, MDS and PCA are probably not at the same level to be in line or opposite to each other. PCA is just a method while MDS is a class of analysis. As a mapping, PCA is a particular case of MDS. On the other hand, PCA is a particular case of Factor analysis which, being a data reduction, is more than only a mapping, while MDS is only a mapping. As for your question about metric MDS vs non-metric MDS, there's little to comment on because the answer is straightforward. If I believe my input dissimilarities are so close to being Euclidean distances that a linear transform will suffice to map them in m-dimensional space, I will prefer metric MDS. If I don't believe that, then a monotonic transform is necessary, implying use of non-metric MDS. A note on terminology for the reader. The term Classic(al) MDS (CMDS) can have two different meanings in the vast literature on MDS, so it is ambiguous and should be avoided. One definition is that CMDS is a synonym of Torgerson's metric MDS.
Another definition is that CMDS is any MDS (by any algorithm; metric or nonmetric analysis) with single-matrix input (for there exist models analyzing many matrices at once - the Individual "INDSCAL" model and the Replicated model). Illustration to the answer. Some cloud of points (ellipse) is being mapped on a one-dimensional mds-map. A pair of points is shown in red dots. Iterative or "true" MDS aims straight at reconstructing pairwise distances between objects, for that is the task of any MDS. Various stress or misfit criteria could be minimized between original distances and distances on the map: $\|D_o-D_m\|_2^2$, $\|D_o^2-D_m^2\|_1$, $\|D_o-D_m\|_1$. An algorithm may (non-metric MDS) or may not (metric MDS) include a monotonic transformation along the way. PCA-based MDS (Torgerson's, or PCoA) is not straight. It minimizes the squared distances between objects in the original space and their images on the map. This is not quite the genuine MDS task; it is successful, as MDS, only to the extent to which the discarded junior principal axes are weak. If $P_1$ explains much more variance than $P_2$, the former can alone substantially reflect pairwise distances in the cloud, especially for points lying far apart along the ellipse. Iterative MDS will always win, especially when a very low-dimensional map is wanted. Iterative MDS will also succeed more when a cloud ellipse is thin, but it will fulfill the MDS task better than PCoA regardless. By the property of the double-centration matrix (described here) it appears that PCoA minimizes $\|D_o\|_2^2-\|D_m\|_2^2$, which is different from any of the above minimizations. Once again: PCA projects the cloud's points onto the subspace that is most advantageous for preserving the cloud as a whole. It does not project pairwise distances, the relative locations of points, onto a subspace most saving in that respect, as iterative MDS does. Nevertheless, historically PCoA/PCA is considered among the methods of metric MDS.
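To make the stress criterion concrete, here is a small hypothetical sketch (points and names are mine): a thin 2-D cloud is mapped to one dimension by keeping only the first coordinate, and the raw stress $\|D_o-D_m\|_2^2$ over all pairs comes out small because little distance information is lost.

```python
import math

pts = [(0.0, 0.0), (4.0, 0.5), (8.0, -0.5), (2.0, 0.2)]   # thin 2-D cloud
map1d = [p[0] for p in pts]                                # candidate 1-D map

def d2(a, b):
    """Euclidean distance in the original 2-D space."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

n = len(pts)
stress = sum((d2(pts[i], pts[j]) - abs(map1d[i] - map1d[j])) ** 2
             for i in range(n) for j in range(i + 1, n))
print(round(stress, 4))   # small, since the cloud is nearly one-dimensional
```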
What's the difference between principal component analysis and multidimensional scaling?
Two types of metric MDS The task of metric multidimensional scaling (MDS) can be abstractly formulated as follows: given an $n\times n$ matrix $\mathbf D$ of pairwise distances between $n$ points, find a low-dimensional embedding of data points in $\mathbb R^k$ such that Euclidean distances between them approximate the given distances: $$\|\mathbf x_i - \mathbf x_j\|\approx D_{ij}.$$ If "approximate" here is understood in the usual sense of reconstruction error, i.e. if the goal is to minimize the cost function called "stress": $$\text{Stress} \sim \Big\|\mathbf D - \|\mathbf x_i - \mathbf x_j\|\Big\|^2,$$ then the solution is not equivalent to PCA. The solution is not given by any closed formula, and must be computed by a dedicated iterative algorithm. "Classical MDS", also known as "Torgerson MDS", replaces this cost function by a related but not equivalent one, called "strain": $$\text{Strain} \sim \Big\|\mathbf K_c - \langle\mathbf x_i, \mathbf x_j\rangle\Big\|^2,$$ that seeks to minimize reconstruction error of centered scalar products instead of distances. It turns out that $\mathbf K_c$ can be computed from $\mathbf D$ (if $\mathbf D$ are Euclidean distances) and that minimizing reconstruction error of $\mathbf K_c$ is exactly what PCA does, as shown in the next section. Classical (Torgerson) MDS on Euclidean distances is equivalent to PCA Let the data be collected in matrix $\mathbf X$ of $n \times k$ size with observations in rows and features in columns. Let $\mathbf X_c$ be the centered matrix with subtracted column means. Then PCA amounts to doing singular value decomposition $\mathbf X_c = \mathbf {USV^\top}$, with columns of $\mathbf{US}$ being principal components.
A common way to obtain them is via an eigendecomposition of the covariance matrix $\frac{1}{n}\mathbf X_c^\top \mathbf X^\vphantom{\top}_c$, but another possible way is to perform an eigendecomposition of the Gram matrix $\mathbf K_c = \mathbf X^\vphantom{\top}_c \mathbf X^\top_c=\mathbf U \mathbf S^2 \mathbf U^\top$: principal components are its eigenvectors scaled by the square roots of the respective eigenvalues. It is easy to see that $\mathbf X_c = (\mathbf I - \frac{1}{n}\mathbf 1_n)\mathbf X$, where $\mathbf 1_n$ is a $n \times n$ matrix of ones. From this we immediately get that $$\mathbf K_c = \left(\mathbf I - \frac{\mathbf 1_n}{n}\right)\mathbf K\left(\mathbf I - \frac{\mathbf 1_n}{n}\right) = \mathbf K - \frac{\mathbf 1_n}{n} \mathbf K - \mathbf K \frac{\mathbf 1_n}{n} + \frac{\mathbf 1_n}{n} \mathbf K \frac{\mathbf 1_n}{n},$$ where $\mathbf K = \mathbf X \mathbf X^\top$ is a Gram matrix of uncentered data. This is useful: if we have the Gram matrix of uncentered data we can center it directly, without getting back to $\mathbf X$ itself. This operation is sometimes called double-centering: notice that it amounts to subtracting row means and column means from $\mathbf K$ (and adding back the global mean that gets subtracted twice), so that both row means and column means of $\mathbf K_c$ are equal to zero. Now consider a $n \times n$ matrix $\mathbf D$ of pairwise Euclidean distances with $D_{ij} = \|\mathbf x_i - \mathbf x_j\|$. Can this matrix be converted into $\mathbf K_c$ in order to perform PCA? Turns out that the answer is yes. Indeed, by the law of cosines we see that \begin{align} D_{ij}^2 = \|\mathbf x_i - \mathbf x_j\|^2 &= \|\mathbf x_i - \bar{\mathbf x}\|^2 + \|\mathbf x_j - \bar{\mathbf x}\|^2 - 2\langle\mathbf x_i - \bar{\mathbf x}, \mathbf x_j - \bar{\mathbf x} \rangle \\ &= \|\mathbf x_i - \bar{\mathbf x}\|^2 + \|\mathbf x_j - \bar{\mathbf x}\|^2 - 2[K_c]_{ij}. 
\end{align} So $-\mathbf D^2/2$ differs from $\mathbf K_c$ only by some row and column constants (here $\mathbf D^2$ means element-wise square!). Meaning that if we double-center it, we will get $\mathbf K_c$: $$\mathbf K_c = -\left(\mathbf I - \frac{\mathbf 1_n}{n}\right)\frac{\mathbf D^2}{2}\left(\mathbf I - \frac{\mathbf 1_n}{n}\right).$$ Which means that starting from the matrix of pairwise Euclidean distances $\mathbf D$ we can perform PCA and get principal components. This is exactly what classical (Torgerson) MDS does: $\mathbf D \mapsto \mathbf K_c \mapsto \mathbf{US}$, so its outcome is equivalent to PCA. Of course, if any other distance measure is chosen instead of $\|\mathbf x_i - \mathbf x_j\|$, then classical MDS will result in something else. Reference: The Elements of Statistical Learning, section 18.5.2.
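The key identity above, $\mathbf K_c = -\frac{1}{2}\mathbf J \mathbf D^2 \mathbf J$ with $\mathbf J = \mathbf I - \mathbf 1_n/n$, can be checked numerically with a small pure-Python sketch (all helper names are mine, not from the book):

```python
def double_center(A):
    """J A J: subtract row and column means, add back the grand mean."""
    n = len(A)
    row = [sum(r) / n for r in A]
    col = [sum(A[i][j] for i in range(n)) / n for j in range(n)]
    grand = sum(row) / n
    return [[A[i][j] - row[i] - col[j] + grand for j in range(n)]
            for i in range(n)]

def gram_centered(X):
    """K_c = X_c X_c^T for the column-centered data matrix X_c."""
    n, k = len(X), len(X[0])
    mean = [sum(x[j] for x in X) / n for j in range(k)]
    Xc = [[x[j] - mean[j] for j in range(k)] for x in X]
    return [[sum(a * b for a, b in zip(r, s)) for s in Xc] for r in Xc]

def sq_dists(X):
    """Matrix of squared pairwise Euclidean distances, D^2 element-wise."""
    return [[sum((a - b) ** 2 for a, b in zip(x, y)) for y in X] for x in X]

X = [[0.0, 0.0], [1.0, 0.0], [1.0, 2.0], [3.0, 1.0]]
Kc = gram_centered(X)
Kc_from_D = double_center([[-d / 2 for d in row] for row in sq_dists(X)])

ok = all(abs(Kc[i][j] - Kc_from_D[i][j]) < 1e-9
         for i in range(4) for j in range(4))
print(ok)   # True: both routes yield the same K_c, so classical MDS = PCA
```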
What's the difference between principal component analysis and multidimensional scaling?
Uhm... quite different. In PCA, you are given the multivariate continuous data (a multivariate vector for each subject), and you are trying to figure out if you don't need that many dimensions to conceptualize them. In (metric) MDS, you are given the matrix of distances between the objects, and you are trying to figure out what the locations of these objects in space are (and whether you need a 1D, 2D, 3D, etc. space). In non-metric MDS, you only know that objects 1 and 2 are more distant than objects 2 and 3, so you try to quantify that, on top of finding the dimensions and locations. With a notable stretch of imagination, you can say that a common goal of PCA and MDS is to visualize objects in 2D or 3D. But given how different the inputs are, these methods won't be discussed as even distantly related in any multivariate textbook. I would guess that you can convert the data usable for PCA into data usable for MDS (say, by computing Mahalanobis distances between objects, using the sample covariance matrix), but that would immediately result in a loss of information: MDS is only defined up to location and rotation, and the latter two can be done more informatively with PCA. If I were to briefly show someone the results of non-metric MDS and wanted to give them a rough idea of what it does without going into detail, I could say: Given the measures of similarity or dissimilarity that we have, we are trying to map our objects/subjects in such a way that the 'cities' they make up have distances between them that are as close to these similarity measures as we can make them. We could only map them perfectly in $n$-dimensional space, though, so I am representing the two most informative dimensions here -- kinda like what you would do in PCA if you showed a picture with the two leading principal components.
What's the difference between principal component analysis and multidimensional scaling?
PCA yields the EXACT same results as classical MDS if Euclidean distance is used. I'm quoting Cox & Cox (2001), p 43-44: There is a duality between a principal components analysis and PCO [principal coordinates analysis, aka classical MDS] where dissimilarities are given by Euclidean distance. The section in Cox & Cox explains it pretty clearly: Imagine you have $X$ = attributes of $n$ products by $p$ dimensions, mean centered. PCA is attained by finding the eigenvectors of the covariance matrix ~ $X'X$ (divided by $n-1$) -- call the eigenvectors $\xi$ and the eigenvalues $\mu$. MDS is attained by first converting $X$ into a distance matrix -- here, Euclidean distance, i.e., $XX'$ -- then finding its eigenvectors -- call the eigenvectors $v$ and the eigenvalues $\lambda$. p 43: "It is a well known result that the eigenvalues of $XX'$ are the same as those for $X'X$, together with an extra $n-p$ zero eigenvalues." So, for $i < p$, $\mu_i = \lambda_i$. Going back to the definition of eigenvectors, consider the $i^{th}$ eigenvalue: $XX'v_i = \lambda_i v_i$. Premultiplying $v_i$ by $X'$, we get $(X'X)X'v_i = \lambda_i X'v_i$. We also have $X'X \xi_i = \mu_i \xi_i$. Since $\lambda_i = \mu_i$, we get that $\xi_i = X'v_i$ for $i<p$.
What's the difference between principal component analysis and multidimensional scaling?
Comparison: metric MDS gives the SAME result as PCA, procedurally, when we look at the way SVD is used to obtain the optimum. But the preserved high-dimensional criteria are different. PCA uses a centered covariance matrix, while MDS uses a Gram matrix obtained by double-centering the distance matrix. To put the difference mathematically: PCA can be viewed as maximizing $Tr(X^T(I-\frac{1}{n}ee^T)X)$ over $X$ under the constraint that $X$ is orthogonal, thereby giving the axes/principal components. In multidimensional scaling a Gram matrix $G$ (a p.s.d. matrix that can be represented as $Z^TZ$) is computed from the Euclidean distances between rows of $X$, and the following is minimized over $Y$: $||G-Y^TY||_{F}^{2}$.
Why are p-values uniformly distributed under the null hypothesis?
To clarify a bit. The p-value is uniformly distributed when the null hypothesis is true and all other assumptions are met. The reason for this is really the definition of alpha as the probability of a type I error. We want the probability of rejecting a true null hypothesis to be alpha; we reject when the observed $\text{p-value} < \alpha$, and the only way this happens for every value of alpha is when the p-value comes from a uniform distribution. The whole point of using the correct distribution (normal, t, F, chi-squared, etc.) is to transform from the test statistic to a uniform p-value. If the null hypothesis is false then the distribution of the p-value will (hopefully) be more weighted towards 0. The Pvalue.norm.sim and Pvalue.binom.sim functions in the TeachingDemos package for R will simulate several data sets, compute the p-values and plot them to demonstrate this idea. Also see: Murdoch, D., Tsai, Y., and Adcock, J. (2008). P-Values are Random Variables. The American Statistician, 62, 242-245, for some more details. Edit: Since people are still reading this answer and commenting, I thought that I would address @whuber's comment. It is true that when using a composite null hypothesis like $\mu_1 \leq \mu_2$, the p-values will only be uniformly distributed when the two means are exactly equal, and will not be uniform if $\mu_1$ is any value that is less than $\mu_2$. This can easily be seen using the Pvalue.norm.sim function, setting it to do a one-sided test and simulating with the simulation and hypothesized means different (but in the direction that makes the null true). As far as statistical theory goes, this does not matter. Consider if I claimed that I am taller than every member of your family. One way to test this claim would be to compare my height to the height of each member of your family one at a time. Another option would be to find the member of your family who is the tallest and compare their height with mine. 
If I am taller than that one person then I am taller than the rest as well and my claim is true; if I am not taller than that one person then my claim is false. Testing a composite null can be seen as a similar process: rather than testing all the possible combinations where $\mu_1 \leq \mu_2$, we can test just the equality part, because if we can reject $\mu_1 = \mu_2$ in favour of $\mu_1 > \mu_2$ then we know that we can also reject all the possibilities of $\mu_1 < \mu_2$. If we look at the distribution of p-values for cases where $\mu_1 < \mu_2$ then the distribution will not be perfectly uniform but will have more values closer to 1 than to 0, meaning that the probability of a type I error will be less than the selected $\alpha$ value, making it a conservative test. The uniform becomes the limiting distribution as $\mu_1$ gets closer to $\mu_2$ (those who are more current on the stat-theory terms could probably state this better in terms of a distributional supremum or something like that). So by constructing our test assuming the equal part of the null even when the null is composite, we are designing our test to have a probability of a type I error that is at most $\alpha$ for any conditions where the null is true.
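The rejection-rate argument above can be checked by simulation. A Python sketch (a one-sided z-test with known variance is an assumed setup here, chosen only because its p-value has a closed form): under the boundary null the p-values are uniform, so the rejection rate tracks alpha, and with a strictly interior null value the same test becomes conservative.

```python
import math
import random

# One-sided z-test of H0: mu <= 0 vs H1: mu > 0, with known sigma = 1.
# When mu = 0 exactly (the boundary of the composite null), p-values are
# uniform; when mu = -0.5 (interior of the null), the test is conservative.
random.seed(1)

def z_test_pvalue(sample):
    z = sum(sample) / math.sqrt(len(sample))      # z = sqrt(n) * xbar
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))  # p = 1 - Phi(z)

pvals = [z_test_pvalue([random.gauss(0, 1) for _ in range(20)])
         for _ in range(5000)]
for alpha in (0.05, 0.25, 0.5):
    frac = sum(p < alpha for p in pvals) / len(pvals)
    assert abs(frac - alpha) < 0.03   # rejection rate tracks alpha

# Interior null value: far fewer than 5% of p-values fall below 0.05
pvals_int = [z_test_pvalue([random.gauss(-0.5, 1) for _ in range(20)])
             for _ in range(2000)]
assert sum(p < 0.05 for p in pvals_int) / len(pvals_int) < 0.01
```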
Why are p-values uniformly distributed under the null hypothesis?
To clarify a bit. The p-value is uniformly distributed when the null hypothesis is true and all other assumptions are met. The reason for this is really the definition of alpha as the probability of
Why are p-values uniformly distributed under the null hypothesis? To clarify a bit. The p-value is uniformly distributed when the null hypothesis is true and all other assumptions are met. The reason for this is really the definition of alpha as the probability of a type I error. We want the probability of rejecting a true null hypothesis to be alpha; we reject when the observed $\text{p-value} < \alpha$, and the only way this happens for every value of alpha is when the p-value comes from a uniform distribution. The whole point of using the correct distribution (normal, t, F, chi-squared, etc.) is to transform from the test statistic to a uniform p-value. If the null hypothesis is false then the distribution of the p-value will (hopefully) be more weighted towards 0. The Pvalue.norm.sim and Pvalue.binom.sim functions in the TeachingDemos package for R will simulate several data sets, compute the p-values and plot them to demonstrate this idea. Also see: Murdoch, D., Tsai, Y., and Adcock, J. (2008). P-Values are Random Variables. The American Statistician, 62, 242-245, for some more details. Edit: Since people are still reading this answer and commenting, I thought that I would address @whuber's comment. It is true that when using a composite null hypothesis like $\mu_1 \leq \mu_2$, the p-values will only be uniformly distributed when the two means are exactly equal, and will not be uniform if $\mu_1$ is any value that is less than $\mu_2$. This can easily be seen using the Pvalue.norm.sim function, setting it to do a one-sided test and simulating with the simulation and hypothesized means different (but in the direction that makes the null true). As far as statistical theory goes, this does not matter. Consider if I claimed that I am taller than every member of your family. One way to test this claim would be to compare my height to the height of each member of your family one at a time. 
Another option would be to find the member of your family who is the tallest and compare their height with mine. If I am taller than that one person then I am taller than the rest as well and my claim is true; if I am not taller than that one person then my claim is false. Testing a composite null can be seen as a similar process: rather than testing all the possible combinations where $\mu_1 \leq \mu_2$, we can test just the equality part, because if we can reject $\mu_1 = \mu_2$ in favour of $\mu_1 > \mu_2$ then we know that we can also reject all the possibilities of $\mu_1 < \mu_2$. If we look at the distribution of p-values for cases where $\mu_1 < \mu_2$ then the distribution will not be perfectly uniform but will have more values closer to 1 than to 0, meaning that the probability of a type I error will be less than the selected $\alpha$ value, making it a conservative test. The uniform becomes the limiting distribution as $\mu_1$ gets closer to $\mu_2$ (those who are more current on the stat-theory terms could probably state this better in terms of a distributional supremum or something like that). So by constructing our test assuming the equal part of the null even when the null is composite, we are designing our test to have a probability of a type I error that is at most $\alpha$ for any conditions where the null is true.
Why are p-values uniformly distributed under the null hypothesis? To clarify a bit. The p-value is uniformly distributed when the null hypothesis is true and all other assumptions are met. The reason for this is really the definition of alpha as the probability of
969
Why are p-values uniformly distributed under the null hypothesis?
Under the null hypothesis, your test statistic $T$ has cumulative distribution function $F(t)$ (e.g., standard normal). We show that the p-value $P=F(T)$ has a uniform distribution: $$\begin{equation*} \Pr(P < p) = \Pr(F^{-1}(P) < F^{-1}(p)) = \Pr(T < t) \equiv p, \end{equation*}$$ writing $t = F^{-1}(p)$; in other words, $P$ is distributed uniformly. This holds so long as $F(\cdot)$ is invertible, a necessary condition of which is that $T$ is not a discrete random variable. This result is general: applying an invertible CDF to its own random variable always yields a uniform distribution on $[0,1]$.
Why are p-values uniformly distributed under the null hypothesis?
Under the null hypothesis, your test statistic $T$ has the distribution $F(t)$ (e.g., standard normal). We show that the p-value $P=F(T)$ has a probability distribution $$\begin{equation*} \Pr(P < p)
Why are p-values uniformly distributed under the null hypothesis? Under the null hypothesis, your test statistic $T$ has cumulative distribution function $F(t)$ (e.g., standard normal). We show that the p-value $P=F(T)$ has a uniform distribution: $$\begin{equation*} \Pr(P < p) = \Pr(F^{-1}(P) < F^{-1}(p)) = \Pr(T < t) \equiv p, \end{equation*}$$ writing $t = F^{-1}(p)$; in other words, $P$ is distributed uniformly. This holds so long as $F(\cdot)$ is invertible, a necessary condition of which is that $T$ is not a discrete random variable. This result is general: applying an invertible CDF to its own random variable always yields a uniform distribution on $[0,1]$.
Why are p-values uniformly distributed under the null hypothesis? Under the null hypothesis, your test statistic $T$ has the distribution $F(t)$ (e.g., standard normal). We show that the p-value $P=F(T)$ has a probability distribution $$\begin{equation*} \Pr(P < p)
970
Why are p-values uniformly distributed under the null hypothesis?
Let $T$ denote the random variable with cumulative distribution function $F(t) \equiv \Pr(T<t)$ for all $t$. Assuming that $F$ is invertible, we can derive the distribution of the random p-value $P = F(T)$ as follows: $$ \Pr(P<p) = \Pr(F(T) < p) = \Pr(T < F^{-1}(p)) = F(F^{-1}(p)) = p, $$ from which we can conclude that the distribution of $P$ is uniform on $[0,1]$. This answer is similar to Charlie's, but avoids having to define $t = F^{-1}(p)$.
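A quick numerical check of this probability integral transform (a Python sketch; an Exp(1) statistic is an arbitrary choice, used because its CDF $F(t) = 1 - e^{-t}$ has a simple closed form):

```python
import math
import random

# Draws from Exp(1), mapped through their own CDF F(t) = 1 - exp(-t),
# should form a uniform sample on [0, 1].
random.seed(2)
draws = [random.expovariate(1.0) for _ in range(20000)]
P = [1 - math.exp(-t) for t in draws]

# The empirical CDF of P should be close to the identity on [0, 1]
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    frac = sum(v < p for v in P) / len(P)
    assert abs(frac - p) < 0.02
```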
Why are p-values uniformly distributed under the null hypothesis?
Let $T$ denote the random variable with cumulative distribution function $F(t) \equiv \Pr(T<t)$ for all $t$. Assuming that $F$ is invertible we can derive distribution of the random p-value $P = F(T)$
Why are p-values uniformly distributed under the null hypothesis? Let $T$ denote the random variable with cumulative distribution function $F(t) \equiv \Pr(T<t)$ for all $t$. Assuming that $F$ is invertible, we can derive the distribution of the random p-value $P = F(T)$ as follows: $$ \Pr(P<p) = \Pr(F(T) < p) = \Pr(T < F^{-1}(p)) = F(F^{-1}(p)) = p, $$ from which we can conclude that the distribution of $P$ is uniform on $[0,1]$. This answer is similar to Charlie's, but avoids having to define $t = F^{-1}(p)$.
Why are p-values uniformly distributed under the null hypothesis? Let $T$ denote the random variable with cumulative distribution function $F(t) \equiv \Pr(T<t)$ for all $t$. Assuming that $F$ is invertible we can derive distribution of the random p-value $P = F(T)$
971
Why are p-values uniformly distributed under the null hypothesis?
I think the answer as to "Why are p-values uniformly distributed under the null hypothesis?" has been sufficiently discussed from a mathematical perspective. What I thought was missing is a visual explanation of this and the idea of thinking of p-values as areas to the left of a set of quantiles under a given continuous distribution (probability density function). By quantiles I mean cut-off points along a distribution (in this example the standard normal distribution), which split the distribution into equal parts containing exactly the same area under the curve. For this example, I generated 100 random data points from the standard normal distribution with a mean of 0 and a standard deviation of 1, $\mathcal{N}(\mu = 0, \sigma = 1)$. Then I plotted those points in a histogram and we can see a bell-shaped distribution forming (Fig. 1A). Then I calculated the p-values of those points, i.e. the areas to the left of those points given the standard normal distribution, plotted those p-values in a histogram (Fig. 1B), and a uniform(ish) distribution emerges when binning those p-values into 0.1 intervals. This step, i.e. the step from Fig 1A to Fig 1B, is puzzling for many people and has been for me as well for some time - until I started thinking of p-values as areas under the curve. My thought was that if I split the standard normal distribution into equal chunks containing the same area (in this case 0.1 to match the histogram in Fig 1B), I will have larger intervals in the tails (Fig 1C). Now if I go back to Fig 1A, I will be able to fit all points ranging from -4 to -1.28 (the interval in Fig 1C) into the first bin of Fig 1B, since they all result in areas (or p-values) of less than or equal to 0.1. As the density of points increases towards the mean, the intervals that cover an area of 0.1 become increasingly smaller (Fig 1C), but the number of points in those intervals remains roughly equal and in this case matches the count in Fig 1B. 
Once I understood this, it was also easy for me to explain why a random sample of 100 points from a normal distribution with mean of 0 and a standard deviation of 3, $\mathcal{N}(\mu = 0, \sigma = 3)$, results in a higher frequency of p-values around 0 and 1, i.e. in the tails (Fig 2B). The reason is that the p-values are calculated based on the standard normal distribution, yet the sample comes from a normal distribution with mean of 0 and a standard deviation of 3. This will result in many more points in the tails than there would be for a sample coming from the standard normal distribution. I hope this was not overly confusing and added some value to this thread.
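The figure logic can be reproduced numerically (a Python sketch; bin counts stand in for the histograms of Figs 1B and 2B):

```python
import math
import random

# p-values of N(0, 1) draws, computed under the standard normal, fall
# roughly evenly into ten 0.1-wide bins; draws from N(0, 3) instead pile
# up in the two tail bins.
random.seed(3)

def Phi(x):  # standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bin_counts(sigma, n=10000):
    counts = [0] * 10
    for _ in range(n):
        p = Phi(random.gauss(0, sigma))
        counts[min(int(p * 10), 9)] += 1
    return counts

uniform_counts = bin_counts(1)   # roughly 1000 per bin
inflated = bin_counts(3)         # heavy first and last bins

assert max(uniform_counts) - min(uniform_counts) < 300
assert inflated[0] > 2 * uniform_counts[0]
assert inflated[9] > 2 * uniform_counts[9]
```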
Why are p-values uniformly distributed under the null hypothesis?
I think the answer as to "Why are p-values uniformly distributed under the null hypothesis?" has been sufficiently discussed from a mathematical perspective. What I thought is missing is a visual expl
Why are p-values uniformly distributed under the null hypothesis? I think the answer as to "Why are p-values uniformly distributed under the null hypothesis?" has been sufficiently discussed from a mathematical perspective. What I thought was missing is a visual explanation of this and the idea of thinking of p-values as areas to the left of a set of quantiles under a given continuous distribution (probability density function). By quantiles I mean cut-off points along a distribution (in this example the standard normal distribution), which split the distribution into equal parts containing exactly the same area under the curve. For this example, I generated 100 random data points from the standard normal distribution with a mean of 0 and a standard deviation of 1, $\mathcal{N}(\mu = 0, \sigma = 1)$. Then I plotted those points in a histogram and we can see a bell-shaped distribution forming (Fig. 1A). Then I calculated the p-values of those points, i.e. the areas to the left of those points given the standard normal distribution, plotted those p-values in a histogram (Fig. 1B), and a uniform(ish) distribution emerges when binning those p-values into 0.1 intervals. This step, i.e. the step from Fig 1A to Fig 1B, is puzzling for many people and has been for me as well for some time - until I started thinking of p-values as areas under the curve. My thought was that if I split the standard normal distribution into equal chunks containing the same area (in this case 0.1 to match the histogram in Fig 1B), I will have larger intervals in the tails (Fig 1C). Now if I go back to Fig 1A, I will be able to fit all points ranging from -4 to -1.28 (the interval in Fig 1C) into the first bin of Fig 1B, since they all result in areas (or p-values) of less than or equal to 0.1. 
As the density of points increases towards the mean, the intervals that cover an area of 0.1 become increasingly smaller (Fig 1C), but the number of points in those intervals remains roughly equal and in this case matches the count in Fig 1B. Once I understood this, it was also easy for me to explain why a random sample of 100 points from a normal distribution with mean of 0 and a standard deviation of 3, $\mathcal{N}(\mu = 0, \sigma = 3)$, results in a higher frequency of p-values around 0 and 1, i.e. in the tails (Fig 2B). The reason is that the p-values are calculated based on the standard normal distribution, yet the sample comes from a normal distribution with mean of 0 and a standard deviation of 3. This will result in many more points in the tails than there would be for a sample coming from the standard normal distribution. I hope this was not overly confusing and added some value to this thread.
Why are p-values uniformly distributed under the null hypothesis? I think the answer as to "Why are p-values uniformly distributed under the null hypothesis?" has been sufficiently discussed from a mathematical perspective. What I thought is missing is a visual expl
972
Why are p-values uniformly distributed under the null hypothesis?
Simple simulation of the distribution of p-values in the case of a linear regression between two independent variables:
# estimated model is: y = a0 + a1*x + e
obs <- 100      # obs in each single regression
Nloops <- 1000  # number of experiments
output <- numeric(Nloops)  # vector holding p-values of the estimated a1 parameter from Nloops experiments
for (i in seq_along(output)) {
  x <- rnorm(obs)
  y <- rnorm(obs)  # x and y are independent, so the null hypothesis is true
  output[i] <- (summary(lm(y ~ x))$coefficients)[2, 4]  # we grab the p-value of a1
  if (i %% 100 == 0) { cat(i, "from", Nloops, date(), "\n") }  # after each 100 iterations info is printed
}
plot(hist(output), main = "Histogram of a1 p-values")
ks.test(output, "punif")  # null hypothesis is that the output distr. is uniform
Why are p-values uniformly distributed under the null hypothesis?
Simple simulation of distribution of p-values in case of linear regression between two independent variables : # estimated model is: y = a0 + a1*x + e obs<-100 # obs in each single reg
Why are p-values uniformly distributed under the null hypothesis? Simple simulation of the distribution of p-values in the case of a linear regression between two independent variables:
# estimated model is: y = a0 + a1*x + e
obs <- 100      # obs in each single regression
Nloops <- 1000  # number of experiments
output <- numeric(Nloops)  # vector holding p-values of the estimated a1 parameter from Nloops experiments
for (i in seq_along(output)) {
  x <- rnorm(obs)
  y <- rnorm(obs)  # x and y are independent, so the null hypothesis is true
  output[i] <- (summary(lm(y ~ x))$coefficients)[2, 4]  # we grab the p-value of a1
  if (i %% 100 == 0) { cat(i, "from", Nloops, date(), "\n") }  # after each 100 iterations info is printed
}
plot(hist(output), main = "Histogram of a1 p-values")
ks.test(output, "punif")  # null hypothesis is that the output distr. is uniform
Why are p-values uniformly distributed under the null hypothesis? Simple simulation of distribution of p-values in case of linear regression between two independent variables : # estimated model is: y = a0 + a1*x + e obs<-100 # obs in each single reg
973
Percentile vs quantile vs quartile
0 quartile = 0 quantile = 0 percentile
1 quartile = 0.25 quantile = 25 percentile
2 quartile = 0.5 quantile = 50 percentile (median)
3 quartile = 0.75 quantile = 75 percentile
4 quartile = 1 quantile = 100 percentile
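This table can be verified with the Python standard library (the `inclusive` interpolation method is an arbitrary choice here; any single method makes the three notions line up):

```python
import statistics

# The first quartile, the 0.25 quantile and the 25th percentile are the
# same cut point of the data, using one interpolation method throughout.
data = list(range(1, 101))  # a made-up sample

quartiles = statistics.quantiles(data, n=4, method='inclusive')      # 3 cut points
percentiles = statistics.quantiles(data, n=100, method='inclusive')  # 99 cut points

assert quartiles[0] == percentiles[24]                             # Q1 == 25th percentile
assert quartiles[1] == percentiles[49] == statistics.median(data)  # Q2 == median
assert quartiles[2] == percentiles[74]                             # Q3 == 75th percentile
```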
Percentile vs quantile vs quartile
0 quartile = 0 quantile = 0 percentile 1 quartile = 0.25 quantile = 25 percentile 2 quartile = .5 quantile = 50 percentile (median) 3 quartile = .75 quantile = 75 percentile 4 quartile = 1 quantile =
Percentile vs quantile vs quartile
0 quartile = 0 quantile = 0 percentile
1 quartile = 0.25 quantile = 25 percentile
2 quartile = 0.5 quantile = 50 percentile (median)
3 quartile = 0.75 quantile = 75 percentile
4 quartile = 1 quantile = 100 percentile
Percentile vs quantile vs quartile 0 quartile = 0 quantile = 0 percentile 1 quartile = 0.25 quantile = 25 percentile 2 quartile = .5 quantile = 50 percentile (median) 3 quartile = .75 quantile = 75 percentile 4 quartile = 1 quantile =
974
Percentile vs quantile vs quartile
Percentiles go from $0$ to $100$. Quartiles go from $1$ to $4$ (or $0$ to $4$). Quantiles can go from anything to anything. Percentiles and quartiles are examples of quantiles.
Percentile vs quantile vs quartile
Percentiles go from $0$ to $100$. Quartiles go from $1$ to $4$ (or $0$ to $4$). Quantiles can go from anything to anything. Percentiles and quartiles are examples of quantiles.
Percentile vs quantile vs quartile Percentiles go from $0$ to $100$. Quartiles go from $1$ to $4$ (or $0$ to $4$). Quantiles can go from anything to anything. Percentiles and quartiles are examples of quantiles.
Percentile vs quantile vs quartile Percentiles go from $0$ to $100$. Quartiles go from $1$ to $4$ (or $0$ to $4$). Quantiles can go from anything to anything. Percentiles and quartiles are examples of quantiles.
975
Percentile vs quantile vs quartile
In order to define these terms rigorously, it is helpful to first define the quantile function, which is also known as the inverse cumulative distribution function. Recall that for a random variable $X$, the cumulative distribution function $F_X$ is defined by the equation $$ F_X(x) := \Pr(X \le x). $$ The quantile function is defined by the equation $$ Q(p)\,=\,\inf\left\{ x\in \mathbb{R} : p \le F(x) \right\}. $$ Now that we have got these definitions out of the way, we can define the terms: percentile: a measure used in statistics indicating the value below which a given percentage of observations in a group of observations fall. Example: the 20th percentile of $X$ is the value $Q_X(0.20)$. quantile: values taken at regular intervals of the quantile function of a random variable. For instance, for some integer $k \geq 2$, the $k$-quantiles are defined as the values $Q_X(j/k)$ for $j = 1, 2, \ldots, k - 1$. Example: the 5-quantiles of $X$ are the values $Q_X(0.2), Q_X(0.4), Q_X(0.6), Q_X(0.8)$. quartile: a special case of quantile, in particular the 4-quantiles. The quartiles of $X$ are the values $Q_X(0.25), Q_X(0.5), Q_X(0.75)$. It may be helpful for you to work out an example of what these definitions mean when, say, $X \sim U[0,100]$, i.e. $X$ is uniformly distributed from 0 to 100. References from Wikipedia: https://en.wikipedia.org/wiki/Quantile https://en.wikipedia.org/wiki/Quantile_function https://en.wikipedia.org/wiki/Quartile https://en.wikipedia.org/wiki/Percentile
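A small numerical sketch of the generalized-inverse definition (Python; the infimum is approximated by a grid scan, and $X \sim U[0,100]$ is used because its quantile function has the closed form $Q(p) = 100p$):

```python
# Q(p) = inf{x : p <= F(x)} for X ~ U[0, 100]; the infimum is approximated
# by scanning a fine grid from the left, which matches 100*p closely.
def F(x):  # CDF of U[0, 100]
    return min(max(x / 100.0, 0.0), 1.0)

def Q(p, step=0.001):
    x = 0.0
    while F(x) < p:  # stop at the smallest grid point with p <= F(x)
        x += step
    return x

for p in (0.2, 0.25, 0.5, 0.75):
    assert abs(Q(p) - 100 * p) < 0.2
```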
Percentile vs quantile vs quartile
In order to define these terms rigorously, it is helpful to first define the quantile function which is also known as the inverse cumulative distribution function. Recall that for a random variable $X
Percentile vs quantile vs quartile In order to define these terms rigorously, it is helpful to first define the quantile function, which is also known as the inverse cumulative distribution function. Recall that for a random variable $X$, the cumulative distribution function $F_X$ is defined by the equation $$ F_X(x) := \Pr(X \le x). $$ The quantile function is defined by the equation $$ Q(p)\,=\,\inf\left\{ x\in \mathbb{R} : p \le F(x) \right\}. $$ Now that we have got these definitions out of the way, we can define the terms: percentile: a measure used in statistics indicating the value below which a given percentage of observations in a group of observations fall. Example: the 20th percentile of $X$ is the value $Q_X(0.20)$. quantile: values taken at regular intervals of the quantile function of a random variable. For instance, for some integer $k \geq 2$, the $k$-quantiles are defined as the values $Q_X(j/k)$ for $j = 1, 2, \ldots, k - 1$. Example: the 5-quantiles of $X$ are the values $Q_X(0.2), Q_X(0.4), Q_X(0.6), Q_X(0.8)$. quartile: a special case of quantile, in particular the 4-quantiles. The quartiles of $X$ are the values $Q_X(0.25), Q_X(0.5), Q_X(0.75)$. It may be helpful for you to work out an example of what these definitions mean when, say, $X \sim U[0,100]$, i.e. $X$ is uniformly distributed from 0 to 100. References from Wikipedia: https://en.wikipedia.org/wiki/Quantile https://en.wikipedia.org/wiki/Quantile_function https://en.wikipedia.org/wiki/Quartile https://en.wikipedia.org/wiki/Percentile
Percentile vs quantile vs quartile In order to define these terms rigorously, it is helpful to first define the quantile function which is also known as the inverse cumulative distribution function. Recall that for a random variable $X
976
Percentile vs quantile vs quartile
From wiki page: https://en.wikipedia.org/wiki/Quantile Some q-quantiles have special names:
The only 2-quantile is called the median
The 3-quantiles are called tertiles or terciles → T
The 4-quantiles are called quartiles → Q
The 5-quantiles are called quintiles → QU
The 6-quantiles are called sextiles → S
The 8-quantiles are called octiles → O (as added by @NickCox - now on wiki page also)
The 10-quantiles are called deciles → D
The 12-quantiles are called duodeciles → Dd
The 20-quantiles are called vigintiles → V
The 100-quantiles are called percentiles → P
The 1000-quantiles are called permilles → Pr
The difference between quantile, quartile and percentile becomes obvious.
Percentile vs quantile vs quartile
From wiki page: https://en.wikipedia.org/wiki/Quantile Some q-quantiles have special names: The only 2-quantile is called the median The 3-quantiles are called tertiles or terciles → T The 4-quantile
Percentile vs quantile vs quartile From wiki page: https://en.wikipedia.org/wiki/Quantile Some q-quantiles have special names:
The only 2-quantile is called the median
The 3-quantiles are called tertiles or terciles → T
The 4-quantiles are called quartiles → Q
The 5-quantiles are called quintiles → QU
The 6-quantiles are called sextiles → S
The 8-quantiles are called octiles → O (as added by @NickCox - now on wiki page also)
The 10-quantiles are called deciles → D
The 12-quantiles are called duodeciles → Dd
The 20-quantiles are called vigintiles → V
The 100-quantiles are called percentiles → P
The 1000-quantiles are called permilles → Pr
The difference between quantile, quartile and percentile becomes obvious.
Percentile vs quantile vs quartile From wiki page: https://en.wikipedia.org/wiki/Quantile Some q-quantiles have special names: The only 2-quantile is called the median The 3-quantiles are called tertiles or terciles → T The 4-quantile
977
How to choose between Pearson and Spearman correlation?
If you want to explore your data it is best to compute both, since the relation between the Spearman (S) and Pearson (P) correlations will give some information. Briefly, S is computed on ranks and so depicts monotonic relationships, while P is computed on the true values and depicts linear relationships. As an example, if you set:
x=(1:100); y=exp(x);          % then,
corr(x,y,'type','Spearman');  % will equal 1, and
corr(x,y,'type','Pearson');   % will be about equal to 0.25
This is because $y$ increases monotonically with $x$, so the Spearman correlation is perfect, but not linearly, so the Pearson correlation is imperfect.
corr(x,log(y),'type','Pearson');  % will equal 1
Doing both is interesting because if you have S > P, that means that you have a correlation that is monotonic but not linear. Since it is good to have linearity in statistics (it is easier), you can try to apply a transformation on $y$ (such as a log). I hope this helps to make the differences between the types of correlations easier to understand.
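The same experiment can be run in Python (a rough port of the MATLAB snippet above, with hand-rolled coefficients instead of corr; Spearman is computed as Pearson on ranks, which is valid here because there are no ties):

```python
import math

# Pearson on values vs. Spearman (Pearson on ranks) for y = exp(x).
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    var_a = sum((u - ma) ** 2 for u in a)
    var_b = sum((v - mb) ** 2 for v in b)
    return cov / math.sqrt(var_a * var_b)

def ranks(a):
    order = sorted(range(len(a)), key=lambda i: a[i])
    r = [0] * len(a)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(a, b):
    return pearson(ranks(a), ranks(b))

x = list(range(1, 101))
y = [math.exp(v) for v in x]

assert abs(spearman(x, y) - 1) < 1e-9                        # monotonic: S = 1
assert 0.2 < pearson(x, y) < 0.3                             # far from linear
assert abs(pearson(x, [math.log(v) for v in y]) - 1) < 1e-9  # log restores linearity
```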
How to choose between Pearson and Spearman correlation?
If you want to explore your data it is best to compute both, since the relation between the Spearman (S) and Pearson (P) correlations will give some information. Briefly, S is computed on ranks and so
How to choose between Pearson and Spearman correlation? If you want to explore your data it is best to compute both, since the relation between the Spearman (S) and Pearson (P) correlations will give some information. Briefly, S is computed on ranks and so depicts monotonic relationships, while P is computed on the true values and depicts linear relationships. As an example, if you set:
x=(1:100); y=exp(x);          % then,
corr(x,y,'type','Spearman');  % will equal 1, and
corr(x,y,'type','Pearson');   % will be about equal to 0.25
This is because $y$ increases monotonically with $x$, so the Spearman correlation is perfect, but not linearly, so the Pearson correlation is imperfect.
corr(x,log(y),'type','Pearson');  % will equal 1
Doing both is interesting because if you have S > P, that means that you have a correlation that is monotonic but not linear. Since it is good to have linearity in statistics (it is easier), you can try to apply a transformation on $y$ (such as a log). I hope this helps to make the differences between the types of correlations easier to understand.
How to choose between Pearson and Spearman correlation? If you want to explore your data it is best to compute both, since the relation between the Spearman (S) and Pearson (P) correlations will give some information. Briefly, S is computed on ranks and so
978
How to choose between Pearson and Spearman correlation?
The shortest and mostly correct answer is: Pearson benchmarks a linear relationship, Spearman benchmarks a monotonic relationship (a far more general class, but with some power tradeoff). So if you assume/think that the relation is linear (or, as a special case, that these are two measures of the same thing, so the relation is $y=1\cdot x+0$) and the situation is not too weird (check the other answers for details), go with Pearson. Otherwise use Spearman.
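The monotonic-vs-linear distinction can be made concrete: Spearman only sees ranks, so it is invariant under any strictly increasing transform of either variable, while Pearson generally is not (a Python sketch on made-up data, with hand-rolled coefficients):

```python
import math
import random

# Spearman is unchanged by a strictly increasing transform (here cubing),
# while Pearson shifts because the transform bends the linear relationship.
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    return cov / math.sqrt(sum((u - ma) ** 2 for u in a) *
                           sum((v - mb) ** 2 for v in b))

def spearman(a, b):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson(ranks(a), ranks(b))

random.seed(4)
x = [random.uniform(0, 5) for _ in range(200)]
y = [v + random.gauss(0, 0.5) for v in x]
y_cubed = [v ** 3 for v in y]  # strictly increasing transform of y

assert abs(spearman(x, y) - spearman(x, y_cubed)) < 1e-12  # unchanged
assert abs(pearson(x, y) - pearson(x, y_cubed)) > 0.01     # changed
```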
How to choose between Pearson and Spearman correlation?
Shortest and mostly correct answer is: Pearson benchmarks linear relationship, Spearman benchmarks monotonic relationship (few infinities more general case, but for some power tradeoff). So if you ass
How to choose between Pearson and Spearman correlation? The shortest and mostly correct answer is: Pearson benchmarks a linear relationship, Spearman benchmarks a monotonic relationship (a far more general class, but with some power tradeoff). So if you assume/think that the relation is linear (or, as a special case, that these are two measures of the same thing, so the relation is $y=1\cdot x+0$) and the situation is not too weird (check the other answers for details), go with Pearson. Otherwise use Spearman.
How to choose between Pearson and Spearman correlation? Shortest and mostly correct answer is: Pearson benchmarks linear relationship, Spearman benchmarks monotonic relationship (few infinities more general case, but for some power tradeoff). So if you ass
979
How to choose between Pearson and Spearman correlation?
This happens often in statistics: there are a variety of methods which could be applied in your situation, and you don't know which one to choose. You should base your decision on the pros and cons of the methods under consideration and the specifics of your problem, but even then the decision is usually subjective with no agreed-upon "correct" answer. Usually it is a good idea to try out as many methods as seem reasonable and as your patience will allow, and see which ones give you the best results in the end. The difference between the Pearson correlation and the Spearman correlation is that the Pearson is most appropriate for measurements taken from an interval scale, while the Spearman is more appropriate for measurements taken from ordinal scales. Examples of interval scales include "temperature in Fahrenheit" and "length in inches", in which the individual units (1 deg F, 1 in) are meaningful. Things like "satisfaction scores" tend to be of the ordinal type since while it is clear that "5 happiness" is happier than "3 happiness", it is not clear whether you could give a meaningful interpretation of "1 unit of happiness". But when you add up many measurements of the ordinal type, which is what you have in your case, you end up with a measurement which is really neither ordinal nor interval, and is difficult to interpret. I would recommend that you convert your satisfaction scores to quantile scores and then work with the sums of those, as this will give you data which is a little more amenable to interpretation. But even in this case it is not clear whether Pearson or Spearman would be more appropriate.
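One possible reading of "convert to quantile scores" (a hypothetical construction, not spelled out in the answer): replace each ordinal score by its mid-rank empirical quantile within the sample, putting the values on a common $[0,1]$ scale before summing. A Python sketch:

```python
# Mid-rank empirical quantiles for ordinal ratings: each score s maps to
# (count below s + half the count equal to s) / sample size.
scores = [3, 5, 2, 5, 4, 1, 3, 3, 4, 2]  # made-up satisfaction ratings, 1-5

def quantile_score(s):
    below = sum(v < s for v in scores)
    equal = sum(v == s for v in scores)
    return (below + 0.5 * equal) / len(scores)

qscores = [quantile_score(s) for s in scores]

assert all(0 < q < 1 for q in qscores)                       # common scale
assert quantile_score(5) > quantile_score(3) > quantile_score(1)  # order kept
```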
How to choose between Pearson and Spearman correlation?
This happens often in statistics: there are a variety of methods which could be applied in your situation, and you don't know which one to choose. You should base your decision the pros and cons of t
How to choose between Pearson and Spearman correlation? This happens often in statistics: there are a variety of methods which could be applied in your situation, and you don't know which one to choose. You should base your decision on the pros and cons of the methods under consideration and the specifics of your problem, but even then the decision is usually subjective with no agreed-upon "correct" answer. Usually it is a good idea to try out as many methods as seem reasonable and as your patience will allow, and see which ones give you the best results in the end. The difference between the Pearson correlation and the Spearman correlation is that the Pearson is most appropriate for measurements taken from an interval scale, while the Spearman is more appropriate for measurements taken from ordinal scales. Examples of interval scales include "temperature in Fahrenheit" and "length in inches", in which the individual units (1 deg F, 1 in) are meaningful. Things like "satisfaction scores" tend to be of the ordinal type since while it is clear that "5 happiness" is happier than "3 happiness", it is not clear whether you could give a meaningful interpretation of "1 unit of happiness". But when you add up many measurements of the ordinal type, which is what you have in your case, you end up with a measurement which is really neither ordinal nor interval, and is difficult to interpret. I would recommend that you convert your satisfaction scores to quantile scores and then work with the sums of those, as this will give you data which is a little more amenable to interpretation. But even in this case it is not clear whether Pearson or Spearman would be more appropriate.
How to choose between Pearson and Spearman correlation? This happens often in statistics: there are a variety of methods which could be applied in your situation, and you don't know which one to choose. You should base your decision the pros and cons of t
980
How to choose between Pearson and Spearman correlation?
I ran into an interesting corner case today. If we are looking at very small numbers of samples, the difference between Spearman and Pearson can be dramatic. In the case below, the two methods report exactly opposite correlations. Some quick rules of thumb to decide on Spearman vs. Pearson:

1. The assumptions of Pearson's are constant variance and linearity (or something reasonably close to that), and if these are not met, it might be worth trying Spearman's.
2. The example above is a corner case that only pops up if there is a handful (<5) of data points. If there are >100 data points, and the data is linear or close to it, then Pearson will be very similar to Spearman.
3. If you feel that linear regression is a suitable method to analyze your data, then the output of Pearson's will match the sign and magnitude of a linear regression slope (if the variables are standardized).
4. If your data has some non-linear components that linear regression won't pick up, then first try to straighten out the data into a linear form by applying a transform (perhaps a log transform). If that doesn't work, then Spearman may be appropriate.
5. I always try Pearson's first, and if that doesn't work, then I try Spearman's.

Can you add any more rules of thumb or correct the ones I have just deduced? I have made this question a community wiki so you can do so.

p.s. Here is the R code to reproduce the graph above:

    # Script that shows that in some corner cases, the reported correlation for Spearman can be
    # exactly opposite to that for Pearson. In this case, Spearman is -0.4 and Pearson is +0.4.
    y = c(+2.5, -0.5, -0.8, -1)
    x = c(+0.2, -3, -2.5, +0.6)
    plot(y ~ x, xlim=c(-6,+6), ylim=c(-1,+2.5))
    title("Correlation: corner case for Spearman vs. Pearson\nNote that they are exactly opposite each other (-0.4 vs. +0.4)")
    abline(v=0)
    abline(h=0)
    lm1 = lm(y ~ x)
    abline(lm1, col="red")
    spearman = cor(y, x, method="spearman")
    pearson = cor(y, x, method="pearson")
    legend("topleft", c("Red line: regression.",
                        sprintf("Spearman: %.5f", spearman),
                        sprintf("Pearson: +%.5f", pearson)))
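For anyone without R at hand, the corner case can be checked with a few lines of pure Python (the helper functions below are written from the definitions, not taken from any package; there are no ties in these four points, so plain ordinal ranks suffice). Working through the arithmetic, the four points give a Pearson correlation of about +0.4 and a Spearman correlation of exactly -0.4, which matches what the R script's legend actually prints.

```python
import math

def pearson(x, y):
    # Product-moment correlation from the definition.
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def spearman(x, y):
    # No ties here, so plain sort-order ranks are enough.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))

x = [0.2, -3, -2.5, 0.6]
y = [2.5, -0.5, -0.8, -1]
p, s = pearson(x, y), spearman(x, y)   # p is about +0.4, s is -0.4
```

The signs disagree because the single extreme point (0.2, 2.5) dominates Pearson's products of deviations, while the rank ordering of the four points runs the other way.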
981
How to choose between Pearson and Spearman correlation?
While agreeing with Charles' answer, I would suggest (on a strictly practical level) that you compute both of the coefficients and look at the differences. In many cases they will be exactly the same, so you don't need to worry. If, however, they are different, then you need to look at whether or not you met the assumptions of Pearson's (constant variance and linearity), and if these are not met, you are probably better off using Spearman's.
982
R vs SAS, why is SAS preferred by private companies?
I think there are several issues (in ascending order of possible validity):

1. Tradition / habit: people are used to SAS, and don't want to have to learn something new. (Making it more difficult, the way you think in SAS and R is different.) This can apply to anyone who might have to send you code, or read / use your code, including managers and colleagues.
2. Distrust of freeware: I've had several people say they aren't willing to accept results from R because you don't have a for-profit company vetting the code to ensure it gives correct results before it goes out to customers, lest they end up losing business.
3. Big data: R performs operations with everything in memory, whereas SAS doesn't necessarily. Thus, if your data approaches the limits of your memory, there will be problems.

Personally, I only think #3 has any legitimate merit, although there are approaches to big data that have been developed with R. The issues with #1 speak for themselves. I think #2 ignores several facts: there is some vetting that goes on with R, many of the main packages are written by some of the biggest names in statistics, and there have been studies comparing the accuracy of different statistical software in which R has certainly been competitive.
983
R vs SAS, why is SAS preferred by private companies?
In addition to the good answers so far, I'd add the embarrassment factor. If you spend hundreds of thousands of dollars last year on SAS and SAS support, and you propose spending nothing for R, with extremely low support prices (Revolution, etc), someone up the chain's going to ask why. Was it a mistake to spend so much money last year when R existed last year? Or is it a mistake to drop professional software for something created by a group of volunteers? Once the problem's framed in that manner, it's a lose-lose proposition, so perhaps better to not bring it up.
984
R vs SAS, why is SAS preferred by private companies?
On top of what gung has correctly identified here, the biggest issue in the corporate world is legacy. When you have good-quality production code that is known to do the job, you don't change it. SAS has been out there since the 1970s, and at the time it was the only effective, by then-standards, scripting statistical language. The amount of production code accumulated since then in SAS in pharma and government is unimaginable: tens of thousands of human-years. Rewriting this in R or Stata would take a few years; the resulting code would become more flexible, more efficient, more transparent, and easier and cheaper to maintain, but nobody will pay for such refactoring. (My experience doing this is that my Stata code is generally about three times shorter; I once had a project converting SPSS code into Stata where I made it about 20 times shorter. For those of you who have worked on maintaining your statistical packages... well, you know what that means.)

In a sense, this is a similar story to the academic publishers: they are riding a tide of end users maintaining their subscriptions out of necessity; a university without a subscription to Nature is not really a university. Free publishing via professional societies would make it cheaper: people prepare their submissions in LaTeX these days, so they are camera-ready, and the same people would be providing the peer review, so there would be no quality setback on any of the dimensions. But... there's no brand name or impact factor behind the online journals.

This sums it all up: http://scatter.wordpress.com/2011/06/28/stata-12/. Stata is preferred in economics and policy-related circles, and the more I learn SAS, the more I like Stata.
985
R vs SAS, why is SAS preferred by private companies?
I have worked as effectively a SAS programmer for the last seven years; next to me, a co-worker has been programming SAS longer than I have been alive. As noted here, there is a massive amount of inertia/legacy behind SAS; but SAS, just like R, is a means to an end, not the end itself. SAS is extremely efficient at sequential data access, and database access through SQL is extremely well integrated. PROCs are very well documented, but unfortunately not entirely standardized in notation (PROC OPTMODEL and IML are two examples). It is a bit clumsy when it comes to writing complicated code, and not as elegant for parallel code. I have also found importing CSV files to be a source of great misery at times, and prefer to just dump the data to R first and then to a database. Although SAS does have interfaces to shared objects and DLLs, you don't get nice access to any header files or anything like that, and code distribution also isn't available through happy packages. There is, however, little concern about someone including some esoteric, now-defunct or broken package in your code that you now need to maintain, and the quality of the code in SAS tends to be uniformly excellent (R core code is also excellent, and also freely available to anyone). As mentioned before, SAS is also extremely expensive, but it is a good tool that I go to when I know there is a canned procedure that works well for my needs. R + SAS + MySQL with a little bit of Perl to glue them together works amazingly :)
986
R vs SAS, why is SAS preferred by private companies?
So I use both R and SAS - admittedly in academia - but there are a couple of reasons that I tend to head toward SAS at times:

1. Better documentation. R is getting better at this, but documentation, especially the official documentation, is often kind of terrible and opaque. Beyond that, SAS is supported by a massive infrastructure of books - the Use R! series is helping this in R, but it's not quite there yet. I can turn to Paul Allison's Survival Analysis Using SAS, or Categorical Data Analysis Using SAS, or the book I have on Monte Carlo methods using SAS, and I have a book clearly written in a fairly consistent style for the language I'm using.
2. Inertia. This isn't just "companies are lazy" - inertia has value too. There's institutional knowledge. So-and-so has code that does that - and does it well.
3. Packages. Some packages in R are amazing. Some packages are not. You have to go find them, evaluate them, and even then there's some leap-of-faith issue in that the package is only as good as the guy writing it. It's hard to trust that. SAS has essentially the "full faith and credit of the SAS Institute", which has a pretty solid track record.
4. Single-source support. If SAS is broken, you call SAS. If R is broken you call....?
987
R vs SAS, why is SAS preferred by private companies?
Nobody has suggested that the reason it is preferred is plain idiocy. Here are two quotes I recently came across: "Using open-source software such as R was out of the question – we couldn't guarantee a perfectly repeatable outcome" and "We would be unable to provide any support for this as it is open source software". Two minutes with these people would show them how wrong they are.
988
R vs SAS, why is SAS preferred by private companies?
One issue does not seem to have been addressed explicitly: ass-covering. If you go with SAS and things blow up, the decision maker can always say that he bought state-of-the-art software, and how was he to know it would break? If he decided to go with R, this argument will be harder to make. Yes, this is related to the inertia argument already mentioned here. A few decades ago, they used to say that "nobody ever got fired for buying IBM", which has been called the greatest marketing phrase ever.
989
R vs SAS, why is SAS preferred by private companies?
The times they are a-changin'. As of 2015, actuaries under the age of about 35 prefer using R - the textbooks use both R and SAS code. Older actuaries never learnt to use R, prefer SAS, and do not use R. The proportion of actuaries actually coding in SAS will decline.

If you search Google Scholar for papers referring to SAS, you will find a steady 550-ish publications per year for the last few years. If you search for papers using R ("R Foundation for Statistical Computing"), there were 25,100 in 2014, and as of mid-July 2015 there are already 16,700. Plotting the rate, it's growing very fast!

SAS didn't help themselves for a few years by demanding large licence fees from universities - which they have since reversed - but it is now too late: many universities have converted to teaching using R and not SAS. New statistical techniques are published in papers in conjunction with an R package. Some techniques that have been in base R for years have still not appeared in SAS. You can now use R from inside SAS.

In summary, things are changing, and changing fast.
990
R vs SAS, why is SAS preferred by private companies?
As a user of both SAS and R, I would say the biggest reason we use SAS over R (when we do) is its ability for sequential processing. We only need machines with no more than 4GB of RAM to process 15 years' worth of data. I would need a much larger machine using stock R, and I have not tried to migrate the SAS code to run with Revolution R.
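The sequential-processing idea is not unique to SAS: any language can stream a file in chunks instead of loading it whole. Below is a hypothetical Python sketch (the in-memory CSV and the chunk size are illustrative, not from any real pipeline) that computes a column mean while holding only one small chunk in memory at a time.

```python
import csv
import io
import itertools

def chunked_mean(rows, col, chunk_size=1000):
    # Accumulate a running sum/count, SAS-style, instead of
    # materialising the whole dataset in memory.
    total, n = 0.0, 0
    while True:
        chunk = list(itertools.islice(rows, chunk_size))
        if not chunk:
            break
        total += sum(float(r[col]) for r in chunk)
        n += len(chunk)
    return total / n

# Illustrative data; in practice `rows` would come from a csv.DictReader
# over a file handle opened on a file far larger than RAM.
data = "x\n1\n2\n3\n4\n5\n"
rows = csv.DictReader(io.StringIO(data))
m = chunked_mean(rows, "x", chunk_size=2)   # mean of 1..5 -> 3.0
```

Any statistic expressible as a running accumulation (sums, counts, cross-products for regression) fits this pattern; statistics that need the full dataset at once do not, which is where SAS's PROC-level engineering still earns its keep.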
991
R vs SAS, why is SAS preferred by private companies?
In the pharmaceutical industry, SAS is used because it is what the FDA uses and likes. There are some serious reasons, though. Results are traceable and the output has a time stamp. FDA statisticians can check what you get. It is very good for database management, and it is reliable software. Of course, many of the attributes of SAS can be argued to be present in other software packages, including R, and SAS is expensive. Still, I think anyone wanting to be an applied statistician working in industry will be best off at least learning how to program in SAS. Use R or Stata if you prefer, but know SAS. When you work for a company that wants you to use SAS, they will pay for the licensing.
992
R vs SAS, why is SAS preferred by private companies?
I think this quote from Anne H. Milley sums up the way a lot of people feel about R: We have customers who build engines for aircraft. I am happy they are not using freeware when I get on a jet. Unfortunately, I think this misconception (free==inferior) is common in the general public.
993
R vs SAS, why is SAS preferred by private companies?
(slightly off topic): Viewing it the other way round, some of the advantages R has in academia don't apply to industry. E.g., in academia it is a clear advantage that you can tell the students to go and get the software and work at home. In industry, you're usually not supposed to take any data home with you... Neither are you supposed to try out a few things(TM), download tons of packages (even if reputable and tested), or use cutting-edge methods. Instead you're usually expected to stick to methods and code that have been used for years and whose behaviour has been known for ages. You wouldn't win much academic merit with that. And of course, as has been mentioned, no one is going to risk redoing all kinds of regulatory approval for the sake of switching to R. From what I've seen, that's less about R and more about the enormous cost and work of getting regulatory approval.
994
R vs SAS, why is SAS preferred by private companies?
Whilst it's quite pessimistic, my answer would be that the kind of people who make sweeping decisions in corporations, like "we just use SAS", are also the kind of people who don't trust what they don't understand, and who automatically think the value of something is directly proportional to the amount of money you spend on it. This leads them to prefer paying for SAS rather than spending time investigating alternatives.
995
R vs SAS, why is SAS preferred by private companies?
Why would a major drug company even want to convert from SAS to R? SAS costs millions, but that is nothing to a drug company. However, converting all the stable reporting systems from SAS to R would cost 50-100 times more. SAS has a phenomenal support system: every time I needed help, they were able to provide it within a few hours. And what exactly does R have that SAS does not? 1) Better graphics... OK, it is a big one, but graphics are not everything; besides, R can always be used as an extra tool to create some cool graphs, and SAS is not too bad when it comes to graphics. 2) A more modern and efficient programming language. Many SAS users are not programmers and don't care about using a cool language; they just want to be able to analyze the data. I love R, but it would be insane for a big company to convert to it. It could make sense for smaller firms, though.
996
R vs SAS, why is SAS preferred by private companies?
There are several main advantages, in no particular order.

SAS has a large installed base and a long track record. I'm purposefully avoiding pejorative terms like "legacy" or "habit". Many companies have been using SAS for 30 or 40 years, and they have millions of lines of working code. In addition, there are all the benefits of a stable code base with millions of user-days in an area where small errors can be critical. This is the same reason Unix flavors are still popular even though Unix is over 40 years old and obsolete in some ways. Finally, there is a large community of experienced SAS professionals who are used to solving business problems.

SAS is well suited to heterogeneous, complex data and operating environments. Companies have lots of different data sources, based in different types of systems, as well as, in many cases, multiple operating environments. R has only very recently gained some extremely basic capabilities for dealing with more data than can be kept in memory. Compare this with SAS's ability to support native, optimized, in-database processing for Teradata, to cite just one example. In most real-world situations, the hardest part of analytics is dealing with the data and the operating environment. (Need to run your Windows-developed model-scoring code on the mainframe? With SAS, no problem. With R, you are out of luck.) R doesn't solve any of those problems.

The user doesn't have to worry about being "on their own". A SAS user can be reasonably certain that every code module has been tested by qualified people. It is not necessary to devote time and effort to learning the provenance of the code, or to independently validating it. Furthermore, if issues of any kind are encountered, robust assistance is available — from something as basic as documentation to something as comprehensive as a detailed exploration of unexpected results or of the behavior of a sophisticated method: the user can pick up the phone and get help.

It's "good enough". The language turns some people off because it is different from modern general-purpose programming languages. Having said that, the language is high-level, powerful, expressive, and comprehensive. In short, once you learn it, it gets the job done. For companies, the elegance of the solution isn't much of a selling point.
997
R vs SAS, why is SAS preferred by private companies?
Customer support. I once had a chat with a friend working in a company specializing in installing servers, and he explained to me why big companies always opt for Microsoft products rather than going open source. The advantage Microsoft has over its open-source competitors is the customer support. If something goes wrong with the product, the company can call Microsoft; big companies even get personalized support. Not so with open-source software. I think that is the exact same reason SAS takes precedence over R.
998
R vs SAS, why is SAS preferred by private companies?
I once worked for a consulting company that gave SAS assistance to a large chip manufacturer in Silicon Valley. Our contact person at the company told us that he got an offer from another company to give them the exact same consulting, using a different software package that covered all the areas covered by SAS and would cost the company a fraction of what SAS was charging them (\$30,000 as opposed to \$1,000,000). The contact person considered what to do and decided against informing his boss about the offer, because he feared getting fired for having used SAS in the first place without considering cheaper alternatives. Instead, he insisted that our consulting company give his company a big break in our consulting fee. Our company agreed.
999
R vs SAS, why is SAS preferred by private companies?
What about front ends? What is R's equivalent of SAS Enterprise Guide, Web Report Studio, or Enterprise Miner? Edit: These tools make it possible for a non-programming user to use a data warehouse (DWH) without knowledge of the underlying technology. They are not primarily tools for the use of SAS as such. R GUIs are just IDEs for the R language/system, AFAIK. They cannot help the non-technical user who wants to gain information and insight from the DWH.
1,000
R vs SAS, why is SAS preferred by private companies?
I don't think application security has been mentioned. This question was raised on Stack Overflow but dropped since it was off topic. I collaborate with the Swedish National Board of Health and Welfare, which uses SAS. When I talked to their statisticians (who like R), they claimed that their IT folks prefer SAS because they don't trust the packages downloaded for R. My wife also works with SAS, and her institution often raises the same issue... I would love to see some comments on this; I've done a quick search but haven't found any good references.