549,030
I have recently heard the term "sufficient dimension reduction" tossed around, although I have struggled to find material on the concept that I fully understand or that clearly explains why this specialized variant of dimension reduction is needed to begin with. What is the goal of sufficient dimension reduction techniques? Why can't it already be accomplished by non-sufficient dimension reduction techniques? When can this goal be achieved, and when is it impossible? I've heard that "sufficient dimension reduction" simply denotes a reduction of dimensions without a loss of information, but I struggle to understand when this could occur unless there are linear dependencies in the data (and in such a situation I don't see why a new theoretical framework of data reduction would be necessary to eliminate the dependency). See Adragni and Cook (2009), "Sufficient dimension reduction and prediction in regression," which contains a definition of "sufficient dimension reduction" a few paragraphs into the article. The article is available from Royal Society Publishing for those with access, or see the Wikipedia article on sufficient dimension reduction for a brief overview. The few researchers who discuss this concept tend to present it as a new "paradigm," but the concept is over a decade old and does not seem to have caught on beyond very niche groups, mainly in theoretical research. EDIT: It may be helpful if someone could find an example of an applied use of sufficient dimension reduction. Is there a study that has found and used a sufficient dimension reduction on a large real-world dataset?
I cover this in some detail in Chapter 2 of RMS. Briefly, besides extrapolation problems, ordinary polynomials have these problems:

- The shape of the fit in one region of the data is influenced by far away points.
- Polynomials cannot fit threshold effects, e.g., a nearly flat curve that suddenly accelerates.
- Polynomials cannot fit logarithmic-looking relationships, e.g., ones that get progressively flatter over a long interval.
- Polynomials can't have a very rapid turn.

These are reasons that regression splines are so popular, i.e., segmented polynomials tend to work better than unsegmented polynomials. You can also relax a continuity assumption for a spline if you want to have a discontinuous change point in the fit.
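The contrast is easy to see in a small simulation. Below is a minimal R sketch (my own illustration, not taken from RMS): a curve that is nearly flat and then rises sharply, fit once with a global quartic polynomial and once with a natural cubic spline.

```r
# A minimal illustration (assumed setup, not from RMS): threshold-shaped data
# fit with a global polynomial versus a natural (restricted) cubic spline.
library(splines)
set.seed(1)
x <- seq(0, 10, length.out = 200)
y <- plogis(5 * (x - 7)) + rnorm(200, sd = 0.05)  # nearly flat, then a sudden rise
fit_poly   <- lm(y ~ poly(x, 4))
fit_spline <- lm(y ~ ns(x, df = 6))  # segmented cubics joined smoothly at knots
plot(x, y, col = "grey")
lines(x, fitted(fit_poly), col = "red")     # wiggles induced by far-away points
lines(x, fitted(fit_spline), col = "blue")  # tracks the flat region and the rise
```

The polynomial's wiggles in the flat region are exactly the "influenced by far away points" problem; the spline's piecewise construction confines each segment's behaviour to its own interval.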
{ "source": [ "https://stats.stackexchange.com/questions/549030", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/298714/" ] }
549,334
In math class we were asked an optional problem I can't solve on my own: You are fighting a dragon with 250 hit points and are rolling a 20-sided die to deal damage. The dragon takes damage equal to the number rolled on the die, with 2 exceptions: if you roll a 20 you deal no damage (the dragon is immune to critical damage), and if you roll a 1 you break your combo and the dragon kills you immediately. You roll until the dragon is defeated or you roll a 1 and break your combo. What is the probability of defeating the dragon? My mathematical approaches included calculating the expected damage per roll $$ E_{ w }= \frac{ \sum_{ n=2 }^{ w-1 }{ n } }{ w } $$ where $$ E_{ 20 }= 9.45 $$ and then trying to calculate the probability of not failing in the minimum number of rolls required to defeat the dragon, but in the end that led me to a probability so low anyone would consider it 0, and it didn't seem right to begin with. I did understand that the problem is a lot more complex than it seems, since the probability changes drastically with the number of rolls before any other roll and their results. Through empirical testing with a small Python program, the probability of winning came out to be only about 25%, which surprised me, given that the chance of dealing damage is 90%. How would you correctly tackle the problem and even leave it "parameterized" so the HP of the dragon and the number of sides on the die could be changed?
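(For reference, here is a minimal R rendition of the kind of simulation described in the question; the rules as I read them are that a 1 ends the game, a 20 deals nothing, and any other face deals its value.)

```r
# A minimal simulation sketch (my rendition of the rules described above):
# a 1 breaks the combo, a 20 deals no damage, any other roll deals its value.
sim_dragon <- function(hp = 250, sides = 20) {
  repeat {
    roll <- sample(sides, 1)
    if (roll == 1) return(FALSE)       # combo broken: the dragon wins
    if (roll < sides) hp <- hp - roll  # the top face deals no damage
    if (hp <= 0) return(TRUE)          # dragon defeated
  }
}
mean(replicate(1e4, sim_dragon()))     # roughly 0.27, consistent with the ~25% above
```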
Your suggestion to solve a general version of the problem is spot on. Let's set this up.

The die has two special outcomes: "death," which terminates the process, and 0, which has no effect. We might as well remove the 0, creating a "truncated die" of 19 sides. Let the probability that the die shows a numeric value $\omega$ be $p(\omega)$ and let $p_{*}$ be the probability of death. $\Omega$ is the set of all these possible numeric values (not including "death," which is non-numeric).

You aim to reach a total of $T$ before observing "death." When $T\le 0,$ you have achieved this threshold, so the chance of winning is $1.$ Otherwise, when $T \gt 0,$ partition the event "eventually I win" into the separate numbered outcomes occurring within $\Omega.$ It is an axiom of probability that the chance of this event is the sum of the chances of its (non-overlapping) components:

$$\Pr(\text{Reach } T) = \sum_{\omega\in\Omega}p(\omega) \Pr(\text{Reach } T-\omega).$$

This recursion can be carried out with a simple form of a dynamic program. Unless the values and probabilities are very special, you can't hope to write a nice closed formula for the solution. You just have to carry out the calculation by computing the values for $T=1,$ $T=2,$ etc., in order. (This is called "eliminating tail recursion" in computer science.) The number of calculations performed by this algorithm is proportional to $T$ times the number of unique values on the truncated die. That makes it effective for moderately large $T$ and realistic dice. By means of such a program (using double precision floating point) I find the chance of reaching $T=250$ is $0.269880432506\ldots.$

As a reality check, you expect to deal about 9.5 damage points per roll, suggesting it will take about $250/9.5 \approx 27$ rolls to win. But on each roll there is a $1-1/20$ chance of surviving, so your chance of surviving by then is

$$(1-1/20)^{27} = \left[(1-1/20)^{20}\right]^{27/20} \approx \exp(-27/20) = 0.25924\ldots.$$

That's pretty close to the answer I obtained. As another reality check, that's also close to your simulation results. Indeed, I obtain comparable simulation results: they do not differ significantly from the exact answer.

I leave it to you to write the program. It will require a data structure that can store all the values of $\Pr(\text{Reach } T-\omega)$ given on the right hand side of the formula. Consider using an array for this, indexed by the values $0,1,2,\ldots, T.$

BTW, there are other solution methods. This problem describes a Markov Chain whose states are the total values that have been reached (from $0$ through $T,$ since anything larger than $T$ might as well be combined with $T$), along with a special (absorbing) "death" state. This chain can be analyzed in terms of a large matrix (having $250+2$ dimensions). As a practical matter this formulation isn't worth much, but the theory of Markov Chains provides insight into the process. You can mine that theory for information on your chances of winning and on how many rolls it is likely to take you to win if you do.

Yet another approach was suggested in a comment to the question: exploit a geometric distribution. This refers to analyzing the process according to how many rolls you will have before dying. To deal hit points, imagine rolling the die with its "death" face removed, in parallel with flipping a (biased) coin whose function is to determine whether you die. (Thus, in the situation of the question, each of the 19 remaining sides of the die--including the $0,$ which must be left in--has a $1/19$ chance; more generally, the side with value $\omega$ has a chance $p(\omega)/(1-p_{*}).$) The two sides of the coin are "death" (with probability $p_{*}$) and "continue" (with probability $1-p_{*}$). At each turn you separately roll the truncated die and flip the coin, accumulating hit points until you reach the threshold $T$ or the coin turns up "death."

A simplification is available, because it's easy to work out the chance of never flipping "death" in the first $n=0,1,2,\ldots$ turns: it equals $(1-p_{*})^n$ because all the flips are independent. Formally, this describes a random variable $N$ whose value equals $n$ with probability $p_{*}(1-p_{*})^{n}.$ (This is a geometric distribution.) To model the rolls of the truncated die, let $X_1$ be its value in the first roll, $X_2$ its value in the second roll, and so on. The sum after $n$ rolls therefore is $S_n = X_1 + X_2 + \cdots + X_n.$ (This is a random walk.) The chance of reaching the threshold can be computed by decomposing this event into the countable infinity of possibilities corresponding to the number of rolls needed. The basic rules of conditional probability tell us

$$\Pr(\text{Reach }T) = \sum_{n=0}^\infty \Pr(S_N\ge T\mid N=n)\Pr(N=n).\tag{*}$$

The right hand side requires us to find these chances of each $S_n$ reaching the threshold. Although this isn't much of a simplification for a general die ($S_n$ can have a very complicated distribution), it leads to a good approximation when the process is likely to take many rolls before dying or reaching the threshold. That's because the sum of a large number of the $X_i$ approximately has a Normal distribution (according to the Central Limit Theorem). When the expectation of the truncated die is $\mu$ (equal to $9.45/(1-0.05)$ in the question) and its variance is $\sigma^2,$ the distribution of $S_n$ has an expectation of $n\mu$ and variance of $n\sigma^2$ (according to basic laws of expectation and variance as applied to the independent variables $X_1,X_2,\ldots, X_n$). Writing $\Phi(x;n\mu,n\sigma^2)$ for the Normal distribution function with expectation $n\mu$ and variance $n\sigma^2,$ we obtain

$$\Pr(S_N\ge T\mid N=n) \approx 1 - \Phi\left(T-\frac{1}{2}; n\mu, n\sigma^2\right).$$

Plugging this into $(*)$ along with the geometric distribution law yields

$$\Pr(\text{Reach }T) \approx \sum_{n=1}^\infty \left(1 - \Phi\left(T-\frac{1}{2}; n\mu, n\sigma^2\right)\right)p_{*}(1-p_{*})^n.$$

As a practical matter, we may terminate the sum by the time $\sum_{i=n}^\infty p_{*}(1-p_{*})^i$ is less than a tolerable error $\epsilon\gt 0,$ because the $\Phi$ factor never exceeds $1.$ This upper limit equals $\log(p_{*}\epsilon)/\log(1-p_{*}).$ (For the situation in the question, that upper limit is around 418.) We can also work out a reasonable value for beginning the sum (by skipping over really tiny initial values). That leads to the relatively short and simple code shown below (written in R). Its output, obtained through the command dragon.Normal(), is $0.269879\ldots,$ agreeing with the exact answer to five significant figures.

```r
# Use `NA` to specify the "death" sides of the die; otherwise, specify the
# values on its faces. `p` gives the associated probabilities.
dragon.Normal <- function(threshold=250, die=c(NA, 2:19, 0), p=rep(1/20,20), eps=1e-6) {
  #
  # Find and remove the "death" face(s) to create the truncated die.
  #
  i.death <- which(is.na(die))
  p.death <- sum(p[i.death])
  if (p.death <= 0) return(1)
  die <- die[-i.death]
  p <- p[-i.death]
  p <- p / sum(p)
  #
  # Compute the expectation and variance of the truncated die.
  #
  mu <- sum(die * p)
  sigma2 <- sum((die-mu)^2 * p)
  #
  # Establish limits for the sum.
  #
  N <- ceiling(log(eps * p.death) / log(1 - p.death))
  if (N > 1e8) stop("Problem is too large.")
  Z <- qnorm(eps)
  n <- min(N, max(1, floor((Z * (1 - sqrt(1 + 4*threshold*mu/(Z^2*sigma2)))/(2*mu))^2 * sigma2)))
  #
  # Compute the sum.
  #
  n <- n:N
  sum(p.death * (1-p.death)^n * pnorm(threshold - 1/2, mu*n, sqrt(sigma2*n), lower.tail=FALSE))
}
```
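For readers who want the dynamic program spelled out, here is a minimal sketch (my own, not the answer's original program) of the recursion above, using the truncated die with faces $2,\ldots,19$, each of chance $1/19$, and the remaining $1/19$ as "death":

```r
# A minimal dynamic-program sketch (not the original program) of the recursion
# Pr(Reach T) = sum over faces w of p(w) * Pr(Reach T - w). win[t + 1] holds
# Pr(Reach t); any t <= 0 counts as having already won.
dragon_exact <- function(threshold = 250, values = 2:19, p = rep(1/19, 18)) {
  win <- numeric(threshold + 1)
  win[1] <- 1                            # Pr(Reach 0) = 1
  for (t in 1:threshold) {
    prior <- pmax(t - values, 0)         # overshooting the threshold also wins
    win[t + 1] <- sum(p * win[prior + 1])
  }
  win[threshold + 1]
}
dragon_exact()  # about 0.26988..., matching the exact value quoted above
```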
{ "source": [ "https://stats.stackexchange.com/questions/549334", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/338581/" ] }
549,356
Is it statistically correct to calculate correlations between two broken time series? For example,

```r
x <- seq(from = 1, to = 3000, by = 1) + 10 + rnorm(3000, sd = 5)
y <- seq(from = 1, to = 3000, by = 1) + 10 + rnorm(3000, sd = 12)
blockA1 <- c(1:500, 600:900)
blockB1 <- c(501:599, 901:2299)
blockA2 <- c(2300:2700, 2901:3000)
blockB2 <- c(2701:2900)
blockA <- c(blockA1, blockA2)
blockB <- c(blockB1, blockB2)
cor(x[blockA], y[blockA])
```

Is this a valid way to compute correlations? EDIT: For example, let's say I collected heart rate data continuously on day 1 (x) and day 2 (y). blockA are rest periods and blockB are activity periods. By splitting the data into rest/activity blocks, I'm breaking the time series. Can I really assume that blockA1 + blockA2 (the entire rest period for day 1) is equivalent to measuring blockA continuously?
{ "source": [ "https://stats.stackexchange.com/questions/549356", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/212831/" ] }
549,382
I have always had the following question: Were regression models originally designed and intended to be used for predictions/inference only at the "cohort level" (i.e. for large groups of people), or were they also intended for prediction/inference at the individual level (provided the data is large enough and "well behaved", i.e. has moderate levels of variance)? To illustrate, I use regression models from the domain of "survival analysis". As a refresher, survival analysis (also called "time to event" modelling) is interested in modelling the time at which some "event" happens (e.g. medical death, failure, mortgage defaulting, etc.), with a specific focus on accommodating "incomplete" data (i.e. censored data - but this is not overly important for my question). Historically, survival analysis models were useful in estimating the "survival rates" in clinical studies, and in estimating the effect of different covariates (i.e. predictor variables) on the survival rate. Using the R programming language, I will demonstrate applications of survival regression models at the cohort level and at the individual level.

1) Survival Analysis Regression at the Cohort Level

Historically, survival regression models were mainly used for analysis at the cohort level - for example, comparing the survival rates between two different groups in a medical study (e.g. the survival rate for men who smoke cigarettes vs women who smoke cigarettes). In the example below, survival analysis is performed to estimate the overall survival rate for all army veterans, and then to compare the survival rate for veterans who received "Treatment A" vs. "Treatment B" (in this case, the "treatment" is the cohort, i.e. there are 2 cohorts):

```r
#load libraries
library(survival)
library(ranger)
library(ggplot2)
library(dplyr)
library(ggfortify)

#load data
data(veteran)

#analysis on all veterans
km <- with(veteran, Surv(time, status))
km_fit <- survfit(Surv(time, status) ~ 1, data = veteran)

#cohort level analysis
km_trt_fit <- survfit(Surv(time, status) ~ trt, data = veteran)

#visualize results
autoplot(km_fit)
autoplot(km_trt_fit)
```

We can also fit a survival regression model (the "Cox Proportional Hazards Model") to estimate the effect of individual variables on the survival rate:

```r
#recode treatment and prior therapy as labelled factors
vet <- mutate(veteran,
              trt   = factor(trt, labels = c("standard", "test")),
              prior = factor(prior, labels = c("No", "Yes")))

#fit model
cox <- coxph(Surv(time, status) ~ trt + celltype + karno + diagtime + age + prior,
             data = vet)
cox_fit <- survfit(cox)

#view results: summary of the effect of each variable on the survival rate
summary(cox)
```

```
Call:
coxph(formula = Surv(time, status) ~ trt + celltype + karno +
    diagtime + age + prior, data = vet)

  n= 137, number of events= 128

                       coef  exp(coef)   se(coef)      z Pr(>|z|)
trttest           2.946e-01  1.343e+00  2.075e-01  1.419  0.15577
celltypesmallcell 8.616e-01  2.367e+00  2.753e-01  3.130  0.00175 **
celltypeadeno     1.196e+00  3.307e+00  3.009e-01  3.975 7.05e-05 ***
celltypelarge     4.013e-01  1.494e+00  2.827e-01  1.420  0.15574
karno            -3.282e-02  9.677e-01  5.508e-03 -5.958 2.55e-09 ***
diagtime          8.132e-05  1.000e+00  9.136e-03  0.009  0.99290
age              -8.706e-03  9.913e-01  9.300e-03 -0.936  0.34920
priorYes          7.159e-02  1.074e+00  2.323e-01  0.308  0.75794

#model fit statistics (good sign: p-value is less than alpha at 0.05)
Concordance= 0.736  (se = 0.03)
Rsquare= 0.364   (max possible= 0.999)
Likelihood ratio test= 62.1   on 8 df,   p=2e-10
Wald test            = 62.37  on 8 df,   p=2e-10
Score (logrank) test = 66.74  on 8 df,   p=2e-11
```

We can also visualize this cohort level survival regression model:

```r
#plot
autoplot(cox_fit)
```

These are some of the main applications of survival regression models at the cohort level.
2) Survival Regression Analysis at the Individual Level

I have heard that traditionally, survival analysis regression was not extended to the individual level and was only meant for the cohort level. This is because when these models were developed (e.g. in the 1950s-1970s), datasets were very small - thus, survival models applied to small data at the individual level were almost destined to fail, because the data (when analyzed at the individual level) contained insufficient information to generalize to new data (in statistics, this relates to the "ecological fallacy" and the "reference class problem"). As a quick example, imagine you have a dataset with 50 "healthy patients" and 50 "sick patients", with the goal of studying survival rates. The "sick patients" are made up of 25 patients with diabetes, 22 patients with kidney failure and 3 patients with liver failure. Assuming that "healthy patients" on average live longer than "sick patients" - if you wanted to make a regression model for only patients with liver failure, you would be forced to make a model with only 3 data points: apart from having too few observations for a model, these 3 patients might contain biases/outliers and are likely not representative of the true population of patients with liver failure, so this model is likely to perform poorly in the real world on new patients. However, if you decide to aggregate all the "sick patients" together, you now have a lot more data points and you can expose your model to a larger variety of data, making your model likely to make more accurate predictions about the "general" survival rates of "sick patients". However, in recent years researchers have gained access to far larger datasets (e.g. 100,000 rows) compared to the data available when the original survival models were developed (e.g. if you have data on 100,000 patients with liver disease, it's likely that this contains enough diversity of information that a model might be able to generalize to new patients). This led to the intersection of survival analysis, big data and machine learning.
For example, the "random forest" algorithm has been extended (Ishwaran et al., 2008) to the survival analysis context, where it can be used to make predictions at the individual level:

```r
#load libraries
library(survival)
library(dplyr)
library(ranger)
library(data.table)
library(ggplot2)

#use the built-in "lung" data set
#remove missing values (dataset is called "a")
a <- na.omit(lung)

#create id variable
a$ID <- seq_along(a[, 1])

#create a test set with only the first 3 rows
new <- a[1:3, ]

#create a training set by removing the first three rows
a <- a[-c(1:3), ]

#fit survival model (random survival forest)
r_fit <- ranger(Surv(time, status) ~ age + sex + ph.ecog + ph.karno + pat.karno +
                  meal.cal + wt.loss,
                data = a, mtry = 4, importance = "permutation",
                splitrule = "extratrees", verbose = TRUE)

#create intermediate variables required for the survival curves
death_times <- r_fit$unique.death.times
surv_prob <- data.frame(r_fit$survival)
avg_prob <- sapply(surv_prob, mean)

#use the survival model to produce estimated survival curves
#for the first three observations
pred <- predict(r_fit, new, type = 'response')$survival
pred <- data.table(pred)
colnames(pred) <- as.character(r_fit$unique.death.times)

#plot the results for these 3 patients
plot(r_fit$unique.death.times, unlist(pred[1, ]), type = "l", col = "red")
lines(r_fit$unique.death.times, unlist(pred[2, ]), type = "l", col = "green")
lines(r_fit$unique.death.times, unlist(pred[3, ]), type = "l", col = "blue")
```

Optional: Variable Importance (not the same as Variable "Effect") and Model Fit

```r
vi <- data.frame(sort(round(r_fit$variable.importance, 4), decreasing = TRUE))
names(vi) <- "importance"
head(vi)
```

```
          importance
ph.ecog       0.0296
sex           0.0205
pat.karno     0.0123
ph.karno      0.0075
wt.loss       0.0022
age          -0.0012
```

```r
cat("Prediction Error = 1 - Harrell's c-index = ", r_fit$prediction.error)
#note: in this particular example the model happens to perform poorly
#(c-index = 0 is bad, c-index = 1 is good)
```

```
Prediction Error = 1 - Harrell's c-index =  0.3800807
```

Conclusion: Thus, when "enough" data is available (and if you are ready to assume a certain amount of "risk"): Can regression models be used to make predictions and inferences at the individual level? In the plot above, assuming we trust our model (e.g. if the c-index were higher), could we infer that the patient with the "Red Curve" is likely to survive longer than the patients with the "Blue Curve" and "Green Curve"? Therefore, in some scenarios, might it be more advisable to prioritize the treatment of the patients corresponding to the "Blue Curve" and "Green Curve"? Thanks!

Note: I have seen some attempts at estimating individual survival curves here ( https://arxiv.org/pdf/1811.11347.pdf )

References:
https://rviews.rstudio.com/2017/09/25/survival-analysis-with-r/
https://en.wikipedia.org/wiki/Reference_class_problem
https://en.wikipedia.org/wiki/Ecological_fallacy
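A side note on the individual-level demonstration above: predicted per-patient survival curves are not unique to random forests. Here is a minimal sketch (my addition, with an arbitrary covariate choice) showing that an ordinary Cox model yields them as well via `survfit()`:

```r
# A minimal sketch (my addition; covariates chosen arbitrarily): individual
# predicted survival curves from a plain Cox model using survfit(newdata = ).
library(survival)
a   <- na.omit(lung)
cox <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = a)
new <- a[1:3, ]                     # treat the first 3 rows as "new" patients
fit <- survfit(cox, newdata = new)  # one predicted curve per row of `new`
plot(fit, col = c("red", "green", "blue"), xlab = "days", ylab = "S(t)")
```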
{ "source": [ "https://stats.stackexchange.com/questions/549382", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/77179/" ] }
550,800
My friends are in a bit of an argument over Dungeons & Dragons. My player managed to guess the outcome of a D20 roll before it happened, and my friend said that his chance of guessing the number was 1 in 20. Another friend argues that his chance of guessing the roll is 1 in 400, because the probabilities of him randomly guessing a number and of then rolling it were both 1 in 20, so the compound probability is 1 in 400. Which of these probabilities is a better characterization of our situation, and what were really his chances?
There are 400 possibilities, and 20 of them, each occurring with probability $\frac{1}{400}$, have the guess equal to the outcome. So the total probability of having the guess equal to the outcome is $20\cdot \frac{1}{400} = \frac{20}{400} = \frac{1}{20}$. In the table below, each row is a guess and each column is an outcome; every cell has probability $\frac{1}{400}$, and the diagonal (in red) collects the 20 matching combinations:

$$\small{\begin{array}{c|ccccc}
\text{guess}\backslash\text{outcome} & 1 & 2 & 3 & \cdots & 20 \\
\hline
1 & \color{red}{\frac{1}{400}} & \frac{1}{400} & \frac{1}{400} & \cdots & \frac{1}{400}\\
2 & \frac{1}{400} & \color{red}{\frac{1}{400}} & \frac{1}{400} & \cdots & \frac{1}{400}\\
3 & \frac{1}{400} & \frac{1}{400} & \color{red}{\frac{1}{400}} & \cdots & \frac{1}{400}\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
20 & \frac{1}{400} & \frac{1}{400} & \frac{1}{400} & \cdots & \color{red}{\frac{1}{400}}
\end{array}}$$

More general

If the guesses do not have equal $1/20$ probability for each number, but instead values $p_i$, then the 400 possibilities would not all have probability $(1/20)\cdot(1/20) = 1/400$, but instead $p_i/20$. The concept is not different, however. The answer again boils down to the sum of the diagonal, which is $\sum_{i=1}^{20} \frac{p_i}{20} = \frac{1}{20} \sum_{i=1}^{20} p_i = \frac{1}{20}$.

$$\small{\begin{array}{c|ccccc}
\text{guess}\backslash\text{outcome} & 1 & 2 & 3 & \cdots & 20 \\
\hline
1 & \color{red}{\frac{p_1}{20}} & \frac{p_1}{20} & \frac{p_1}{20} & \cdots & \frac{p_1}{20}\\
2 & \frac{p_2}{20} & \color{red}{\frac{p_2}{20}} & \frac{p_2}{20} & \cdots & \frac{p_2}{20}\\
3 & \frac{p_3}{20} & \frac{p_3}{20} & \color{red}{\frac{p_3}{20}} & \cdots & \frac{p_3}{20}\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
20 & \frac{p_{20}}{20} & \frac{p_{20}}{20} & \frac{p_{20}}{20} & \cdots & \color{red}{\frac{p_{20}}{20}}
\end{array}}$$

More interesting is the case when both probabilities, for the guess and for the outcome, are not uniform (not equal probabilities). For instance, we could imagine rolling an unfair d20 two times. Then the probability of a match equals the expectation $\sum_{i=1}^{20} p_i^2$. This will be larger than $\frac{1}{20}$ if the $p_i$ are unequal.
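These three cases are easy to check numerically; a short R sketch (my addition):

```r
# A quick numerical check (my addition) of the three cases discussed above.
p <- c(rep(0.09, 10), rep(0.01, 10))  # an unfair d20; the probabilities sum to 1
sum(rep(1/20, 20) / 20)               # 0.05: fair guess, fair die
sum(p / 20)                           # 0.05: unfair guess, fair die (still 1/20)
sum(p^2)                              # 0.082 > 1/20: guess and roll share the unfair p
```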
{ "source": [ "https://stats.stackexchange.com/questions/550800", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/339657/" ] }
550,847
This is an interview question I got and could not solve. Consider a two-person game where A and B take turns sampling from a uniform distribution $U[0, 1]$. The game continues as long as they get a continuously increasing sequence. If, at any point, a player gets a number less than the last number (the largest number so far), that player loses. A goes first. What is the probability of A winning? For example, if the sequence is $0.1 (A), 0.15 (B), 0.2 (A), 0.25 (B), 0.12 (A)$, then A loses. I think because A has no restrictions on their very first turn while B does, A's chance of winning is higher than 0.5. At any given turn $j$, we're looking at the probability that a newly sampled number is larger than the last sampled number $x_{j-1}$. This is $1 - \mathrm{CDF}$ of a uniform distribution, which is just $F(x) = x$, so the probability of getting a larger number on step $j$ is $1 - x_{j-1}$. But I don't know where to go from here. If it were a discrete sampling, I would try to condition on B's last number, maybe, but here I'm not sure what to do. I could still condition using integration, but I don't know how to also use the information about whether on turn $j$ we are considering A or B, or that A went first. My thinking is that we use the parity of $j$, but I'm not quite sure how. Calling A's first turn $j = 1$, we have: if $j~\mathrm{mod}~2 = 0$ it's B's turn, and if $j~\mathrm{mod}~2 = 1$, it's A's turn. So if $j~\mathrm{mod}~2 = 1$, A loses with probability $x_{j - 1}$, but if $j~\mathrm{mod}~2 = 0$, A wins with probability $x_{j - 1}$? Another thought process I had was to consider the problem graphically. I was picturing a $1 \times 1$ square where the x and y axes represent A's and B's numbers respectively. A samples a number first, which immediately shrinks the "safe" region for B. This continues until one player loses. At each step, the height and width of the unit square take turns decreasing by $x_{j} - x_{j - 1}$. I also recognize that after A's first step, we are essentially back to the same game, just with a starting position of $x_1$ instead of $0$, so this might involve setting up a recurrence relation, but I don't know how. I'm happy with just hints on how to solve this, but I won't mind a solution either.
You can solve this combinatorially, without using calculus. All you need to look at is the probability that the first $n$ samples are in a certain order, and for any particular order this is simply $1/n!$. The game ends after exactly $n$ steps if and only if the first $n-1$ samples are in increasing order, and the last sample is not the largest. The last sample can occupy any of the $n$ positions in the ordering except the highest, so there are $n-1$ such sequences; hence the probability that the game ends after exactly $n$ steps is $\frac{n-1}{n!}$. And $A$ wins if the game ends after an even number of steps, so $A$'s probability of winning is $$\begin{align} \sum_{n=1}^\infty\frac{2n-1}{(2n)!} & = \sum_{n=1}^\infty\left(\frac{1}{(2n-1)!}-\frac{1}{(2n)!}\right) \\ & = \sum_{n=1}^\infty\frac{(-1)^{n+1}}{n!} \\ & = 1-\sum_{n=0}^\infty\frac{(-1)^n}{n!} \\ & = 1 - \frac{1}{e} \end{align}$$ This assumes nothing about the particular distribution of the samples, except that it is continuous. So the answer is the same whatever the distribution.
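Both the closed form and the game itself are easy to verify numerically; a short R check (my addition):

```r
# A quick check (my addition): the partial sums converge to 1 - 1/e, and a
# direct simulation of the game gives A the same winning probability.
n <- 1:20
sum((2 * n - 1) / factorial(2 * n))  # 0.6321206
1 - exp(-1)                          # 0.6321206

a_wins <- function() {               # TRUE if A (who moves on odd turns) wins
  last <- 0
  turn <- 0
  repeat {
    turn <- turn + 1
    x <- runif(1)
    if (x < last) return(turn %% 2 == 0)  # the player moving this turn lost
    last <- x
  }
}
set.seed(1)
mean(replicate(1e5, a_wins()))       # about 0.632
```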
{ "source": [ "https://stats.stackexchange.com/questions/550847", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/255114/" ] }
552,339
I'd like to understand why parametric tests are more powerful than their non-parametric alternatives. Is the word choice of "power" the same as statistical power? As I understand it, power just relates to the likelihood of getting a p-value that will correctly reject a false/incorrect null hypothesis, but I don't understand how this relates to statistical tests based on the normal distribution specifically.
This answer is mostly going to reject the premises in the question. I'd have made it a comment calling for a rephrasing of the question so as not to rely on those premises, but it's much too long, so I guess it's an answer. Why are parametric tests more powerful than non-parametric tests? As a general statement, the title premise is false. Parametric tests are not in general more powerful than nonparametric tests. Some books make such general claims but it makes no sense unless we are very specific about which parametric tests and which nonparametric tests under which parametric assumptions, and we find that in fact it's typically only true if we specifically choose the circumstances under which a parametric test has the highest power relative to any other test -- and even then, there may often be nonparametric tests that have equivalent power in very large samples (with small effect sizes). Is the word choice of "power" the same as statistical power? Yes. However, to compute power we need to specify a precise set of assumptions and a specific alternative. I don't understand how this relates to statistical tests based on the normal distribution specifically. Neither the term "parametric" nor "nonparametric" relates specifically to the normal distribution. See the opening paragraph here: https://en.wikipedia.org/wiki/Parametric_statistics Parametric statistics is a branch of statistics which assumes that sample data comes from a population that can be adequately modeled by a probability distribution that has a fixed set of parameters. $^{[1]}$ Conversely a non-parametric model does not assume an explicit (finite-parametric) mathematical form for the distribution when modeling the data. However, it may make some assumptions about that distribution, such as continuity or symmetry. Some textbooks (particularly ones written for students in some application areas, typically by academics in those areas) get this definition quite wrong. Beware; in my experience, if this term is misused, much else will tend to be wrong as well. Can we make a true statement that says something like what's in your question? Yes, but it requires heavy qualification. If we use the uniformly most powerful test (should such a test exist) under some specific distributional assumption, and that distributional assumption is exactly correct, and all the other assumptions hold, then a nonparametric test will not exceed that power (otherwise the parametric test would not have been uniformly most powerful after all). However - in spite of stacking the deck in favour of the parametric test like that - in many cases you can find a nonparametric test that has the same large sample power in exactly that stacked-deck situation -- it just won't be one of the common rank-based tests you're likely to have seen before. What we're doing in the parametric case is choosing a test statistic which contains all the information about the difference from the null, given the distributional assumption and the specific form of the alternative. If you optimize power under some set of assumptions, obviously you can't beat it under those assumptions, and that's the situation we're in. Conover's book Practical Nonparametric Statistics has a section discussing tests with an asymptotic relative efficiency (ARE) of 1, relative to tests that assume normality. This ARE is computed while that normal assumption actually holds. He focuses there on normal scores tests (score-based rank tests which I would tend to avoid in most typical situations for other reasons), but it does help to illustrate that the claimed advantages for parametric tests may not always be so clear. It's the next section (on permutation tests, under "Fisher's Method of Randomization") where I tend to focus. In any case, such stacking of the deck in favour of the parametric assumption still doesn't universally beat nonparametric tests. Of course, in a real-world testing situation such neatly 'stacked decks' don't occur. The parametric model is not a fact about our real data, but a model -- a convenient approximation. As George Box put it, All models are wrong . In this case the questions we would want to ask are (a) "is there a nonparametric test that's essentially as powerful as this parametric test in the situation where the parametric assumption holds?" (to which the answer is often 'yes') and (b) "how far do we need to modify the exact parametric assumption before it is less powerful than some suitable nonparametric test?" (which is often "hardly at all"). In that case, if you don't know which of the two sets of circumstances you're in, why would you prefer the parametric test? Let me address a common test. Consider the two-sample equal-variance t-test, which is uniformly most powerful for a one-sided test of a shift in mean when the population is exactly normal. (a) Is it more powerful than every nonparametric test? Well, no, in the sense that there are nonparametric tests whose asymptotic relative efficiency is 1 (that is, if you look at the ratio of sample sizes required to achieve the same power at a given significance level, that ratio goes to 1 in large samples); specifically there are permutation tests (e.g. based on the same statistic) with this property. The asymptotic power is also a good guide to the relative power at typical sample sizes (if you make sure the tests are being performed at the same actual significance level). (b) Do you need to modify the situation much before some non-parametric test has better power? As I suggested above, in this location-test under normality case, hardly at all. Even if we restrict consideration to just the most commonly used rank tests (which is limiting our potential power), you don't need to make the distribution very much more heavy-tailed than the normal before the Wilcoxon-Mann-Whitney test typically has better power. If we're allowed to choose something with better power at the normal (though the Wilcoxon-Mann-Whitney has excellent performance there), it can kick in even quicker. It can be extremely hard to tell whether you're sampling from a population with a very slightly heavier tail than the one you assumed, so having slightly better power (at best) in a situation you cannot be confident holds may be an extremely dubious advantage. In any case you should not try to tell which situation you're in by looking at the sample you're conducting the test on (at least not if it will affect your choice of test), since that data-based test choice will impact the properties of your subsequently chosen test.
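To illustrate how the power comparison can flip with the tail weight of the distribution, here is a rough R simulation sketch; the sample size, shift and number of replications are arbitrary choices:

# Estimated power of the t-test vs the Wilcoxon-Mann-Whitney test
power_sim <- function(rgen, shift, n = 30, nsim = 2000) {
  res <- replicate(nsim, {
    x <- rgen(n)
    y <- rgen(n) + shift
    c(t = t.test(x, y, var.equal = TRUE)$p.value < 0.05,
      w = wilcox.test(x, y)$p.value < 0.05)
  })
  rowMeans(res)  # proportion of rejections = estimated power
}
set.seed(1)
power_sim(rnorm, shift = 0.7)               # normal data: t-test slightly ahead
power_sim(function(n) rt(n, df = 3), 0.7)   # heavy tails: Wilcoxon typically ahead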
{ "source": [ "https://stats.stackexchange.com/questions/552339", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/337106/" ] }
552,464
Quote from Babe Ruth: Every strike brings me closer to the next home run. As I understand memorylessness, this is meaningless. For every at-bat, there is a certain probability that he will strike, and there is a certain probability that he will hit a home run, and that's that. The likelihood of a home run at any particular point in time does not increase as strikes accrue. However, I have an intuitive understanding of what he means. Is there some statistically rigorous way to express it or make sense of it? Maybe it makes sense for someone looking back on Babe Ruth's career with the benefit of hindsight. Or, maybe if we imagine an omniscient deity who can see the entire timeline of the universe at once. The deity can indeed see that, from any particular moment, there are N strikes remaining before Ruth hits the next home run. Another strike reduces that number to N-1. So, indeed, every strike brings him closer to the next home run. Epilogue: If I could go back in time and rewrite this question, I would have omitted all the baseball references and simply described a guy rolling dice, hoping for a seven. He says, "I'm hoping for a seven, but I'm not bothered when I get something else, because every roll of the dice brings me closer to that seven!" Assuming he eventually rolls a seven, is his assertion the gambler's fallacy? Why or why not? Thanks to @Ben for articulating that this is not the gambler's fallacy. It would have been the gambler's fallacy if he had instead said, "Every roll of the dice which does not result in a seven makes it more likely that the next roll results in a seven." The guy didn't make any such statement, and he didn't make any statement at all about probability, merely about the passage of time. By assuming that there is a seven in his future, we have made it undeniably true that every roll of the dice brings him closer to the seven. In fact, it is trivially true. Every second that ticks by, even when he is sleeping, brings him closer to that seven.
It is both meaningful and (usually) correct You are overcomplicating this by bringing probability into a simple non-probabilistic assertion. You need not invoke an omniscient deity in order to accept that there is a reality that exists independently of knowledge of it. (You seem to be operating under the assumption that reality is only admissible to discussion if there is an omniscient being with total knowledge of it; this is a reasonably common misconception of probability, which is examined in this related question .) The simplest rigorous examination of this statement is a non-statistical analysis based on looking at the underlying population of values pertaining to all the balls Babe Ruth ever faced. Let $X_1,...,X_N$ be the ordered career outcomes of all balls faced by Babe Ruth, with $X_i = \bullet$ denoting a strike and $X_i = \diamond$ denoting a home-run (we need not specify the notation for other possible outcomes). At the end of ball $n$ the number of balls until the next home-run is: $$B_n \equiv \min \{ k \in \mathbb{N} | X_{n+k} = \diamond \}.$$ Now, we know that a strike and a home-run are mutually exclusive --- i.e., no single ball can be both. Consequently, if ball $n+1$ is a strike (i.e., if $X_{n+1} = \bullet$ ) and if $B_n<\infty$ (i.e., if Babe has at least one home-run left in his career) then we can easily show that $B_{n+1} = B_n-1$ . This confirms Babe's statement that his strike brings him (one ball) closer to his next home-run. The only exception to this is when Babe gets to the point where he has already hit his last home-run, so that there are no more home-runs left to come in his career. At this point we have $B_n = \infty$ and getting a strike on ball $n+1$ still gives $B_{n+1} = \infty$ . In this latter case Babe is no closer to the next home-run, because there is no next home-run. Of course, at the time of Babe's last home-run, he probably didn't know that would be his last. (According to this historical account , Babe's last home-run was on 25 May 1935. He went on to play five more times without another home-run.) At that point his saying would be wrong, and looking back in hindsight we now know this. Ultimately, this statement by Babe Ruth is no more controversial than if he asserted, "The elapsing of time spent not getting a home-run brings me closer to my next home-run". That is of course also true, setting aside the situation where he has no future home-runs to get closer to. Finally, I do not agree with other comments/answers here that assert that this is the gambler's fallacy. It could (but might not) be a manifestation of the gambler's fallacy if he instead said, "Every strike makes it more likely that I will get a home-run in the future". That could be an example of the gambler's fallacy because it would assert that a bad outcome now makes a good outcome in the future more likely. (On the other hand, if strikes are not independent then it might not be.) In any case, merely asserting that the elapsing of time required for a bad outcome to occur now makes a subsequent good outcome closer in time is not the gambler's fallacy, and is not a fallacy at all.
{ "source": [ "https://stats.stackexchange.com/questions/552464", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/235632/" ] }
552,497
When doing hypothesis testing, we calculate the distribution of the test statistic (for example z ) under the null hypothesis and then compare the actual z (the one calculated from our data) to it. Why can't we instead calculate the distribution of the z statistic from our data (by bootstrapping the mean, for example) (or calculate a 95% CI) and compare it to 0, like it is shown below? I'd also like to see an example (or a simulation) where this fails, not just the theory behind it.
{ "source": [ "https://stats.stackexchange.com/questions/552497", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/172965/" ] }
552,531
Why is the exponential family so important in statistics? I was recently reading about the exponential family within statistics. As far as I understand, the exponential family refers to any probability distribution function that can be written in the following format (notice the "exponent" in this equation): $$f(x \mid \theta) = h(x)\, \exp\!\big(\eta(\theta)\, T(x) - A(\theta)\big)$$ This includes common probability distribution functions such as the normal distribution , the gamma distribution , the Poisson distribution , etc. Probability distributions from the exponential family are often used in regression problems together with a "link function" (e.g., in count data settings, the response variable can be related to the covariates through a Poisson distribution) - probability distribution functions that belong to the exponential family are often used due to their "desirable mathematical properties". For example, these properties are the following: (1) they admit sufficient statistics with a fixed number of values; (2) they have conjugate priors; (3) the resulting posterior predictive distribution has a closed form; and (4) in variational Bayes, the variational objective can be written in closed form. Why are these properties so important? A) The first property is about "sufficient statistics". A "sufficient statistic" is a statistic that provides more information for any given data set/model parameter compared to any other statistic. I am having trouble understanding why this is important. In the case of logistic regression, the logit link function is used (part of the exponential family) to link the response variable with the observed covariates. What exactly are the "statistics" in this case (e.g., in a logistic regression model, do these "statistics" refer to the "mean" and "variance" of the beta-coefficients of the regression model)? What are the "fixed values" in this case? B) Exponential families have conjugate priors. In the Bayesian setting, a prior p(theta) is called a conjugate prior if the posterior distribution p(theta | x) is in the same family as the prior. If a prior is a conjugate prior, this means that a closed-form solution exists and numerical techniques (e.g., MCMC ) are not required to sample the posterior distribution. Is this correct? C) Is the third property essentially similar to the second property? D) I don't understand the fourth property at all. Variational Bayes is an alternative to MCMC sampling techniques that approximates the posterior distribution with a simpler distribution - this can save computational time for high-dimensional posterior distributions with big data. Does the fourth property mean that variational Bayes with conjugate priors in the exponential family has closed-form solutions? So any Bayesian model that uses the exponential family does not require MCMC - is this correct? References: Exponential family Sufficient statistic Conjugate prior
Excellent questions. Regarding A: A sufficient statistic is nothing more than a distillation of the information that is contained in the sample with respect to a given model . As you would expect, if you have a sample $x_i \sim N(\mu,\sigma^2)$ for $i \in \{1, \ldots, N\}$ and each independent, it is clear that so long as we calculate the sample mean and sample variance, it doesn't matter what the values of each $x_i$ are. In linear regression (easier to talk about than logistic in this context), the sampling distribution of the unknown coefficient vector (for known variance) is $N\!\left((\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y},\ \sigma^2(\mathbf{X}^\top\mathbf{X})^{-1}\right)$ , so as long as these final quantities are identical, inference based thereupon will be too. This is the idea of sufficiency. Note that in the $N(\mu,\sigma^2)$ example, the sufficient statistic comprises just two numbers: $\hat{\mu}=\frac{1}{N}\sum_{i=1}^N x_i$ and $\frac{1}{N}\sum_{i=1}^N (x_i-\hat{\mu})^2$ , no matter how big our sample size $N$ is (and assuming $N>2$ ). Likewise, the vector $(\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}$ is of dimension $P$ and $\sigma^2(\mathbf{X}^\top\mathbf{X})^{-1}$ of dimension $P\times P$ (here $P$ is the dimension of the design matrix), which are both independent of $N$ (though, technically, the matrix $\sigma^2(\mathbf{X}^\top\mathbf{X})^{-1}$ is just a constant under our assumptions). So in these examples, the sufficient statistic has a fixed number of values (not fixed values ), or as I would put it, fixed dimension. Let's note three more things. First, that there is no such thing as the sufficient statistic for a distribution; rather, there are many possible statistics which may be sufficient, and which may be of different dimension. Indeed, our second thing to discuss is that the entire sample itself, since it naturally contains all information contained in itself, is always a sufficient statistic. This is a trivial case, but an important one, as in general one cannot always expect to find a sufficient statistic of dimension less than $N$ . And the final thing to note is model specificity: that's why I wrote with respect to a given model above. Changing your likelihood will change the sufficient statistics, at least potentially, for a given dataset. Regarding B : What you're saying is correct, but in addition to allowing analytic posteriors in the univariate case, conjugacy has serious benefits in the context of Bayesian hierarchical models estimated via MCMC. This is because conditional posteriors are also available in closed form. So we can actually accelerate Metropolis-within-Gibbs style MCMC algos with conjugacy. Regarding C: It's definitely a similar idea, but I do want to make clear that we're talking about two different distributions here: "posterior" versus "posterior predictive". As the name implies, both of these are posterior distributions, which means that they are distributions of an unknown variable conditioned on our known data. A "posterior" plain and simple usually refers to something like $P(\mu, \sigma^2| \{x_1, \ldots, x_N\})$ from our normal example above: a distribution of unknown parameters defined in the data generating distributions. In contrast, a "posterior predictive" gives the distribution of a hypothetical $N+1$ 'st data point $x_{N+1}$ conditional on the observed data: $P(x_{N+1}| \{x_1, \ldots, x_N\})$ . Notice that this is not conditional on the parameters $\mu$ and $\sigma^2$ : they had to be integrated out.
It is this additional integral that conjugacy guarantees to have a closed form. Regarding D: In the context of Variational Bayes (VB), you have some posterior distribution $P(\theta|X)$ where $\theta$ is some vector of $P$ parameters and $X$ are some data. Rather than trying to generate a sample from it like MCMC, we are instead going to use an approximate posterior distribution that's easy to work with and pretty close to the true one. That's called a variational distribution and is denoted $Q_\eta(\theta)$ . Notice that our variational distribution is indexed by variational parameters $\eta$ . Variational parameters are nothing like the parameters we do Bayesian inference on, and are nothing like our data. They don't have a distribution associated with them and they don't have some hypothetical role generating the data. Rather, they are chosen as a result of an iterative optimization algorithm. The whole idea of variational inference is to define some measure of dissimilarity between the variational distribution and the true posterior and then minimize that measure with respect to the parameters $\eta$ . We'll denote the result of that optimization process by $\hat{\eta}(X)$ . At that point, hopefully $Q_{\hat{\eta}(X)}(\theta)$ is pretty close to $P(\theta|X)$ , and if we do inferences using $Q_{\hat{\eta}(X)}(\theta)$ instead we'll get similar answers. Now where does conjugacy fit in? A popular way to measure dissimilarity is the reverse KL cost: $$ \hat{\eta}(X) := \underset{\eta}{\textrm{argmin}}\, \mathbb{E}_{\theta\sim Q_\eta}\left[\log \frac{Q_{\eta}(\theta)}{P(\theta|X)}\right] $$ This integral cannot be solved in terms of simple functions in general. However, it is available in closed form when: (1) we use a conjugate prior to define $P(\theta|X)$ ; (2) we assume that the variational distribution factorizes into independent pieces, in other words that $q_\eta(\theta)=\prod_{j=1}^P q_{j,\eta_j}(\theta_j)$ ; and (3) we further restrict ourselves to a particular family for each $q_{j,\eta_j}$ (which is determined by the likelihood). So it's not that the variational posterior is available in closed form. Rather, it's that the cost function which defines the variational posterior is available in closed form. The cost function being closed form makes computing the variational distribution an easier optimization problem, since we can analytically compute function values and gradients.
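To make the conjugacy point concrete, here is a minimal R sketch for the simplest case, a normal mean with known variance; all the numbers below are arbitrary illustrative choices:

# Conjugate updating for a normal mean with known variance sigma2.
# The posterior depends on the data only through (N, xbar): a fixed-size
# sufficient statistic. The posterior is closed form, so no MCMC is needed.
set.seed(1)
sigma2 <- 4
x <- rnorm(50, mean = 3, sd = sqrt(sigma2))
N <- length(x); xbar <- mean(x)      # the sufficient statistic

mu0 <- 0; tau2 <- 10                 # N(mu0, tau2) conjugate prior on the mean
post_var  <- 1 / (1 / tau2 + N / sigma2)
post_mean <- post_var * (mu0 / tau2 + N * xbar / sigma2)
c(post_mean, post_var)               # closed-form posterior N(post_mean, post_var)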
{ "source": [ "https://stats.stackexchange.com/questions/552531", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/77179/" ] }
553,289
Suppose there is a coin that has a 5% chance of landing on HEADS and a 95% chance of landing on TAILS. Based on a computer simulation, I want to find out the following : The average number of flips before observing HEADS, TAILS, HEADS (note: not the probability, but the number of flips) Using the R programming language, I tried to make this simulation (this simulation keeps flipping a coin until HTH and counts the number of flips until this happens - it then repeats this same process 10,000 times): results <- list() for (j in 1:10000) { response_i <- '' i <- 1 while (response_i != 'HTH') { response_i <- c("H","T") response_i <- sample(response_i, 3, replace=TRUE, prob=c(0.05, 0.95)) response_i <- paste(response_i, collapse = '') iteration_i = i if (response_i == 'HTH') { run_i = data.frame(response_i, iteration_i) results[[j]] <- run_i } i <- i + 1 } } data <- do.call('rbind', results) We can now see a histogram of this data: hist(data$iteration_i, breaks = 500, main = "Number of Flips Before HTH") We can also see the summary of this data: summary(data$iteration_i) Min. 1st Qu. Median Mean 3rd Qu. Max. 1.0 119.0 288.0 413.7 573.0 3346.0 My Question: Could any "mathematical equation" have predicted the results of this simulation in advance? Could any "formula" have shown that the average number of flips to get HTH would have been 413? Can Markov Chains be used to solve this problem? Based on the "skewed" shape of this histogram, is the "arithmetical mean" (i.e. mean = sum(x_i)/n) a "faithful" representation of the "true mean"? Looking at the above histogram, we can clearly see that you are more likely to see HTH before 437 iterations compared to seeing HTH after 437 iterations, e.g. (on 100,000 simulations, the new average is 418): nrow(data[which(data$iteration_i <418), ]) 63184 nrow(data[which(data$iteration_i > 418), ]) 36739 For such distributions, is there a better method to find out the "expectation" of this experiment? Thanks!
At any given point in the game, you're $3$ or fewer "perfect flips" away from winning. For example, suppose you've flipped the following sequence so far: $$ HTTHHHTTTTTTH $$ You haven't won yet, but you could win in two more flips if those two flips are $TH$ . In other words, your last flip was $H$ so you have made "one flip" worth of progress toward your goal. Since you mentioned Markov Chains, let's describe the "state" of the game by how much progress you have made toward the desired sequence $HTH$ . At every point in the game, your progress is either $0$ , $1$ , or $2$ --if it reaches $3$ , then you have won. So we'll label the states $0$ , $1$ , $2$ . (And if you want, you can say that there's an "absorbing state" called "state $3$ ".) You start out in state $0$ , of course. You want to know the expected number of flips, from the starting point, state $0$ . Let $E_i$ denote the expected number of flips, starting from state $i$ . At state $0$ , what can happen? You can either flip $H$ , and move to state $1$ , or you flip $T$ and remain in state $0$ . But either way, your "flip counter" goes up by $1$ . So: $$ E_0 = p (1 + E_1) + (1-p)(1 + E_0), $$ where $p = P(H)$ , or equivalently $$ E_0 = 1 + p E_1 + (1-p) E_0. $$ The " $1+$ " comes from incrementing your "flip counter". At state $1$ , you want $T$ , not $H$ . But if you do get an $H$ , at least you don't go back to the beginning--you still have an $H$ that you can build on next time. So: $$ E_1 = 1 + p E_1 + (1-p) E_2. $$ At state $2$ , you either flip $H$ and win, or you flip $T$ and go all the way back to the beginning. $$ E_2 = 1 + (1-p) E_0. $$ Now solve the three linear equations for the three unknowns. In particular you want $E_0$ . I get $$ E_0 = \left( \frac{1}{p} \right) \left( \frac{1}{p} + \frac{1}{1-p} + 1 \right), $$ which for $p=1/20$ gives $E_0 = 441 + 1/19 \approx 441.0526$ . (So the mean is not $413$ . In my own simulations I do get results around $441$ on average, at least if I do around $10^5$ or $10^6$ trials.) In case you are interested, our three linear equations come from the Law of Total Expectation . This is really the same as the approach in Stephan Kolassa's answer , but it is a little more efficient because we don't need as many states. For example, there is no real difference between $TTT$ and $HTT$ --either way, you're back at the beginning. So we can "collapse" those sequences together, instead of treating them as separate states. Simulation code (two ways, sorry for using Python instead of R): # Python 3 import random def one_trial(p=0.05): # p = P(Heads) state = 0 # states are 0, 1, 2, representing the progress made so far flip_count = 0 # number of flips so far while True: flip_count += 1 if state == 0: # empty state state = random.random() < p # 1 if H, 0 if T elif state == 1: # 'H' state += random.random() >= p # state 1 (H) if flip H, state 2 (HT) if flip T else: # state 2, 'HT' if random.random() < p: # HTH, game ends! return flip_count else: # HTT, back to empty state state = 0 def slow_trial(p=0.05): sequence = '' while sequence[-3:] != 'HTH': if random.random() < p: sequence += 'H' else: sequence += 'T' return len(sequence) N = 10**5 print(sum(one_trial() for _ in range(N)) / N) print(sum(slow_trial() for _ in range(N)) / N)
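If you prefer to let software do the algebra, the three linear equations can also be solved numerically; here is a short R sketch:

# Solve the system for E0, E1, E2 with P(H) = p
p <- 0.05
A <- rbind(c( p,       -p,       0),         # from E0 = 1 + p*E1 + (1-p)*E0
           c( 0,        1 - p,  -(1 - p)),   # from E1 = 1 + p*E1 + (1-p)*E2
           c(-(1 - p),  0,       1))         # from E2 = 1 + (1-p)*E0
solve(A, c(1, 1, 1))                     # E0, E1, E2; E0 is about 441.0526
(1 / p) * (1 / p + 1 / (1 - p) + 1)      # matches the closed form for E0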
{ "source": [ "https://stats.stackexchange.com/questions/553289", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/77179/" ] }
555,450
I'm aware of the Law(s) of Large Numbers, concerning the means. However, intuitively, I'd expect not just the mean, but also the observed relative frequencies (or the histogram, if we have a continuous distribution) to approach the theoretical PMF/PDF as the number of trials goes to infinity. Is my intuition wrong? Always, or only for some degenerate cases (e.g. Cauchy)? If not, is there a special name for that law?
While the law of large numbers is framed in terms of "means" this actually gives you a large amount of flexibility to show convergence of other types of quantities. In particular, you can use indicator functions to get convergence results for the probabilities of any specified event. To see how to do this, suppose we start with a sequence $X_1,X_2,X_3 ,... \sim \text{IID } F_X$ and note that the law of large numbers says that (in various probabilistic senses) we have the following convergence: $$\frac{1}{n} \sum_{i=1}^n X_i \rightarrow \mathbb{E}(X) \quad \quad \quad \quad \quad \text{as } n \rightarrow \infty.$$ In the sections below I will show how you can use this basic result to show that the empirical CDF converges to the true CDF of the underlying distribution in certain useful senses. This will also show you how the law of large numbers can be applied in a creative way to prove convergence results for other things that don't look like "means" of quantities (but actually are). Pointwise convergence of the empirical CDF to the true CDF: In your question you are interested in the convergence of the empirical distribution function to the true distribution function $F_X$ . Let's start by looking at a particular point $x$ by examining the sequence of values $Y_1,Y_2,Y_3 ,...$ defined by $Y_i \equiv \mathbb{I}(X_i \leqslant x)$ . This latter sequence is also IID, so the law of large numbers says that (in various probabilistic senses) we have the following convergence: $$\frac{1}{n} \sum_{i=1}^n Y_i \rightarrow \mathbb{E}(Y) \quad \quad \quad \quad \quad \text{as } n \rightarrow \infty.$$ Now, at the point $x$ the empirical distribution function for the sequence $\mathbf{X}$ and the true CDF for the distribution can be written respectively as: $$\begin{align} \hat{F}_n(x) &\equiv \frac{1}{n} \sum_{i=1}^n \mathbb{I}(X_i \leqslant x) = \frac{1}{n} \sum_{i=1}^n Y_i, \\[12pt] F_X(x) &\equiv \mathbb{P}(X_i \leqslant x) = \mathbb{E}(Y). \\[6pt] \end{align}$$ (The latter result follows from the fact that $\mathbb{E}(Y) = \mathbb{P}(Y=1)$ for any indicator variable $Y$ .) We can therefore re-frame the previous convergence statement from the law of large numbers to give the pointwise convergence result: $$\hat{F}_n(x) \rightarrow F_X(x) \quad \quad \quad \quad \quad \text{as } n \rightarrow \infty.$$ You can see that this demonstrates that the empirical CDF converges pointwise to the true CDF for IID data; this is a direct consequence of the law of large numbers. Specifically, the weak law of large numbers establishes pointwise convergence in probability, and the strong law of large numbers establishes pointwise convergence almost surely. Uniform convergence of the empirical CDF to the true CDF: To go further than the above result, you need to use the uniform law of large numbers —or some other similar theorem—to establish uniform convergence of the empirical CDF to the true CDF. If you use the uniform law of large numbers then you can establish uniform convergence of the empirical CDF under some restrictive assumptions on the underlying CDF. However, there is actually a stronger theorem called the Glivenko–Cantelli theorem that establishes uniform convergence of the empirical CDF to the true CDF (almost surely) for any IID sequence of data. 
That is, the theorem proves that: $$\sup_x | \hat{F}_n(x) - F_X(x) | \overset{\text{a.s.}}{\rightarrow} 0 \quad \quad \quad \quad \quad \text{as } n \rightarrow \infty.$$ If you would like to learn more about this part, it is worth having a look at the proofs of the uniform law of large numbers and the Glivenko–Cantelli theorem to see how each of them works to establish uniform convergence. The former theorem is broader, but it comes with some restrictions on the input function. The latter theorem applies specifically to the empirical CDF of IID data, but it establishes uniform convergence (almost surely) without any additional assumptions.
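As a quick illustration in R, the sup-distance between the empirical CDF and the true CDF shrinks as $n$ grows; the standard normal is an arbitrary choice here, and the sup is approximated on a fine grid:

# Approximate sup-distance between the ECDF and the true CDF for growing n
set.seed(1)
sup_dist <- function(n) {
  x <- rnorm(n)
  Fn <- ecdf(x)
  grid <- seq(-4, 4, length.out = 1000)   # grid approximation to the supremum
  max(abs(Fn(grid) - pnorm(grid)))
}
sapply(c(10, 100, 1000, 10000, 1e5), sup_dist)  # steadily decreasing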
{ "source": [ "https://stats.stackexchange.com/questions/555450", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/169343/" ] }
556,085
This is the screenshot I took from a video on linear regression made by Luis Serrano. He explained linear regression step by step (scratch version). The first step was to start with a random line. The question is do we actually draw a random line, or instead do we perform some calculation like taking an average of y values and initially draw a line. Because if we take any random line it might not fall near any points at all. Maybe it will fall on the 3rd quadrant of the coordinate system where there are no points in this case.
NO. What we want to find are the parameters that result in the least amount of error, and OLS defines error as the squared differences between observed values $y_i$ and predicted values $\hat y_i$ . Error often gets denoted by an $L$ for "loss". $$ L(y, \hat y) = \sum_{i = 1}^N \bigg(y_i - \hat y_i\bigg)^2 $$ We have our regression model, $\hat y_i =\hat\beta_0 + \hat\beta_1x_i$ , so the $\hat y$ is a function of $\hat\beta_0$ and $\hat\beta_1$ . $$ L(y, \hat\beta_0, \hat\beta_1) = \sum_{i = 1}^N \bigg(y_i - (\hat\beta_0 + \hat\beta_1x_i)\bigg)^2 $$ We want to find the $\hat\beta_0$ and $\hat\beta_1$ that minimize $L$ . What the video does is simulate pieces of the entire "loss function". For $\hat\beta_0 = 1$ and $\hat\beta_1 = 7$ , you get a certain loss value. For $\hat\beta_0 = 1$ and $\hat\beta_1 = 8$ , you get another loss value. One approach to finding the minimum is to pick random values until you find one that results in a loss value that seems low enough (or you're tired of waiting). Much of the deep learning work uses variations of this, with tricks like stochastic gradient descent to make the algorithm get (close to) the right answer in a short amount of time. In OLS linear regression, however, calculus gives us a solution to the minimization problem, and we do not have to play such games. $$\hat\beta_1=\frac{cov(x,y)}{var(x)}\\ \hat\beta_0=\bar y-\hat\beta_1\bar x$$
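A tiny R check (on arbitrary toy data) that these closed-form expressions agree with what lm() returns:

# Closed-form OLS estimates vs lm() on toy data
set.seed(1)
x <- runif(50, 0, 10)
y <- 2 + 3 * x + rnorm(50)
b1 <- cov(x, y) / var(x)          # slope
b0 <- mean(y) - b1 * mean(x)      # intercept
c(b0, b1)
coef(lm(y ~ x))                   # identical estimates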
{ "source": [ "https://stats.stackexchange.com/questions/556085", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/292184/" ] }
558,754
In my organisation, we are embarking on an AI initiative where we try to identify business use cases and solve them using traditional ML algorithms. However, our business users say that before they even take part in brainstorming, selecting, and reducing the feature space, they are asking the data folks to do a detailed scan and run experiments to find out which features are, or look like they might be, the most important... Example: Let's say my data has 200 features and 30K rows. Our business team says that they will not be able to guide us on the most relevant features to look at, because they think this might bias the results. So, they want the data folks to find the important features through experiments. Later, take these features and go to the business team to check their relevance. Basically, no domain expert input until they get some confidence in what the algorithm outputs (for relevant features which have influence on the target variable). Is this how it normally works in real-world AI projects? Is this a better approach to start with an AI project? Is there anything that we should be aware of?
This will probably be closed quickly as opinion-based, but here is a point you may want to consider. 200 features is a lot , and 30k rows is less than it sounds like. A "fishing expedition" to find relevant features is quite likely to overfit and select spurious features . The danger is that when you go to your domain experts with these features you "found" to be relevant, they may not push back. Instead, it's a very common human reaction to start telling stories about how these features are indeed useful, because we humans are very good at explaining stuff, even stuff that is simply noise. Talking to your domain experts first will not completely avoid this problem, but it may reduce the number of wild goose chases. You may be interested in my answer to "How to know that your machine learning problem is hopeless?" .
{ "source": [ "https://stats.stackexchange.com/questions/558754", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/241460/" ] }
559,808
Let's say I have $N$ covariates in my regression model, and they explain 95% of the variation of the target set, i.e. $r^2=0.95$ . Suppose there is multicollinearity among these covariates and PCA is performed to reduce the dimensionality. If the principal components explain, say, 80% of the variation (as opposed to 95%), then I have incurred some loss in the accuracy of my model. Effectively, if PCA solves the issue of multicollinearity at the cost of accuracy, is there any benefit to it, other than the fact that it can speed up model training and can reduce collinear covariates into statistically independent and robust variables?
Your question is implicitly assuming that reducing explained variation is necessarily a bad thing. Recall that $R^2$ is defined as: $$ R^2 = 1 - \frac{SS_{res}}{SS_{tot}} $$ where $SS_{res} = \sum_{i}{(y_i - \hat{y}_i)^2}$ is the residual sum of squares and $SS_{tot} = \sum_{i}{(y_i - \bar{y})^2}$ is the total sum of squares. You can easily get an $R^2 = 1$ (i.e. $SS_{res} = 0$ ) by fitting a line that passes through all of the (training) points (though this, in general, requires a more flexible model as opposed to a simple linear regression, as noted by Eric), which is a perfect example of overfitting . So reducing explained variation isn't necessarily bad, as it could result in better performance on unseen (test) data. PCA can be a good preprocessing technique if there are reasons to believe that the dataset has an intrinsic lower-dimensional structure.
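Here is a small R sketch of that overfitting point; the degree-14 polynomial and the sample sizes are arbitrary choices:

# Training R^2 can be pushed to 1 by a flexible model that generalizes badly
set.seed(1)
n <- 15
x <- runif(n); y <- x + rnorm(n, sd = 0.3)
fit_lin  <- lm(y ~ x)
fit_poly <- lm(y ~ poly(x, 14))   # degree 14 interpolates the 15 training points
summary(fit_lin)$r.squared        # modest
summary(fit_poly)$r.squared       # essentially 1 on the training data

# Out-of-sample squared error tells the opposite story
x_new <- runif(1000); y_new <- x_new + rnorm(1000, sd = 0.3)
mean((y_new - predict(fit_lin,  data.frame(x = x_new)))^2)
mean((y_new - predict(fit_poly, data.frame(x = x_new)))^2)  # far larger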
{ "source": [ "https://stats.stackexchange.com/questions/559808", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/344662/" ] }
560,098
This concerns machine learning methods, but dealing with arbitrary floating-point precision. It sounds cool to me but I am unsure if this would be of any use... Has anyone ever encountered cases where, for example, a logistic regression task needs to be carried out with extremely high precision? Can it make sense? I am looking for opinions from ML experts. Edit: If not arbitrarily high, maybe quadruple precision could be useful?
No, it is almost never a problem. First of all, there's measurement error: even physicists account for it, and the rest of us are rarely lucky enough to have measurements as precise as theirs. Second, you are dealing with sampled data, so there is error due to sampling. Finally, we have all kinds of biases and noise in the data. In the end, we are usually far from having precise data, so we don't need algorithms more precise than the data itself. More than this, there is research showing that you can train neural networks with low (8-bit, 2-bit) precision without a performance drop . Some argue that this might even have a regularizing effect. It can probably be extended to some degree to other models.
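One way to see this in R: coarsen the data to three significant digits (a crude stand-in for limited measurement precision) and note that the fitted coefficients barely move. The simulation settings are arbitrary:

# Coefficients from full-precision vs deliberately coarsened data
set.seed(1)
x <- rnorm(1000)
y <- 1 + 2 * x + rnorm(1000, sd = 0.5)
coef(lm(y ~ x))
coef(lm(signif(y, 3) ~ signif(x, 3)))  # 3 significant digits: nearly identical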
{ "source": [ "https://stats.stackexchange.com/questions/560098", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/174383/" ] }
561,035
Can we say that the value of the cumulative distribution function at the mean, $F(\text{mean}) = P(X < \text{mean})$ , is always 0.5 for all kinds of distributions (even ones that are not symmetric)?
No, this is false. That point is the median, and it is not equal to the mean in all cases, for example, an exponential distribution. $$ X\sim \exp(1)\\ \mathbb E[X]=1\\ \operatorname{median}(X)=\log 2\approx 0.69 $$ We can simulate this in software, such as R. set.seed(2021) X <- rexp(10000, 1) mean(X) # approximately 1 median(X) # approximately 0.69 EDIT The asymmetric Laplace distribution is an example where the distribution extends over the entire real line, not just the positive numbers. (In more technical language, the PDF has support on all of $\mathbb R$ .)
{ "source": [ "https://stats.stackexchange.com/questions/561035", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/347112/" ] }
561,933
I am in the process of solving a Machine Learning challenge, and I want to do it the right way. I did some exploratory data analysis and I wanted to check the distribution of the data. The figure below displays box plots of each of the continuous numerical dataset features. According to my understanding, several features do contain outliers, but after reading some articles, I encountered cases where outliers are mistaken for a Pareto distribution, and removing them leads to information loss. Could you please explain which features contain outliers and which do not?
Outliers, like beauty and much else, are often in the eye of the beholder. Identifying outliers is often treated as if it were a matter of knowing that your data are a mix of gazelles (regular beasts) and giraffes (probably large, awkward and unwanted), so the problem is to tell which is which. In real applications, it may not be obvious that there are two such distinct subsets, even in abstract principle. And even if the principle is accepted, it can be hard in practice to tell them apart: imagine how you would proceed if the variables were height, weight, and number of legs and there are known to be young giraffes in the data. Box plots are routinely misused in this context. First of all, although box plots were (re-)introduced by John W. Tukey in the 1970s, he himself played with several different conventions, and later people haven't stopped inventing different variations -- which can be worthwhile experimentation. Unfortunately users often fail to explain in reports which detailed rules their software uses -- which is not helpful. Here I guess that the convention being followed is one Tukey introduced: plot data points individually whenever any is more than 1.5 IQR from the nearer quartile. Here IQR is the interquartile range, or upper quartile MINUS lower quartile. Tukey himself regarded this just as a way of identifying data points that need to be thought about, and not at all as a reliable criterion for identifying points that should be rejected. Indeed, sometimes outliers (and skewness) are a signal that data should be analysed on a transformed scale and/or that they need resistant or robust summaries. I won't try to say which of your data points are outliers, because: 1. The whole idea that some points are expected to be BAD outliers or problematic, and the rest GOOD, needs to be argued in each case. For example, in banking some transactions are rogue and most are honest, so there is strong concern to identify rogues, which is rarely easy. In many other fields, strongly skewed distributions are entirely expected and the larger outliers are known extremes (the Amazon, or Amazon, say). 2. There are no outliers except in relation to an implicit or explicit model of what is expected. If one distribution is expected to be (roughly) normal and another (roughly) Poisson or anything else, I have quite different expectations of whether outliers are likely and how to detect them. (This is the kind of point made in the question with the example of a Pareto distribution.) Here the data are just labelled X without even a signal for whether they are versions of the same (kind of) variable or whether they are quite different. 3. Box plots alone are inadequate to make the decision even if #1 and #2 are satisfied. I'd much prefer dot or quantile plots. I'd also want to know the number of data points, which is impossible to discern precisely. But by failing to show simple patterns box plots can signal that you need another and a deeper look. Some of the box plots above are degenerate in that median and quartiles are identical, or very close, so that the box collapses to a line: in that case every other data point is plotted individually. (If the IQR is zero, every point beyond the quartiles must be plotted individually!) Such a pattern could be anything from the quirks of small samples to some special situation: the box plots alone cannot do more than flag an oddity. Specifically, X27, X31 and X33 spring out as demanding careful scrutiny.
Further, the numerical labels for these data suggest that the data were standardized first (likely by (value MINUS mean) / SD): this pre-processing is over-valued and inhibits or prohibits reading off basic characteristics of the data, such as: the data being all positive; the data being all positive or zero; the data being positive, zero or negative; or whether only certain values occur (e.g. discrete grades or counts that are integers). This last is one of the worst problems, as box plots are often hard work with such data. Repeated values plot as one symbol or contribute to compressed boxes. X33 seems possibly an example, as there are perhaps only 3 distinct values; they may even be nominal codes with no intrinsic meaning.
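For reference, here is how the 1.5 IQR convention discussed above can be computed directly in R; it flags points for scrutiny, not for automatic rejection:

# Points beyond Tukey's 1.5 * IQR fences
tukey_flags <- function(x) {
  q <- quantile(x, c(0.25, 0.75), na.rm = TRUE)
  fence <- q + c(-1.5, 1.5) * diff(q)   # lower and upper fences
  x[x < fence[1] | x > fence[2]]
}
set.seed(1)
tukey_flags(c(rnorm(100), 8))   # the planted value 8 gets flagged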
{ "source": [ "https://stats.stackexchange.com/questions/561933", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/347762/" ] }
562,954
I'm currently looking into the gender pay gap using data from Glassdoor (found via Kaggle). The dataset has columns for gender, age, performance evaluations of employees, seniority, pay etc. For context: I have learned a lot of Data Science/Machine Learning/programming over the past few years, and am just doing a few of my own basic portfolio projects for practice, before applying for jobs. I have done a fairly naive t-test, comparing average pay for men vs average pay for women. I am now looking to add in controls, comparing similar age groups, seniority, education level etc. I want to do more t-tests, as well as looking at a chi-square distribution and/or ANOVA. As I do multiple A/B tests, I want to avoid p-hacking. I have a few hypotheses, for example I expect the pay gap to be greater for older age groups. But this is mostly exploring the data, I don't have a single hypothesis I am looking to prove for the entire study, nor do I have a political agenda. I'm not sure it would really count as p-hacking as long as I choose which comparisons to make, and report everything. I would think it's only p-hacking if I selected which t-test results to report to help prove a hypothesis. Is this fair? And another question (forget my data for a moment): with ANOVA, which compares multiple groups at once to look for significance, is this not p-hacking?
If you are doing exploratory analysis, then you don't care about p-values. What you do is search for any pattern. P-values are used to verify a hypothesis, but you have none. However, if after your exploratory analysis you are going to perform some hypothesis tests with the same data, then the p-values will be erroneous if the hypotheses were created from that same data. If you only have a single data set available then you can split the data into two subsets, one for analysis and another for follow-up research to verify whether the found patterns are much different from statistical variations in the sampling. You seem to be doing a search for patterns by using hypothesis tests and p-values. That is not p-hacking if you regard the p-values only as an aid in pattern recognition (a search for anomalies) instead of a value to report in relation to an experiment to verify a certain effect. You have to be careful though that you do not switch the meaning from a statistic used in pattern recognition to a value that expresses the statistical significance of an experiment to measure an effect.
{ "source": [ "https://stats.stackexchange.com/questions/562954", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/348488/" ] }
562,958
I would like to do a linear regression between my response "weed coverage [%]" and my predictor "soil moisture content [%]". Since the response was not normally distributed and had issues with heteroskedasticity, I have decided to transform my response by applying the log. Now, I got diagnostic plots and normality-test results as seen below. My question: Can I do a simple linear regression now? Thanks for your help! EDIT: I forgot to mention that my observations were measured on different lakes and on different dates, e.g. I measured soil moisture content at Lake A, Lake B and Lake C on Date A and later on Date B and so on. That means they are dependent... What do I need to do now?
-----------------------------------------------
Test                  Statistic    pvalue
-----------------------------------------------
Shapiro-Wilk             0.9917    0.0057
Kolmogorov-Smirnov       0.0425    0.3116
Cramer-von Mises        31.7515    0.0000
Anderson-Darling         0.9348    0.0177
-----------------------------------------------
{ "source": [ "https://stats.stackexchange.com/questions/562958", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/345827/" ] }
563,171
I have been encountering many assumptions associated with linear regression (especially ordinary least squares regression) which are untrue or unnecessary. For example: that independent variables must have a Gaussian distribution; that outliers are the points either above or below the upper or lower whiskers correspondingly (employing box plot terminology); and that the sole purpose of transformations is to bring a distribution close to normal in order to suit the model. I would like to know which myths are generally taken for facts/assumptions about linear regression, especially concerning associated nonlinear transformations and distributional assumptions. How did such myths come about?
There are three myths that bother me. 1. Predictor variables need to be normal. 2. The pooled/marginal distribution of $Y$ has to be normal. 3. Predictor variables should not be correlated, and if they are, some should be removed. I believe that the first two come from misunderstanding the standard assumption about normality in an OLS linear regression, which assumes that the error terms, which are estimated by the residuals, are normal. It seems that people have misinterpreted this to mean that the pooled/marginal distribution of all $Y$ values has to be normal. Indeed, as is mentioned in a comment, we still get a lot of what we like about OLS linear regression without having normal error terms, and we do not need a normal marginal distribution of $Y$ or normal features in order to have the OLS estimator coincide with maximum likelihood estimation. For the myth about correlated predictors, I have two hypotheses. 1. People misinterpret the Gauss-Markov assumption about error term independence to mean that the predictors are independent. 2. People think they can eliminate features to get strong performance with fewer variables, decreasing the overfitting. I understand the idea of dropping predictors in order to have less overfitting risk without sacrificing much of the information in your feature space, but that seems not to work out. My post here gets into why and links to further reading.
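A short R sketch of the second myth: the marginal distribution of the response below is clearly skewed, yet the errors are exactly normal and the residuals reflect that. The simulation settings are arbitrary:

# Non-normal marginal Y with perfectly normal errors
set.seed(1)
x <- rexp(1000)                # skewed predictor
y <- 1 + 2 * x + rnorm(1000)   # the errors ARE normal
fit <- lm(y ~ x)
skew <- function(z) mean((z - mean(z))^3) / sd(z)^3
skew(y)            # clearly skewed marginal response (around 1.4 here)
skew(resid(fit))   # residuals close to symmetric (around 0)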
{ "source": [ "https://stats.stackexchange.com/questions/563171", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/332360/" ] }
563,197
I have the following problem: "There are 15 identical candies. How many ways are there to distribute them among 7 kids?" I know the conventional way to solve a similar problem (unordered with repetition) is using the formula $\binom{n+k-1}{k}$ (= 54,264) and I'm convinced of why this formula works. However, I'm trying to use another approach and I can't figure out why it doesn't work. Please consider this other problem first: "There are 4 people and 9 different assignments. We need to distribute all assignments among people. No assignment should be assigned to two people. Every person can be given an arbitrary number of assignments from 0 to 9. How many ways are there to do it?" This is a 'tuples' problem, i.e. ordered with repetitions, and its solution is simply 4^9. Therefore, why can't I solve the first problem with a similar approach, but then dividing by n!? This is how I'm thinking: We have 15 candies and 7 kids. The first candy can be distributed in 7 ways (one of the 7 kids is going to take it). The second can also be distributed in 7 ways, etc. So we now have 7^15, just like the approach we used for the second (assignments) problem. But now since the candies are identical, order doesn't matter, and we should divide by 15!. But solving it this way gives a completely different and significantly smaller answer: 4,747,561,509,943 / 1,307,674,368,000 ≈ 3.63. What is wrong with thinking about the problem in this approach and why is it not working? I would just like to know what I'm missing here and learn from my mistake so that I don't approach a future problem with similar thinking.
{ "source": [ "https://stats.stackexchange.com/questions/563197", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/348662/" ] }
563,733
Suppose we have any kind of coin. Why should the relative frequency of getting heads converge to any value at all? One answer is that this is simply what we observe empirically, and I think this is a valid answer. However, my question is: is there some simple set of principles/axioms that apply to coin tossing from which we can derive this fact?
This is an excellent question, and it shows that you are thinking about important foundational matters in simple probability problems. The convergence outcome follows from the condition of exchangeability. If the coin is tossed in a manner that is consistent from flip-to-flip, then one might reasonably assume that the resulting sequence of coin tosses is exchangeable, meaning that the probability of any particular sequence of outcomes does not depend on the order those outcomes occur in. For example, the condition of exchangeability would say that the outcome $H \cdot H \cdot T \cdot T \cdot H \cdot T$ has the same probability as the outcome $H \cdot H \cdot H \cdot T \cdot T \cdot T$, and exchangeability of the sequence would mean that this is true for strings of any length which are permutations of each other. The assumption of exchangeability is the operational assumption that reflects the idea of "repeated trials" of an experiment --- it captures the idea that nothing is changing from trial-to-trial, such that sets of outcomes which are permutations of one another should have the same probability.

Now, if this assumption holds then the sequence of outcomes will be IID (conditional on the underlying distribution) with fixed probability for heads/tails which applies to all the flips. (This is due to a famous mathematical result called de Finetti's representation theorem; see related questions here and here.) The strong law of large numbers then kicks in to give you the convergence result --- i.e., the sample proportion of heads/tails converges to the (fixed) probability of heads/tails with probability one.

What if exchangeability doesn't hold? Can there be a lack of convergence? Although there are also weaker assumptions that can allow similar convergence results, if the underlying assumption of exchangeability does not hold --- i.e., if the probability of a sequence of coin-toss outcomes depends on their order --- then it is possible to get a situation where there is no convergence. As an example of the latter, suppose that you have a way of tossing a coin that can bias it to one side or the other --- e.g., you start with a certain face of the coin upwards and you flip it in a way that gives a small and consistent number of rotations before landing on a soft surface (where it doesn't bounce). Suppose that this method is sufficiently effective that you can bias the coin 70-30 in favour of one side. (For reasons why it is difficult to bias a coin-flip in practice, see e.g., Gelman and Nolan 2002 and Diaconis, Holmes and Montgomery 2007.) Now, suppose you were to execute a sequence of coin tosses in such a way that you start off biasing your tosses towards heads, but each time the sample proportion of heads exceeds 60% you change to bias towards tails, and each time the sample proportion of tails exceeds 60% you change to bias towards heads. If you were to execute this method then you would obtain a sample proportion that "oscillates" endlessly between about 40-60% heads without ever converging to a fixed value. In this instance you can see that the assumption of exchangeability does not hold, since the order of outcomes gives information on your present flipping-method (which therefore affects the probability of a subsequent outcome).

Illustrating non-convergence for the biased-flipping mechanism: We can implement a computational simulation of the above flipping mechanism using the R code below.
Here we create a function oscillating.flips that can implement that method for a given biasing probability, switching probability and starting side.

oscillating.flips <- function(n, big.prob, switch.prob, start.head = TRUE) {

  #Set vector of flip outcomes and sample proportions
  FLIPS <- rep('', n)
  PROPS <- rep(NA, n)

  #Set starting values
  VALS <- c('H', 'T')
  HEAD <- start.head

  #Execute the coin flips
  for (k in 1:n) {

    #Set probability and perform the coin flip
    PROB <- c(big.prob, 1 - big.prob)
    if (!HEAD) { PROB <- rev(PROB) }
    FLIPS[k] <- sample(VALS, size = 1, prob = PROB)

    #Update sample proportion and execute switch (if triggered)
    if (k == 1) {
      PROPS[k] <- 1*(FLIPS[k] == 'H')
    } else {
      PROPS[k] <- ((k-1)*PROPS[k-1] + (FLIPS[k] == 'H'))/k
    }
    if (PROPS[k] > switch.prob)     { HEAD <- FALSE }
    if (PROPS[k] < 1 - switch.prob) { HEAD <- TRUE }
  }

  #Return the flips
  data.frame(flip = 1:n, outcome = FLIPS, head.props = PROPS)
}

We implement this function using the mechanism described above (70% weighting towards the biased side, switching probability of 60%, and starting biased to heads) and we get $n=10^6$ simulated coin-flips for the problem, with a running output of the sample proportion of heads. We plot these sample proportions against the number of flips, with the latter shown on a logarithmic scale. As you can see from the plot, the sample proportion does not converge to any fixed value --- instead it oscillates between the switching probabilities as expected.

#Generate coin-flips
set.seed(187826487)
FLIPS <- oscillating.flips(10^6, big.prob = 0.7, switch.prob = 0.6, start.head = TRUE)

#Plot the resulting sample proportion of heads
library(ggplot2)
THEME <- theme(plot.title = element_text(hjust = 0.5, size = 14, face = 'bold'),
               plot.subtitle = element_text(hjust = 0.5, face = 'bold'))
FIGURE <- ggplot(aes(x = flip, y = head.props), data = FLIPS) +
  geom_point() +
  geom_hline(yintercept = 0.5, linetype = 'dashed', colour = 'red') +
  scale_x_log10(breaks = scales::trans_breaks("log10", function(x) 10^x),
                labels = scales::trans_format("log10", scales::math_format(10^.x))) +
  scale_y_continuous(limits = c(0, 1)) +
  THEME +
  ggtitle('Example of Biased Coin-Flipping Mechanism') +
  labs(subtitle = '(Proportion of heads does not converge!) \n') +
  xlab('Number of Flips') +
  ylab('Sample Proportion of Heads')
FIGURE
{ "source": [ "https://stats.stackexchange.com/questions/563733", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/316764/" ] }
566,119
Suppose you have a data set that doesn't appear to be normal when its distribution is first plotted (e.g., its QQ plot is curved). If after some kind of transformation is applied (e.g., log, square root, etc.) it seems to follow normality (e.g., the QQ plot is straighter), does that mean that the dataset was actually normal in the first place and just needed to be transformed properly, or is that an incorrect assumption to make?
No. It means that the transformed distribution is normal. Depending on the transformation, it might in fact imply a lack of normality of the original distribution. For instance, if a log-transformed distribution is normal, then the original distribution was log-normal, which certainly is not normal.
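A hedged sketch of the log-normal case in R (the data are simulated purely for illustration):

#Hedged sketch: log-normal data fail a normal QQ plot until log-transformed
set.seed(2)
x <- rlnorm(500)     #log-normal sample
qqnorm(x)            #curved QQ plot: x is not normal
qqnorm(log(x))       #straight QQ plot: the *transformed* data are normal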
{ "source": [ "https://stats.stackexchange.com/questions/566119", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/58785/" ] }
566,696
After using PCA to reduce dimensionality, does PCA preserve linear separability for any linearly separable set? Will the data still be linearly separable after the transformation? I am thinking that it does preserve linear separability because the PCA just reduces the dimension and not the relationship between the points in terms of separability. Am I on the right line of thinking?
No, it may be that the discriminative information is in the direction of a principal component that explains a relatively small amount of the total variance, and hence gets discarded. Consider a two-dimensional dataset where the two classes lie in long, parallel, cigar-shaped Gaussian clusters with a small gap between them. Most of the variance lies along the long axis of the clusters, so the first PC will be in that direction and the second will be orthogonal to it. The two classes are indistinguishable along the first component alone, so if you discard the second component, the data will no longer be separable.
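A hedged sketch of the parallel-clusters example (all numbers invented): PC1 captures nearly all the variance but none of the class separation.

#Hedged sketch: two parallel elongated clusters; PC1 has the variance, PC2 has the separation
set.seed(4)
n <- 200
long <- rnorm(n, sd = 10)                                  #shared long axis
class <- rep(0:1, each = n/2)
short <- ifelse(class == 1, 1, -1) + rnorm(n, sd = 0.2)    #small gap between classes
pc <- prcomp(cbind(long, short))
summary(pc)                                                #PC1 explains almost all variance
#class means coincide on PC1 but differ sharply on PC2
tapply(pc$x[, 1], class, mean); tapply(pc$x[, 2], class, mean)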
{ "source": [ "https://stats.stackexchange.com/questions/566696", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/351146/" ] }
566,713
It seems that batch gradient descent is the traditional gradient descent, except that the objective function is in the form of summation?
{ "source": [ "https://stats.stackexchange.com/questions/566713", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/337929/" ] }
568,281
My understanding is that probability (at least from a frequentist viewpoint) is a mathematical tool for modeling correlations. So, for example, we can say that two events $X$ and $Y$ are defined to be independent if $P(X\cap Y) = P(X)P(Y)$ , or equivalently $P(X|Y) = P(X)$ , and so on. However, something like whether or not $P(X|Y) = P(X)$ tells us nothing about causation. This leads me to the question that this post is about. Is there any sort of theory or mathematical field of study that concerns itself with modeling causation? I suspect there are answers in two possible forms. The first is that there might be specialized models specific to a given domain of study (biology, physics, economics). The second might be some generalized abstract theory akin to constructor theory that honestly only a mathematician would think up. An answer in any possible form would be appreciated.
There are two main approaches to the New Causal Revolution. One is the graphical approach (as in, directed acyclic graphs), championed by Judea Pearl. The other is the potential outcome framework, championed by Donald Rubin.

For the graphical approach, I recommend these books in this order:

1. The Book of Why, by Pearl and MacKenzie. Prerequisite: introductory statistics.
2. Causal Inference in Statistics: A Primer, by Pearl, Glymour, and Jewell. Prerequisites: the full calculus sequence, followed by mathematical statistics.
3. Causality: Models, Reasoning, and Inference, by Pearl. This book is extremely difficult (I have not read it, though I have skimmed in parts) but has almost everything in there. One recent development not in this book is the maximal ancestor graph approach. Prerequisites: first mathematical statistics, then Bayesian statistics (I would recommend first Bayesian Statistics for Beginners: a step-by-step approach and then Bayesian Data Analysis, but the first book might be sufficient; also note that a pdf of Bayesian Data Analysis can be obtained for free at the website of one of the authors), and finally Bayesian networks. This book has accompanying homework sets on its webpage: go to CAUSALITY, then 7. Viewgraphs and homeworks for instructors. Indeed, Pearl's UCLA webpage has errata for all three of the above books.

For the potential outcomes framework, the main book appears to be Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction, by Rubin and Imbens. I have not read it; but the prerequisites appear to be mathematical statistics at least - analysis and design of experiments wouldn't hurt, nor would Bayesian statistics (see #3 above for recommendations there). One weakness of this book is that it has no exercises or errata page.

The two approaches have different strengths and weaknesses, but as noted by Carlos in his comment, they are theoretically unified via Structural Causal Modeling - addressed in several of the books above.
{ "source": [ "https://stats.stackexchange.com/questions/568281", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/316764/" ] }
569,181
I feel like I might be missing something dumb, but in a set of non-negative numbers, is it possible for the mean to be less than half of the median? Example: there are 999 numbers and we are told that the median is 10. This means that the sum of the numbers cannot be any less than 5,000. The configuration to achieve that sum is: 500 numbers are equal to 10, 499 are equal to 0. This makes the mean 5,000 / 999 = (slightly more than) 5. Since 5,000 is the lowest possible sum, (slightly more than) 5 is the lowest possible mean. Therefore the lowest possible mean is more than one half of the median. Am I wrong?
You are correct. This is one example of a general result called Markov's inequality, which says that for a non-negative random variable $X$ and number $a$ , $$P(X\geq a)\leq \frac{E[X]}{a}$$ If you plug in the median of $X$ for $a$ you get $$P(X\geq \text{median})\leq \frac{E[X]}{\text{median}}$$ so $$0.5\leq \frac{E[X]}{\text{median}}$$ and $$ 0.5\times\text{median}\leq \text{mean}$$ Your argument is also roughly how Markov's inequality is proved .
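As a quick empirical sanity check (a hedged sketch; the exponential distribution is an arbitrary choice of non-negative distribution):

#Hedged sketch: for non-negative data, the mean never falls below half the median
set.seed(3)
checks <- replicate(10000, {x <- rexp(999, rate = 1/10); mean(x) >= median(x)/2})
all(checks)    #TRUE in every replication, as Markov's inequality guarantees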
{ "source": [ "https://stats.stackexchange.com/questions/569181", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/353918/" ] }
569,187
I would like to use some tool to detect a spike in one class's proportion. Assume I received roughly the same percentage of red, blue and yellow candies throughout time. That means the absolute number can go up and down for each type of candy, but the percentage for each candy will stay roughly the same. For example, it could be 80 red (28%), 90 blue (32%), 110 yellow (39%), or 1 red (33%), 1 blue (33%), 1 yellow (33%); these are totally fine. I want a tool to statistically detect when a particular percentage is larger than usual. For example, let's say one day I received 5 red (71%), 1 yellow and 1 blue. That should trigger the warning that the percentage for red is abnormal. My gut instinct told me I should use a chi-square test to test one class vs. the rest of the classes. Is that correct?
{ "source": [ "https://stats.stackexchange.com/questions/569187", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/307044/" ] }
569,647
I'm looking for an algorithm that would do the following: Imagine that you need to sample uniformly at random and without replacement $k$ elements from a pool of $n$ elements. The catch is that $n$ is unknown. You can iterate over the whole set of the elements, so eventually you would learn $n$. The algorithm ought to have $O(n)$ time complexity, and you are allowed to iterate over the set only once. You are allowed to keep a small cache of size $m \approx k$, but memorizing all the elements is not possible as $n$ could be very large. Example: you work at a factory, and your job is to select exactly $k$ items from the production line for an inspection per day. You need to sample them uniformly at random (with a probability $k/n$ each). The problem is that the number of items produced by the production line is unpredictable, so you don't know if it will produce $k$ items, or $1{,}000k$ items, or a completely different number. You need to decide on the fly to keep an item or leave it, as the items move over the production line and won't be available later. This should be possible. I guess what I can do is take the first $k$ samples that I see, and on each following step, with some probability $p$, drop one of the already available samples and take a new one instead. The probability of drawing a new sample would need to evolve over time, but so would the probabilities of rejecting the already sampled values. The probability of rejecting $x_t$ would need to depend on the time order $t$ at which it was observed. The first $k$ values were sampled with probability $1$, the $(k+1)$-th value would be sampled with probability $k/(k+1)$, the last value with probability $k/n$, etc. By rejecting items already collected we would "correct" the sampling probabilities. I don't want to re-invent the wheel, so I wanted to ask if there already is an algorithm like this?
Yes. Collect the first $k$ items encountered into the cache. At steps $j=k+1, \ldots, n,$ place item $j$ in the cache with probability $k/j,$ in which case you will remove one of the existing items uniformly at random. After you have been through the entire population, the cache will be the desired random sample.

This algorithm is similar to a standard algorithm for creating a random permutation of $n$ items. It's essentially Durstenfeld's version of the Fisher-Yates shuffle. Here is a diagram of how such a sample of size $k=20$ evolved for a population that eventually was size $n=300.$ The lines at each iteration indicate the indexes of the sample members. At each iteration, the sample should be roughly uniformly distributed between $1$ and the iteration--conditional, of course, on how uniformly distributed it had previously been. Of crucial importance is to note how some of the earliest elements (shown in red) manage to persist in the sample to the end: these need to have the same chances of being in the sample as any of the later elements.

To prove the algorithm works, we may view it as a Markov chain. The set of states after $n\ge k$ items have been processed can be identified with the set of $k$-subsets $\mathcal{I} = \{i_1, i_2, \ldots, i_k\}$ of the indexes $1,2,\ldots, n$ denoting which items are currently in the sample. The algorithm makes a random transition from any subset $\mathcal I$ of $\{1,2,\ldots, n\}$ to $k+1$ distinct possible subsets of $\{1,2,\ldots, n, n+1\}.$ One of them is $\mathcal I$ itself, which occurs with probability $1 - k/(n+1).$ The others are the subsets where $i_j$ is replaced by $n+1$ for $j=1,2,\ldots, k.$ Each of these transitions occurs with probability $$\frac{1}{k}\left(\frac{k}{n+1}\right) = \frac{1}{n+1}.$$

We need to prove that after $n \ge k$ steps, every $k$-subset of $\{1,2,\ldots, n\}$ has the same chance of being the sample. We can do this inductively. To this end, suppose after step $n\ge k$ that all $k$-subsets have equal chances of being the sample. These chances therefore are all $1/\binom{n}{k}.$ After step $n+1,$ a given subset $\mathcal I$ of $\{1,2,\ldots, n+1\}$ can have arisen as a transition from $n-k+2$ subsets of $\{1,2,\ldots, n\}:$ namely,

If $\mathcal{I}$ does not contain $n+1,$ it arose as a transition of probability $1-k/(n+1)$ from itself, where it originally had a chance of $1/\binom{n}{k}$ of occurring. Such subsets therefore appear with individual chances of $$\Pr(\mathcal{I}) = \frac{1}{\binom{n}{k}} \times \left(1 - \frac{k}{n+1}\right) = \frac{1}{\binom{n+1}{k}}.$$

If $\mathcal{I}$ does contain $n+1,$ it arose upon replacing one of the $n-(k-1)$ indexes in $\{1,2,\ldots, n\}$ that do not appear in $\mathcal I$ with the new index $n+1.$ Each such transition occurs with chance $1/(n+1),$ again giving a total chance of $$\Pr(\mathcal I) = (n-(k-1)) \times \frac{1}{\binom{n}{k}} \times \frac{1}{n+1} = \frac{1}{\binom{n+1}{k}}.$$

Consequently, all possible $k$-subsets of the first $n+1$ indexes have a common chance of $1/\binom{n+1}{k}$ of occurring, proving the induction step. To start the induction, notice that at step $n=k$ there is exactly one subset and it has the correct chance of $1$ to be the sample! This completes the proof.

This R code demonstrates the practicality of the algorithm.
In the actual application you would not have a full vector population: instead of looping over seq_along(population), you would have a source of data from which you sequentially fetch the next element (as in population[j]) and increment j until it is exhausted.

sample.online <- function(k, population) {
  cache <- rep(NA, k)
  for (j in seq_along(population)) {
    if (j <= k) {
      cache[j] <- population[j]
    } else {
      if (runif(1, 0, j) <= k) cache[sample.int(k, 1)] <- population[j]
    }
  }
  cache
}
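As a quick empirical check of uniformity (a hedged sketch using the sample.online function above, with an arbitrary small population), every element should be included in the sample with frequency close to $k/n$:

set.seed(123)
counts <- table(replicate(2000, sample.online(5, 1:40)))
counts / 2000    #all 40 inclusion frequencies should be near 5/40 = 0.125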
{ "source": [ "https://stats.stackexchange.com/questions/569647", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/35989/" ] }
569,657
Background

I have a dataset representing a large group of people that I'm using to specify a Cox proportional hazards model of a binary outcome on some explanatory variables. My outcome variable is a health condition of interest (coded 1/0), and my "focal" explanatory variable -- the one I'm most interested in -- is a binary indication of whether a person has been given a certain treatment or not. The other explanatory variables are the usual sociodemographic suspects: age, sex, and a couple of others. In planning my model, I'd like, if possible, to find some way to also control for subjects' geography in the estimation of my exposure variable (I have a reason to think it could matter). At first glance I've got a handy way to do this: a variable 3_digit_zip_code representing -- you guessed it -- the first 3 digits of a subject's US postal code.

Question / problem

The immediate objection to including ZIP in the model, of course, is that a hazard ratio (HR) for ZIP code would be uninterpretable: what could "a one digit increase" in ZIP code mean in practical terms? But then I think of an objection to the objection: wouldn't there be some utility of including ZIP in the interpretability of the other covariates in the model, above all the main one, treatment? In other words, the HR for treatment would be saying something useful about the hazard of the outcome between the treated and untreated, given the same levels of the other covariates -- right? I suppose I have two questions, then:

1. Is it worth it to include ZIP in such a Cox model, even if its HR is uninterpretable, if it adds to "control" in other HRs?
2. Is there a better way of controlling for geography, e.g. matching of some kind?
{ "source": [ "https://stats.stackexchange.com/questions/569657", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/79385/" ] }
573,586
I read not too long ago Nelder and McCullagh's book Generalized Linear Models and thought the book was fantastic; I consider it a useful manual on the subject. Not surprising that's the case, considering Nelder's one of the authors. However, the book is 40 years old, and surely things have changed since the book came out. Simon Wood's book, Generalized Additive Models, is very good, but that book focuses on GAMs, not GLMs. (Yes, GAMs generalize GLMs, but I do think focusing on GLMs specifically is worthwhile.) Hence, what would be the most important developments since Nelder and McCullagh's book came out regarding GLM theory and application? What am I missing from just reading that book? How should I supplement my knowledge?
Your premise that the elapsing of 40 years means that "surely things have changed" is quite dubious in a field relating to applied mathematics. In mathematical work it is often the case that early research on a model form provides all of its essential properties and theory pretty well, and then subsequent research makes smaller innovations/additions with diminishing marginal returns. We are still using mathematical rules from Euclid's Elements (circa 300BC) in many applied mathematical problems today, and while new geometries have been developed since this time, the subject certainly hasn't been transformed substantially every 40 years hence. In any case, I'll try to give you a rough guide to what I think are the most important developments in the field since Nelder and McCullagh's book. In my view, the biggest change that has occurred in the field since this time is not so much extension of theory for GLMs (though there have been some marginal advances), but the further development and popularising of competing models, some of which are extensions of GLMs and some of which are contrary model forms. In particular, the past 40 years has seen a rapid increase in the use of GLMMs and copula methods.

Generalised Linear Mixed Models (GLMMs): Generalised linear mixed models (GLMMs) provide an extension of GLMs where there are added "random effects". You can find a nice review of these models in Dean and Nielsen (2007). These models were popularised in the statistics profession in the 1990s with a series of publications showing fitting and inference methods (see e.g., Breslow and Clayton 1993, Breslow and Lin 1995, Lin and Breslow 1996 and Lin and Zhang 1999). Later work in the 2000s gave good overviews of these models, including several textbooks on the subject. These models are now seen as a useful extension of GLMs that can allow for simpler modelling of correlated errors based on explanatory variables in the model. They are now widely used in applied statistics work in a range of fields and are usually included in university programs in statistics.

Copula methods: Copula methods were first introduced in Sklar (1959) but they didn't really start being used until later. The first statistical conference on copula methods occurred in 1990 and they started being used more in finance after they were popularised by Li (2000). It is only within the last few decades that copula models have become broadly known in the statistics profession, and probably only in the last decade or so that they've begun to creep into university programs. These models are now presented as an alternative means of modelling the kinds of problems that might previously have been modelled using GLMs.

Some other major changes are set out in the other (excellent) answers below.
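To make the GLMM idea concrete, here is a minimal, hedged sketch using the lme4 package (the data are simulated purely for illustration; the variable names and effect sizes are invented): a logistic regression with a random intercept for each group.

#Hedged sketch: a GLMM fit with lme4 on simulated grouped binary data
library(lme4)
set.seed(1)
n.groups <- 30; n.per <- 20
group <- factor(rep(1:n.groups, each = n.per))
x <- rnorm(n.groups * n.per)
u <- rnorm(n.groups, sd = 1)                                #one random intercept per group
y <- rbinom(n.groups * n.per, 1, plogis(-0.5 + 0.8*x + u[group]))
fit <- glmer(y ~ x + (1 | group), family = binomial)        #GLM plus a random effect
summary(fit)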
{ "source": [ "https://stats.stackexchange.com/questions/573586", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/183800/" ] }
574,333
This is an interesting problem I came across. I'm attempting to write a Python program to get a solution to it; however, I'm not sure how to proceed. So far, I know that I would expect the counts of heads to follow a binomial, and length of runs (of tails, heads, or both) to follow a geometric. Below are two sequences of 300 “coin flips” (H for heads, T for tails). One of these is a true sequence of 300 independent flips of a fair coin. The other was generated by a person typing out H’s and T’s and trying to seem random. Which sequence is truly composed of coin flips? Sequence 1: TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHH TTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHH TTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHT THHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHT HTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTT HHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT Sequence 2: HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTH THTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHH TTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTT THTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTH HHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHH HTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT Both sequences have 148 heads, two less than the expected number for a 0.5 probability of heads.
This is a variant on a standard intro stats demonstration: for homework after the first class I have assigned my students the exercise of flipping a coin 100 times and recording the results, broadly hinting that they don't really have to flip a coin and assuring them it won't be graded. Most will eschew the physical process and just write down 100 H's and T's willy-nilly. After the results are handed in at the beginning of the next class, at a glance I can reliably identify the ones who cheated. Usually there are no runs of heads or tails longer than about 4 or 5, even though in just 100 flips we ought to see a longer run than that.

This case is subtler, but one particular analysis stands out as convincing: tabulate the successive ordered pairs of results. In a series of independent flips, each of the four possible pairs HH, HT, TH, and TT should occur equally often--which would be $(300-1)/4 = 74.75$ times each, on average. Here are the tabulations for the two series of flips (rows give the current flip, columns the next flip):

         Series 1        Series 2
          H      T        H      T
   H     46    102       71     76
   T    102     49       77     75

The first is obviously far from what we might expect. In that series, an H is more than twice as likely ( $102:46$ ) to be followed by a T than by another H; and a T, in turn, is more than twice as likely ( $102:49$ ) to be followed by an H. In the second series, those likelihoods are nearly $1:1,$ consistent with independent flips.

A chi-squared test works well here, because all the expected counts are far greater than the threshold of 5 often quoted as a minimum. The chi-squared statistics are 38.3 and 0.085, respectively, corresponding to p-values of less than one in a billion and 77%, respectively. In other words, a table of pairs as imbalanced as the second one is to be expected (due to the randomness), but a table as imbalanced as the first happens in less than one of every billion such experiments.

(NB: It has been pointed out in comments that the chi-squared test might not be applicable because these transitions are not independent: e.g., an HT can be followed only by a TT or TH. This is a legitimate concern. However, this form of dependence is extremely weak and has little appreciable effect on the null distribution of the chi-squared statistic for sequences as long as $300.$ In fact, the chi-squared distribution is a great approximation to the null sampling distribution even for sequences as short as $21,$ where the counts of the $21-1=20$ transitions that occur are expected to be $20/4=5$ of each type.)

If you know nothing about chi-squared tests, or even if you do but don't want to program the chi-square quantile function to compute a p-value, you can achieve a similar result. First develop a way to quantify the degree of imbalance in a $2\times 2$ table like this. (There are many ways, but all the reasonable ones are equivalent.) Then generate, say, a few hundred such tables randomly (by flipping coins--in the computer, of course!). Compare the imbalances of these two tables to the range of imbalances generated randomly. You will find the first sequence is far outside the range while the second is squarely within it.

This figure summarizes such a simulation using the chi-squared statistic as the measure of imbalance. Both panels show the same results: one on the original scale and the other on a log scale. The two dashed vertical lines in each panel show the chi-squared statistics for Series 1 (right) and Series 2 (left). The red curve is the $\chi^2(1)$ density. It fits the simulations extremely well at the right (higher values).
The discrepancies for low values occur because this statistic has a discrete distribution which cannot be well approximated by any continuous distribution where it takes on small values -- but for our purposes that makes no difference at all.
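For the programming side of the question, here is a hedged sketch in R of the pair tabulation and test (seq1 is assumed to hold one of the 300-character H/T strings from the question; it is not defined here). Note that chisq.test applies Yates' continuity correction to 2x2 tables by default, which reproduces the statistics quoted above.

transition.table <- function(s) {
  x <- strsplit(gsub("[^HT]", "", s), "")[[1]]        #keep only the H/T characters
  table(current = head(x, -1), next.flip = tail(x, -1))
}
tab1 <- transition.table(seq1)                        #seq1: a sequence as one string
tab1
chisq.test(tab1)    #X-squared near 38.3 for Series 1, near 0.085 for Series 2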
{ "source": [ "https://stats.stackexchange.com/questions/574333", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/357485/" ] }
574,355
Suppose that we have a model such that $p(y\mid k, \theta_1,\dots,\theta_{k_\text{max}})$ depends only on $k,\theta_1,\dots,\theta_k$ . Hence, as $k$ assumes the values $1,\dots,k_\text{max}$ , we have a family of nested models. With suitable priors for $k$ and the $\theta_j$ 's, instead of using Green's reversible jump scheme, I was wondering whether an iteration of the following much simpler procedure could work:

1. Sample $k$ from the (available) full conditional distribution $k\mid y,\theta_1,\dots,\theta_{k_\text{max}}$.
2. Do a Metropolis-Hastings step, sampling en bloc only $\theta_1,\dots,\theta_k$.
3. Repeat.

Is this already used in the literature? If not, what is the catch? Does it sample the space of nested models incorrectly or inefficiently?
{ "source": [ "https://stats.stackexchange.com/questions/574355", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9394/" ] }
574,444
I am currently reading the paper Duncan J Murdoch, Yu-Ling Tsai & James Adcock (2008) P-Values are Random Variables , The American Statistician, 62:3, 242-245, DOI: 10.1198/000313008X332421 In this paper, the authors argue that a p-value is itself a random test statistic. Moreover, given some test statistic $T$ that takes in an i.i.d. sample $\mathbf{X}$ and outputs a real number, a p-value is the probability integral transform of $T(\mathbf{X})$ . That is, if $T(\mathbf{X})$ has a cumulative distribution function $F_{\tau}$ , then the corresponding p-value is $F_{\tau}(T(\mathbf{X}))$ . We can then decide whether to reject a null hypothesis based on this p-value. For example, a decision rule could be to reject the null hypothesis if the corresponding p-value is less than 0.05. However, because $F_{\tau}$ is monotone increasing, I am not sure why we need to compute a p-value in the first place to decide whether or not to reject a null hypothesis. Can't the value of the test statistic $T(\mathbf{X})$ be used to decide? For example, if the decision rule is to reject the null hypothesis when the p-value is less than 0.05, then, if the inverse of $F_{\tau}$ exists, we can obtain the threshold value for $T(\mathbf{X})$ beyond which the null hypothesis is rejected. Furthermore, we should be able to compute type I and II error rates just using this threshold value for $T(\mathbf{X})$ .
I think you are almost constructing p-values in the question. You are correct: you can just set a threshold using $t=T(X)$, but as you point out, you'd like to calculate the error rate associated with that t-value. In order to do so, you need to know the null distribution of your test statistic, which is $F$. So in order to find the t-value that has a 5% false positive rate under the null, you need to find $t$ such that $F(t)=0.05$, i.e. $t=F^{-1}(0.05)$. In many (most?) cases inverting the cdf is harder than evaluating it, hence it is much easier to calculate $p=F(T)$ and then check if $p<0.05$ than to check if $t<F^{-1}(0.05)$.
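As a concrete illustration, here is a hedged sketch for a lower-tailed z-test with known unit variance (the data are simulated and the numbers are arbitrary). The p-value rule and the threshold-on-$t$ rule always return the same decision; the p-value route only needs the cdf (pnorm), not its inverse (qnorm):

#Hedged sketch: p < 0.05 and t < F^{-1}(0.05) are the same decision
set.seed(42)
x <- rnorm(25, mean = -0.3)                       #hypothetical sample; H0: mu = 0, sd = 1
z <- mean(x) * sqrt(25)                           #test statistic T(X)
p <- pnorm(z)                                     #p-value p = F(T), lower tail
c(p.rule = p < 0.05, t.rule = z < qnorm(0.05))    #identical decisions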
{ "source": [ "https://stats.stackexchange.com/questions/574444", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/296197/" ] }
575,905
With alpha set to 0.05, I have encountered many scientific publications saying that a "tendency for an effect" is present based on a p-value of 0.05 < p-value < 0.1. On the other hand, some statisticians have criticized me for doing so, because there is only "reject" or "not reject"; therefore, it does not make sense to distinguish between a p-value of 0.08 and one of 0.97. Also, some statisticians have criticized me for reporting p-values as p<0.05 because it is not precise. My question is: how should I deal with p-values that are not below but close to my alpha?
There are two different approaches to interpreting statistical significance - the Fisher way, and the Neyman-Pearson way. We smush these together (into what Gerd Gigerenzer has called a 'bastardised approach'). The reason that statistical significance testing as it is often taught and discussed doesn't seem to make sense is that, essentially, it doesn't. Neyman-Pearson said that you pick a cutoff and you use it. It's less than the cutoff (say, 0.05) or it's not. There's no other information to convey. In NP, 0.08 and 0.97 are the same. Fisher said you take the p-value and you treat it as the level of evidence that there is an effect. <0.2 is some evidence, but it's pretty weak; <0.1 is a bit better, but still kind of weak. <0.05 is what Fisher said is often good enough (but he also wrote that one should change one's significance level according to the situation, which no one does). Either report the exact significance level and interpret that appropriately, or use 0.05. Don't do this nonsense of 0.10>p>0.05. Your p-value presents some evidence. It's not great evidence, but it's not no evidence. You shouldn't be trying to say "Yes" or "No" when maybe is an answer. In addition, people often say that their p-value of 0.06 is "approaching statistical significance". No one says it is "running away from significance" or that their p-value of 0.04 is "approaching non-significance."
{ "source": [ "https://stats.stackexchange.com/questions/575905", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/313156/" ] }
578,090
If we have iid random variables $X_1,X_2,...,X_N$ with $\mathbb{E}X_i=\mu$ , is it true that $\mathbb{E}\prod X_i=\mu^N$ ? I had no doubt that this is true, until I tried it out with Python, using random.normalvariate() to generate the set of samples, and surprisingly found that the product of all these data points is generally a lot smaller than $\mu^N$ . For example, I used that function to generate 2 million (there's absolutely no need to have a dataset of this size, but I went for it anyways) data points that are, supposedly, distributed as $N(1, 0.2)$ . I was hoping for their product to scatter around 1 as I repeat the trial, but instead I got numbers $\sim\pm10^{-18500}$ constantly. For what it's worth, I've tried sample sizes ranging from 1 to 1000, and they all fell below their respective $\mu^N$ , significantly -- the difference was visible on a log-scaled plot. I suspected random.normalvariate() generated something whose PDF is not $N(1,0.2)$ . But I plotted those 2 million data points and got a perfect bell-shaped curve. I suspected that the data are correlated among themselves. But I tried to compute $\mathbb{E}\prod X_i$ with $\text{corr}(X_i,X_j)=\rho$ and found that my calculation could not explain it. I'm not a hundred percent confident in my calculation, though. And I tried to understand it intuitively, and had the following thought. Say we have a lot of $N(1,0.2)$ data points. It is conceivable that they lie symmetrically around 1. So, we can group the data points into pairs that are roughly like $\{(1-a_n),(1+a_n)\}$ . This should be feasible when the sample size is large enough. But each pair has its product being less than 1. Therefore, the total product is a lot smaller than 1. So, this seems to me a paradox. I can't dissuade myself from $\mathbb{E}\prod X_i=\mu^N$ , but neither can I find the loophole in the thought above (or my empirical tests). I feel that I must have made a blatant mistake somewhere, but I can't locate that error. Please help me out if you know the answer!
First, let's establish the correct identity. When $X_1, \ldots, X_N$ are independent variables with finite expectations $\mu_i=E[X_i],$ then by laws of conditional expectation, $$E\left[\prod_{i=1}^N X_i\right] =E\left[X_N E\left[\prod_{i=1}^{N-1} X_i \mid X_{N}\right]\right] = E\left[X_N \prod_{i=1}^{N-1} \mu_i\right] = \mu_N\prod_{i=1}^{N-1} \mu_i= \prod_{i=1}^N \mu_i$$ gives a proof by mathematical induction (beginning with the base case $N=1$ where $$E\left[\prod_{i=1}^N X_i\right] = E\left[X_1\right] = \mu_1 = \prod_{i=1}^N \mu_i$$ is trivially true).

Now, let's find an explanation for the simulation results. The pairing argument in the question is an interesting one, because it shows that when multiplications by $(1-a)$ and $(1+a)$ occur in equal numbers (approximately $N/2$ each), the net product is $(1-a^2)^{N/2}\approx \exp(-Na^2/2).$ This suggests that when $N$ is sufficiently large, it's nearly certain that the product will be tiny--certainly less than the common mean of $1.$ The reason this is not a paradox is that there will be a vanishingly small--but still positive--probability of yielding a whopping big number on the order of $(1+a)^N \approx \exp(aN).$ This rare chance of a huge product balances out all the tiny products, keeping the mean at $1.$

It is not easy to analyze the product of many Normal variables. Instead, we may gain insight from a simpler case. Let $Y_1, Y_2, \ldots,$ be a sequence of independent Rademacher variables: that is, each of these has a $1/2$ chance of being either $1$ or $-1.$ Pick some number $0 \lt a \lt 1$ and define $X_i = 1 + aY_i,$ so that each $X_i$ has equal chances of being $1\pm a.$ Clearly $E[X_i] = 1 = \mu_i$ for all $i.$ Consider the product of the first $N$ of these $X_i.$ Suppose, in a simulation, that $k$ of these values equal $1-a$ and (therefore) the remaining $N-k$ of them equal $1+a.$ The product then is $(1-a)^k(1+a)^{N-k}.$ How small must $k$ be for this product to exceed $1$ ? Given $N$ and $a,$ we must solve the inequality $$(1-a)^k(1+a)^{N-k} \ge 1$$ for $k.$ By taking logarithms, this is equivalent to $$k \le N \frac{\log(1+a)}{\log(1+a) - \log(1-a)}.$$ Because each $X_i$ has equal and independent chances of being $1\pm a,$ the distribution of $k$ is Binomial $(N, 1/2),$ which even for moderate sizes of $N$ ( $N \ge 10$ is fine) is nicely approximated by a Normal $(N/2, \sqrt{N}/2)$ distribution. Thus, the chance that the product is $1$ or greater will be close to the value of the standard Normal CDF at $Z$ (the tail area under the Bell Curve left of $Z$ ) where $$Z = \frac{N \frac{\log(1+a)}{\log(1+a) - \log(1-a)} - \frac{N}{2}}{\sqrt{N}/2} = \text{constant}\times \sqrt{N}.$$

You can see where this is going! As $N$ grows large, $Z$ is pushed further out to the left, making it less and less likely to observe any product greater than $1$ in a simulation. In the question, where the standard deviation is $0.2,$ the value $a=0.2$ will closely reproduce the simulation behavior. In this case the constant is $$\text{constant} = \frac{2\log(1+a)}{\log(1+a)-\log(1-a)} - 1 = -0.100\ldots$$ Taking $N=2\times 10^6,$ for instance, as in the question, compute $Z \approx -142.$ The chance that $k$ is small enough to produce a value this negative is less than $10^{-4000}.$ You can't even represent that in double precision floats.
It would take far more than the age of the universe to create a simulation that had the remotest chance of producing such an imbalance between the $1+a$ and $1-a$ values that the product exceeds $1.$ In short, for all practical purposes, when $N$ is sufficiently large ( $N \gg 5000$ will do when $a=1/5$ ), you will never observe a value above $1$ in this simulation, even though the mean of the product is $1.$
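A small simulation makes the point tangible (a hedged sketch; the sample sizes are chosen arbitrarily). Working on the log scale avoids underflow: the log-product concentrates near $N\,E[\log X] \approx -\sigma^2 N/2,$ far below $\log(\mu^N)=0.$

#Hedged sketch: log-products of N(1, sd = 0.2) samples sit far below log(mu^N) = 0
set.seed(1)
N <- 2000
log.prod <- replicate(1000, sum(log(abs(rnorm(N, mean = 1, sd = 0.2)))))
summary(log.prod)    #centred near -0.02*N = -40: the products are tiny
mean(log.prod > 0)   #essentially zero: the product virtually never exceeds 1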
{ "source": [ "https://stats.stackexchange.com/questions/578090", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/360174/" ] }
578,736
Let the function $XOR:\{0,1\} \times \{0,1\} \to \{0,1\}$ be the function defined by $$\begin{align} XOR(0,0) &= 0, \\[6pt] XOR(0,1) &= 1, \\[6pt] XOR(1,0) &= 1, \\[6pt] XOR(1,1) &= 0. \\[6pt] \end{align}$$ Let $\mathcal{H}$ be the set of all linear classifiers on $\Bbb R^2$ , where \begin{align*} h(x_1,x_2) = \begin{cases} 1 & & \text{if} \ ax_1+bx_2+c < 0, \\[6pt] 0 & & \text{if} \ ax_1+bx_2+c \ge 0, \\[6pt] \end{cases} \end{align*} for all $h\in \mathcal{H}$ and for some $a,b,c \in \Bbb R$ . Show that there is no $h \in \mathcal{H}$ such that $h(x_1,x_2)=XOR(x_1,x_2)$ for any $(x_1,x_2) \in \{0,1\} \times \{0,1\}$ . Show that there are exactly $16$ distinct functions $f:\{0,1\} \times \{0,1\} \to \{0,1\}$ . I have no idea how to start with this problem. Any ideas?
Draw a picture. The question asks you to show it is not possible to find a half-plane and its complement that separate the blue points where XOR is zero from the red points where XOR is one (in the sense that the former lie in the half-plane and the latter lie in its complement). One (flawed) attempt is shown here, where the half-plane is shaded in contrast to its complement. This particular example doesn't work because both the half-plane and its complement each contain one blue point and one red point. After pondering this a little, you might be inspired to attempt a proof by contradiction: suppose there were numbers $(a,b,c)$ for which the sign of $ax_1 + bx_2 +c$ agreed with the posted values of XOR. Plugging in all four possibilities for $(x_1,x_2)$ leads to this table. $$\begin{array}{lcccr} \text{Location} & x_1 & x_2 & \operatorname{XOR}(x_1,x_2) & a x_1 + b x_2 + c \lt 0? \\ \hline \text{Bottom left} & 0 & 0 & \color{blue}0 & c \ge 0 \\ \text{Top left} &0 & 1 & \color{red}1 & b + c \lt 0\\ \text{Bottom right} &1 & 0 & \color{red}1 & a + c \lt 0\\ \text{Top right} &1 & 1 & \color{blue}0 & a + b + c \ge 0 \end{array}$$ The stated values of XOR determine what the inequalities in the right hand column must be. If we were to sum the first three inequalities, after rewriting the first as $-c \le 0,$ we would obtain $$a + b + c = (-c) + (b+c) + (a+c) \lt 0,$$ exactly the opposite of the last inequality: there's the contradiction. It's a simple algebraic way of showing that if the bottom left point were separated from the upper left and bottom right points, then it would also have to be separated from the top right point--but that's exactly the opposite of what we need to separate the values of XOR. The second question is a simple (and mathematically unrelated) counting problem. The possible values of $(x_1,x_2)$ designate four points, each of which may take on one of the two values $0$ or $1,$ giving $2^4=16$ possibilities. If this isn't completely obvious, then it's a worthwhile exercise to write down all sixteen functions. You can do it with a table of four rows and 18 columns: one column for $x_1,$ another for $x_2,$ and the remaining 16 for the distinct functions. If you do this systematically, you will notice that those 16 columns correspond to the 16 four-digit binary numbers 0000, 0001, ..., 1111. You can also depict all 16 functions in the manner of the figure above: take out your crayons; choose two of them; and color the four points in as many distinct ways as possible. The titles in this figure give some standard names for the functions. The background shading depicts separating half-planes wherever they exist.
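If you'd like to check both parts computationally, here is a hedged sketch that samples random coefficient triples and records which of the 16 functions a linear classifier can realise (the sampling scheme is arbitrary; with this many draws it should find every realisable pattern with overwhelming probability):

#Hedged sketch: which of the 16 Boolean functions are linearly realisable?
pts <- expand.grid(x1 = 0:1, x2 = 0:1)    #rows: (0,0), (1,0), (0,1), (1,1)
realised <- character(0)
set.seed(5)
for (i in 1:20000) {
  abc <- runif(3, -1, 1)                  #random (a, b, c)
  h <- as.integer(abc[1]*pts$x1 + abc[2]*pts$x2 + abc[3] < 0)
  realised <- union(realised, paste(h, collapse = ""))
}
length(realised)                          #14: every pattern except XOR and its complement
c("0110", "1001") %in% realised           #FALSE FALSE: XOR and XNOR never appear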
{ "source": [ "https://stats.stackexchange.com/questions/578736", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/360683/" ] }
579,338
I'm reflecting about statistical significance and I think a variable can be irrelevant and significant at the same time. But, how do I explain that? And if that were the case, should I include that variable to my regression model?
Here is a good example. Suppose you are interested in modelling the effect of ice cream sales on the incidence of shark attacks. Now, clearly there is no causal relationship: buying ice cream in no way affects the incidence of shark attacks. However, there is a third variable which affects both, namely the temperature outside. On hot days, people will want ice cream and might also want to go swimming. Hence, hot days see increases in both ice cream sales and shark attacks (by virtue of more people going swimming and hence being at risk for an attack). This is known as confounding. So clearly, ice cream sales are irrelevant when studying shark attacks, but were we to regress shark attack numbers on ice cream sales, we would find a significant result. However, that statistical significance is confounded by temperature, and so the result is meaningless.

EDIT: Because this has generated conversation around the interpretation of "irrelevant", I feel the need to make some additional comments and concede I should have said "it depends" (even though I object to the arguments made). If "relevant" is understood to mean "closely connected or appropriate to what is being done or considered", then "irrelevant" should be taken to mean "not closely connected or appropriate...". In this case, ice cream sales could be considered "relevant" since they would be correlated -- but I find interpreting "relevant" in a purely statistical sense to be an extremely narrow (and irregular) way to attach meaning to "relevant". Nonetheless, it is an interpretation one may have. If, however, you interpret relevance or "closely connected..." in a mechanistic sense -- i.e. how closely connected are ice cream sales and shark attacks in the sense that I could change the former and affect the latter or vice versa -- then ice cream sales would be considered irrelevant and my original comment applies.
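A hedged simulation of the example (all numbers invented): temperature drives both variables, the naive regression of attacks on sales is "significant", and the effect disappears once temperature is included.

#Hedged sketch: a spurious 'significant' coefficient induced by a confounder
set.seed(7)
temp <- rnorm(200, mean = 25, sd = 5)             #the confounder
sales <- 50 + 4*temp + rnorm(200, sd = 10)        #ice cream sales driven by temperature
attacks <- 1 + 0.3*temp + rnorm(200, sd = 2)      #shark attacks driven by temperature
summary(lm(attacks ~ sales))$coefficients         #sales looks significant...
summary(lm(attacks ~ sales + temp))$coefficients  #...until temperature is controlled for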
{ "source": [ "https://stats.stackexchange.com/questions/579338", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/354685/" ] }
580,086
We compute confidence intervals to estimate the true population mean, either from a sample (when the population standard deviation is unknown) or for an entire population (when the population standard deviation is known). But if we are given an entire population, why don't we just calculate the average of that population instead of computing a confidence interval to estimate the population mean, since we already have the entire population? I'm not a Math or Stat major, so I apologize if my question does not sound "smart".
We’d love to calculate population parameters! All of inferential statistics is about inferring. In other words, we are using our data at hand to guess about something greater than the data (e.g., the population from which the data are drawn). We can be silly with our guesses, or we can be thoughtful. Good statisticians intend to be thoughtful in order to make good guesses. Those guesses are the inferences. If we had the whole population, we wouldn’t have to guess, so inference would not be useful. We would just calculate the population parameters, and that’s the end. Alas, we tend to be interested in something greater than our data, so inferences are necessary. The specific example you give of doing a z-test with a known variance but an unknown mean is a special case. With real data, we never know the true variance. However, such a test is useful as a first example of how to do hypothesis testing, and it serves a useful educational purpose in a “Stat 101”-type of class.
{ "source": [ "https://stats.stackexchange.com/questions/580086", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/361704/" ] }
580,647
Note: I am a bit of a novice when it comes to statistics and data analysis. Reading the chapter on regression to the mean in Kahneman's Thinking Fast and Slow , I came across the following passage: The very idea of regression to the mean is alien and difficult to communicate and comprehend. Galton had a hard time before he understood it. Many statistics teachers dread the class in which the topic comes up, and their students often end up with only a vague understanding of this crucial concept. I really don't see what's difficult/complicated to explain or counter-intuitive, only that the concept might "hide" and be easy to miss in certain situations. But the statement "Many statistics teachers dread the class..." makes me actually believe that I have misunderstood something. Made simple, here's my intuitive understanding: If we observed an outcome that had a very low probability of happening, then the next time we do the same thing again, the outcome will not be as extreme. To me this is self-evident, since by definition, the first event was unlikely. If a bunch of people throw a "gaussian die" with values 1-10, and some throw 1 or 10, you don't expect them to repeat it the next time for the same reason you didn't expect them to do it the first time. This will then of course be more or less prominent depending on how much randomness (luck) is involved. Question: Is my intuitive understanding correct or am I missing something? I don't want to sound like I think I'm smarter than e.g. any statistics teachers, quite the contrary; I genuinely believe I've missed something here.
I think your intuitive understanding is correct. As for why it poses a conceptual problem for so many, I can only offer a guess. Most statistical procedures are designed around means: comparing means of groups, evaluating how the mean of one variable changes across gradients in other variables, etc. But we are occasionally very interested in evaluating hypotheses about or predicting extremes: the fastest runner, the best-performing stock, etc. Adapting the statistical procedures and mental toolkit/shortcuts we have developed that is based on means to deal with extremes can be nontrivial. It's often not as simple as doing a quantile regression, because the baseline itself has poor (or perhaps 'unstable' is a better term) statistical properties, as you note in your question. If people have not grappled with the importance of this baseline problem before, it can be challenging to understand because it is fairly fundamental and requires revisiting (and revising) some basic ideas. This can be especially challenging when dealing with time series, which poses its own distinct set of challenges even when we are not interested in extremes. In my experience, even experienced scientists can struggle with the idea. The fact that this seems obvious to you is excellent news - you have developed a mental toolkit which makes it easier for you to deal with this. Often, exposure to concepts like these early on in one's training can make otherwise difficult concepts clear. Choosing when to introduce ideas like this is an important challenge when designing courses for students: too early and it can confuse and seem too abstract, too late and it risks taking extra effort because of having to undo earlier mental models. EDIT: To add a realistic example that might shed some light on this. Imagine a business that tests all its employees for performance on some skill. The results show that there is clear room for improvement and the business could make some money by improving those skills. It would be too disruptive to retrain everyone, so instead it takes the people who score in the lowest 10% and puts them through extra training. At the end of the training, this group is tested again. And voila - their average score is higher than it was in the original test! The training has worked, the business has made a useful investment, and should now make more money - right? Not necessarily. If you understand regression to the mean, you would realise that just testing the bottom 10% a second time should lead to a higher average score, even without any additional training. This is because these individuals' position in the bottom 10% on the first test is because of a combination of (probably poor) ability and random factors that happen to reduce performance in that test. Since they were in the bottom 10%, those random factors were likely more negative in this instance; test them again and the random factors would probably not be quite so negative. They would seem to improve in performance, but it would be illusory. The crucial step here was the selection of the extreme values for extra training/testing - but this decision is entirely understandable! It's just that it's easy to forget that inference and prediction are much trickier when you have selected an extreme sample (or a non-random sample more generally).
Put yourself in the place of a businessperson trying to improve performance, or even a student trained in simple statistical procedures - wouldn't every action taken by the business in the example seem reasonable?
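The business example can be simulated in a few lines (all numbers are made up for illustration): select the bottom 10% on a first noisy test, retest with no intervention at all, and the group's average still rises.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
ability = rng.normal(100, 10, n)        # stable underlying skill
test1 = ability + rng.normal(0, 10, n)  # observed score = skill + luck
test2 = ability + rng.normal(0, 10, n)  # retest: fresh luck, no training

bottom = test1 <= np.quantile(test1, 0.10)  # "send the worst 10% to training"
print(test1[bottom].mean())  # very low: selected partly for bad luck
print(test2[bottom].mean())  # noticeably higher, with no intervention at all
```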
{ "source": [ "https://stats.stackexchange.com/questions/580647", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/362080/" ] }
581,630
This is a question as part of a paper review which was already published. The authors of the paper publish $R^2$ and RMSE in training but only RMSE in validation. Utilizing the published code, $R^2$ can be calculated on the validation data and is in fact negative in all cases, while RMSE matches what is published. It is a regression rather than classification task. There are roughly $45$ test cases using $2$ separate models (RF, ANN), meaning around $90$ generated models and $90$ predictions. Only $2$ - $3$ of the $90$ predictions have a positive $R^2$ value and those are all below $0.1$ ! I am trying to convince my team that the results are poor, but they want to ignore the $R^2$ findings and suggest that a "good" RMSE is enough. The RMSE looks okay but based on a hunch (negative $R^2$ ) I made two additional models (mean and last sample) which often match or beat the RMSE of the RF and ANN models published in the paper. The mean model just takes the mean of training and uses that in all predictions. The dataset is a timeseries (time-varying, usually $1$ - $2$ samples per week), so the last sample model just uses the previous sample's value. As my team wants to ignore the bad $R^2$ , is there another way to show that the paper's RF and ANN models do not produce statistically relevant results? Perhaps there's a statistical test that I can use to show the results are not significant but I'm not sure where to begin. As an aside, the problem in this domain is often also formulated as a binary classification task with a given threshold. In this direction, the paper's code actually manually attempts to calculate AUROC but appears to fail in doing so. However, the details of the AUROC calculation are not provided in the paper, leaving readers to assume that the standard AUROC method is applied! Rather than using a library to calculate the AUROC, the code calculates it manually using some sort of a bootstrapping process. When I use sklearn's scoring methods for AUROC, it appears all of the $90$ models are around or below $0.5$ (i.e. completely random or even broken!) Perhaps $1$ - $3$ models (out of $90$ ) make a prediction around $0.6$ or $0.7$ . Again, the team wants to ignore this as the main focus of the paper is apparently the regression task. Edit: Regarding a negative $R^2$ value, the authors calculate $R^2$ using sklearn's r2_score method ( https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html ). According to the documentation: "Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse)." Edit 2: This question was previously posted on Data Science ( https://datascience.stackexchange.com/questions/112554/showing-machine-learning-results-are-statistically-irrelevant ) but moved here after feedback. Before moving, the feedback there suggested a few things including: $0$ or less $R^2$ means that a guess would be better (which is why I included models for mean and t-1); and perhaps it's wise to be skeptical of such a model. Also, it should be noted that as a team we're looking to improve on the paper's results leading to a publication. Perhaps, to help prove insignificance of the results, I could simply show a tally of how many times mean/last sample beat or match the paper's models? (Based on both RMSE and $R^2$ , the mean model beat the paper's models in a subset of 17/30 tests which we're presently reviewing.)
You answered yourself: I made two additional models (mean and last sample) which often match or beat the RMSE of the RF and ANN models published in the paper. The mean model just takes the mean of training and uses that in all predictions. The dataset is a timeseries (time-varying, usually 1-2 samples per week), so the last sample model just uses the previous sample's value. You benchmarked the result with trivial models and they outperform the model. This is enough to discard the model. What you did is a pretty standard procedure for validating time-series models. Negative $R^2$ values are consistent with your benchmarks. In fact, $R^2$ already compares the model to the mean model because it is defined as $$ R^2 = 1 - \frac{\sum_i (y_i - \hat y_i)^2}{\sum_i (y_i - \bar y)^2} $$ so the numerator is the sum of squared errors of the model and the denominator is the sum of squared errors of the mean model. Your model should have a smaller squared error than the mean model for it to be positive. Maybe the authors of the published paper didn't run the sanity check? Many crappy results somehow get published. I'm afraid that if reasonable arguments like comparing the results to the benchmarks don't convince your colleagues, I doubt “a statistical test” will. They already are willing to ignore the results they don't like, so it seems rather hopeless.
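As a concrete illustration of that definition (toy numbers of my own, not from the paper under discussion), here is the same comparison in code: the mean model gets $R^2 = 0$ by construction, and any model with a larger squared error than the mean model gets a negative $R^2$.

```python
import numpy as np

def r2_score(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)         # model squared error
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # mean-model squared error
    return 1 - ss_res / ss_tot

y_true = np.array([3.1, 2.8, 3.5, 3.0, 2.9])
mean_model = np.full_like(y_true, y_true.mean())
bad_model = y_true + np.array([0.9, -1.1, 1.0, -0.8, 1.2])  # worse than the mean

print(r2_score(y_true, mean_model))  # 0.0 by construction
print(r2_score(y_true, bad_model))   # negative: worse than predicting the mean
```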
{ "source": [ "https://stats.stackexchange.com/questions/581630", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/338294/" ] }
583,464
I presume this is a rather stupid question, but I hope some of you can find a bit of time to entertain it. Looking at asymptotic behavior of estimators/test statistics etc. means looking at their behavior as sample size approaches infinity. But if our sample size is infinite, doesn't that mean that we already have the whole population? Case in point: consistency of OLS estimates. Under certain assumptions we are guaranteed convergence of estimates to true parameters in the limit, but at infinite sample size we already have the whole population, so why would we even want to estimate anything when every possible data point is already contained in the sample? All we have to do is perform a lookup.
The first reason we look at the asymptotics of estimators is that we want to check that our estimator is sensible. One aspect of this investigation is that we expect a sensible estimator will generally get better as we get more data, and it eventually becomes "perfect" as the amount of data gets to the full population. You are correct that when $n \rightarrow \infty$ we have the whole (super)population, so presumably we should then be able to know any identifiable parameters of interest. If that is the case, it suggests that non-consistent estimators should be ruled out of consideration as failing a basic sensibleness criterion. As you say, if we have the whole population then we should be able to determine parameters of interest perfectly, so if an estimator doesn't do this, it suggests that it is fundamentally flawed. There are a number of other asymptotic properties that are similarly of interest, but less important than consistency. Another reason we look at the asymptotics of estimators is that if we have a large sample size, we can often use the asymptotic properties as approximations to the finite-sample behaviour of the estimator. For example, if an estimator is known to be asymptotically normally distributed (which is true in a wide class of cases) then we will often perform statistical analysis that uses the normal distribution as an approximation to the true distribution of the estimator, so long as the sample size is large. Many statistical hypothesis tests (e.g., chi-squared tests) are built on this basis, and so are a lot of confidence intervals.
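A quick illustrative sketch of the first point (distribution, parameters, and seed are arbitrary choices of mine): the sample mean is a consistent estimator of the true mean, and you can watch the "gets better with more data" behaviour directly.

```python
import numpy as np

rng = np.random.default_rng(2)
true_mean = 5.0
for n in (10, 100, 10_000, 1_000_000):
    sample = rng.exponential(true_mean, n)
    print(n, sample.mean())  # estimates drift toward 5.0 as n grows
```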
{ "source": [ "https://stats.stackexchange.com/questions/583464", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/348215/" ] }
586,385
Assume you sample two numbers, randomly drawn from 1 to 10; you could choose two strategies: 1) pick with replacement and 2) pick without replacement. Which strategy would you prefer to maximize the expected product? I encountered this problem while preparing for an interview. My intuition is picking with replacement would be better. However, I found it too complex to use the brute-force method to prove it. Could anyone suggest a better approach?
Hint: Note the relationship between $E[XY]$ and the covariance. Now think about the sign of the covariance - or if you prefer it in those terms, the sign of the correlation will work - under the two sampling schemes (it's zero under one but clearly not under the other, noting that we're here taking $X$ and $Y$ as the values on the two draws). The solution to maximizing $E[XY]$ is immediate. This sounds like the sort of thing one might encounter in one of those interviews where they try to ask you some odd question and see what you do with it -- there's typically a shortcut that avoids explicit calculation; this one definitely has a shortcut. Realizing the connection between $E[XY]$ and $\text{Cov}[XY]$ and then the connection to the two sampling methods, one should be able to answer it in a matter of seconds, and justify it. It seems further explanation may be helpful. These are the ideas involved. Unconditional expectations are unchanged whether you use with replacement or without replacement. That is $E[Y]=E[X]$ * under both schemes. If you sample without replacement, the covariance must be negative because when you take a value below the population mean there's now more values available above the population mean than below it for the second draw, and vice versa. That is, the second value is more likely to be the opposite side of the mean from the first value than on the same side in a way that will make the covariance negative for this case. $E[XY] = E[X]E[Y]+\text{Cov}[X,Y]$ With replacement, covariance is 0 and $E[XY]=E[X]E[Y]$ Without replacement, covariance is <0 and $E[XY] < E[X]E[Y]$ * If this doesn't seem obvious, consider the following thought experiment: I take a deck of cards numbered 1 to 10 and thoroughly shuffle them, placing the deck face down on the table. Person A will take the first card and person B the second card. But now person B asks that we extend the shuffling a little further and interchange the positions of the top two cards. Clearly that last step doesn't change the distributions experienced by A and B (the additional mixing step doesn't make it any less random). So B must experience the same distribution under both schemes, and thereby B has the same (unconditional) distribution as A does -- it doesn't matter if you take the first card or the second card. Plainly, then, $E[Y]=E[X]$ . (Naturally, however, if B observes what A got before drawing, B's conditional distribution and hence the conditional expectation $E[Y|X=x]$ is impacted by the specific value of $x$ . This is not the situation we're dealing with, since we were trying to find the unconditional expectation $E[Y]$ .)
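Although the point of the shortcut is to avoid explicit calculation, the brute force is cheap enough to confirm it (a small verification sketch of my own):

```python
from itertools import permutations, product
from statistics import mean

values = range(1, 11)

with_repl = mean(x * y for x, y in product(values, repeat=2))   # covariance = 0
without_repl = mean(x * y for x, y in permutations(values, 2))  # covariance < 0
print(with_repl, without_repl)  # 30.25 vs ~29.33: with replacement wins
```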
{ "source": [ "https://stats.stackexchange.com/questions/586385", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/334639/" ] }
588,063
I have a continuous random variable $\tau$ and I want to evaluate $$ E\left(\sum_{i=1}^{\lfloor \tau \rfloor} Y_i\right), $$ where $Y_i$ are known, non-random, and $\lfloor . \rfloor$ is the floor function. If $Y_i$ s were iid I know I could use Wald's equation , for instance, but that is not the case. I am able to solve this through Monte Carlo , as I can simulate different $\tau$ s. However, this will be very time-consuming since $Y_i$ can be big and the Monte Carlo samples can be large. It would be significantly easier if I could approximate the expectation above with $$ \sum_{i=1}^{\lfloor E(\tau) \rfloor} Y_i. $$ Is there a theoretical guarantee of this approximation? Note on support: The vectors $Y_i$ are typically not large in magnitude, but they can be large in dimensions. The domain of $\tau$ is fixed to be $(1,N)$ , where $N$ is known in advance, and it is unimodal.
The expectation $$ \mathbb E\left[\sum_{i=1}^{\lfloor\tau\rfloor} Y_i\right]=\mathbb E\left[\sum_{i=1}^{\infty} \mathbb I_{\tau\ge i} Y_i\right]$$ simplifies into $$Y_1\underbrace{\mathbb P(\tau\ge 1)}_{=1}+Y_2\mathbb P(\tau\ge 2)+\cdots+Y_N\underbrace{\mathbb P(\tau\ge N)}_{=0}$$ and since the $Y_i$ 's are known, only the cdf of $\tau$ need be approximated by Monte Carlo, if I understand correctly, resulting in $$Y_1+Y_2\hat{\mathbb P}(\tau\ge 2)+\cdots+Y_{N-1}\hat{\mathbb P}(\tau\ge N-1)$$ where $\hat{\mathbb P}$ denotes the empirical distribution. The magnitude / dimension of the $Y_i$ 's thus does not impact the Monte Carlo effort.
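A small numerical check of this identity (with a made-up stand-in distribution for $\tau$ and placeholder values for the $Y_i$, since neither is specified in the question): the tail-sum formula and the brute-force average agree up to floating-point rounding.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8
Y = np.arange(1.0, N + 1)              # placeholder values for the known Y_i

tau = rng.uniform(1, N, size=100_000)  # stand-in distribution on (1, N)
surv = np.array([np.mean(tau >= i) for i in range(1, N + 1)])  # estimates P(tau >= i)

tail_sum = float(np.dot(Y, surv))      # Y_1 * 1 + Y_2 * P(tau >= 2) + ...
brute = np.mean([Y[: int(t)].sum() for t in tau])  # direct E[sum up to floor(tau)]
print(tail_sum, brute)                 # identical up to rounding
```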
{ "source": [ "https://stats.stackexchange.com/questions/588063", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/231764/" ] }
590,861
In the beginning of chapter 2 of Information Theory, Inference, and Learning Algorithms , the author says that he will refrain from being unnecessarily rigorous and provides the example of saying that something is "true" instead of saying more formally that said thing is "true with probability 1". Is there a difference between something being "true" and "true with probability 1"?
If something is true, then it is true with probability one. Or at least let us assume that without raising further complications. But something that is true with probability one is not necessarily true. This is the notion of something being almost surely true. Example Suppose we sample uniformly on the interval $x \in [0,1] \subset \mathbb{R}$ . Since $P(x=\frac{1}{2}) = 0$ , we can infer that $P(\lnot [x = \frac{1}{2}]) = 1 - P(x = \frac{1}{2}) = 1$ . It is almost surely the case that you will not sample $x=\frac{1}{2}$ , but it isn't impossible. Example: Dart Throwing This example is from Wikipedia. Imagine throwing a dart at a unit square (a square with an area of 1) so that the dart always hits an exact point in the square, in such a way that each point in the square is equally likely to be hit. Since the square has area 1, the probability that the dart will hit any particular subregion of the square is equal to the area of that subregion. For example, the probability that the dart will hit the right half of the square is 0.5, since the right half has area 0.5. Next, consider the event that the dart hits exactly a point in the diagonals of the unit square. Since the area of the diagonals of the square is 0, the probability that the dart will land exactly on a diagonal is 0. That is, the dart will almost never land on a diagonal (equivalently, it will almost surely not land on a diagonal), even though the set of points on the diagonals is not empty, and a point on a diagonal is no less possible than any other point.
{ "source": [ "https://stats.stackexchange.com/questions/590861", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/243542/" ] }
591,546
Let's say we know that A is independent of B, or mathematically: $$P(A|B) = P(A)$$ Then how come we can't say the following is necessarily true: $$P(A|B,C) = P(A|C)$$ If the outcome of B doesn't have an effect on the outcome of $A$ , then why would the outcome of $B$ AND $C$ have an effect on $A$ that is different from the effect of the outcome of just $C$ ? Is there perhaps a simple counterexample to illustrate this? For example, let's say $A$ and $B$ are the events of picking an ace of spades from a deck of cards (each has their own deck of cards).
My favourite example is a chessboard: if you pick a point uniformly then the row, column, and colour are pairwise independent. Suppose A is colour, B is row, and C is column. Then P(A="white"|B)=0.5 for any value of B and P(A="white"|C)=0.5 for any value of C, but P(A="white"|B,C) is either 0 or 1 for any values of B and C.
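The counterexample is small enough to verify by exhaustive enumeration (a sketch of my own; the colouring convention is one common choice):

```python
from itertools import product

squares = list(product(range(8), repeat=2))  # (row, column) pairs
white = lambda s: (s[0] + s[1]) % 2 == 0     # one common colour convention

def p_white(population):
    return sum(map(white, population)) / len(population)

print(p_white(squares))                              # 0.5 overall
print(p_white([s for s in squares if s[0] == 3]))    # 0.5 given the row
print(p_white([s for s in squares if s[1] == 5]))    # 0.5 given the column
print(p_white([s for s in squares if s == (3, 5)]))  # 1.0: row AND column pin the colour
```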
{ "source": [ "https://stats.stackexchange.com/questions/591546", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/134102/" ] }
593,170
When the null hypothesis is true, the p-value of a test should have the standard uniform distribution. Here is what I get with t.test(...) in R using two Gaussian samples of size 5.

set.seed(123)
p.val <- replicate(n=100000, t.test(rnorm(n=5), rnorm(n=5))$p.value)
hist(p.val, breaks=50)

You can see that there is a deficit of low p-values. Below is what I get with somewhat bigger samples of size 10.

set.seed(123)
p.val <- replicate(n=100000, t.test(rnorm(n=10), rnorm(n=10))$p.value)
hist(p.val, breaks=50)

The deficit of low p-values is gone. So what happens in the first example? Is there something wrong with t.test(...) in R for small-ish sample sizes?

> sessionInfo()
R version 4.2.1 (2022-06-23)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Catalina 10.15.7
Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRlapack.dylib
locale: [1] en_CA.UTF-8/en_CA.UTF-8/en_CA.UTF-8/C/en_CA.UTF-8/en_CA.UTF-8
attached base packages: [1] stats graphics grDevices utils datasets methods base
loaded via a namespace (and not attached): [1] compiler_4.2.1
t.test performs Welch's t -test if the argument var.equal is not explicitly set to TRUE . The distribution of the test statistic (under the null hypothesis) in Welch's t -test is only approximated by a t -distribution and this approximation gets better with increasing sample sizes. Therefore, the result of your simulation is not particularly surprising. Addendum The test statistics in Welch's t -test and Student's t -test coincide if the two sample sizes $n_1$ and $n_2$ (of group $1$ and $2$ , respectively) are equal. Hence, the discrepancy in the (simulated) p -value distributions of the two tests (note that that of Student's t -test is uniform on $\left[0,1\right]$ ) under the null hypothesis is due to the discrepancy between the estimated degrees of freedom $\nu$ in Welch's t -test and the degrees of freedom $\tilde\nu=2\left(n-1\right)$ in Student's t -test, where $n=n_1=n_2$ . It is easy to see that, if $n_1=n_2$ , the estimated degrees of freedom are given by $$ \nu = \frac{\left(n-1\right)\left(s_1^2 + s_2^2\right)^2}{s_1^4+s_2^4} = \frac{\left(n-1\right)\left(s_1^4 + s_2^4 + 2s_1^2s_2^2 \right)}{s_1^4+s_2^4}, $$ where $s_1$ and $s_2$ are the Bessel-corrected sample standard deviations. By the AM–GM inequality and the non-negativity of sample standard deviations, $2s_1^2s_2^2 \leq s_1^4 + s_2^4$ (with equality only if $s_1 = s_2$ ) and $2s_1^2s_2^2 \geq 0$ , therefore $n-1 \leq \nu \leq 2\left(n-1\right)=\tilde\nu$ . This shows that the estimated degrees of freedom can only underestimate (or, but almost never, coincide with) the true degrees of freedom in the given situation, which leads to the conservative p -values seen in your simulation. This behavior is nicely illustrated in Thomas Lumley's answer . Since $s_1$ will tend to be closer to $s_2$ with increasing $n$ , we can also see that $\nu$ will tend to be closer to $\tilde\nu$ as $n$ increases. Additionally, for a fixed difference $\nu - \tilde\nu$ in degrees of freedom of two t -distributions, their PDFs become increasingly similar with increasing $\nu$ , and $\nu$ increases with $n$ in our case. This explains the improvement of the approximation and hence the p -value distribution with increasing group/total sample size.
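The bound $n-1 \leq \nu \leq 2(n-1)$ is easy to check empirically with the equal-$n$ formula derived above (a quick sketch; normal data and the seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
for _ in range(5):
    s1 = np.std(rng.normal(size=n), ddof=1)  # Bessel-corrected, group 1
    s2 = np.std(rng.normal(size=n), ddof=1)  # Bessel-corrected, group 2
    nu = (n - 1) * (s1**2 + s2**2) ** 2 / (s1**4 + s2**4)  # Welch df, equal n
    print(n - 1, "<=", round(nu, 2), "<=", 2 * (n - 1))
```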
{ "source": [ "https://stats.stackexchange.com/questions/593170", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/10849/" ] }
603,023
I know that it is generally accepted that the two-sided test is the "Gold Standard". However, I just wanted to see if there are real life, practical applications of the one-sided test in the real world, or if this only exists in academia. Edit: "Generally accepted / Gold Standard" in the sense of being the default recommendation in the book Introduction to Statistical Learning , 2nd ed., p.558, footnote 8: A one-sided $p$ -value is the probability of seeing such an extreme value of the test statistic; e.g. the probability of seeing a test statistic greater than or equal to $T$ =2.33. A two-sided $p$ -value is the probability of seeing such an extreme value of the absolute test statistic; e.g. the probability of seeing a test statistic greater than or equal to 2.33 or less than or equal to −2.33. The default recommendation is to report a two-sided $p$ -value rather than a one-sided $p$ -value, unless there is a clear and compelling reason that only one direction of the test statistic is of scientific interest.
I would disagree that one-sided tests are academic and claim that they are more often used in industrial applications. Based on personal experience with journal referees, I would even go so far as to say that there is some bias against one-sided tests in (social science) academia. Most modern textbooks devote very little attention to them. There is some opposition in tech as well. There is a good list of pro and con examples here , going all the way back to Fisher in the 1930s. Equivalence and non-inferiority tests in medical trials are another example. While there is no firm boundary between science in academia and industry, the distinction is still useful. I suspect that academic science is more concerned with demonstrating the existence/nonexistence of relationships, which needs two sides. But industrial scientists focus more on directional questions, where one-sided makes more sense and is more efficient. Efficient here means allowing for more/shorter experiments, with quicker feedback on ideas. This efficiency comes at the cost of partially unbounded confidence intervals. For practical advice, the question should determine the test: Is A any different from B? $\rightarrow$ two-sided test. Is A any worse/better than B? $\rightarrow$ one-sided tests. Both tests should be combined with pre-registration, ex-ante power calculations, and robustness checks to be safe. Switching to one-sided to get significance after peeking at the data is a bad idea. There is also nothing wrong with running another experiment when you see an effect in the other, unexpected direction. Questions or claims that produce two-sided tests tend to look like this: Is there any relationship between Y and X? (existence) X has no influence on Y whatsoever (nonexistence) X has no relationship with Y (also nonexistence) A is not any different from B (nonexistence again) One-sided tests come from directional questions: Is A better than B? Is doing X worse than doing Y? Is A better than B by at least k? Is the change in Y associated with changing X less than m? Here are two final examples from the business world. You are evaluating a marketing campaign for your company. You need the added revenue from advertising to exceed the cost of showing the ad. For the decision about launching the campaign, you don't care if the ad drives away customers; you would not launch it anyway. Quantifying the uncertainty about just how terrible the effect is would be wasteful. You are considering reducing the number of photos taken per product to lower photography costs and hosting expenses. You need to make sure that the dip in sales is smaller than the savings from shorter photoshoots.
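For a concrete sense of the directional point, here is a toy sketch (all numbers invented for illustration): when the observed effect lands in the hypothesized direction, the one-sided p-value is half the two-sided one, which is where the efficiency gain comes from.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
a = rng.normal(0.4, 1, 30)  # hypothetical "treatment" group
b = rng.normal(0.0, 1, 30)  # hypothetical "control" group

t_two, p_two = stats.ttest_ind(a, b)                         # "any different?"
t_one, p_one = stats.ttest_ind(a, b, alternative="greater")  # "is A better?"
print(p_two, p_one)  # when t > 0, the one-sided p is half the two-sided p
```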
{ "source": [ "https://stats.stackexchange.com/questions/603023", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/361781/" ] }
203
Shouldn't we reject questions where the asker obviously didn't do their homework? Like this one: What does ^d mean in ls -l | grep ^d? Not that I'm really annoyed, but I think it would be better for them to use man before asking things like this.
The theory on SO that I think should probably apply here is to be the top result in Google someday. That means:
1. All questions are welcome, even if they're easily googled, since future searchers will find us instead of whatever's on Google now, and hopefully our answer is more helpful
2. You shouldn't post answers like "you should google it", since that does nothing to further #1, and will frustrate people if that post does become the top Google result
{ "source": [ "https://unix.meta.stackexchange.com/questions/203", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/1946/" ] }
314
I don't really have a question in mind, but I'm thinking of the kind of things that are covered in APUE(2e) which is POSIX, SUS, etc. C API mostly. Obviously generalized C programming is not on-topic, but what about the Unix C API? It's obviously Unix, but is it too programming for here?
Unix's shell interfaces tend to be pretty close to the syscall interfaces. In particular shell utility errors are often straight lifts of system call errors. So people who understand the one often can help with the other. Therefore I think questions about the system interfaces (i.e. the C API) should be on-topic here, as long as they're about common unix interfaces (not necessarily POSIX, but excluding very different systems like Cocoa) and essentially language-agnostic. Note that this answer only applies to questions about issues like signal delivery, terminal modes, sockets, … which are exposed in similar ways in most languages. It can apply to other low-level APIs associated with unix, e.g. X library calls (that map closely to the X11 protocol) or D-Bus communication. If the question is about an API that only programmers would encounter (e.g. the C string library functions, the Gtk object model, the Linux kernel driver APIs, …), it's off-topic here and should be asked on Stack Overflow instead (or migrated there if it's already been asked here). The guideline is: will the question interest only programmers, or also users and administrators? A sysadmin debugging why a server won't start with truss / strace output is on-topic here. A programmer debugging why his kernel module is causing an OOPS is off-topic.
{ "source": [ "https://unix.meta.stackexchange.com/questions/314", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/29/" ] }
329
Is plan9 sufficiently unix to be on-topic here? (I suppose it might not really be a question of sufficiency; perhaps plan9 is too unix for this site?)
plan9's design is hard-core UNIX philosophy, and I'd be happy to see a few questions here. I can't imagine we'll be overwhelmed.
{ "source": [ "https://unix.meta.stackexchange.com/questions/329", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/1820/" ] }
389
There was a comment and a mod flag on this post about possibly asking the question on AU instead. I don't think our previous discussions on this have been particularly clear-cut, so I'm just going to ask directly: should we migrate questions that only apply to Ubuntu to AskUbuntu? I'm going to quote from one of Gilles' answers: Ubuntu questions can be asked here or on AU, but with different expectations on the answers: AU answers tend to focus on the software and UI that's installed by default, and don't make any effort to apply to non-Ubuntu systems. Unix.SE answers tend to be more generic, as the answerer might know nothing about Ubuntu but suggest a generic method that works on all Linux distributions, say. This assumes that the question has a generic method that works on all distros; should we keep questions about things that don't exist outside of Ubuntu, or migrate them? We're going to be launching very soon, so we'll finally have migration paths 3k users can use, and I imagine AU will be one of the targets. Edit: Don't get too hung up on the particular post I linked to, it was just what reminded me I should ask this before the launch. Edit: The migration policy is now explained in the FAQ: Note that Ubuntu posts are a special case. If your question applies to Ubuntu only, or you're looking for answers that are Ubuntu-specific, you should post it on the Ask Ubuntu Stack Exchange site. If your question applies to other distros or you welcome more generic solutions, you're in the right place here.
I like Gilles advice a lot: Unix.SE answers tend to be more generic, as the answerer might know nothing about Ubuntu but suggest a generic method that works on all Linux distributions, say. So I'd only migrate questions where the question is extremely and unavoidably Ubuntu-centric, and cannot be generalized to other *nixes in any meaningful way .
{ "source": [ "https://unix.meta.stackexchange.com/questions/389", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/73/" ] }
401
It seems the buttons have a strange color that doesn't match anything else on the site.
{ "source": [ "https://unix.meta.stackexchange.com/questions/401", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/2180/" ] }
528
I feel like I'm stuck below 1000 and can't seem to break the glass ceiling! Voting, Answering, Asking? What's the best bet for slowly but surely raising my points?
In order of decreasing rep benefit:
- Answer questions with bounties -- the minimum bounty is 50 rep, and they don't count towards the rep cap
- Post awesome answers that get accepted -- you get 15 rep for an accept, and it doesn't count towards the rep cap
- Post awesome answers that get oodles of upvotes -- you get 10 rep per upvote, but your total rep gain is capped at 200 per day
- Post awesome questions that also get upvotes -- 5 rep per upvote
- Accept answers on your questions -- 2 rep per accept
- Suggest edits to posts and tag wikis -- 2 rep per accepted suggestion

Voting doesn't get you rep (in fact, downvotes cost you 1 rep each, on answers only), but it does give other people rep, encouraging them to contribute, as well as sorting posts by quality -- yay voting
{ "source": [ "https://unix.meta.stackexchange.com/questions/528", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/298/" ] }
578
I've recently been hanging around this site because I enjoy problem solving on linux and learn heaps from seeing other people's solutions. I'm happy to pitch in to solve others' problems, however (here more than on other SE sites) I see questions that make me want to close my browser and walk away. I think there are some things which are productive for neither the asking party nor the answering. A case in point would be today's " Can you write an awk script for me? " (Note: editor changed title to something less provocative). I have no objection to writing scripts for people as proofs of concept or editing real code to help people get on track. What irked me about this question was that it was clearly a homework problem aimed at getting the student to read up on the documentation and understand the available options. It didn't help that the user has a history of asking homework questions, by which I've been bothered before. It presented nothing that would not be covered in any basic awk guide. There was no real-world application to engage with or cause to further. I felt like somebody was trying to cheat me out of my time and wasn't even going to walk away having learned anything. If they wanted to learn they could read a man page first and then ask a specific question. There wasn't even room for people to present creative solutions with other software because the problem had a requirement of using awk. I say that makes for boring clutter on a QnA site. Should I suck it up and move on? Should I tag them myself as homework (or doitforme :P) so I can ignore the whole lot of them? Should we be a little tighter in our on-topic criteria?
I certainly defer to the others here who are active participants -- but our general philosophy is to heavily favor answerers . We feel that the world is awash in questions, but not answers. Answers are the real unit of work in any Q&A system. Therefore, the only logical thing to do is to maximize the happiness and enjoyment of answerers. If this means aggressively closing unworthy or uninteresting questions, so be it. Without a community of people willing to answer questions, it really doesn't matter if there are questions at all, does it?
{ "source": [ "https://unix.meta.stackexchange.com/questions/578", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/1925/" ] }
1,105
As it is December 2012, we are now going to reset our Community Promotion Ads for the new year. What are Community Promotion Ads? Community Promotion Ads are community-vetted advertisements that will show up on the main site, in the right sidebar. The purpose of this question is the vetting process. Images of the advertisements are provided, and community voting will enable the advertisements to be shown. Why do we have Community Promotion Ads? This is a method for the community to control what gets promoted to visitors on the site. For example, you might promote the following things: interesting unix and linux related open source apps the site's twitter account script packs or power tools cool events or conferences anything else your community would genuinely be interested in The goal is for future visitors to find out about the stuff your community deems important . This also serves as a way to promote information and resources that are relevant to your own community's interests , both for those already in the community and those yet to join. Why do we reset the ads every year? Some services will maintain usefulness over the years, while other things will wane to allow for new faces to show up. Resetting the ads every year helps accommodate this, and allows old ads that have served their purpose to be cycled out for fresher ads for newer things. This helps keep the material in the ads relevant to not just the subject matter of the community, but to the current status of the community. We reset the ads once a year, every December. The community promotion ads have no restrictions against reposting an ad from a previous cycle. If a particular service or ad is very valuable to the community and will continue to be so, it is a good idea to repost it. It may be helpful to give it a new face in the process, so as to prevent the imagery of the ad from getting stale after a year of exposure. How does it work? The answers you post to this question must conform to the following rules, or they will be ignored. All answers should be in the exact form of: [![Tagline to show on mouseover][1]][2] [1]: http://image-url [2]: http://clickthrough-url Please do not add anything else to the body of the post . If you want to discuss something, do it in the comments. The question must always be tagged with the magic community-ads tag. In addition to enabling the functionality of the advertisements, this tag also pre-fills the answer form with the above required form. Image requirements The image that you create must be 220 x 250 pixels Must be hosted through our standard image uploader (imgur) Must be GIF or PNG No animated GIFs Absolute limit on file size of 150 KB Score Threshold There is a minimum score threshold an answer must meet (currently 6 ) before it will be shown on the main site. You can check out the ads that have met the threshold with basic click stats here .
{ "source": [ "https://unix.meta.stackexchange.com/questions/1105", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/9541/" ] }
1,108
This question asking for use cases of a particular command is practically guaranteed to generate a list of answers, many of which will themselves be lists, and most of which will be equally valid. Should it be closed? It's still semi-on-topic, being about a traditional Unix command, so maybe it could be a community wiki instead. What's the policy on such questions?
{ "source": [ "https://unix.meta.stackexchange.com/questions/1108", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/9537/" ] }
1,172
Often there are questions of the form "How can I do this with x ?" where x is some tool such as bash, sed, awk or perl. Are answers to such questions based on a tool other than x unwelcome, e.g. are awk answers to sed questions unwelcome? In my opinion it is good to show alternatives, to show that using x is not the only way of solving the problem. An alternative may even be a better approach. Also, if the alternative is a worse fit, that is good to know, as e.g. a particular user's setup might differ such that only particular tools are available, or the alternative is clearer in communicating an algorithm solving the problem. A reason for raising this question is that my answer to How do I write a sed one-liner to add a character after every third character? received a number of downvotes without comments. It uses an alternative tool to achieve the same result as the sed answers. Should such answers be downvoted or deleted?
Generally speaking, yes, alternative ways of accomplishing the same task seem not only welcome, but often rewarded. In the case you point to, there are a couple of mitigating issues: The question specifically, and repeatedly, asks for a sed response. This means the questioner is clearly seeking to improve their sed knowledge as much as they are trying to solve a problem. Your answer involves not just a different approach, but one that requires a tool ( ruby ) that is not part of coreutils and indeed not installed on (any?) systems by default. I suspect that if only one of those conditions was present, the answer would have either been received neutrally, or may have even garnered an upvote. But with both present, you have unwittingly invoked the wrath of the acolytes of UNIX the Wrathful, who is a harsh and unforgiving deity.
{ "source": [ "https://unix.meta.stackexchange.com/questions/1172", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/8677/" ] }
1,396
One of our users posted a comment on this question pointing out that the question had been asked (not cross-posted, different user) and answered on Ask Ubuntu. Since I had found the question intriguing and the answer over on AU was quite good, I took the liberty of copying the entire answer verbatim and posting it as an answer to the question here on U&L. Obviously, I marked it as community wiki (I have neither the right nor the desire to gain rep from someone else's work) and I also linked back to both the original answer on AU and to the original poster's profile on U&L. If this were a more active user, I would have pinged him to suggest he do so himself but since this is a 101 rep user last seen about two months ago, I did it myself. I felt that it is a good thing to have that information on both sites and that rephrasing a perfectly adequate answer just to make it look different would be silly. How do we feel about that? Does everyone agree with what I did or should I delete the answer?
I think it's entirely OK to do this and would in fact encourage it. Each site should have its own collections of questions and answers, and oftentimes I find bits of solutions on one of the other SE sites, but then take it over here and either expand it since now I feel it's more appropriate to do so, or take it in an entirely different direction. I always just try to make sure to reference the other source material so it's obvious to readers where this information came from. When copying this material I always try to improve it when putting it into this site, whether it's rephrasing sentences or cleaning up the grammar. If some of the phrasing isn't how I would say it, I re-work it in my own words, but the general ideas came from the other post, more times than not, since much of what we do on a daily basis is to reuse more than to create anew. EDIT #1 In response to @JoelDavis' comment... The rep. points are just to get you in the door 8-). After a while they become less important; the real value of the SE sites is they increase the signal to noise ratio on the internet so that others can find information that's organized in some fashion rather than having to mine it themselves from a forum post or a 3 year old blog post 8-). The type of reuse going on here is tantamount to using code from a library. We do that every day without a 2nd thought.
{ "source": [ "https://unix.meta.stackexchange.com/questions/1396", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/22222/" ] }
2,549
It's the time of the year again, namely it is December 2013, and so we shall now refresh our Community Promotion Ads for the new year. What are Community Promotion Ads? Community Promotion Ads are community-vetted advertisements that will show up on the main site, in the right sidebar. The purpose of this question is the vetting process. Images of the advertisements are provided, and community voting will enable the advertisements to be shown. Why do we have Community Promotion Ads? This is a method for the community to control what gets promoted to visitors on the site. For example, you might promote the following things: interesting unix and linux related open source apps the site's twitter account script packs or power tools cool events or conferences anything else your community would genuinely be interested in The goal is for future visitors to find out about the stuff your community deems important . This also serves as a way to promote information and resources that are relevant to your own community's interests , both for those already in the community and those yet to join. Why do we reset the ads every year? Some services will maintain usefulness over the years, while other things will wane to allow for new faces to show up. Resetting the ads every year helps accommodate this, and allows old ads that have served their purpose to be cycled out for fresher ads for newer things. This helps keep the material in the ads relevant to not just the subject matter of the community, but to the current status of the community. We reset the ads once a year, every December. The community promotion ads have no restrictions against reposting an ad from a previous cycle. If a particular service or ad is very valuable to the community and will continue to be so, it is a good idea to repost it. It may be helpful to give it a new face in the process, so as to prevent the imagery of the ad from getting stale after a year of exposure. How does it work? The answers you post to this question must conform to the following rules, or they will be ignored. All answers should be in the exact form of: [![Tagline to show on mouseover][1]][2] [1]: http://image-url [2]: http://clickthrough-url Please do not add anything else to the body of the post . If you want to discuss something, do it in the comments. The question must always be tagged with the magic community-ads tag. In addition to enabling the functionality of the advertisements, this tag also pre-fills the answer form with the above required form. Image requirements The image that you create must be 220 x 250 pixels Must be hosted through our standard image uploader (imgur) Must be GIF or PNG No animated GIFs Absolute limit on file size of 150 KB Score Threshold There is a minimum score threshold an answer must meet (currently 6 ) before it will be shown on the main site. You can check out the ads that have met the threshold with basic click stats here .
{ "source": [ "https://unix.meta.stackexchange.com/questions/2549", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/9541/" ] }
2,668
I thought it would be a nice idea if we could have a question thread where everyone posts an answer where they introduce themselves, talk about themselves a little bit, and tell the community their motivation for participating on this site. Some SE sites don't have much of a community (in the sense of people talking directly to each other), but unix.sx does have something of one, if only composed of people who hang out in chat, and community should be encouraged - there is never enough of it. So, unless you are royalty going incognito, step up and tell us all about yourself, and why you hang out here. :-)
Who am I (currently) I'm a computer engineer who's worked professionally first in electronic design (primarily as a design automation engineer for ~10+ years), followed by a stint in DevOps for 5+ years. I'm currently a Software Engineer working on web technologies, still wearing many hats across DevOps & software development. My past - phase 1 I grew up with a Tandy TRS-80 ( color computer ), which initially came with 16K RAM, which we had upgraded to 64K (if my memory serves me correctly). Here's a picture. Release date: 1980 Introductory price: 399 USD Operating system: Color BASIC 1.0 / 2.0 / OS-9 CPU: Motorola 6809E @ 0.895 MHz / 1.79 MHz Memory: 4 kB / 16 kB / 32 kB / 64 kB / 128 kB / 512 kB Graphics: MC6847 Video Display Generator (VDG) This computer carried me for a long time. I used it to play games, write BASIC programs and even got into telecommunications with it on BBSes (Bulletin Board Systems). the main screen It had a cassette tape for storing programs; CLOAD and CSAVE were the commands. In 7th grade I saved enough of my own money to get a modem (300 BAUD); you had to dial phone numbers by hand and then press a red button to connect. cassette & modem My past - phase 2 In the summer between 4th and 5th grade (9-10 years old) I got the chance to do 1 week's worth of computer programming on an Apple IIe. After that week I was hooked. I begged my mom to sign me up for another, more in-depth, second week where we wrote little guessing game programs and a State Capitals game. At the start of 5th grade I was the only kid that knew anything about our Apple IIe that we had in class, so I got tasked with showing everyone else how to use it, loading games, etc. Our teacher would let me stay after to use it, and I did regularly. Our school also got a teletype machine, which was my first exposure to telecommunications, which ultimately drove me to wanting to get my own modem and exploring that world more. Teletype machine (similar to this one) My past - phase 3 When I was in the 10th grade (14-15 yrs.) I got an actual PC ( Intel 8088 - I think). It had a cool turbo button so it could run at ~5MHz. I don't know what other specs this system had; it ran some version of MS-DOS (3.3?), since I didn't really know much about such things at that time. I was the only tech type in my family, no one else was into such things, and so I really only had myself to extend my knowledge. Amazingly, the libraries in my city would allow you to check out software. So you could "check out" games, Print Shop Pro , in addition to productivity applications, so I did! At some point I purchased a 2400 Baud modem and started using that with this system as well, again getting into the whole BBS thing that was popular at the time. I'd use Kermit , XMODEM , YMODEM , & ZMODEM , as well as my beloved Telix to connect and upload and download shareware & chat. This ultimately was the computer that I took to college. I was one of the few people that had a computer in their dorm room when I went. I had a 24-pin dot matrix printer for it too. I think it was me and 1 other computer guy, so I'd use it to explore my college's VAX/VMS (Release 4 or 5). It was also at this time I got exposed to online gaming ( VAXMUD ). Gaming was fun but never really held my attention. I was much more enthralled with the computer system itself, so I purchased a VAX/VMS book and began learning the command line interface, as well as interacting with other users within the system using the built-in chat functions.
Yes, believe it or not, instant messaging was available back then. My past - phase 4 After my first year I transferred to another university where I was exposed to assembly level programming as well as the Zilog Z80 microprocessor . Writing in assembly was probably my favorite activity. We'd create banking applications and write various programs, and I absolutely loved writing at that level. NOTE: We used MASM and debug (manual) which used to be included with MS-DOS. My past - phase 5 I eventually moved up to an Intel 486DX2 system that I saved up for and got MS-DOS 5 and then 6.22 and Windows For Workgroups (3.11). I then got 2 systems, when I added a 486DX4 that I built myself, for the first time! At this point I started getting into networking and got my first Ethernet card (10MB 3COM 509). The reason I remember this is because back in the day these cards were ISA based . You really had to want to make peripherals and add-on cards work back then. So you had to play a lot of games with changing the memory addresses and interrupts (IRQs) assigned to your various devices, as well as your COM1, COM2, serial ports, and LPT1 ports. So consequently you got to know your systems very well! I eventually added a CDROM (1x) (you know how they're all 52x now - this was the first, a whopping 150 KiB/s!). This was also the time that I got the game Myst . Hands down the best game I've ever played! My past - phase 6 As a consequence of building my own computers I used to frequent a monthly computer fair (Peter Trapp) in our area to get parts and see what was new and interesting. Well, at one of these events there was a booth where they were either giving away or selling (can't remember) CD sets with something called Linux on them. This booth was a Red Hat booth, and I'd gotten a very early edition of RHL . I then graduated and started my professional career. One of the first things I purchased was a copy of RHL 4.1 ( Vanderbilt, kernel ver. 2.0.27 ). Once I got my hands on this I was always working to get rid of Windows in my personal computing environments. First I started with servers. I used to have a dial-on-demand setup that would automatically dial via a 28 kbps modem, then a 56 kbps modem using a Linux box running diald , any time someone would attempt to surf the internet. I then graduated to broadband and RHL 9. These early versions are when I completely dumped Windows and was using Linux exclusively day to day. When Red Hat discontinued RHL, I was almost ready to jump to Debian, but then Fedora came out and then CentOS, and I stayed. I toyed a bit with Gentoo, Mandrake, Ubuntu, and Debian, but I always came back to my RPM based distros. Many here will joke that I'm the Red Hat guy here; I wear that notoriety with honor! Recently (2014) I finally jumped from Fedora 14 to 19, and I still haven't been disappointed. Things change, but all in all the bar has always been moving up. Areas of expertise I have a very diverse range of areas in which I would consider myself to have deep knowledge. These are in no particular order.
Server technologies
- Samba (2, 3, & 4)
- NIS
- NFS (3 & 4)
- DNS (Bind 9)
- Apache/Nginx
- Oracle (9i, 10g, 11g)
- MySQL/MariaDB (all versions)
- PostgreSQL
- JBoss/Jetty/Tomcat
- GridEngine/Open Grid Scheduler
- SLURM link1 & link2
- Flex Licensing
- Elasticsearch & *beats

Security/Auth
- Kerberos
- SAML/Shibboleth

Hadoop
- ZooKeeper
- HDFS
- HBase
- Hive
- MapReduce
- Yarn

Virtualization
- KVM
- VirtualBox
- VMWare
- OpenVZ
- libvirtd
- virt-manager
- Vagrant
- Docker
- OpenStack
- AWS
- RHEV/RHV
- Kubernetes/OpenShift

Protocols
- HTTP/HTTPS
- SSL

Languages
- Perl
- Bash
- sed, awk, grep
- Java/JDK
- JavaScript
- Ruby
- Python
- Go

Markups
- Textile
- Markdown
- XWiki
- HTML
- XML/XSL (including stylesheets)
- Twitter Bootstrap
- Jekyll Bootstrap

Diagnostics
- Hardware issues
- Performance issues
- Networking issues
- Usability issues

Monitoring
- Nagios
- Wireshark
- A whole slew of misc. tools (ntop, htop, nethogs, etc.)

Desktops
- GNOME

OSes
- Linux (Red Hat, Ubuntu, Debian - all versions)
- Solaris (6, 7, 8, 9, & 10)
- Windows (Vista, Win7, 2008, 2008R2, Win10)

Why I'm here?

If you haven't gotten this from reading my history: acquiring the knowledge of how various technologies work was at times arduous. This has always bugged me, because it consumed enormous amounts of my time to acquire knowledge that, once I had it, was obvious. So I liken myself to a lighthouse operator now, trying to help others avoid the "cliffs" by hopefully guiding them through the "safe passage" while making mention of key landmarks on either side as they pass through. All I ask in return is that people in turn "pay it forward"!

Compliments

This may seem like a dumb category, but one day in the chat room @terdon paid me the highest compliment I think one could ever receive: "The way I see it, Stephane is some kind of mythical, magical creature, Gilles is a wizard and slm is a human paladin."

References
- TRS-80 Computers: TRS-80 Model I - http://www.trs-80.com/wordpress/trs-80-computer-line/model-i/
- old-computers.com
- Free Software for DOS - Communication & Internet – 1
{ "source": [ "https://unix.meta.stackexchange.com/questions/2668", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/4671/" ] }
3,254
UPDATE: We're in for the hats this year. We just submitted the form!

Last year, Stack Exchange ran Winter Bash 2013 , in which users earned hats which they proudly displayed upon their gravatar. There was a leaderboard of hatters, mad or not.

It's that time of the year again and we get to choose whether we want to do it again this year. Hats are enabled on a per-site basis, but each site has to opt-in and choose to participate.

What are hats, I hear you ask. Well, hats are kinda like ephemeral badges. You can choose to have one displayed over your avatar and you win them by certain actions on the webpage. Users can turn off hats on a per-user basis, so if you're against them for any reason, you can disable the feature and not see your hats nor those of the users who have opted in. As far as you are concerned, the hats will not exist.

If we choose to accept, the event will run from 15 December 2014 to 4 January 2015. After that, all the hats go away into Last Year's Hat Bin. We need to decide if we want hats by December 1. So, do we like hats or hate hats?
Yes, of course we want hats! Don't be a hater, be a hatter! But I think the more important question is: Why is a raven like a writing desk?
{ "source": [ "https://unix.meta.stackexchange.com/questions/3254", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/22222/" ] }
3,398
I earned the tumbleweed badge. Its description: "Asked a question with no votes, no answers, no comments, and low views for a week." This sounds quite negative. (Why) should I be proud / happy about that? That is usually the point of earning badges. Or did I ask a bad question?
I think of it as a pick-me-up thing. It's like SE saying: You didn't get anything useful, so here's a badge to cheer you up! So, yes, be a little happy! :) Poking around on Meta SE: this is exactly the intent. And it's sort of fun, also useful because it gives us a data point about how many questions are utter failures in the system (not due to the user's fault, necessarily, either) – Jeff Atwood♦
{ "source": [ "https://unix.meta.stackexchange.com/questions/3398", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/46717/" ] }
3,484
I want to be part of the community, helping to upvote good answers, and answering questions. But, I need to get 15 reputation to do those things. It seems to me the only way to do that is to ask questions. However, the community is a great place here, and all the questions I would ask, have already been asked. What's a guy to do? How can I get the required 15 reputation without spamming the questions with junk? I really just want to be able to participate.
You can answer questions without the need for reputation. Upvotes on those answers earn you 10 reputation. Just find a question in an area you are familiar with, preferably one which is recent and not yet answered (or answered and you have a better solution). You can also earn 2 reputation by correcting errors in a post, e.g. by removing chit-chat like greetings, "thank you" and names from other people's posts. Asking questions is not that hard, but finding ones that have not been asked before can be a challenge. Take a few minutes to think about something U&L related where you had a problem that was never solved. Or something you found hard to find a solution for. It is fine to post such a question (as long as it is not a duplicate) and the answer to that question, with double chances of earning reputation. Make sure you read the help→tour and get familiar with the rest of the help pages.
{ "source": [ "https://unix.meta.stackexchange.com/questions/3484", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/44394/" ] }
3,602
I'm more active on Ask Ubuntu and normally refer Kali users to U&L by telling them:

We're sorry, but this site is all about Ubuntu and its official derivatives as posted on https://wiki.ubuntu.com/Releases so Kali is off-topic here as well. However, on Unix&Linux , a sister site to Ask Ubuntu, they're very good at all varieties of Linux and Unix, especially if you use the Kali-Linux tag. ;-)

Should I stop referring them here and just post:

We're sorry, but this site is all about Ubuntu and its official derivatives as posted on https://wiki.ubuntu.com/Releases so Kali is off-topic here as well.

or continue doing the first one?

Voting closed a while ago, and the AU chat room now shows the result.
They are, sadly, more appropriate here, seeing the distro is based on a Linux kernel. In order to minimise the impact on those of us who have little patience for the numbats that think that installing Kali will make them l337 haXX0r5 , please - as a public service - ensure that they are all tagged kali-linux so we can add them to our ignored tag list and thus remain happily oblivious to the bewildering array of ineptitude that constitutes those posts here. 1 On a personal note, I live in hope that, one day, they have their own StackExchange site where they can happily congregate, breathless with equal amounts of excitement and cluelessness, and their adolescent ardour for penetration is finally able to be appreciated by the audience it deserves. 1. This has been my own crusade for some significant time now, and I would welcome more hands to the pump...
{ "source": [ "https://unix.meta.stackexchange.com/questions/3602", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/90054/" ] }
3,629
Recently, while reviewing late answers, I noticed a latecomer. The question was asked a few years ago. The person who asked had accepted an answer (which explained in detail how to solve the problem). A few years later, our latecomer decided to answer the question too. And it'd be fine if he provided some meaningful input, but I think his input wasn't that great (from now on, I'll refer to the Accepted Answer(er) as AA): Firstly, his answer was a less accurate version of AA's (not a copy, but you would already know everything from the accepted answer). Secondly, his answer was rather poor; the language wasn't as precise as AA's, and the instructions weren't as clear as in AA. They might even be a little confusing; not misleading, but surely not as easy to follow as AA's. And thirdly, the question already had an answer, and was asked a few years ago. The third argument isn't strong - any solution that wasn't listed is valuable, as it might help someone (and actually be simpler to execute); however, the fact that he didn't show any new solution or explain it in a simpler way makes me feel it's just redundant and unnecessary. I didn't know how to react. From the style I knew that asking him to refine his answer wouldn't help much; I also didn't know how to edit his post, since the best version was already in AA. Should I just leave it there as a 0-score answer, flag it somehow, or ask the user to take some action?
Just leave it alone. You could, if you like, suggest to the OP that they improve the answer, or you could do it yourself. Apart from that, just let it be. We all produced some pretty sad excuses for answers when we first started posting. With any luck, this new user will spend some time on the site, see how things work here and improve. In the case you describe, I don't think it deserves a downvote and there's not much point in doing anything else apart from attempting to improve it as I suggested above.
{ "source": [ "https://unix.meta.stackexchange.com/questions/3629", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/129998/" ] }
3,710
When linking to man pages (example: man man ), what are good target sites? Why? I often use whatever Google finds when I search for man whatever , which in practice usually is http://linux.die.net/man/ but I think some people would prefer some other site. I know this is quite opinion-based, but still, a choice must be made when actually adding the link, and there has to be some kind of consensus about which sites are good and which are less good... I wouldn't be surprised if using the wrong site might even prevent an up-vote from someone who might prefer another site (or even get a down-vote, on a bad day).
For Ubuntu derivatives, the canonical place is http://manpages.ubuntu.com 1 . For Debian, it's http://manpages.debian.org . For Linux system calls, and other kernel-related things, http://man7.org is apparently generated from the docs. For a variety of systems, especially CentOS and a number of BSD and Unix systems, http://freebsd.org/cgi/man.cgi is an excellent resource (though it might not have the latest release's documentation for other OSes). It's also very useful for historical interest, manpages dating back to 2.8 BSD are available. For GNU, the manpages supplied by the various distros are often derived from the info pages. As such, the info page can have more information than the manpage. The GNU documentation is available at http://www.gnu.org/manual/manual.html . I'm unsure of the source or canonicity of http://linux.die.net . I generally pick the resource most suited the question - it's important that both the asker and the answerer be on the same page w.r.t. documentation . Often, it's the case that your manpage may list features that the OP doesn't have, rendering your potential answer irrelevant. This problem can be considerably mitigated by sticking to the POSIX manpages (with Ubuntu, that's the sections 1posix , 3posix , etc.), to get a common core feature set. However, U&L being what it is, there's still no way to be sure without looking at documentation specific to the OS mentioned in the question. 1 Ubuntu's manpage site (and I suppose other sites as well) suffers from a bug where identically named manpages aren't listed separately - so, if an utility is provided by different sources, only one of them is likely to show up.
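A practical aside: if you'd rather check POSIX behaviour locally instead of following links, the POSIX manpages can usually be installed as packages. This is a minimal sketch assuming a Debian/Ubuntu-style system; the package names below are the Debian/Ubuntu ones (on Debian they live in the non-free section) and may differ on other distributions:

    # Install the POSIX manual pages (assumed package names; Debian/Ubuntu)
    sudo apt-get install manpages-posix manpages-posix-dev

    # View the POSIX description of a utility (section 1posix) rather than
    # the GNU/distro-specific page in section 1
    man 1posix find

    # Library interfaces are documented in section 3posix
    man 3posix strftime

Checking the 1posix page before answering is a quick way to confirm whether a given option is portable or a GNU extension.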
{ "source": [ "https://unix.meta.stackexchange.com/questions/3710", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/33724/" ] }
3,744
Last year, Stack Exchange ran Winter Bash 2014 , in which users earned hats which they proudly displayed upon their gravatar. There was a leaderboard of hatters, mad or not.

It's that time of the year again and we get to choose whether we want to do it again this time around. Hats are enabled on a per-site basis, but each site has to opt-in and choose to participate.

What are hats, I hear you ask. Well, hats are kinda like ephemeral badges. You can choose to have one displayed over your avatar and you win them by certain actions on the webpage. Users can turn off hats on a per-user basis, so if you're against them for any reason, you can disable the feature and not see your hats nor those of the users who have opted in. As far as you are concerned, the hats will not exist.

If we choose to accept, the event will run from 15 December 2015 to 4 January 2016. After that, all the hats go away into Last Year's Hat Bin. More information will be available on the 2015 Winter Bash page . We need to decide if we want hats by Thursday December 10. So, do we like hats or hate hats?
I don't know if I'll like them or not, but I think it's been designed correctly: Opt-in on a per-site basis, and opt-out on a per-user basis. So I say, go for it. If I don't like them I will disable the feature, but I won't know unless we enable it for this site. If you want to opt in to this year's Winter Bash, upvote this answer to say a big Yes, gimme the hats!
{ "source": [ "https://unix.meta.stackexchange.com/questions/3744", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/22222/" ] }
3,770
Our chat room is called "Unix and Linux" . While this is all good, most (all?) of the other chat rooms in all the other Stack Exchange sites have funnier, cooler names than just their site names. Who are we to be behind? So, in the spirit of the Holiday Season, I propose a competition for a "better" name. One name per answer, please. Strive for the amusing, the unexpected, the original. But also, of course, something suitable for the Unix and Linux Stack Exchange main chat room. NOTE: Per @muru's suggestion, try sorting the answers by the active tab to see more recent responses to the question.
How about chat as a device? /dev/chat
{ "source": [ "https://unix.meta.stackexchange.com/questions/3770", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/4671/" ] }
3,867
I'm asking about this question (U&L) which was recently posted again here (AU). Gilles once stated that cross-posting between U&L and anywhere is against the FAQ/on-topic guidelines, which specifically mention and strongly discourage it. The Ask Ubuntu policy, according to Oli , one of Ask Ubuntu's mods, is that (TL;DR) it's okay/tolerated in some cases but is discouraged. The AskUbuntu help center does not explicitly mention cross-posting. Is this question designated off-topic here or there? Will/should anything be done?
My take here is slightly different. I don't consider that question to be cross-posted at all. While it is true that posting the same question on multiple sites of the network is frowned upon, posting different versions of the same question to target the different audiences on each target site is not. This has been mentioned in various places by various Stack Exchange employees:

Robert Cartaino (Director of Community Development for the Stack Exchange Network): If you do not receive an adequate answer, then it might be okay to ask your question again to another group of users — as long as the question is on topic and appropriate for that second site. But cutting-and-pasting between two sites is never okay. If you want a different perspective, you should phrase the question specifically for that group.

Jeff Atwood (co-founder of SE): This can be OK, so long as the question is tailored to each audience on the different sites and is materially different in each case. Just to be 100% clear, copy-pasting a question across sites with no changes is considered abusive behavior.

Shog9 (Community Manager for Stack Exchange): Cross-posting is discouraged when it's someone spamming multiple sites without bothering to identify the appropriate audience or tailor the question for each site , but if you have a question that hasn't been well-answered on one site - whether that's another SE site or something external - re-asking it in a more appropriate venue is perfectly fine.

Jeff again: It is also ok to ask two different versions of a question but you MUST tailor it to the audience on that site . Copying and pasting would put you on the road to account suspension.

My interpretation of all the above and various other meta threads I've read over the years is that bad cross-posting is posting the exact same question on multiple sites. Posting different versions of the same question on multiple sites is fine. In this case, the OP took the time to modify their question to fit the different audiences. They didn't do a direct copy/paste. Granted, the original version of the question was more similar (OK, almost identical) but the current incarnations of the two questions are quite different. In addition, the OP has tried three different distributions: Mint, Elementary and Xubuntu. The first two are explicitly off topic on Ask Ubuntu but on topic here. He therefore quite correctly posted the two versions on different sites. This seems to me to be precisely what we want our users to do.
{ "source": [ "https://unix.meta.stackexchange.com/questions/3867", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/136107/" ] }
3,892
Request for learning material , from what I gather, is a request for links or pointers to offsite learning materials, or books, courses, etc. In fact, How do we feel about requests for learning materials? defines it quite clearly. That doesn't cover a request to be taught something, but there have been quite a few questions closed on here lately that use Request for learning materials as the justification. Here's an example from today – I'm not picking on that specific one; there have been many similar closures lately. This site claims to be for Question & Answers – doesn't that translate to Ask and Learn? In the example above, the OP is simply asking a question about Linux internals and is hoping to learn from the answer; not asking for tutorials or learning materials as the closure reason states. Here are some more examples: Execute command on ranger selection is a good example. It makes the strategic error of referring to a search for a tutorial, but then asks a fairly specific, bounded, well-scoped question. What is the difference between Docker, LXD, and LXC? probably should have been closed as "too broad", so it just got closed for the wrong reason. Installing Wi-Fi drivers when no Ethernet connection is available was deleted by its author (so most of you won't be able to see it) after it was closed as a "request for learning materials". It's not a great question, and it also commits the sin of mentioning the word "tutorial", but it boils down to a reasonably specific question that is an amplified version of its title. Sorting Words in Alphabetical Order made no mention of learning materials whatsoever, but was a plain old How do I? question asking how to sort into alphabetical order a text file dictionary with a slightly unusual structure containing pairs of English and Latin words. Are we too trigger-happy with Request for learning materials ?
I don't see how any of your examples were requests for learning materials. Execute command on ranger selection has now been reopened. What is the difference between Docker, LXD, and LXC looks like it should be too broad but has received what appear to be two very good answers, one of which has been accepted so I also reopened that one. https://unix.stackexchange.com/questions/247453/installing-wi-fi-drivers-when-no-ethernet-connection-is-available has been deleted by the OP and I let it lie. Your original example, Please explain "kobject" , is about to be closed again with various close reasons. While there may be a valid reason to close it, that is most certainly not that it's a request for learning materials. I'll be keeping an eye on it and will reopen if it is closed for that reason. Choosing the right close reason is very important. This is true even for those cases where the question is complete crap and the site is better off without it. Yes, one of the objectives of closing questions is to preserve the quality of the site. As far as that is concerned, one close reason is as good as another. However, an equally (at the very least) important objective of closing is to educate our users about the sort of questions we want/accept. When we close with the wrong close message, we fail that objective. We are not helping new users understand the site, and will either drive them away in bafflement at our arcane rules, or suffer an incessant barrage of low quality questions since we didn't take the time to actually explain what was wrong with what they asked. So yes, closing those questions as requests for learning materials when they clearly weren't is most certainly an abuse of the close reason. If none of the available close reasons are good enough, write your own. If you can't be bothered to spend a few seconds writing a custom close reason, then you probably shouldn't be voting to close in the first place.
{ "source": [ "https://unix.meta.stackexchange.com/questions/3892", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/29406/" ] }
3,905
Can someone please explain why this question of mine was closed as "too broad"? It seems to conform to the site guidelines. And it is very specific.
I don't see how any of your examples were requests for learning materials. Execute command on ranger selection has now been reopened. What is the difference between Docker, LXD, and LXC looks like it should be too broad but has received what appear to be two very good answers, one of which has been accepted so I also reopened that one. https://unix.stackexchange.com/questions/247453/installing-wi-fi-drivers-when-no-ethernet-connection-is-available has been deleted by the OP and I let it lie. Your original example, Please explain "kobject" , is about to be closed again with various close reasons. While there may be a valid reason to close it, that is most certainly not that it's a request for learning materials. I'll be keeping an eye on it and will reopen if it is closed for that reason. Choosing the right close reason is very important. This is true even for those cases where the question is complete crap and the site is better off without it. Yes, one of the objectives of closing questions is to preserve the quality of the site. As far as that is concerned, one close reason is as good as another. However, an equally (at the very least) important objective of closing is to educate our users about the sort of questions we want/accept. When we close with the wrong close message, we fail that objective. We are not helping new users understand the site, and will either drive them away in bafflement at our arcane rules, or suffer an incessant barrage of low quality questions since we didn't take the time to actually explain what was wrong with what they asked. So yes, closing those questions as requests for learning materials when they clearly weren't is most certainly an abuse of the close reason. If none of the available close reasons are good enough, write your own. If you can't be bothered to spend a few seconds writing a custom close reason, then you probably shouldn't be voting to close in the first place.
{ "source": [ "https://unix.meta.stackexchange.com/questions/3905", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/102202/" ] }
3,925
I have seen many cases where exact answers are posted via comments instead of answers. So I would like to get clarification: is that the encouraged way here? I have seen at the Stack Exchange traffic portal that unix.stackexchange.com has 77% of questions answered, and it's better than other well-known sites like Ask Ubuntu and Stack Overflow. Please encourage answering questions by posting answers instead of comments. NOTE: All my thoughts are meant to make unix.stackexchange.com a better place (of course it already is). I am not writing this to point anyone out intentionally.
You're always encouraged to answer questions as actual answers and not simply as comments. But it's really up to the individual users of the site to put in whatever effort they are most comfortable with. I would encourage you, or anyone that may come across this situation, to look at these comments as an opportunity to write them up as more complete answers yourself. I would ask the user who provided the comment first before doing so, but I've found most everyone here to be very encouraging to others, with respect to having others take their comments and convert them into formal answers.
{ "source": [ "https://unix.meta.stackexchange.com/questions/3925", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/19072/" ] }
3,989
Maybe it's just me, but it seems like people's shift keys are getting flaky. Maybe it's a manufacturing problem, but I'm afraid it's not. Why would someone go to the effort of capitalizing some words but not others? Is there a trend in humanity to self-deprecate by using lower-case "i"? Am I just a prematurely grumpy old man who had a stricter English teacher than average? The tour makes a few references like: "It's built and run by you " "Your reputation score goes up when others vote up your questions ..." As you earn reputation, you'll unlock new privileges Our goal is to have the best answers to every question, so if you see questions or answers that can be improved, you can edit them. Seeing a sloppily-worded question immediately lowers my urge to help because if they can't hit the shift key, are they willing to learn the ins & outs of Unix? Something I saw in meta a while ago stuck with me -- these Questions and Answers may (should?) stick around for a while and be helpful. Don't you want your best foot forward? For now, I will be scratching my grammar itch with some data.sx queries and editing. Do poorly-written questions bother anyone else?
It's the downside of success: this site is growing, and that means we're attracting more and more people who don't necessarily know what they're doing. Seeing a sloppily-worded question lowers my urge to help, too. The less well-worded a question is, the higher the chance is that I'll simply ignore it when I'm browsing the question list. If I do decide to open a question, if it isn't clear at first glance, I'm likely to vote to close or just move on. With over 100 questions on a weekday, I can't afford to spend even 30 seconds per question — that would be an hour per day just to read the questions! I encourage everyone to edit questions into shape. But there's a limit: some questions are just unsalvageable, and some might be salvageable but aren't worth spending much effort on. If you answer a question, that presumes that the question is useful: please edit it, at least give it a meaningful title and appropriate tags. If you don't think a question is likely to be very useful, and it's so badly written that it needs to be deciphered rather than edited, feel free to vote to close as unclear and move on.
{ "source": [ "https://unix.meta.stackexchange.com/questions/3989", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/117549/" ] }
4,075
Can we change badge icons to something Unix&Linux specific? Recently many sites changed their icons, for example:
- closely related to us, Android uses their robot mascot
- Emacs uses parentheses () to be identified as Lisp code
- Math uses different geometric figures (that's the coolest, because not only colour, but also shape is different)
What do you think would be the best icons to shortly characterize this site?
Based on the idea from the math site where they use different shapes for bronze, silver, and gold badges, what about having different prompts for ours? Gold could be a (root shell) # , but I'm not sure how to differentiate silver and bronze ( > and $ ?)

Update: Based on feedback in the comments, here's a clearer proposal:
- Gold: #_
- Silver: $_
- Bronze: >_

Update 2: I am harshly reminded why I'm not a graphical designer, but here's something to look at to compare & contrast foreground vs background colors: (Please don't use these as-is - they're hastily put together) The left-hand column is supposed to represent the symbol in the badge's color on a black background; the right-hand column is the symbol in black on the badge color's background.
{ "source": [ "https://unix.meta.stackexchange.com/questions/4075", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/80886/" ] }
4,150
I recently found this: Is it possible to convert audio to midi with the shell? As you can see, the person who actually provided the answer used the comments section instead of posting an actual answer post. The person who asked the question then added an answer based on what the other person had written in the comments and accepted his own answer. Now, this would seem like "reputation theft", but here's a different issue: the person that answered in the comments doesn't seem to be active anymore. The answer was given eight months ago, and the user hasn't received any more rep since. On the other hand, that user seems to be quite active on askubuntu.com . I'm unsure how to deal with things like that. Should it be transferred to the community wiki or something? Should I flag this question and/or answer in any way?
Answers posted as comments are fair game. Here's what Tim Post, the Director of Stack Overflow Communities has to say about it: You don't have anything to feel guilty about. If the person was interested in writing and maintaining an answer, they would have done so. If it was in fact the correct answer it's hardly likely that they would be the only one to think of it. Doing what you did by expanding it into a proper answer is perfectly fine. The question came off the unanswered list, the answer exists in a much more articulated state to help future visitors and at the end of the day everyone wins. Answers posted as comments are actively harmful to the site. They can't be voted for or against, and they leave the question unanswered. Answers should be posted as answers. So no, nobody did anything wrong here and nothing should be flagged. The OP waited a few months ( months !) and, when the person who commented never expanded their comment into an answer, they did the right thing and posted it themselves. All's well that ends well.
{ "source": [ "https://unix.meta.stackexchange.com/questions/4150", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/1290/" ] }
4,251
Last year, Stack Exchange ran Winter Bash 2015 , in which users earned hats which they proudly displayed upon their gravatar. There was a leaderboard of hatters, mad or not.

It's that time of the year again and we get to choose whether we want to do it this time around. Because of the popularity enjoyed by the previous years' bashes, all sites now opt-in by default unless the community decides to opt out.

What are hats, I hear you ask. Well, hats are kinda like ephemeral badges. You can choose to have one displayed over your avatar and you win them by certain actions on the webpage. Users can turn off hats on a per-user basis, so if you're against them for any reason, you can disable the feature and not see your hats nor those of the users who have opted in. As far as you are concerned, the hats will not exist.

Unless we choose to opt out, the event will run from 19 December 2016 up to and including 08 January 2017. After that, all the hats go away into Last Year's Hat Bin. More information will be available on the 2016 Winter Bash page . If we want to opt out, we have to decide to do so by Tuesday, 13 December 2016. So, do we like hats or hate hats?
Upvote this answer to say a big YES to hats! Hats are always in fashion!
{ "source": [ "https://unix.meta.stackexchange.com/questions/4251", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/22222/" ] }
4,477
I asked a question and got an answer that solved my question, so I up-voted and then accepted. Then someone else posted a new answer which included the information of the first answer plus many additional points. I plan to switch my project's code to reflect the new answer. On one hand, I already accepted, so I don't want to cause the original poster to lose reputation. On the other hand, objectively the newer answer is better. Is there an obligation to switch the accepted answer if a better one comes along that builds on and improves another answer?
It is not an obligation, but it is in the spirit of the site. Think of future readers; you want to signpost the answer that you, the question asker, deemed most helpful so that it will be evident to others. Don't worry about the original answerer losing some rep: it comes and it goes. What is important is that the wiki contains helpful answers and recognizes the best of them with the green tick of goodness™…
{ "source": [ "https://unix.meta.stackexchange.com/questions/4477", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/16792/" ] }
4,586
A user has posted two answers, but the problem is that [s]he has just copy-pasted answers given earlier in the thread. Here are the 2 answers I came across:
https://unix.stackexchange.com/a/386860/224025
https://unix.stackexchange.com/a/386862/224025
Although I am new here, I don't think copy-pasting other answers "as is" is allowed in the community. What flag should I use to indicate that the answer is just a plain copy-paste of another answer to the same question?
Ideally, use a custom flag and tell us the answer is copied. Custom is better because copied answers are often good answers and the flag interface will only show us the flagged post. So it might look like a perfectly decent answer unless you tell us it's been copied and we open the relevant page to see the plagiarism.
{ "source": [ "https://unix.meta.stackexchange.com/questions/4586", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/224025/" ] }
4,626
This is related to Should we discourage people that delete their question after they receive an answer? , and to the AskUbuntu Meta question Please stop posting half answers and dumb advice as comments . It's a combination of the two. This evening I've had two instances of questions being deleted, literally from beneath my fingertips, as I was typing my answers. The reason? An answer was given as a comment, and the user withdrew the question immediately upon seeing it. It was a pity that this happened, not just because I had spent some time writing an answer, but since in one of the cases, the user left with a piece of code that was still wrong in more ways than what the question and comment-answer addressed. Could we (me included) stop giving actual answers in comments, please? It is infuriatingly annoying to have the "this question has been deleted" thing pop up when you've spent 10 minutes typing away trying to explain what's wrong, what they should be doing, and why, just because someone casually says "do this: ..." in a comment. Comments are for comments. Answers are for answers. If you have the answer, type it up. If you don't have the time to do that, then you don't have the answer . If you are dying to leave a witty comment about an issue in the code that may resolve the question in a wonderful puff of smoke, then write it up as an answer, or hold that thought and come back later when the question no longer can be deleted (>1 answers, or an answer with an upvote, or one accepted answer). Answers are peer reviewed, edited, upvoted and downvoted, referred to by other answers, and read by many people. The same goes for questions. Comments, not so much. This means that good answers generally provide more context to the issue in the question as well as a higher quality (in terms of both exposition and of correctness) than what comments do. This is what this site is about: providing good answers to questions related to Unix and Linux, so that the person asking, and others, may benefit and learn something. I know that writing an answer instead of a comments is no cure for users deleting their questions (unless the answer is good enough to get an immediate upvote), but it would make it more probable that the question may get enough undeletion votes to be resurrected. AFAIK, there is no undeletion review queue, but the more "reputable" people frequenting the chat could possibly help out with this, and a moderator would probably be able to track down the deleted question, especially if you wrote an answer to it that you think was worth rescuing . Sorry, no real question here other than "could we write real answers instead of comments, please?"
If you don't have the time to do that, then you don't have the answer. Correct. So leaving a comment may be an acceptable approach. I mention this because I have been chastised for leaving an "answer" as a comment. There are a couple of reasons why I do this:

1. There isn't sufficient detail in the question for me to be sure that my answer will, in fact, address their issue. The comment can both elicit more information and, if my assumption proves correct, provide the basis for a more detailed response as an answer.
2. The question is part of an obvious pattern (pretty much anything in text-processing , for example). Dropping the answer as a comment is unlikely to dissuade the avalanche of answers that will likely follow, and if the question is deleted it is similarly unlikely that the site loses anything of value.

I also fundamentally disagree with the premise that the people answering --even if in the comments--are the problem, or even part of it. What you are objecting to is people who selfishly abuse the goodwill of the community with no intention of reciprocating by contributing back to the commons. Those of us who are contributing should be free to do that, whether it be commenting, editing, reviewing or whatever.
{ "source": [ "https://unix.meta.stackexchange.com/questions/4626", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/116858/" ] }
4,640
It's well-known that we have a quality problem in the kali-linux tag. Some statistics corroborate this perception (though not to the extent that I'd expected):
- 16.4% of the kali-linux questions have a negative score, compared with 3% for the site as a whole.
- 39.6% of the kali-linux questions have a score ≤ 0, compared with 27% for the site as a whole.
- 21.9% have no answer, compared with 15.8% for the site as a whole.
- 21.9% of non-closed questions have no answer, compared with 15.6% for the site as a whole.
- 7% are closed (excluding duplicates), compared with 3.5% for the site as a whole.
- 2.4% are closed as duplicates, compared with 4.2% for the site as a whole.
There isn't a flood of Kali questions, mind you. About 1.3% of the recently asked questions are kali-linux (that's the rate for 2017 as a whole as well as for August and for September, rounded to the nearest 0.1%), excluding deleted questions (can a mod please edit in figures that account for deleted questions?). In chat, several regulars have said that they largely ignore Kali questions. (Personally, I don't ignore the tag, but I tend to skip the questions without reading them.) It seems that many Kali questions are a bad thing for answerers. And I suspect that they're also a bad thing for askers, because all too often, we aren't helping them. What should we do about Kali questions?
We've often talked in the chat about having a “reference question” telling people not to use Kali Linux. Well, here's my proposal . Edits welcome, but please keep the gist unless we decide on meta that this isn't the advice we want to give as a community. Note that I do not propose systematically closing Kali questions! Each question should be judged on its merit. But when people are clearly in way over their head and they've asked a question with the usual novice problems (unclear because they didn't explain what they were doing, they didn't copy-paste error messages, they didn't provide relevant information about their system, etc.), we could close it as a duplicate of this advice-giving question rather than letting the question rot with zero answers and zero comments or closing it as unclear .
{ "source": [ "https://unix.meta.stackexchange.com/questions/4640", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/885/" ] }
4,695
I made an edit to an answer. I feel the edit is a clear improvement in terms of spelling and format. The user sent me an angry email and rolled back the edit. I thought it was funny, so I made the edit again. Does a user have a right to not format and use incorrect spelling?
No. Every so often we have a new user who thinks they maintain total control over the content they post, but the FAQ is pretty clear about this: Editing is important for keeping questions and answers clear, relevant, and up-to-date. If you are not comfortable with the idea of your contributions being collaboratively edited by other trusted users, this may not be the site for you.
{ "source": [ "https://unix.meta.stackexchange.com/questions/4695", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/16792/" ] }
4,786
Is it OK to ask questions like "what is the etymology of the hdparm command name?"? The general topic matches https://unix.stackexchange.com/help/on-topic ("Shell scripting", "Applications packaged in *nix distributions"). It is not explicitly rejected by https://unix.stackexchange.com/help/dont-ask The main problem that I see is that, while for some commands there may be an obviously correct answer, for others the best answer is "it appears to be unclear", and it is impossible to guess which before asking (except in cases of self-answered questions). I earlier thought that Ask Ubuntu would fit, but I was directed here as answers typically would not be Ubuntu specific, though it was suggested that this type of question is rather OK . I found some examples of this type of question that seem to be accepted:
- Why BitchX is called BitchX?
- Why is dmesg called dmesg?
- Etymology of "descriptor" in "file descriptor"
- What does 'touch' stand for?
though it is impossible for me to tell what is nowadays a typical reaction to this type of question (I am unable to see deleted ones; maybe I encountered the rare cases that were not deleted).
I hesitate to post this as an answer, but since this is a discussion , here goes. I can see some value in gathering historic information at U&L, if there are people who know (or know people who know) the answer to such a question. "Why" questions could have evidence behind the answers, or may have an unknown origin. The worst-case scenario is that it gets (and remains) closed.
{ "source": [ "https://unix.meta.stackexchange.com/questions/4786", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/92199/" ] }
5,011
Do answers to questions that didn't come with attempts, and are just like "do my homework", deserve a downvote? Example: Filtering out numbers that have sequential- consecutive or non-consecutive digits Within just 5 minutes after I posted my answer there, I got 3 down-votes without any comments. At first I thought it was maybe because I didn't include an explanation of what my answer does; I was on my way to adding an explanation, but I'm quite slow since my native language is not English. After I finished editing, only one down-vote was reverted but 2 were still there, and I really don't know why people voted down, except that maybe it was inappropriate in their opinion to answer a question which is asked as "do my homework". If this is the reason for downvoting my answer or another answer there, then what does the community say about this kind of question? Should we really not answer these questions? If yes, then more than half of the Q&A on this site was asked as "do my homework", so should we downvote all of those too?
Both questions and answers should be voted on purely on their own merits. If someone writes a comprehensive answer to a marginally asked question, it's not appropriate to downvote such an answer, IMO. In all cases, the question should be edited if possible to improve it as much as possible, and the answer should be voted on based on its own independent quality. There is even a badge to reinforce this point called the "reversal", which is awarded when someone is able to provide a solid answer to a poor question. https://unix.stackexchange.com/help/badges/50/reversal There have only ever been 2 awarded on this site 8-). I attribute this to our site's willingness to edit poorly written questions so that they never reach the -5 threshold required to trigger this.
{ "source": [ "https://unix.meta.stackexchange.com/questions/5011", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/72456/" ] }
5,138
The image also links to the question itself . Is something wrong with including that text? From my perspective, it helps define the scope of the question to 'I don't need to learn what a user is or why you need them (even 'fake' ones), but this other concept isn't intuitive to me'. I've tried soliciting reasoning in the edit history, but that didn't work. I should add that despite my rep on this site, I've been on SE sites for a long time (mostly TeX.SX ), so I get the general ideology.
I think this is (perhaps overenthusiastic) removal of parts of your question which seem extraneous to the editor (see also Should 'Hi', 'thanks', taglines, and salutations be removed from posts? for some context). The problem with “I’m not new to basic UNIX concepts, but I am pretty new to UNIX sysadmin” is that it doesn’t convey much actionable information. Drawing a line between “UNIX concepts” and “UNIX sysadmin” is difficult, and its position depends on the reader — but here we try to write answers which don’t depend (too much) on the question author’s knowledge. Your own clarification, “I don’t need to learn what a user is or why you need them (even ‘fake’ ones), but this other concept isn’t intuitive to me” is much better; I recommend editing that into your question (rephrasing “this other concept”).
{ "source": [ "https://unix.meta.stackexchange.com/questions/5138", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/28980/" ] }
5,178
I have found the main forum an excellent resource for concise, good quality questions and answers, so I do understand the philosophy of removing woolly, off-topic or old (stale) questions. But when running through the review queues recently, I seem to find many good questions marked off-topic or too-broad that have been active for less than a day, and I feel that we should allow at least some of them to run for a couple of days. They are usually the difficult ones. The OP may be away, in meetings, or facing another deadline and can't reply to a question; or maybe the person who could answer hasn't seen it yet, because they live in a different timezone and haven't woken up yet, or have just been away from their terminal. Are we being a little overzealous in trying to keep this forum clean? Edit: Thanks for all your comments. I think I will now choose to close and not feel too bad, knowing that the question can be edited by the OP and then reviewed for re-opening. I also like @Philip Couling's comment below and will adopt it when I think it necessary, leaving a comment something like this: I am voting to close this question, because it doesn't appear to be answerable and I haven't seen further clarification; however, if you edit your question, it will be sent to the review queue for re-opening.
As Terdon mentioned in a comment, these questions can easily be re-opened after being edited by the OP. I think it's more beneficial to close them as off-topic or too-broad while they are still fresh and before they get buried several pages back and out of sight. If the OP comes back to provide clarification (surprisingly rare), the question can then be re-opened.
{ "source": [ "https://unix.meta.stackexchange.com/questions/5178", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/31760/" ] }
5,257
I know that I'll probably be down-voted to oblivion, but every time I see a Kali question on this site, it gets marked as a duplicate of this question , even when it has nothing to do with that question. So why is Kali so hated? This is supposed to be a place where you can ask and answer questions about Linux, and Kali is a Linux distro. And are there any ways that we can improve this system so that this happens less? How about making the question that people are redirected to more helpful or less condescending? Edit: this relates to the Unix & Linux Meta post: Why I closed the “Why is Kali so hard” question
Many, probably most, kali-linux questions on this site are extremely low quality. They show no effort, lack any useful detail, are often extremely basic, are frequently trying to do things the distribution is unsuitable for, and come from users who have no interest in learning or in aiding others to help them learn - or some combination of those traits. That is the answer to your title question: negative associations from a number of low-quality questions from often-belligerent querents. That has led to a general distaste for questions about it that taints even the fair questions from users who mention that they are using it. Those questions get lumped into the same bucket as soon as the tag or a hint of the distribution appears.

Your question was not a duplicate of the supposed duplicate target, and nor are almost all of the other questions closed as a duplicate of it. That question is wilfully, cartoonishly, and insultingly caricatured. It is, frankly, not a positive element of the site at this point, let alone as something to tell someone is the same as what they asked. Closing questions as a duplicate of that question is abusive . Voting to close as a duplicate like that is just a way of saying something that wouldn't be allowed in a comment 1 , mediated by the system. The questions are virtually never "why is it hard", they are "how do I X?". The questions might be unclear, they might be too broad, and they're often legitimate duplicates of on-topic questions with actual answers , but they're not duplicates of that . Votes that suggest otherwise are not good-faith acts 2 . At best, it's a helpful see-also, and not a duplicate; at worst, it's just people getting off on the superiority they feel from belittling people they think are beneath them.

Kali questions ought to be edited, voted upon, closed, and reopened in the same manner as other questions are (which is still likely going to result in most, but not all, of them being legitimately downvoted and closed on their current form). Generally questions closed in this way don't get reopened and the users disappear. For many of the questions, that was probably always going to be the case. On the other hand, it is also hardly strange that upon receiving that sort of welcome the asker doesn't go on to edit their question to add details such that it ends up in the reopen queue. They are driven off and the site preserves its reputation as hostile, elitist, and unwelcoming.

Kali questions and questions from people using Kali are in general on-topic here , though in many ways it would be preferable for them to be categorically off-topic rather than left in the current situation. Regardless, each question ought to be addressed on its own merits. It is likely that for many of the questions that are both about appropriate uses of Kali and from users who have made reasonable effort, this community doesn't have the most suitable expertise. For example, questions about using wireless hardware in non-standard ways using Linux tools are on-topic, but not something that general sysadmins and users are likely to have experience with. Those questions may languish regardless. It's not always clear which questions are in which camp and it's not unreasonable for someone to ask.

I think the "Why is Kali Linux so hard?" question is a problem in itself. Another formulation of it might be able to convey useful information, but as-is it is just an attractive nuisance and I don't think that's repairable. It certainly shouldn't be put forth as a duplicate of other questions, and even less so of yours, which already had an answer .

1. I will bowdlerise that comment as "go away, unintelligent person".
2. Users who have reached the close-vote threshold, let alone realms far beyond, are well aware that the new question does not duplicate the old and does not have the same answer, and so we know that they are not suggesting in good faith that it does. We could believe that they merely thought it would be useful related reading and used the close-vote mechanism as a way to convey the link, but as that is not the role of a close vote the action was not offered in good faith. They could have a range of motivations within that.
{ "source": [ "https://unix.meta.stackexchange.com/questions/5257", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/330104/" ] }
5,316
UPDATE: The question is now here on meta: Why is Kali Linux so hard to set up? Why won't people help me?

As the regulars will know, there seem to be many people out there who think that Kali is a good way of starting with Linux. As a result, we get quite a few questions from newbies trying, and failing, to do basic things with Kali. The community is understandably tired of these, since it's sort of like having someone trying to learn how to drive and getting a Formula 1 car to practice in.

The "Why is Kali Linux so hard to set up? Why won't people help me?" post was created to be a dupe target that we can use when people who are obviously out of their depth come here and ask for help using Kali while they are clearly newbie Linux users and have no business using Kali in the first place.

I've long felt that we've been abusing the Why is Kali Hard question (WKHQ). It is being used to close pretty much any question that mentions Kali. This has been discussed a few times already:

Why are Kali questions hated so much?
Systematically closing Kali questions

In the vast majority of cases, the questions have other problems. Most often, they are either unclear or too broad. Well, we can close them for that reason then. The original intent of the WKHQ was to give users an explanation of why they shouldn't be using Kali. And that is a laudable goal. However, closing a question as a duplicate suggests that the question will have an answer in the dupe. The WKHQ provides no answer to any technical issue; it only explains what Kali is for. It is a great answer to link to, but not a good dupe target. And it was never supposed to be a catch-all for Kali questions:

Note that I do not propose systematically closing Kali questions! Each question should be judged on its merit.

But a catch-all is what it has become. I think Michael Homer put it very well in his answer:

At best, it's a helpful see-also, and not a duplicate; at worst, it's just people getting off on the superiority they feel from belittling people they think are beneath them.

My comment under that answer currently has 6 upvotes:

@slm I am becoming more and more convinced that we should close the "Kali is hard" question so it can no longer be used as a dupe target. It was a good idea, but I agree 100% with Michael that it is being abused.

So, after seeing yet another (bad, but not duplicate) question getting a close vote as a duplicate of the WKHQ, I made an executive decision and closed the WKHQ which, I hope, will encourage people to stop using it as a dupe target. Instead, I left this comment under the question (the link goes to the WKHQ):

Please note that Kali is a tool designed for experts. It is not a normal operating system and should not be used as one. The error you are getting is quite clear: you don't have easy_install. But if you don't know how to correct that, I would strongly urge you to use a different operating system.

I am posting here first to let everyone know what I did and why, and to give the community a chance to voice their disagreement (or support), and also to open a discussion about how to proceed.

If we really, really feel so strongly against Kali questions, then maybe we should make it off topic. But I don't see how we could justify that: Kali is most certainly a *nix system. We could perhaps make only "expert" level questions on topic, but then who would be the judge of what constitutes an expert question?

I instead suggest that we treat Kali questions like any other on-topic question. We close when the question is unclear, but because it is unclear. We close when it is too broad, but because it is too broad. We can leave comments pointing the OP to the WKHQ, as a way of explaining why Kali might not be the best choice for them, but unless there's a clear consensus to the contrary, I vote for keeping the WKHQ closed and no longer using it as a catch-all duplicate for bad Kali questions.
In retrospect, I agree that the WIKSH thread isn't working out as intended.

I do think that the WIKSH thread is useful, but it doesn't work as a duplicate target. It guides people towards solving their problem — by telling them to use a different distribution — but it doesn't answer their actual question: instead, it explains why their question isn't getting answers.

So it should be on meta. It's useful, so it shouldn't be deleted. It could get new useful answers and the existing answers could be improved, so it shouldn't be closed or locked. Even the title of the question is a meta title — "why won't people help me" is a meta question. The content is more main-site content, but it works as a meta answer too.

Moderators can't migrate the question because of an age restriction, but staff can. Please migrate this question to meta.
{ "source": [ "https://unix.meta.stackexchange.com/questions/5316", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/22222/" ] }
5,360
I've installed Kali Linux, or I'm trying to install it. Why is it so hard? Why doesn't it recognize my hardware? Why do I need to set up so many things manually? Why can't I install the applications I want to use? Why don't tutorials written for other distributions work? Help! Why won't people help me? Why is Linux so hard?

Before you answer: There is a Meta question that complements this question: What should we do about Kali Linux questions?
Linux isn't hard, but Kali is!

If you need to ask, then Kali Linux is not the right distribution for you.

Kali Linux is a distribution for professional penetration testers who are already very familiar with Linux. It is meant to be used from a USB dongle for penetration testing. It can be installed, but it is not really meant to be. It is not meant for general use (even by professional penetration testers) such as Internet browsing, word processing, gaming, development, etc.

If you aren't already a Linux pro, don't use Kali. Use a distribution for ordinary people, such as Ubuntu, Fedora, elementary OS, Linux Mint, etc. Even if you want to learn penetration testing, you need to learn the basics first! Do this on a "normal" distribution.

From the official Kali Linux documentation, "Should I Use Kali Linux?":

(…) Kali is a Linux distribution specifically geared towards professional penetration testers and security specialists, and given its unique nature, it is NOT a recommended distribution if you're unfamiliar with Linux or are looking for a general-purpose Linux desktop distribution for development, web design, gaming, etc. (…)

Even for experienced Linux users, Kali can pose some challenges. (…) While Kali Linux is architected to be highly customizable, don't expect to be able to add random unrelated packages and repositories that are "out of band" of the regular Kali software sources and have it Just Work. In particular, there is absolutely no support whatsoever for the apt-add-repository command, LaunchPad, or PPAs. Trying to install Steam on your Kali Linux desktop is an experiment that will not end well. Even getting a package as mainstream as NodeJS onto a Kali Linux installation can take a little extra effort and tinkering.

If you are unfamiliar with Linux generally, if you do not have at least a basic level of competence in administering a system, if you are looking for a Linux distribution to use as a learning tool to get to know your way around Linux, or if you want a distro that you can use as a general purpose desktop installation, Kali Linux is probably not what you are looking for. (…)

If you are looking for a Linux distribution to learn the basics of Linux and need a good starting point, Kali Linux is not the ideal distribution for you. You may want to begin with Ubuntu, Mint, or Debian instead. If you're interested in getting hands-on with the internals of Linux, take a look at the "Linux From Scratch" project.

But why won't people help me?!

Since Kali is for experts, if you ask about Kali, people assume that you're an expert. If you ask a beginner question about Kali, many people will ignore you. Beginners and Kali are not compatible.

What should I use then?

"Which distribution is best for beginners?" is an endless debate. If you want a distribution that is designed to be easy for beginners and where beginners can find a lot of help, use Ubuntu. You can ask for help on our sister site Ask Ubuntu or on the Ubuntu forums. (Do NOT ask for help on Ask Ubuntu or the Ubuntu forums if you're using a distribution that is based on Ubuntu, but is not one of the official variants of Ubuntu!)

Elementary OS is another Linux distribution designed to be easy to install and to use for people with no Linux experience. It also has a Stack Exchange site.

With any distribution, even distributions targeted for beginners, you can learn by looking under the hood. The difference is that with easy-to-use distributions, you can install first, and then explore to learn.
{ "source": [ "https://unix.meta.stackexchange.com/questions/5360", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/885/" ] }
5,373
I ran into this question: Can the empty spaces/background in a terminal be replaced with a random (but pretty) pattern of ASCII characters?

While I thought it a very interesting question, I questioned whether or not it belonged under "Unix & Linux", and even gave feedback about how displaying random characters around real information can make things hard or near impossible to read, and wondered what they were trying to accomplish. But the question goes into using bash to generate and/or maintain a background of an unrelated image and interspersing it around content text. Then they go into using bash's prompt vars to aid in generating this, as well as asking about how this output is generated or comes from a "VT subsystem" (a primitive vt102 emulator that runs on the console for purposes of using the console with curses-type programs when one would like more than a simple line-oriented interface during system repair).

Then there is talk about how bash makes use of the terminfo database capabilities by including them in the prompt vars (PS1, et al), which is a misleading statement/point of view, as bash doesn't make use of that database and shouldn't -- though a user might do better to store the output of 'tput' for various actions rather than hardcoding terminal-specific codes -- and they do use tput later.

I keep wondering what the question was -- where would this be used? It seems clear it has little to do with bash any more than with other POSIX-compatible shells. If I knew what the question was, I might think it better suited to a computer-art forum or something else, but it's hard to see how this is a unix/linux question. I tried to answer how it's not the best way to communicate information, as dynamically generated background around content, but somehow that got translated by him into my thinking it doesn't belong on any SE sites, which I wasn't saying -- though in thinking about it, it doesn't really seem to fit the Q/A format and might better belong on some Q&A's wiki, but not having much experience with that, I can't say.

Anyway, he was then justifying how the Q getting 22 points justified its inclusion on the unix.SE site, and saying that if it wasn't allowed, he wouldn't have done 116 other contributions, "which stands well above yours [your contribution count] at this point in time".

At this point, I don't know what to do, if anything, but hoped to bring it to the attention of those with more experience who might be able to do the right thing (if doing anything at all).

Responding to a comment by @sim: You said it belongs here. Do you mean here in meta? Or that it is a question about Unix or Linux and belongs in the original forum?

To clarify why I had a question about its current location being appropriate, looking at what the forum is about (from https://unix.stackexchange.com/tour):

1) Ask questions, get answers, no distractions. This site is all about getting answers. It's not a discussion forum. There's no chit-chat.

2) Get answers to practical, detailed questions. Focus on questions about an actual problem you have faced. Include details about what you have tried and exactly what you are trying to do.

Topics to ask about:

Using or administering a *nix desktop or server
The Unix foundation underlying MacOS (but generally not frontend application questions)
The underlying *nix OS on an embedded system or handheld device (e.g. an Android phone)
Shell scripting
Applications packaged in *nix distributions (note: being cross-platform does not disqualify)
UNIX C API and System Interfaces (within reason)

---- I'm not sure ascii art in the background of normal foreground TTY traffic really fits...

Questions that don't work: Not all questions work well in our format. Avoid questions that are primarily opinion-based, or that are likely to generate discussion rather than answers. Don't ask about:

Anything not directly related to Unix or Linux (how is this question directly about either?)
Questions that are primarily opinion-based
Questions with too many possible answers or that would require an extremely long answer (um... how is this question not leading to extremely long answers?)

Like I said -- it is interesting, just not sure that it belongs under the Unix/Linux site, nor, after thinking about it, am I sure that it fits the format of this site.
{ "source": [ "https://unix.meta.stackexchange.com/questions/5373", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/37521/" ] }
5,916
Sometimes, users will post questions in a language other than English. What should we do about such posts? Should we try and translate with automatic translators? Should we close as unclear? Something else? Please note that this is a discussion post. This isn't a new policy, at least not yet. Mods don't get to decide policy, the community does. So if people disagree with my answer below, by all means downvote it and please post your own, competing answers arguing your position.
TL;DR: Please don't just dump non-English posts into Google Translate (or equivalent) and hope for the best. If you can understand the language in question well enough to confirm that the translation makes sense, and the post is really worth keeping, then feel free to translate it, but don't just use an automatic translator and hope for the best.

Questions posted in a language other than English on an English-language site are unclear by definition and should be closed.

We expect users to put at least a minimal amount of effort into their questions. That's one of the main reasons for downvoting, and is what makes Stack Exchange so much better than the noise-filled forums that abound everywhere else on the internet. A user who simply posts their question in another language on an English-language site and can't even be bothered to i) attempt to write in English or, if they speak no English at all, ii) pass their question through Google Translate is failing to put any effort at all into the question.

I can understand a few languages, which means I am unfortunate enough to be able to understand many of the non-English questions posted on the site. Sadly, in almost all cases, the different language is the least of the post's problems. As can be expected, if a user can't even be bothered to notice that they're on an English-language site, they also can't be bothered to write anything resembling a clear, on-topic question. As far as I can recall, every time I translated a post, I also had to leave a comment requesting basic information from the OP. Frankly, posting non-English questions on an English-language site is usually a good indication that the question itself is a poor fit for the site, irrespective of the language it happened to have been posted in.

That said, let's imagine you find a question whose only problem is the language, and you translate it. Now what? If the OP can't ask in English, chances are they also can't understand the answer. Conversely, if they can understand the answer, then they should have asked in English in the first place.

Unix & Linux isn't the one and only stop for *nix-related support and information on the internet. It is one of the top spots for English support and information, but there are dozens if not hundreds of helpful sites in other languages. Our target audience is not the world, but the portion of the world that uses *nix and can understand English well enough to ask questions in that language. It would never occur to me to go to an Urdu Ubuntu forum and post a question in English. I would consider that to be inconsiderate ("I don't care what language y'all speak, I'll just ignore that and post in mine") and rude.

We should make every effort to help people whose English is limited by editing and doing our best to understand what they are saying. Writing in a foreign language is hard, and I have a lot of respect for people who post here despite not speaking fluent English. But if they are so inconsiderate as to just dump their question with no effort, then I don't think we should encourage that by translating.

Finally, automatic translations are bad. They rarely make much sense, so when you auto-translate a question, you have just added yet another bad question to the site. What's worse, it's a bad question that's very unlikely to be improved, since the OP has already shown they're not willing or able to. This isn't helping anyone and is actively harming the site.

So, for me, the only case where translating a question would be worthwhile is when you can do the translation yourself, and the question is really good and clear and can stand on its own with no further clarification needed. For every other case, I would argue that translating is actively harmful, and automatically translating is doubly so.

Proposed policy: Never use automatic translators to translate someone else's post. Avoid translating in general, except in exceptional cases where you can do the translation and the question is clear and good enough to stand alone.
{ "source": [ "https://unix.meta.stackexchange.com/questions/5916", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/22222/" ] }
6,021
New software named ChatGPT has been released recently that is causing quite a stir around the network:

Could ChatGPT be a viable way to answer people's questions?
Temporary policy: ChatGPT is banned
Ban ChatGPT network-wide

Slate, a Stack Exchange Community Manager, posted a comment on the Meta.SE link above:

We have begun internal discussions to identify options for addressing this issue. We’re also reading what folks write about the topic on their individual sites, as one piece of assessing the overall impact. While we evaluate, we hope that folks on network sites feel comfortable establishing per-site policies responsive to their communities’ needs.

This is fairly new territory, so I'm interested in gathering the UNIX & Linux community's thoughts on AI-generated answers. There are several key points that have been raised already, so I'll seed the discussion with some of them:

You must attribute the answers to the source (sources: Machavity's answer; Makoto's answer)
Generated answers may not be correct (sources: Nineberry's answer; Journeyman Geek's answer)

Do we want to prohibit AI-generated answers? Do we allow them, with attribution?
I think such answers should be banned entirely, and anyone posting ChatGPT answers without attribution should be banned with prejudice. If the asker wants an answer from an AI, they can go to ChatGPT directly.

I personally don't think that even ChatGPT answers with attribution should be allowed, but I am willing to compromise on this point, provided the attribution is given up front (not at the end of the answer, and certainly NOT in an edit after the answer is first posted), and provided the entire ChatGPT text is in a block quote so nobody can mistake it for an answer written directly by a human.

Now this raises some questions: "What about people using ChatGPT for grammar and spelling? What about if the answer poster carefully checks the ChatGPT answer before posting?" My answer to both is that it's only fine if the answerer then writes or rewrites the answer themselves, in their own words, and takes responsibility for every word of it being what THEY actually want to say. This is in alignment with the ChatGPT terms and conditions, and with how we handle any other source of information.

In short, I think we should handle ChatGPT answers the same as we would handle copy-paste from other websites without attribution, but with added prejudice because of:

wasting everybody's time,
the difficulty of detection, and
the need to dissuade other people from the easy "rep-farming" that will occur if we hold a tolerant stance on ChatGPT answers.
{ "source": [ "https://unix.meta.stackexchange.com/questions/6021", "https://unix.meta.stackexchange.com", "https://unix.meta.stackexchange.com/users/117549/" ] }
6
Share your command line features and tricks for Unix/Linux. Try to keep it shell/distro agnostic if possible. Interested in seeing aliases, one-liners, keyboard shortcuts, small shell scripts, etc.
This expands somewhat on the !! trick mentioned in this answer. There are actually a bunch of history-related commands that tend to get forgotten about (people tend to stab Up 100 times instead of looking for a command they know they typed).

The history command will show a list of recently run commands with an event designator to the left.

!N will substitute the command associated with event designator N
!-N will substitute the Nth most recent command; e.g. !-1 will substitute the most recent command, !-2 the second most recent, etc.
As mentioned in the other answer, !! is shorthand for !-1, to quickly substitute the last command
!string will substitute the most recent command that begins with string
!?string? will substitute the most recent command that contains string

Word designators can be added on to a ! history command to modify the results. A colon separates the event and word designators, e.g. !!:0. The event designator !! can be abbreviated to just ! when using a word designator, so !!:0 is equivalent to !:0.

!:0 will get the command that was executed
!:1 will get the first argument (and !:2 the second, etc.)
!:2-3 will get the second and third arguments
!:^ is another way to get the first argument; !:$ will get the last
!:* will get all arguments (but not the command)

Modifiers can also be appended to a ! history command, each prefixed by a colon. Any number can be stacked on (e.g. !:t:r:p).

h -- Line up to the base filename
t -- Only the base filename
r -- Line up to the filename extension
e -- Only the filename extension
s/search/replacement -- Replace the first occurrence of search with replacement
gs/search/replacement -- Replace all occurrences of search with replacement
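To make this concrete, here is a short interactive sketch showing how event and word designators compose (the commands and paths are made up for illustration):

$ systemctl status nginx
$ !!:s/status/restart/      # reruns the last command as: systemctl restart nginx
$ cp /etc/nginx/nginx.conf /tmp/
$ vi !:1                    # expands to: vi /etc/nginx/nginx.conf
$ ls -l !:1:h               # expands to: ls -l /etc/nginx
$ echo !cp:$                # last argument of the most recent cp command: /tmp/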
{ "source": [ "https://unix.stackexchange.com/questions/6", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17/" ] }
7
I usually install Linux on a single partition since I only use it as a personal desktop. However, every now and then I reinstall the box, and what I do is simply move my files back and forth with an external hard disk. How could I avoid that when reinstalling my box (e.g. when switching to another distro)?
Keep your /home on a separate partition. This way, it will not be overwritten when you switch to another distro or upgrade your current one.

It's also a good idea to have your swap on its own partition, but that should be done automatically by your distro's installer.

The way my laptop is set up, I have the following partitions:

/
/home
/boot
swap
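For illustration, a layout like this might appear in /etc/fstab roughly as follows (the UUIDs and filesystem types here are placeholders; yours will differ):

UUID=1111-aaaa  /      ext4  defaults  0  1
UUID=2222-bbbb  /home  ext4  defaults  0  2
UUID=3333-cccc  /boot  ext4  defaults  0  2
UUID=4444-dddd  none   swap  sw        0  0

When you reinstall, you tell the installer to reuse the existing /home partition without formatting it, and your files survive the reinstall.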
{ "source": [ "https://unix.stackexchange.com/questions/7", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48/" ] }
10
I am using Mac OS X. When I type ls -l I see something like

drwxr-xr-x@ 12 xonic staff    408 22 Jun 19:00 .
drwxr-xr-x   9 xonic staff    306 22 Jun 19:42 ..
-rwxrwxrwx@  1 xonic staff   6148 25 Mai 23:04 .DS_Store
-rw-r--r--@  1 xonic staff  17284 22 Jun 00:20 filmStrip.cpp
-rw-r--r--@  1 xonic staff   3843 21 Jun 21:20 filmStrip.h

What do the @'s mean?
It indicates the file has extended attributes. You can use the xattr command-line utility to view and modify them:

xattr file                          # lists the names of all xattrs.
xattr -l file                       # lists the names and values of all xattrs.
xattr -w attr_name attr_value file  # sets xattr attr_name to attr_value.
xattr -d attr_name file             # deletes xattr attr_name.
xattr -c file                       # deletes all xattrs.
xattr -h                            # prints help.
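For example, a common attribute on downloaded files is com.apple.quarantine. A session might look like this (output abridged and illustrative):

$ ls -l@ filmStrip.cpp        # macOS ls with -@ also shows attribute names and sizes
$ xattr filmStrip.cpp
com.apple.quarantine
$ xattr -d com.apple.quarantine filmStrip.cpp
$ ls -l filmStrip.cpp         # the @ suffix is gone once no xattrs remain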
{ "source": [ "https://unix.stackexchange.com/questions/10", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59/" ] }
13
How do I stop a program from running at startup in Linux? I want to remove some apps from startup so that they can be managed by supervisord instead, e.g. apache2.
Depending on your distro, use the chkconfig or update-rc.d tool to enable/disable system services.

On a Red Hat/SUSE/Mandrake-style system:

sudo chkconfig apache2 off

On Debian:

sudo update-rc.d -f apache2 remove

Check out their man pages for more info.
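To verify the change afterwards (assuming the service is managed by SysV-style init scripts), something like this should work:

chkconfig --list apache2         # Red Hat-style: shows the per-runlevel on/off state
ls /etc/rc2.d/ | grep apache2    # Debian: the S* start symlink should be gone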
{ "source": [ "https://unix.stackexchange.com/questions/13", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13/" ] }
30
I need to compile some software on my Fedora machine. Where's the best place to put it so as not to interfere with the packaged software?
Rule of thumb, at least on Debian-flavoured systems:

/usr/local for stuff which is "system-wide" — i.e. /usr/local tends to be in a distro's default $PATH, and follows a standard UNIX directory hierarchy with /usr/local/bin, /usr/local/lib, etc.

/opt for stuff you don't trust to make system-wide, with per-app prefixes — i.e. /opt/firefox-3.6.8, /opt/mono-2.6.7, and so on. Stuff in here requires more careful management, but is also less likely to break your system — and is easier to remove since you just delete the folder and it's gone.
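With a typical autotools-based source tree, the prefix is chosen at configure time. A sketch (the package name and version are placeholders):

# System-wide install, picked up by the default $PATH:
./configure --prefix=/usr/local
make
sudo make install

# Self-contained install under /opt:
./configure --prefix=/opt/myapp-1.0
make
sudo make install
# Removing it later is just: sudo rm -rf /opt/myapp-1.0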
{ "source": [ "https://unix.stackexchange.com/questions/30", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62/" ] }
55
What does the size of a directory mean in the output of the ls -l command?
This is the size of the space on the disk that is used to store the meta-information for the directory (i.e. the table of files that belong to this directory). If it is, for example, 1024, this means that 1024 bytes on the disk are used for this purpose (the filesystem always allocates full blocks).
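You can watch this happen yourself; the exact numbers depend on the filesystem (an ext4-like example is assumed here):

$ mkdir demo && cd demo
$ ls -ld .            # a fresh directory uses one block, e.g. 4096
$ touch file{1..1000}
$ ls -ld .            # the entry table grew, so more blocks, e.g. 20480
$ du -sh .            # du, by contrast, sums the sizes of the contents

Note that on many filesystems the directory's own size does not shrink again when the files are deleted.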
{ "source": [ "https://unix.stackexchange.com/questions/55", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57/" ] }
70
I want to run multiple Bash shell scripts in parallel. However, I want to avoid race conditions. What Unix commands are truly atomic that I could use for this purpose, and how can I use them?
If lockfile is not installed on your system, then mkdir will do the work: it's an atomic operation, and it fails if the directory already exists (as long as you don't add the -p command-line switch).

create_lock_or_wait () {
  path="$1"
  wait_time="${2:-10}"
  while true; do
    if mkdir "${path}.lock.d"; then
      break
    fi
    sleep "$wait_time"
  done
}

remove_lock () {
  path="$1"
  rmdir "${path}.lock.d"
}
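A hypothetical usage, with the two functions above defined in a script that must not run concurrently with other instances of itself:

#!/bin/sh
create_lock_or_wait /tmp/myjob          # blocks until the lock is acquired
trap 'remove_lock /tmp/myjob' EXIT      # release the lock even on early exit
# ... critical section: update shared files, etc. ...

The trap ensures the lock directory is removed even if the script dies mid-way; without it, a crash would leave a stale lock that blocks every later run.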
{ "source": [ "https://unix.stackexchange.com/questions/70", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59/" ] }