idx | question | answer
---|---|---
50,701 | Prejudice in blind test | I do think it is a study design problem, and a famous one: some think R. A. Fisher did not fully appreciate it when constructing his famous lady-tasting-tea example, and it is one that haunts clinical trials that try to prevent any unblinding of treatment assignment.
A solution suggested by what is done there (choosing block sizes randomly) is to first randomly choose the ratio of Coke to Pepsi and then choose the order. Make the probability of all Coke or all Pepsi very low but not zero, and then try to get "there is a chance that all will be Coke or all will be Pepsi" into the informed consent.
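A minimal R sketch of that two-stage randomization for a six-cup tasting; the weights over the possible Coke counts are made up for illustration, with the all-Coke and all-Pepsi outcomes kept rare but possible:
set.seed(123)
n_cups <- 6
#hypothetical weights over 0..6 Cokes; the extremes get small but non-zero probability
w <- c(0.01, 0.08, 0.20, 0.42, 0.20, 0.08, 0.01)
n_coke <- sample(0:n_cups, size = 1, prob = w)
#then choose the serving order by shuffling
serving <- sample(c(rep("Coke", n_coke), rep("Pepsi", n_cups - n_coke)))
serving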
50,702 | Prejudice in blind test | You could (truthfully) tell the participants that you will flip a coin each time you provide a soda and that the coin flip will determine P vs C. You can go on to explain to them, "If the last five were Coke (or Pepsi), Coke and Pepsi are equally likely on the next test." One problem is that some of your participants won't believe the explanation and will remain convinced that five heads in a row makes tails more likely on the next flip.
But you might want to carefully simulate how you will analyze the data once you get it, because randomly scrambling three and three is not the same experiment as flipping a coin for each soda.
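A minimal R sketch of that kind of design check, comparing how the number of Cokes served varies under the two randomization schemes (numbers are illustrative):
set.seed(1)
n_sodas <- 6
n_sims <- 10000
#design 1: an independent fair coin flip for each soda
coin <- rbinom(n_sims, size = n_sodas, prob = 0.5)
#design 2: always three Cokes and three Pepsis, only the order is scrambled
fixed <- rep(3, n_sims)
table(coin) / n_sims   #number of Cokes ranges over 0..6
table(fixed) / n_sims  #always exactly 3, so the analysis must condition on that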
50,703 | How do you detect if a given dataset has multivariate normal distribution? | By definition, the random vector $X$ is multivariate normal if every linear combination $a^T X$ has a (univariate) normal distribution. So one idea for testing multivariate normality is to search among the vectors $a$ for one such that $a^T X$ is definitely not normal. That is the idea behind PP, projection pursuit, methods. See https://en.wikipedia.org/wiki/Projection_pursuit
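A crude R sketch in that spirit: screen many random directions $a$ and look at the smallest Shapiro-Wilk p-value among the projections (a rough search, not a full projection pursuit, and the multiple testing needs to be kept in mind):
set.seed(42)
n <- 200; p <- 3
x <- matrix(rnorm(n * p), n, p)   #replace with your data matrix
x[, 3] <- x[, 3]^2                #inject some non-normality for the example
proj_p <- replicate(500, {
  a <- rnorm(p); a <- a / sqrt(sum(a^2))   #random unit direction
  shapiro.test(x %*% a)$p.value
})
min(proj_p)   #a very small value is evidence against multivariate normality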
50,704 | How do you detect if a given dataset has multivariate normal distribution? | A fast way of examining whether your data set is Gaussian distributed is to plot a histogram for each variable of your data set (if the dimensionality is small), or simply to calculate the sample skewness and kurtosis of each variable to check whether they look Gaussian. A Gaussian distributed variable has skewness = 0 and kurtosis = 3.
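A small R sketch of that per-variable check, using plain moment formulas for sample skewness and kurtosis (values near 0 and 3 are consistent with, but of course not proof of, normality):
skew <- function(v) mean((v - mean(v))^3) / sd(v)^3
kurt <- function(v) mean((v - mean(v))^4) / sd(v)^4
x <- matrix(rnorm(1000 * 3), ncol = 3)   #replace with your data matrix
apply(x, 2, function(v) c(skewness = skew(v), kurtosis = kurt(v)))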
50,705 | nth moment, for 0 < n < 1 or n < 0, do they exist? | Yes, such moments have been investigated to at least some extent, as is readily seen by googling 'inverse moments' or 'fractional moments'.
Edit: In some cases these moments are rather straightforward to calculate. Here's an example of computing $E(X^{3/2})$ for $X\sim\text{gamma}(\alpha,1)$:
\begin{eqnarray}
E(X^{3/2}) &=& \int_0^\infty x^{3/2} f(x) dx \\
&=& \frac{1}{\Gamma(\alpha)} \int_0^\infty x^{3/2} x^{\alpha-1} e^{-x} dx\\
&=& \frac{\Gamma(\alpha+3/2)}{\Gamma(\alpha)}\cdot \frac{1}{\Gamma(\alpha+3/2)} \int_0^\infty x^{(\alpha+3/2)-1} e^{-x} dx\\
&=& \Gamma(\alpha+3/2)/\Gamma(\alpha)
\end{eqnarray}
You can as easily do $E(X^{-1})$ (as long as $\alpha>1$).
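A quick Monte Carlo check of the gamma result in R; the sample mean of $X^{3/2}$ should be close to $\Gamma(\alpha+3/2)/\Gamma(\alpha)$ (the value of $\alpha$ below is arbitrary):
set.seed(7)
alpha <- 2.5
x <- rgamma(1e6, shape = alpha, rate = 1)
mean(x^1.5)                         #Monte Carlo estimate of E(X^(3/2))
gamma(alpha + 1.5) / gamma(alpha)   #exact value from the derivation above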
50,706 | Topic modeling, LDA and NMF | Note on implementing LDA for this problem: there are well-designed inference algorithms for huge numbers of documents. Specifically, you should check out "Online LDA", which can adaptively train the topics looking at small chunks of documents at a time.
Paper: http://www.cs.princeton.edu/~blei/papers/HoffmanBleiBach2010b.pdf
Matt Hoffman has Python code available: http://www.cs.princeton.edu/~blei/topicmodeling.html
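If you want to prototype in R before moving to the online Python implementation, a minimal (batch, not online) LDA fit with the topicmodels package might look like the sketch below; the built-in AssociatedPress document-term matrix stands in for your corpus:
library(topicmodels)
data("AssociatedPress", package = "topicmodels")
fit <- LDA(AssociatedPress[1:200, ], k = 10, control = list(seed = 1))
terms(fit, 5)   #top 5 terms per topic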
50,707 | Inequality for bivariate normal distribution | The constants in this problem do not make much sense unless $X_1$ and $X_2$ have variance $1$ so that $X_1$ and $X_2-\mu_2$ are standard normal random variables, an assumption that the OP apparently is unwilling to make since this was asked about
in the comments, and the OP did not include the assumption in the revised version of the question.
Assumption: $X_1$ and $X_2$ have variance $1$.
If $X_1$ is a standard normal random variable, then
$P\{|X_1| \geq \Phi^{-1}(1-y/2)\} = y$.
This result holds for $X_2$ as well if $\mu_2 = 0$. Thus, if
$X_1$ and $X_2$ both are independent standard normal random variables,
then
$$\begin{align}
&\quad P\left\{|X_1| \geq \Phi^{-1}(1-\alpha/2),
|X_2| \geq \Phi^{-1}(1-\alpha/2)\right\}\\
&= P\left\{|X_1| \geq \Phi^{-1}(1-\alpha/2)\right\}
P\left\{|X_2| \geq \Phi^{-1}(1-\alpha/2)\right\}\\
&= \alpha^2
\end{align}$$
while
$$\begin{align}
&\quad P\left\{|X_1| \geq \Phi^{-1}(1-\alpha/4),
|X_2| \geq \Phi^{-1}(1-\alpha/2)\right\}\\
&= P\left\{|X_1| \geq \Phi^{-1}(1-\alpha/4)\right\}
P\left\{|X_2| \geq \Phi^{-1}(1-\alpha/2)\right\}\\
&= (\alpha/2)\alpha = \alpha^2/2
\end{align}$$
In short, for the case $\mu_2 = 0$, the conjectured result holds
(with equality) for the case $\rho = 0$. Continuing to look at the
case $\mu_2 = 0$, if $X_2 = \pm X_1$ (the case when $\rho = \pm 1$),
we have
$$\begin{align}
&\quad P\left\{|X_1| \geq \Phi^{-1}(1-\alpha/2),
|X_2| \geq \Phi^{-1}(1-\alpha/2)\right\}\\
&= P\left\{|X_1| \geq \Phi^{-1}(1-\alpha/2)\right\}\\
&= \alpha
\end{align}$$
while
$$\begin{align}
&\quad P\left\{|X_1| \geq \Phi^{-1}(1-\alpha/4),
|X_2| \geq \Phi^{-1}(1-\alpha/2)\right\}\\
&= P\left\{|X_1| \geq \Phi^{-1}(1-\alpha/4)\right\}\\
&= \alpha/2
\end{align}$$
and so once again the conjectured result holds
(with equality). What happens for other values of
$\rho \in [-1,1]$ is not immediately obvious since
the bivariate cumulative normal distribution function
must be used and the special meanings of $\Phi^{-1}$
are lost. The case $\mu_2 \neq 0$ only exacerbates the
messiness of the calculations. Simulation might be
the best option to check whether the conjectured bound
is reasonable.
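A minimal R sketch of such a simulation for standard normal margins, $\mu_2 = 0$, and a chosen $\rho$ and $\alpha$ (MASS::mvrnorm draws the correlated pairs; the two probabilities can then be compared in whatever way the conjecture requires):
library(MASS)
set.seed(1)
alpha <- 0.05; rho <- 0.5
x <- mvrnorm(1e6, mu = c(0, 0), Sigma = matrix(c(1, rho, rho, 1), 2))
p1 <- mean(abs(x[, 1]) >= qnorm(1 - alpha/2) & abs(x[, 2]) >= qnorm(1 - alpha/2))
p2 <- mean(abs(x[, 1]) >= qnorm(1 - alpha/4) & abs(x[, 2]) >= qnorm(1 - alpha/2))
c(p1 = p1, p2 = p2, ratio = p1 / p2)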
50,708 | Validating a logistic regression for a specific $x$ | By its construction, the logistic regression model predicts probabilities,
$$\hat P(Y =1 \mid X=x_0) = \frac{1}{1+\exp{\{-(\hat{\alpha} + \hat{\beta} x_0)\}}}$$
The proportion of $Y=1$ in the test sample of size $n$, with all $x$'s equal, is a different estimator of the same conditional probability; denote it $\hat p_{1|x_0}$.
Why should the performance of the logistic regression model be judged against $\hat p_{1|x_0}$? What does $\hat p_{1|x_0}$ have in its favor, so that it can function as a test of the validity/adequacy of the logistic regression approach?
Well, one could argue that it is non-parametric and even "non-distributional", since it directly estimates a theoretical (conditional) moment from the sample using a method-of-moments principle, without making the additional assumptions the logistic regression model does, and each assumption may be a source of misspecification...
These are valid points, but should we test the validity of a model based on how it predicts a single point of the theoretical distribution?
No, that's mistaken. Assume that we knew the true probabilities. Then, it wouldn't tell us much if the estimated probability from logistic regression was "away from" or "close to" a single true probability.
The inappropriateness of using a single probability increases when we want to pit our model against another estimate, and not the actual probability. As the OP states, with small sample sizes, the accuracy of $\hat p_{1|x_0}$ cannot be guaranteed, irrespective of its "robustness" to misspecification that may haunt the logistic regression model.
We should be able to compare how our model does by predicting many different probabilities, not just one. We should form the estimated probability distribution and then measure the distance of it from the true one, by some suitable metric.
With large test sample sizes, each based on a given value of the regressor, we could indeed more validly evaluate the predictions of our model against the many estimated probabilities $\hat p_{1|x_j}$.
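A small R sketch of the many-probabilities comparison suggested above: fit on training data, then compare predicted probabilities with observed proportions at several values of the regressor (the data here are simulated purely for illustration):
set.seed(3)
x <- rep(1:5, each = 200)            #five distinct regressor values
y <- rbinom(length(x), 1, plogis(-2 + 0.8 * x))
fit <- glm(y ~ x, family = binomial)
p_hat <- predict(fit, newdata = data.frame(x = 1:5), type = "response")
p_emp <- tapply(y, x, mean)          #empirical proportion of Y = 1 at each x
round(cbind(p_hat, p_emp), 3)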
50,709 | Hierarchical decomposition of an imbalanced multiclass classification problem | Hierarchical classification models frequently fail for different reasons. This is why flat classification methods based on one-vs-rest are usually preferred.
One of the main reasons discussed in the literature is that once an error is made in the upper levels of the hierarchy, the model has no way to recover. To analyze this type of problem in your case, you should calculate the error of the first-step P vs I binary problem. This will be highly indicative: if the accuracy of your model is low there, it will be even lower at the last level.
Following that, the design of the hierarchy is also an issue. Splits that are intuitive for humans do not necessarily yield the best performance. In your case, for instance, it may be intuitive to split the problem as you describe (P vs I). From a machine learning perspective, however, this may be sub-optimal due to the specifics of your data. To better understand this, imagine that P vs I is difficult (not many examples of the I class, similar features, and so on). There may be another split, such as I1 vs the rest, that is simpler and better suited for the root of your hierarchy. This is a serious issue to consider when designing your hierarchy: you need to put the simpler (easier) problems at the top and the more difficult ones at the bottom, because you cannot recover from errors made higher up.
By the way, there are several arguably simpler ways to deal with imbalanced datasets, such as adding weights to the infrequent classes, subsampling the frequent class, or oversampling the infrequent ones. You may want to try them first.
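A tiny R sketch of the last suggestion, oversampling the infrequent classes so that every class is equally represented before training (class names and counts are made up):
set.seed(10)
y <- factor(c(rep("P", 900), rep("I1", 60), rep("I2", 40)))   #imbalanced labels
target_n <- max(table(y))
idx <- unlist(lapply(levels(y), function(cl) {
  sample(which(y == cl), target_n, replace = TRUE)   #oversample the minority classes
}))
table(y[idx])   #balanced set of training indices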
50,710 | Probability of independent events within a specified window | If the days are truly independent, as you say, then let us denote by A the event of exactly X events in a Y-day period, and by P(A) its probability:
P(A) = COMBIN(Y,X) * (0.01 ^ X) * [0.99 ^ (Y - X)]
This is true for any Y day window provided the probability of an event occurring is not time dependent.
Then you can apply the binomial probability again to the event A. We have to be careful because the event A is not independent across overlapping windows; i.e., for a window of length 10, the probability of event A on days 1-10 is not independent of days 2-11. Thus the following assumes non-overlapping windows to ensure independence. Now, because you want the probability that at least one window satisfies event A, it is easier to compute:
1 - P(no window satisfies event A).
Let n be the number of non-overlapping windows of size Y in time period Z. Then the probability of at least one window with X events over the time period Z is:
1 - COMBIN(n,0) * P(A)^0 * [1-P(A)]^(n-0)
= 1 - [1-P(A)]^n
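The same calculation in R for concreteness, with example numbers: X = 5 events, Y = 30-day windows, a Z = 3650-day horizon and a daily event probability of 0.01:
p_day <- 0.01; X <- 5; Y <- 30; Z <- 3650
pA <- dbinom(X, size = Y, prob = p_day)   #exactly X events in one Y-day window
n <- floor(Z / Y)                         #non-overlapping windows in the horizon
1 - (1 - pA)^n                            #at least one window with exactly X events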
50,711 | Probability of independent events within a specified window | Here is some code, a brute-force approach, to get a ballpark for the probability.
In this I assume that the particular 30 day window does not matter. If there were 3 at the end of one month and 2 at the beginning of the next, I still count it as 5 in a row.
set.seed(1)
#number of simulated 10-year histories
N <- 5e5
#longest run length tracked (observed runs top out around 4)
n_max <- 10
#other parameters
n_years <- 10
days_per_year <- 365.24
#number of days in each simulated period
n <- floor(n_years*days_per_year)
#generate all n*N daily Bernoulli(0.01) outcomes at once and compute run lengths
my_rle <- rle(rbinom(n = n*N, prob = 0.01, size = 1))
#indices of runs that correspond to sequences of 1s (consecutive event days)
idx <- which(my_rle$values ==1)
#subset out non-zero runs
cus_y <- my_rle$lengths[idx]
#pre-declare for loop
store<- numeric(length = n_max)
#put zeros into a single bin
store[1] <- sum(my_rle$lengths[-idx])/n/N
#find bin frequencies
for (j in 1:(n_max-1)){
store[j+1] <- length(which(cus_y==j,arr.ind=T))/n/N
}
#stage for plot and model
x <- 0:(n_max-1)
y <- log10(store)
#subset to finite values (unobserved run lengths give log10(0) = -Inf)
y1 <- y[1:5]
x1 <- seq(from=0,to=4,by=1)
#fit model
est <- lm(y1~x1)
summary( est)
#extrapolate
x2 <- 5
y2 <- est$coefficients[1]+est$coefficients[2]*x2
y2
#main plot
plot(x,y,ylab="Log10 frequency", xlab="run length",ylim = c(-10,0))
grid()
abline(est)
points(x2,y2,pch=19,col="Red",cex=1.2)
Here is the plot that I get
The fit gave me this summary:
> summary( est)
Call:
lm(formula = y1 ~ x1)
Residuals:
1 2 3 4 5
0.0072500 0.0007404 -0.0025619 -0.0260976 0.0206690
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.01161 0.01528 -0.76 0.503
x1 -1.99795 0.00624 -320.21 6.72e-08 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.01973 on 3 degrees of freedom
Multiple R-squared: 1, Adjusted R-squared: 1
F-statistic: 1.025e+05 on 1 and 3 DF, p-value: 6.717e-08
Using the coefficients gives -10.00137 as the expected log10 of frequency for runs of 5 in a row. This is ~1e-10. The estimated probability for a 5-element sequence at any time is 1e-10, or (0.01^5).
50,712 | Calculate probability for LibLinear classification results | You can use a sigmoid function $f(d) = \frac{1}{1 + e^{-\alpha(d-\beta)}}$
to convert your SVM decision value $d = (w, x) + b$ into a number between 0 and 1 that can be treated as a probability. You can adjust the parameters $\alpha$ and $\beta$ depending on your data.
For more elaborate approaches, see these papers:
B. Zadrozny, C. Elkan, Transforming classifier scores into accurate multiclass probability estimates.
J. Drish, Obtaining calibrated probability estimates from Support Vector Machines.
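This is essentially Platt scaling, and in R it can be sketched as a logistic regression of held-out labels on the decision values (the decision values below are simulated; in practice use out-of-fold ones):
set.seed(5)
d <- rnorm(500)                             #decision values from the linear classifier
y <- rbinom(500, 1, plogis(1.5 * d - 0.2))  #labels (simulated here)
calib <- glm(y ~ d, family = binomial)      #fits 1 / (1 + exp(-(a + b * d)))
predict(calib, newdata = data.frame(d = 0.7), type = "response")   #calibrated probability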
50,713 | Calculate probability for LibLinear classification results | At least in R, only two algorithms provide probabilities in the LiblineaR interface.
Here is the relevant part of the FAQ of the underlying library:
Q: How do I choose the solver? Should I use logistic regression or linear SVM? How about L1/L2 regularization?
Generally we recommend linear SVM as its training is faster and the accuracy is competitive. However, if you would like to have probability outputs, you may consider logistic regression.
Moreover, try L2 regularization first unless you need a sparse model. For most cases, L1 regularization does not give higher accuracy but may be slightly slower in training.
Among L2-regularized SVM solvers, try the default one (L2-loss SVC dual) first. If it is too slow, use the option -s 2 to solve the primal problem.
It seems the SVM solvers don't provide probabilities as output.
Here is the FAQ of the actual library:
Q: How do I choose the solver? Should I use logistic regression or linear SV | Calculate probability for LibLinear classification results
At least in R, only two algorithms provide the probabilities in LiblineaR interface.
Here is the FAQ of the actual library:
Q: How do I choose the solver? Should I use logistic regression or linear SVM? How about L1/L2 regularization?
Generally we recommend linear SVM as its training is faster and the accuracy is competitive. However, if you would like to have probability outputs, you may consider logistic regression.
Moreover, try L2 regularization first unless you need a sparse model. For most cases, L1 regularization does not give higher accuracy but may be slightly slower in training.
Among L2-regularized SVM solvers, try the default one (L2-loss SVC dual) first. If it is too slow, use the option -s 2 to solve the primal problem.
It seems the svm algorithms don't provide probabilities as output. | Calculate probability for LibLinear classification results
At least in R, only two algorithms provide the probabilities in LiblineaR interface.
Here is the FAQ of the actual library:
Q: How do I choose the solver? Should I use logistic regression or linear SV |
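If you take the logistic-regression route in R, a minimal sketch with the LiblineaR package might look like this; the type = 0 solver choice and the proba argument are my recollection of the interface, so check ?LiblineaR and ?predict.LiblineaR:
library(LiblineaR)
x <- as.matrix(iris[, 1:4]); y <- iris$Species
m <- LiblineaR(data = x, target = y, type = 0)   #L2-regularized logistic regression
head(predict(m, x, proba = TRUE)$probabilities)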
50,714 | How to determine a user's favorite content producer from individual ratings? | Your question indicates that you want a score that gives some weight both to watching a film (whether the user likes it or not) and some additional weight to liking it. I would start by defining $M_{ud}$ as the maximum possible number of films by director $d$ watched by user $u$ as a proportion of all films watched by $u$:
$M_{ud} = Min(N_d/W_u, 1)$
where $N_d$ is the total number of films made by $d$ and $W_u$ is the total number of films watched by $u$. (The $Min$ is there because this proportion logically can't exceed 1).
Then $W_{ud} / W_u$ is the actual number of films by $d$ watched by $u$ as a proportion of all films watched by $u$, and
$s_w = \frac{W_{ud}}{W_uM_{ud}}$
is a possible measure of how much $u$ likes $d$. But because we also have information on 'likes', we have a second possible measure
$s_l = \frac{L_{ud}}{W_uM_{ud}}$
where $L_{ud}$ is the number of films by $d$ liked by $u$. Finally you can combine $s_w$ and $s_l$ into a single score, for instance:
$s = (1 - b)s_w + bs_l$
where $b$ is a number you choose between 0 and 1 to reflect the relative importance of liking a film rather than just watching it.
It should be stressed that the exact functional forms used are arbitrary, and you should play with them (and the weighting $b$) until you get scores that make sense for you. For instance, raising the two scores to a power greater than 1 might be useful, as it would assign lower weight to the first 1 or 2 films watched/liked and more weight to the 6th or 7th.
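A direct R translation of this score, with argument names following the notation above (the example counts are invented):
favorite_score <- function(W_ud, L_ud, W_u, N_d, b = 0.5) {
  M_ud <- pmin(N_d / W_u, 1)    #maximum attainable watched proportion
  s_w <- W_ud / (W_u * M_ud)    #watched-based score
  s_l <- L_ud / (W_u * M_ud)    #liked-based score
  (1 - b) * s_w + b * s_l
}
#user watched 20 films, 6 of them by a director who made 10, and liked 4 of those 6
favorite_score(W_ud = 6, L_ud = 4, W_u = 20, N_d = 10, b = 0.5)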
50,715 | How to determine a user's favorite content producer from individual ratings? | A really simple answer is the modal director, but that does not adjust for the composition, since some directors may be more prolific or simply older.
For each user, I would consider the ratio of liked movies by director $i$ to all movies watched by director $i$, scaled by the ratio of all movies watched to all movies made by director $i$. When a user has not seen any movies directed by $i$, this ratio is undefined, and can be reset to zero. The director with the highest value of this quantity is the favorite. I got the geometric intuition for this formula from a Venn diagram. I think this controls for the size of each director's corpus and is bounded between 0 and 1.
Here are some examples with made-up numbers. Alice has liked 5 movies by Herzog out of the 10 he has made, all of which she has seen. She has watched 20 movies total. The Herzog score is $\frac{5}{10}/{\frac{20}{10}}=0.25$. Suppose she has only seen seven Herzogs. The score jumps to $\frac{5}{7}/{\frac{20}{10}}\approx 0.36$.
People also tend to watch movies in groups, and so they see movies they dislike. This needs to be accounted for. Suppose Alice has only watched Herzog to humor her husband Bob and she liked none of them. The score is now $\frac{0}{10}/{\frac{20}{10}}=0$. To me, this is the sensible interpretation of watching and disliking. It does not depend on how many Herzogs she has seen.
This approach does not explicitly use the sequence of movies. It probably matters if she watched all the Greenway films before all the Herzogs. On the other hand, people develop tastes over time, so maybe the order is less interesting, though maybe you can use the timing to break ties. It also does not use the "clumpiness". If she watched every Herzog in a row after seeing the first one, that's a strong signal she likes his work, compared to if they were all scattered throughout her viewing history. Maybe you can scale the score above by an entropy measure, but I don't know enough about this to really help.
50,716 | How to determine a user's favorite content producer from individual ratings? | I think that some kind of recommender system might be what you are looking for.
50,717 | Should you use normalized or non-normalized data to develop your model? | The difference between using normalized and non-normalized data is one of interpretation. If you use the original data, the coefficients apply to changes of one unit on the original scale. If you use the normalized data, they apply to changes of one unit on the new scale (usually, one standard deviation).
This is an issue on which there is no universal agreement among statisticians. My own tendency is to use unstandardized data. However, the two models really mean the same thing.
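A quick R illustration of the point that the two fits mean the same thing, with only the units of the coefficients changing (using a built-in data set):
d_std <- as.data.frame(scale(mtcars[, c("mpg", "wt", "hp")]))
raw <- lm(mpg ~ wt + hp, data = mtcars)
std <- lm(mpg ~ wt + hp, data = d_std)
coef(raw)   #per-unit effects on the original scales
coef(std)   #per-standard-deviation effects
c(summary(raw)$r.squared, summary(std)$r.squared)   #identical fit quality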
50,718 | Determining smoothing parameter in HP filter for hourly data | The equation you are looking for is
$$\lambda_\alpha = \frac{1}{\alpha^4}\lambda_1$$
which is the adjustment factor derived by Ravn and Uhlig (2002). They derived the smoothing factor for annual data with this formula, starting from the $\lambda = 1600$ for quarterly data originally suggested by Hodrick and Prescott. That is
$$\lambda_{\text{annual}} = \frac{1}{4^4}1600 = 6.25 $$
You can re-arrange the equation and solve for the smoothing factor at any data frequency. You can get the monthly smoothing factor from
$$12^4 \cdot 6.25 = 129,600$$
where 12 is the number of observations per year for monthly data. Now you just need to know how many hours there are in a year, which, according to Google, is 8765.81, and then you plug that in again to get some very large number:
$$8765.81^4 \cdot 6.25 = 36,901,857,672,400,771.793$$
I doubt, though, that this will get you far, because the Hodrick-Prescott filter was developed for aggregate macro data in order to study business cycles at a quarterly, annual or at most monthly frequency. The filter was not meant for hourly data, and I cannot imagine that it will perform well for your kind of application. For instance, if you search on Google Scholar for Hodrick-Prescott "hourly data", you will not find anything. So even though this should answer your question, I would still be wary of using this result.
$$\lambda_\alpha = \frac{1}{\alpha^4}\lambda_1$$
which is the adjustment factor derived by Ravn and Uhlig (2002). They derived the smoothing factor for annual data | Determining smoothing parameter in HP filter for hourly data
The equation you are looking for is
$$\lambda_\alpha = \frac{1}{\alpha^4}\lambda_1$$
which is the adjustment factor derived by Ravn and Uhlig (2002). They derived the smoothing factor for annual data with this formula using the $\lambda = 1600$ for monthly data which was originally suggested by Hodrick and Prescott. That is
$$\lambda_{\text{annual}} = \frac{1}{4^4}1600 = 6.25 $$
You can re-arrange the equation and then solve the optimal smoothing factor for any data frequency. You can get the monthly smoothing factor from
$$12^4 \cdot 6.25 = 129,600$$
where 12 is the data frequency in months. Now you just need to know how many hours there are in a year which, according to Google, is 8765.81 and then you just plug it in again to get some very large number:
$$8765.81^4 \cdot 6.25 = 36,901,857,672,400,771.793$$
I doubt though that this will get you far because the Hodrick Prescott filter was developed for aggregate macro data in order to study business cycles at a quarterly, annual or at most monthly frequency. The filter was not meant to be for hourly data and I cannot imagine that it will perform well for your kind of application. For instance, if you search on Google scholar for Hodrick-Prescott "hourly data" you will not find anything. So even though this should answer your question, I would still be vary of using this result. | Determining smoothing parameter in HP filter for hourly data
The equation you are looking for is
$$\lambda_\alpha = \frac{1}{\alpha^4}\lambda_1$$
which is the adjustment factor derived by Ravn and Uhlig (2002). They derived the smoothing factor for annual data |
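For completeness, the frequency adjustment itself is a one-liner in R; here it is for a few frequencies, taking annual data ($\lambda = 6.25$, one observation per year) as the reference:
hp_lambda <- function(obs_per_year) 6.25 * obs_per_year^4
hp_lambda(c(annual = 1, quarterly = 4, monthly = 12, hourly = 8765.81))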
50,719 | How to control for market return in an (SPSS) OLS? | It'd be helpful if you told us what procedure you used. My answers rely on some guesses.
Question 1: If you're running the OLS regression using Analyze > Regression, then they cannot be random effects because this module does not allow it. So, they can be seen as fixed effects. If you have used the Mixed module, then it would depend on where you put the variables: whether they were fed into the fixed or the random slot.
We use fixed effects to discern mean differences, and we use random effects to adjust for variance introduced by the variables. If a variable returns a regression coefficient (or a set of coefficients in the case of a categorical variable), it belongs to the fixed effects; if a variable ends up in the variance/covariance output, it's been treated as a random effect.
Another way to check whether you have correctly modeled the variables is to imagine repeating the measurement: would the attributes inside that variable change? In your case, all the dummies' attributes probably wouldn't change, so I would agree that they are fixed effects. My concern, as I've stated in the comment, goes to the company ID. It's a repeated-measurement design, and you may want to consider using a mixed model. In addition, if the companies were randomly chosen, you may want to consider allowing a random intercept/slope in your regression model. But again, this is just my crude guess. You had better discuss it with your statistical support.
Question 2: Just by the wording, "added the market return as a control" usually just means "market return" is treated as one of the independent variables.
Question 3: Well, I am actually not entirely sure. I feel that a control variable can be either fixed or random, because there are needs both to control for mean differences and to control for variance. For instance, you can "control" for the effect of gender as a fixed effect, and you can also "control" for the clustering due to state/province by treating it as a random effect. I have seen both of these wordings used.
50,720 | How to control for market return in an (SPSS) OLS? | For Q1: since weekDay and your dummy variables are not coming from random causes, I think they can be considered as fixed effects.
50,721 | How to control for market return in an (SPSS) OLS? | Treating the market return as additive by just including it as a regressor might not be the best approach. You might consider using stockRet / marketRet as the dependent variable, which gives you a proportionality model.
50,722 | Custom power analysis in R | If one assumes $d = \frac{(\hat N_2-\hat N_1)}{\hat N_2}$, which is the percent difference, then:
$$
Z = \frac{d}{\sqrt{2}*cv(\hat N)}
$$
$$
Z = \frac{\frac{(\hat N_2-\hat N_1)}{\hat N_2}}{\sqrt{2}*cv(\hat N)}
$$
$$
Z = \frac{(\hat N_2-\hat N_1)}{\sqrt{2}*cv(\hat N)*\hat N_2}
$$
$$
Z = \frac{(\hat N_2-\hat N_1)}{\sqrt{2}*se(\hat N)}
$$
$$
Z = \frac{(\hat N_2-\hat N_1)}{\sqrt{se(\hat N)^2+se(\hat N)^2}}
$$
Thus, the CREEM function assumes that $se(\hat N_1) = se(\hat N_2)$, but my function assumes $cv(\hat N_1) = cv(\hat N_2)$.
Therefore, the primary difference is in the assumption about the relationship between variance and abundance. For example, one could assume variance to be proportional to $\hat N$, $\hat N^2$, or $\hat N^3$ (Gerrodette 1987) or constant as in the case of the CREEM function. Gerrodette (1987) suggested that constant $cv(\hat N)$ was an appropriate assumption for distance sampling-based estimates of abundance. Estimates from mark-recapture based surveys might be better suited to the assumption that $cv(\hat N)$ is proportional to $\sqrt {\hat N}$.
Gerrodette, T. 1987. A power analysis for detecting trends. Ecology 68(5):1364-1372.
Also, see Program TRENDS at the NOAA Southwest Fisheries Science Center for a cool little program designed to conduct power analyses for trend detection in wildlife population monitoring studies.
$$
Z = \frac{d}{\sqrt{2}*cv(\hat N)}
$$
$$
Z = \frac{\frac{(\hat N_2-\hat N_1)}{\hat N_2}}{\sqrt{2}*cv | Custom power analysis in R
If one assumes $d = \frac{(\hat N_2-\hat N_1)}{\hat N_2}$, which is the percent differnce. Then:
$$
Z = \frac{d}{\sqrt{2}*cv(\hat N)}
$$
$$
Z = \frac{\frac{(\hat N_2-\hat N_1)}{\hat N_2}}{\sqrt{2}*cv(\hat N)}
$$
$$
Z = \frac{(\hat N_2-\hat N_1)}{\sqrt{2}*cv(\hat N)*\hat N_2}
$$
$$
Z = \frac{(\hat N_2-\hat N_1)}{\sqrt{2}*se(\hat N)}
$$
$$
Z = \frac{(\hat N_2-\hat N_1)}{\sqrt{se(\hat N)^2+se(\hat N)^2}}
$$
Thus, the CREEM function assumes that $se(\hat N_1)$ = $se(\hat N_2)$ but my fuction assumes $cv(\hat N_1)$ = $cv(\hat N_2)$.
Therefore, the primary difference is in the assumption about the relationship between variance and abundance. For example, one could assume variance to be proportional to $\hat N$, $\hat N^2$, or $\hat N^3$ (Gerrodette 1987) or constant as in the case of the CREEM function. Gerrodette (1987) suggested that constant $cv(\hat N)$ was an appropriate assumption for distance sampling-based estimates of abundance. Estimates from mark-recapture based surveys might be better suited to the assumption that $cv(\hat N)$ is proportional to $\sqrt {\hat N}$.
Gerrodette, T. 1987. A power analysis for detecting trends. Ecology 68(5):1364-1372. [Link].
Also, see Program TRENDS at NOAA Southwest Fisheries Science Center for a cool little program designed to conduct power analysis for trend detection in wildlife population monitoring studies. | Custom power analysis in R
If one assumes $d = \frac{(\hat N_2-\hat N_1)}{\hat N_2}$, which is the percent differnce. Then:
$$
Z = \frac{d}{\sqrt{2}*cv(\hat N)}
$$
$$
Z = \frac{\frac{(\hat N_2-\hat N_1)}{\hat N_2}}{\sqrt{2}*cv |
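As an illustration only (not the CREEM or Gerrodette code), here is a minimal R sketch of the $Z$ derived above, converted to approximate power with a normal approximation; the function name and the example numbers are made up.
# Approximate power to detect a proportional change d between two abundance
# estimates, assuming cv(N1) = cv(N2) = cv as in the derivation above.
power_cv <- function(d, cv, alpha = 0.05) {
  z <- d / (sqrt(2) * cv)            # the Z statistic from the last equation block
  pnorm(z - qnorm(1 - alpha / 2))    # normal-approximation power, two-sided alpha
}
power_cv(d = 0.3, cv = 0.15)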
50,723 | How to write a poker player using Bayes networks | From the book you mention:
Note that the existing structure makes the assumption that the opponent's action depends only on its current hand.
And a little bit further:
There are four action probability tables $P_i(OPP\_Action|OPP\_Current)$, corresponding to the four rounds of betting. These report the conditional probabilities per round of the actions — folding, passing/calling or betting/raising — given the opponent's current hand type. BPP adjusts these probabilities over time, using the relative frequency of these behaviors per opponent. Since the rules of poker do not allow the observation of hidden cards unless the hand is held to showdown, these counts are made only for such hands, undoubtedly introducing some bias.
If I understood correctly, you do not have four such tables, but the logic is the same. You start with a prior belief about the player's actions. It simply has to be something that a reasonable player would do (raise with high probability if they have a high pair, fold with high probability if they have a very poor hand, etc.).
When you get to the showdown, you can reconstitute how the opponent played the game at each step, so you update the probability of the observed actions given the observed hand with Bayes' rule.
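A minimal R sketch of that frequency update, with made-up hand and action categories (none of this comes from the book; the Dirichlet-style pseudo-counts stand in for the prior belief):
library(stats)  # base R only; no special packages needed
hands   <- c("weak", "pair", "two_pair_plus")
actions <- c("fold", "call", "raise")
# Prior: what a "reasonable" player would do, expressed as pseudo-counts.
counts <- matrix(c(6, 3, 1,
                   2, 5, 3,
                   1, 3, 6),
                 nrow = 3, byrow = TRUE, dimnames = list(hands, actions))
update_showdown <- function(counts, hand, action) {
  counts[hand, action] <- counts[hand, action] + 1   # one observed (hand, action) pair
  counts
}
p_action_given_hand <- function(counts) counts / rowSums(counts)  # P(action | hand)
counts <- update_showdown(counts, "pair", "raise")
p_action_given_hand(counts)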
50,724 | Panel-data exploratory data analysis | I always start by doing a PCA (Principal Component Analysis) in R because it takes almost no writing. Say you have all this in a data.frame that we call data.
# PCA of the numeric columns; consider prcomp(data, scale. = TRUE) if the
# variables are on very different scales.
pca <- prcomp(data)
# Screeplot.
plot(pca)
# Biplot.
biplot(pca)
For R users, there is also the ggplot2 library. I know that it can do wonders for data representation, but I don't know how to use it. Maybe someone will suggest something with it?
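One possible ggplot2 view of the same PCA, offered only as a hedged sketch (it assumes the pca object and data frame from the snippet above):
library(ggplot2)
scores <- as.data.frame(pca$x)   # principal component scores from prcomp()
ggplot(scores, aes(PC1, PC2)) +
  geom_point() +
  labs(title = "Observations on the first two principal components")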
50,725 | Panel-data exploratory data analysis | It's not clear to me what you've graphed when you say "scatter plots between sales, R&D, and advertising". For example, have you done something like:
library(lattice)
# Add data = yourDataFrame if sale, xrd, year and sicagg live in a data frame.
xyplot(sale ~ xrd | year, groups = sicagg)
xyplot(sale ~ xrd | sicagg, groups = year)
Not sure what sicagg is; I assume it's a factor variable in my example.
In your plot of advertising per industry, did you plot lines for the averages and points for the specifics, coded by industry? Density plots might also be useful:
densityplot(~sale, groups = xrd)
densityplot(~xad, groups = xrd)
etc. Once you get complex and combine graph types, lattice gets complicated fairly fast, but it makes these kinds of plots easy.
50,726 | Weighted clustering algorithm | You're really given a planar graph and you want to find connected components that have the smallest "spread" in values. While I don't know how to get an answer with provable guarantees, the following heuristic might work well.
Assume all states have weights between 0 and $2^k$ say (for some $k$). Label all states with weights between 0 and $2^{k-1}-1$ as "0" and the rest as "1". Find the connected components of the graph with the same label. Now recurse in each component.
Essentially what you're doing is finding connected components such that in each component, the values don't vary "too much". If 2 is too coarse a granularity for you, you can choose some other factor between 1 and 2.
The stopping point for the recursion is when the variance within a cluster thus formed is small enough. You'll end up with a hierarchical clustering in which the leaves are the desired clusters.
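A hedged R sketch of that recursion using igraph; it assumes positive vertex weights and a name attribute, and for brevity uses the relative range rather than the variance as the stopping criterion:
library(igraph)
# g: undirected graph of states with a numeric vertex attribute 'weight'.
split_recursively <- function(g, max_spread = 0.25) {
  w <- V(g)$weight
  if (vcount(g) <= 1 || diff(range(w)) / max(w) <= max_spread)
    return(list(V(g)$name))                        # spread small enough: one cluster
  cut  <- mean(range(w))                           # halve the current weight range
  low  <- induced_subgraph(g, V(g)[w <  cut])
  high <- induced_subgraph(g, V(g)[w >= cut])
  out  <- list()
  for (sub in c(decompose(low), decompose(high)))  # connected components per label
    out <- c(out, split_recursively(sub, max_spread))
  out
}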
50,727 | Weighted clustering algorithm | This looks like a standard variation of the bin packing problem with constraints to me.
https://en.wikipedia.org/wiki/Bin_packing_problem
It does not look so much like clustering to me: the distances seem to be solely a constraint that only adjacent states may be grouped. So none of the methods you find under the term "cluster analysis" will help you much. It's a constrained optimization problem that you are trying to solve.
50,728 | Weighted clustering algorithm | What about using Graph Partition (http://en.wikipedia.org/wiki/Graph_partition)?
The graph here would be the USA, where the nodes are the states and the edges are the connections between states (i.e. there is an edge between two states if they are adjacent to each other). The subgraphs, or partitions, would be the territories. You want to divide it into uniform components (equal revenue and maybe other constraints), so you would have some variation of uniform graph partition.
50,729 | $\sigma$-algebra intersection of infinite subsets | We have for each positive integer $n$ that $a-1/n \lt a\lt b\lt b+1/n$, hence $[a,b]\subset (a-1/n,b+1/n)$; in particular $[a,b]$ is contained in the intersection. If $x\in (a-1/n,b+1/n)$ for each positive $n$, then $a-1/n\lt x\lt b+1/n$. Taking the limit $n\to \infty$, we get $x\in [a,b]$.
Let $\mathcal A$ be a $\sigma$-algebra on $\mathbb R$ which contains all the open intervals. Since $\mathcal A$ is stable under countable intersections, it contains all countable intersections of open intervals. As a closed interval can be expressed in this way, we are done.
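In display form, the identity used above is simply
$$
[a,b]=\bigcap_{n=1}^{\infty}\left(a-\tfrac{1}{n},\ b+\tfrac{1}{n}\right),
$$
so $[a,b]\in\mathcal A$ as soon as every open interval belongs to $\mathcal A$.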
50,730 | How do I handle measurement error in sparse data? | Your problem has actually got two main parts.
The first is related to the statistics. Assessing the data in light of your knowledge of the system, and trying to match different distributions to confirm what kind of data you have, is a good first step. Once you have a good model you can then begin to make calls as to how best to analyse - i.e. can you get away with a normal-distribution type of analysis (e.g. using a simple mean), do you need to use the median, or does the distribution indicate that there are more underlying complexities?
This brings us to the second point - probably more in my area of expertise - which is whether you have a sufficient sample. I am not referring to the statistical context (sort of) of a sample but actually the geological/metallurgical assessment of a representative mineral sample. As a metallurgist/mineral processing engineer, this is typically a bigger challenge than the statistics. If you haven't managed to get the right kind of sample you might as well halt!
To confirm that you have a sample of relevance, you would need to consider things like sampling practice for your commodity. For example, if you are looking to understand the particle density distribution for an orebody you would need a LOT of sample to begin to represent the whole. I suspect that since you are looking at particles, you are more likely trying to understand the density of discrete minerals, probably in an in-situ context - but this is probably not the forum to go into detail about that! I can recommend jumping on the LinkedIn forum on sampling orebodies if you want more in this area.
For those not familiar with mineralogy, the issue at heart is that particle analysis does not allow the selection of discrete populations. This means that there is a lot of confounding of the data by associated minerals and the choice about where to get the sample.
Hope this helps.
Mark
50,731 | How do I handle measurement error in sparse data? | As mentioned in the comments, the question is a bit vague so it is hard to make sure I actually answer it.
If your property X is the mean of twenty measurements, then you can compute a standard deviation from that sample, say σ. If you believe that measurements are independent, the standard deviation of X is σ / √20.
Then the question is whether m is a constant or if you actually want to estimate it from your data. If it is a constant, then the standard deviation of Y is m σ / √20. If you actually have a regression problem, like trying to fit m and b and then use that model to predict Y from X, it is probably better to use all your data points (no averaging). Then the variation is much larger and will depend on the value of X. If X is Gaussian you can look up the formula from Wikipedia at the paragraph "Normality assumption".
To my knowledge there is no general method to propagate uncertainty, which means you'll have to work your way through each problem. To convince you I will use a pathological case. If X has a uniform distribution between 0 and 1 (variance 1/12), then tan(π(X - 1/2)) has a Cauchy distribution and thus an infinite variance.
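As a quick numeric check of the linear case above, a hedged R sketch with made-up numbers (m = 2.5, b = 1, twenty measurements with sd 3):
set.seed(1)
m <- 2.5; b <- 1
x <- replicate(1e4, mean(rnorm(20, mean = 10, sd = 3)))  # X = mean of 20 measurements
y <- m * x + b
c(analytic = abs(m) * 3 / sqrt(20), simulated = sd(y))   # the two should agree closely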
50,732 | Average of a tail of a normal distribution | Cyan offered a link that answers the question but since the question remains without an actual answer, I'll put one in.
While not strictly "closed form" by the usual definitions, I expect people doing statistical work will mostly want to admit the normal cdf (or the error function, which would serve the same purpose) into what they call "closed form".
Given $X\sim N(\mu,\sigma^2)$,
$$E(X\mid X<b)=\mu -\sigma \frac {\phi (\frac{b-\mu}{\sigma})}{\Phi (\frac{b-\mu}{\sigma})}$$
where $\phi$ is the standard normal density and $\Phi$ is the standard normal cdf.
For the present problem, $b=0$, giving us:
$$E(X\mid X<0)=\mu -\sigma \frac {\phi (\frac{-\mu}{\sigma})}{\Phi (\frac{-\mu}{\sigma})}$$
(An expression for the variance can be found at the above link.)
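A hedged R sanity check of the $b=0$ case, with assumed example values $\mu = 1$, $\sigma = 2$:
mu <- 1; sigma <- 2
analytic <- mu - sigma * dnorm(-mu / sigma) / pnorm(-mu / sigma)
x <- rnorm(1e6, mu, sigma)
c(analytic = analytic, simulated = mean(x[x < 0]))  # both estimate E(X | X < 0)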
50,733 | Time-wise treatment effect / survival analysis | I think you're going to struggle with any sort of definitive treatment effect, because you lack a comparison group of any sort. You need not necessarily have a control group - there are plenty of methods in the case-crossover literature for using cases as their own controls during unexposed time periods - but if everyone got the drug at t=1, then no one has unexposed time to act as a control.
50,734 | Life after the Box-Cox transformation | First of all, if you mean a linear regression model, it does not assume the data are normally distributed; it assumes the error, as estimated by the residuals, is normally distributed (in fact, the errors should be iid $\mathcal{N}(0,\sigma)$).
Second, if that assumption is violated and you want to keep your original units, you can use some other form of regression - there are a variety of robust regression models, loess models, spline models, etc.
50,735 | Life after the Box-Cox transformation | It sounds like your model is of this form: $$Y_i|x_i = f(x_i, \beta) + \epsilon_i,$$ where $Y_i$ denotes the $i$th measured outcome, $x_i$ is a vector of covariates for that outcome (i.e. experimental circumstances), which with (unknown) parameters $\beta$ determines the expected value $f(x_i, \beta)$ for that observation. The $\epsilon_i$ are the error terms, which describe everything that affects $Y_i$ not captured by $f(x_i, \beta)$ - i.e. experimental errors.
Before getting into analysis, it's always good to ask "why do you want to do this analysis?". The answer to this question determines how much you should worry about Normality, or whether a transformation is needed. Suppose, as is common, you want inference on the value of $\beta$. If you believe that $f(x_i, \beta)$ captures the mean value of $Y_i$ correctly, and you believe that $Var(\epsilon_i)$ is the same for every measurement, then classical linear regression can be used for inference about the value of $\beta$. Despite what many textbooks advise, you do not need Normality here; in reasonable sample sizes your confidence intervals and tests will be almost perfectly calibrated.
If you still want inference, but don't believe the constant variance, use robust standard error estimates. If you don't believe the mean follows $f(x_i, \beta)$ or that the variance is constant, robust standard error estimates still give you accurate inference on the best-fitting line of the form $f(x_i, \beta)$, where "best-fitting" means "least-squares". And if you don't believe the mean follows $f(x_i, \beta)$, or that the best-fitting line of this form is a useful thing to know, you can always fit a more flexible mean - spline representations of covariates $x_i$ are a good way to do this. Absolutely none of the methods listed require Normality - or transformations of the $Y_i$.
So when do we require Normality? If you want to do predictions, of new $Y_i$, for most methods you'll need a model (though it need not assume Normality). If you want to compare models, well, you'll need some models, but that's a tautology. If you have a tiny sample size, doing model-based inference on $\beta$ may be the only viable approach - but then you'd likely have no way of assessing whether your assumption of Normality (or whatever you assumed) was reasonable.
When do we need Box-Cox? If we have little idea about the form of $f(x_i, \beta)$, but believe that errors around $f(x_i, \beta)$ "should" be Normal, then Box-Cox may help find a better form for $f(x_i, \beta)$. But it relies on there being underlying Normality, at the "right" model, and this is hard to justify in many situations.
In short, rather than deal with hard-to-justify transformations, there is a lot you can do with just a mean model. If the original units of measurement help you (and your colleagues) think about what the data tells them, I recommend hanging on to those units, if possible.
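For concreteness, a hedged R sketch of the "robust standard errors plus a flexible mean" route described above; the data frame and variable names (mydata, y, x, group) are purely illustrative:
library(sandwich)   # vcovHC()
library(lmtest)     # coeftest()
library(splines)    # ns()
fit <- lm(y ~ ns(x, df = 4) + group, data = mydata)    # spline mean, y kept in original units
coeftest(fit, vcov = vcovHC(fit, type = "HC3"))        # heteroscedasticity-robust inference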
50,736 | Multiple regression with constraints on coefficients [closed] | I've used the mgcv package to fit a constrained regression where the coefficients could not be negative:
Constrained Regression
Also the 'quadprog' package with the solve.QP function may be useful.
Both, however, have a little bit of a learning curve, at least for me.
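As a hedged illustration of the quadprog route (simulated X and y; this is one standard way to cast non-negative least squares as a quadratic program, not code from the original answer):
library(quadprog)
set.seed(42)
X <- cbind(1, matrix(rnorm(100 * 3), 100, 3))        # design matrix with intercept
y <- X %*% c(1, 2, 0.5, 3) + rnorm(100)
Dmat <- crossprod(X)                 # X'X
dvec <- drop(crossprod(X, y))        # X'y
Amat <- diag(ncol(X))                # t(Amat) %*% b >= bvec  ->  b >= 0
bvec <- rep(0, ncol(X))
fit  <- solve.QP(Dmat, dvec, Amat, bvec)
fit$solution                         # constrained coefficient estimates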
50,737 | Multiple regression with constraints on coefficients [closed] | You can use the ConsReg package.
cran.r-project.org/web/packages/ConsReg/index.html
It's very easy to use.
50,738 | Segmentation of employees | The best approach seems to be using Bayesian networks, which are used for just that purpose. Here's a free tool for automating the process.
Depending on how much effort you're willing to invest, you can go all the way to causal analysis and intervention calculus, which are the natural next step.
50,739 | How to calculate threshold level for mutual information scores? | You could try shuffling your data to make it independent, and use the same procedure to compute the MI score. This would provide a surrogate for the null hypothesis, and if you are okay with p-values, perhaps you can choose a threshold corresponding to something like a p-value of 0.05.
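A hedged R sketch of that permutation idea, using the infotheo package as one possible MI estimator and toy data (the names and numbers are illustrative):
library(infotheo)
mi <- function(a, b) mutinformation(discretize(a), discretize(b))
set.seed(7)
x <- rnorm(500); y <- 0.4 * x + rnorm(500)      # toy data
observed  <- mi(x, y)
null_mi   <- replicate(999, mi(x, sample(y)))   # shuffling breaks the dependence
threshold <- quantile(null_mi, 0.95)            # roughly a p-value of 0.05
c(observed = observed, threshold = threshold)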
50,740 | How to calculate threshold level for mutual information scores? | Computing Normalized Mutual Information will put the values into more meaningful terms (NMI = 0: the two variables contain no information about one another; NMI = 1: the two variables contain perfect information about one another).
To determine a threshold, I think it really depends on what you plan to do after you state the dependence/independence between two nodes. NMI = 0.2 may seem low, but it means that the two variables still contain some information about one another, so your 'threshold' should depend on your specific goal.
50,741 | What is conditioning in spatial statistics? | The answer is clearly "yes". Your resulting pattern at the end of step 5 is conditioned on the points in the top corner. Imagine doing steps 3, 4 and 5 again. You'll get the same points in the top left corner, and different points elsewhere.
There's also the element of working out how you've generated the new points given the points in the corner. Did you use the number of points in the small square to estimate the density, and then generate the new points conditional on that density? There's another conditioning.
50,742 | How to determine the combination of factor levels for which the response variable is highest | This question is amenable to a decision tree analysis technique. With a continuous outcome, the software will simply put cut-points in the middle of measured values, so the cuts will fall between levels you have measured, rather than being actual levels. The categorical predictors work well with this method, as you'll see which levels lead to which outcomes. You'll get a box at the end of each terminal which will contain information like the mean of performance. The branches associated with higher levels of performance will be based on your 5 categorical factors so, all going well, it's a relatively simple and quick way to see if there are any clear associations between factor levels and your performance measure.
The main con to decision trees is they tend to be stepwise, with an "F-to-enter" calculation, so they can suffer from the same drawbacks as stepwise regression, in that you are not guaranteed the optimal solution.
Have a look here and here for an R example. I've only used SAS Enterprise Miner and AnswerTree but the R code looks easy to follow once you've got the general idea of how trees work. This is a nice introduction to decision trees, with some pretty images. :)
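A hedged sketch of the kind of R example referred to above, using rpart (the outcome and factor names, performance, f1 to f5 and experiments, are made up):
library(rpart)
library(rpart.plot)                      # optional, for a readable tree plot
tree <- rpart(performance ~ f1 + f2 + f3 + f4 + f5,
              data = experiments, method = "anova")     # continuous outcome
rpart.plot(tree)                                        # terminal nodes show mean performance
tree$frame[tree$frame$var == "<leaf>", c("n", "yval")]  # leaf sizes and means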
There are some issues you'll need to become familiar with if you decide to go down this path, basically these are all around the familiar problem of not using the same data to develop the model and test the model. Some references which you might find useful are here and here.
Update: the rule used by decision trees to choose the variable is, I believe, based on the amount of variance explained: the decision tree software will go through every available variable split and choose the one that explains the most variance; for example, see here. So there is no input from the user on the splits, unless one changes the F-to-enter (for example) criterion.
The resulting tree will have a number of branches. The number of splits you get will depend on which method you use as some restrict the number of splits to two. The path that has the terminal node with the largest mean will show you the factor levels associated with that large mean. The variables, and the values of them, will be shown against each node in that branch. You can conclude that is the optimal path, but remember to do some testing with the model (e.g. cross validation). You also could end up with two nodes with similar means. It would pay to do some data visualisation of the tree classifications to see whether it worked, for example the one with the largest mean could also have an excessively large standard deviation and may not classify as well as it appears just from the mean.
The terminal nodes may not be in any order of mean either, in my experience one doesn't get a nice ascending or descending order of nodes from left to right.
Have fun with the method. :)
Update 2: the tree will stop forming branches (nodes) when, at a given node, this situation occurs:
none of the variables that have not been included in the splits meet the F-to-enter rule (e.g. including them does not meet a minimum requirement for explaining variance) and
for the variables that have been entered, there are no more splits available that meet the F-to-enter rule, and
for the variables that have been entered, all splits have occurred higher up the branch, OR
if a maximum number of branches was stipulated, the maximum has been hit on every node where an F-to-enter rule could otherwise still be applied.
Assuming your maximum number of branches hasn't been reached, you can interpret your results this way, and note that these results are exploratory and not definitive unless you have validated them with a suitable validation method:
all variable splits that explain a reasonable amount of the variance in the model have been entered, so
any variables not entered either do not explain sufficient variance or are possibly correlated with other variables in the model (it is not possible to tell which of these two situations is more correct with a stepwise model), and
similarly, when variable splits don't occur (e.g. when you have two levels of a variable sitting together, like filedata levels "d" and "e"), the split will not explain sufficient variance, so the outcome may be invariant to which of the levels occurs.
Because this is a stepwise method, the results may not produce the optimal outcome. I recommend grouping the variables on the basis of the results of the decision tree and then looking at boxplots of performance based on these groups. That will clearly show whether the decision tree has given you a good result.
I'm pleased the method appears to have worked. I hope these additional suggestions provide you with a way forward with your analysis. Again, best of luck!
50,743 | Combining ratings from multiple raters of different accuracy | If the poorer raters are that bad, it suggests they are not adding information and could be dropped from the pool of raters. This would be preferable to weighting their ratings because:
sometimes their "5"s will really be "5"s according to your better raters. Given that your better raters are providing all the information you need for an accurate rating, you don't need to incorporate the information from the poorer raters. Your results are not going to change for those objects.
on objects that should have a lower rating than a "4" or a "5", you are obtaining information about what the rating "should" be from your better raters. To weight down the ratings from the poorer raters, you will base this on the differences between them and your good raters. Again, there appears to be no information gain from the poorer raters, as the final rating overall basically ignores their ratings.
Maybe I have missed something. However, if some of the ratings are basically useless, it is better to drop them entirely rather than try transformations - which aren't going to affect your overall ratings for each object anyway.
Update on comment: yes, exactly, the bad raters are "noise" that would need to be transformed to "signal". Given that any algorithm used to translate them to "signal" is based on the good raters and will only be approximate, there seems to be little point in going to this effort.
You could look at inter-rater reliability measures for the better poor raters and see what transpires. There are a number of factors to take into account even with this reduced approach:
If there are a lot of items that are rated at the extremes by your good raters ("1"s and "5"s) and your other raters are managing to give equivalent extreme ratings, the inter-rater reliability measure will be affected by these extreme-value objects, and actual inter-rater reliability may be lower.
You could still get poor inter-rater reliability measures even with the subset who are "less bad".
So this is a path you could go down, and be prepared that you may not get a good result even with your subset.
To reframe this, removing the bad rater scores is not throwing away data, it is throwing away noise.
50,744 | Combining ratings from multiple raters of different accuracy | If I am understanding correctly, you can analyze your data with a simple random intercept model. You have raters indexed by j from 1 to J and items indexed by i from 1 to I. For each item, each rater produces a response $ R_{ij} $. Using the terminology of psychometrics, it seems that you want to estimate the "difficulty" (or quality) of each item. You can estimate it using the following model:
$$
R_{ij} = \zeta_j + \delta_i + \epsilon_{ij} \\
$$
$$
\zeta_j \sim N(0,\psi) \\
\epsilon_{ij} \sim N(0,\theta)
$$
Using this model, the interpretation is as follows:
$ \delta_i $ are fixed effects that represent the difficulty/quality associated with each one of the items.
$ \zeta_j $ will be the (random) rater intercept. Depending on what software you use to estimate the model, you may or may not get this automatically as part of the output.
$ \psi $ will be the variance of the random effect (the variance of your raters).
$ \theta $ will be the residual variance.
If you are interested in the reliability of your raters, you can calculate the intraclass correlation based on the two variance parameters:
$$
ICC = \dfrac{\psi}{\theta+\psi}
$$
This kind of model should be easily estimable in any statistical package. For instance, in Stata you can use xtreg and in R you can use the llme4 package. | Combining ratings from multiple raters of different accuracy | If I am understanding correctly, you can analyze your data with a simple random intercept model. You have raters indexed by j from 1 to J and items indexed by i from 1 to I. For each item, each rater | Combining ratings from multiple raters of different accuracy
If I am understanding correctly, you can analyze your data with a simple random intercept model. You have raters indexed by j from 1 to J and items indexed by i from 1 to I. For each item, each rater produces a response $ R_{ij} $. Using the terminology of psychometrics, it seems that you want to estimate the "difficulty" (or quality) of each item. You can estimate it using the following model:
$$
R_{ij} = \zeta_j + \delta_i + \epsilon_{ij} \\
$$
$$
\zeta_j \sim N(0,\psi) \\
\epsilon_{ij} \sim N(0,\theta)
$$
Using this model, the interpretation is as follows:
$ \delta_i $ are fixed effects that represent the difficulty/quality associated with each one of the items.
$ \zeta_j $ will be the (random) rater intercept. Depending on what software you use to estimate the model, you may or may not get this automatically as part of the output.
$ \psi $ will be the variance of the random effect (the variance of your raters).
$ \theta $ will be the residual variance.
If you are interested in the reliability of your raters, you can calculate the intraclass correlation based on the two variance parameters:
$$
ICC = \dfrac{\psi}{\theta+\psi}
$$
This kind of model should be easily estimable in any statistical package. For instance, in Stata you can use xtreg and in R you can use the lme4 package. | Combining ratings from multiple raters of different accuracy
If I am understanding correctly, you can analyze your data with a simple random intercept model. You have raters indexed by j from 1 to J and items indexed by i from 1 to I. For each item, each rater |
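A hedged R sketch of this random-intercept fit with the lme4 package; the long-format column names rating, item, rater and the simulated data are my own choices, not part of the original answer.
library(lme4)
set.seed(1)
d <- expand.grid(item = 1:30, rater = 1:6)     # simulated stand-in for the real ratings
d$rating <- round(3 + rnorm(30)[d$item] + rnorm(6, sd = 0.5)[d$rater] + rnorm(nrow(d), sd = 0.7))
# one fixed effect per item (the delta_i) and a random intercept per rater (the zeta_j)
fit <- lmer(rating ~ 0 + factor(item) + (1 | rater), data = d)
vc <- as.data.frame(VarCorr(fit))
psi   <- vc$vcov[vc$grp == "rater"]            # rater variance (psi)
theta <- vc$vcov[vc$grp == "Residual"]         # residual variance (theta)
psi / (psi + theta)                            # the intraclass correlation above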
50,745 | Providing variance measures for speedup ratios | So, further research on this topic has led me to conclude that the correct way of doing this is going to involve Fieller's Theorem, which is for constructing the confidence interval of the ratio of two means --- a speedup ratio!
I've not completely worked this out, but for future people trying to figure this out, I'm hoping it will serve as a pointer.
This is the paper which set me on the right path, though, I am not entirely convinced of their methodology. | Providing variance measures for speedup ratios | So, further research on this topic has led me to conclude that the correct way of doing this is going to involve Fieller's Theorem, which is for constructing the confidence interval of the ratio of t | Providing variance measures for speedup ratios
So, further research on this topic has led me to conclude that the correct way of doing this is going to involve Fieller's Theorem, which is for constructing the confidence interval of the ratio of two means --- a speedup ratio!
I've not completely worked this out, but for future people trying to figure this out, I'm hoping it will serve as a pointer.
This is the paper which set me on the right path, though, I am not entirely convinced of their methodology. | Providing variance measures for speedup ratios
So, further research on this topic has led me to conclude that the correct way of doing this is going to involve Fieller's Theorem, which is for constructing the confidence interval of the ratio of t |
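For future readers, a rough base-R sketch of a Fieller-type interval for the ratio of two independent sample means (say, baseline time over optimized time, i.e. a speedup); the function name and the independent-samples assumption are mine, not the answer's.
# Fieller-type CI for mean(x)/mean(y), x and y independent samples
fieller_ci <- function(x, y, conf = 0.95) {
  a <- mean(x); b <- mean(y)
  va <- var(x) / length(x); vb <- var(y) / length(y)
  tq <- qt(1 - (1 - conf) / 2, df = length(x) + length(y) - 2)
  # solve (a - theta * b)^2 <= tq^2 * (va + theta^2 * vb) as a quadratic in theta
  A <- b^2 - tq^2 * vb; B <- -2 * a * b; C <- a^2 - tq^2 * va
  disc <- B^2 - 4 * A * C
  if (A <= 0 || disc < 0) return(c(NA, NA))    # interval is unbounded in this case
  sort(c((-B - sqrt(disc)) / (2 * A), (-B + sqrt(disc)) / (2 * A)))
}
fieller_ci(rnorm(30, 10, 1), rnorm(30, 2, 0.3))  # speedup of roughly 5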
50,746 | Providing variance measures for speedup ratios | Have you considered using candlesticks over the top of a trendline?
The candlestick body could be placed on each thread interval, and expanded in height a certain number of pixels per unit of the standard deviation. The relative size differences of the candlestick bodies would then demonstrate the change in deviation from one interval to the next. | Providing variance measures for speedup ratios | Have you considered using candlesticks over the top of a trendline?
The candlestick body could be placed on each thread interval, and expanded in height a certain number of pixels per unit of the stan | Providing variance measures for speedup ratios
Have you considered using candlesticks over the top of a trendline?
The candlestick body could be placed on each thread interval, and expanded in height a certain number of pixels per unit of the standard deviation. The relative size differences of the candlestick bodies would then demonstrate the change in deviation from one interval to the next. | Providing variance measures for speedup ratios
Have you considered using candlesticks over the top of a trendline?
The candlestick body could be placed on each thread interval, and expanded in height a certain number of pixels per unit of the stan |
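A rough base-R sketch of that idea with made-up numbers: the trend line is the mean speedup per thread count, and each "body" is a rectangle spanning the mean plus or minus one standard deviation.
threads <- c(1, 2, 4, 8, 16)
mean_speedup <- c(1, 1.9, 3.6, 6.4, 10.2)      # made-up means
sd_speedup   <- c(0, 0.1, 0.3, 0.6, 1.1)       # made-up standard deviations
plot(threads, mean_speedup, type = "o", pch = 19, xlab = "threads", ylab = "speedup")
# candlestick-like bodies: mean +/- 1 sd at each thread count
rect(threads - 0.15, mean_speedup - sd_speedup,
     threads + 0.15, mean_speedup + sd_speedup,
     col = rgb(0, 0, 1, 0.3), border = NA)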
50,747 | Document classification with naive Bayes algorithm | You should construct your features (in this case, the words you're including as descriptors of each document) based only on your training set. This will calculate the probability of having a certain word given that it belongs to a particular class: $P(w_i|c_k)$. In case you're wondering, this probability is needed when calculating the probability of a document belonging to some class: $P(c_{k}|\text{document})$
When you want to predict the class for a new document in the test set, ignore the words that are not included in the training set. The reason is that you can't use the test set for anything other than testing your predictions. Furthermore, the training set must be representative of the test set. Otherwise, you won't get a good classifier. Therefore, it is to be expected that the majority of the words in the test set are also included in the training set.
Some people add an extra column for unknown words and try to calculate a probability of such words given a certain class: $P(\text{unknown} | c_{i})$. I don't think this is necessary or even appropriate because in order to obtain this probability, you need to peek at the test set. That's something you must never do. | Document classification with naive Bayes algorithm | You should construct your features (in this case, the words you're including as descriptors of each document) based only on your training set. This will calculate the probability of having a certain w | Document classification with naive Bayes algorithm
You should construct your features (in this case, the words you're including as descriptors of each document) based only on your training set. This will calculate the probability of having a certain word given that it belongs to a particular class: $P(w_i|c_k)$. In case you're wondering, this probability is needed when calculating the probability of a document belonging to some class: $P(c_{k}|\text{document})$
When you want to predict the class for a new document in the test set, ignore the words that are not included in the training set. The reason is that you can't use the test set for anything other than testing your predictions. Furthermore, the training set must be representative of the test set. Otherwise, you won't get a good classifier. Therefore, it is to be expected that the majority of the words in the test set are also included in the training set.
Some people add an extra column for unknown words and try to calculate a probability of such words given a certain class: $P(\text{unknown} | c_{i})$. I don't think this is necessary or even appropriate because in order to obtain this probability, you need to peek at the test set. That's something you must never do. | Document classification with naive Bayes algorithm
You should construct your features (in this case, the words you're including as descriptors of each document) based only on your training set. This will calculate the probability of having a certain w |
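A small base-R sketch of the vocabulary step described above, with invented token lists: the feature set comes from the training documents only, and test tokens outside it are simply dropped before scoring.
train_docs <- list(c("cheap", "pills", "buy"), c("meeting", "agenda", "notes"))
test_doc   <- c("buy", "cheap", "blockchain", "pills")   # "blockchain" never seen in training
vocab <- sort(unique(unlist(train_docs)))                # features defined by the training set only
test_tokens <- test_doc[test_doc %in% vocab]             # unknown words are ignored at prediction time
table(factor(test_tokens, levels = vocab))               # counts fed into the Naive Bayes scoring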
50,748 | Document classification with naive Bayes algorithm | You could first filter out the stopwords and other meaningless frequent words, and then try a smaller number of words and check how it works. Generally, if you use a big number of words in your set, most of them will be pure noise and will not carry much information. Make a few tries and check what rate is enough; with only two categories to predict, I imagine you could use a much smaller number of them.
What to do with missing words? They do not occur, so they have a frequency of zero. On the other hand, Naive Bayes multiplies probabilities heavily, and if you multiply anything by zero you get zero. In most (probably all) rows you will have some words that did not occur, so your matrix will become a collection of zeros. Because of that, it is better to choose some arbitrarily small number and add it to all the values in your matrix so there are no zeros (this is Laplace/additive smoothing, and most ready-made algorithms do it for you).
The position of the words in the matrix does not matter. However, the position of the words in the text could matter, so you can include such a variable in your analysis (though that is beyond the scope of simply using the Naive Bayes algorithm).
Final general remark: pay very much attention on cleaning and preprocessing the data since it is crucial in NLP, remember: garbage in, garbage out. Also deciding on which words to include in your training set is important step - taking "top $n$ words" could be not enough in many cases. | Document classification with naive Bayes algorithm | You could first filter the stopwords and other meaningless frequent words, and then you could try some smaller amount and check how does it work. Generally, if you use big amount of words in your set, | Document classification with naive Bayes algorithm
You could first filter out the stopwords and other meaningless frequent words, and then try a smaller number of words and check how it works. Generally, if you use a big number of words in your set, most of them will be pure noise and will not carry much information. Make a few tries and check what rate is enough; with only two categories to predict, I imagine you could use a much smaller number of them.
What to do with missing words? They do not occur, so they have a frequency of zero. On the other hand, Naive Bayes multiplies probabilities heavily, and if you multiply anything by zero you get zero. In most (probably all) rows you will have some words that did not occur, so your matrix will become a collection of zeros. Because of that, it is better to choose some arbitrarily small number and add it to all the values in your matrix so there are no zeros (this is Laplace/additive smoothing, and most ready-made algorithms do it for you).
The position of the words in the matrix does not matter. However, the position of the words in the text could matter, so you can include such a variable in your analysis (though that is beyond the scope of simply using the Naive Bayes algorithm).
Final general remark: pay very much attention on cleaning and preprocessing the data since it is crucial in NLP, remember: garbage in, garbage out. Also deciding on which words to include in your training set is important step - taking "top $n$ words" could be not enough in many cases. | Document classification with naive Bayes algorithm
You could first filter the stopwords and other meaningless frequent words, and then you could try some smaller amount and check how does it work. Generally, if you use big amount of words in your set, |
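A minimal sketch of the zero-frequency fix described above, with an invented class-by-word count matrix; adding a small constant before normalising (Laplace smoothing with alpha = 1 here) keeps the Naive Bayes product from collapsing to zero.
counts <- rbind(spam = c(buy = 7, cheap = 5, meeting = 0),   # rows = classes, cols = words
                ham  = c(buy = 1, cheap = 0, meeting = 9))
alpha <- 1                                                   # smoothing constant
smoothed <- counts + alpha
smoothed / rowSums(smoothed)                                 # P(word | class) with no zeros left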
50,749 | Document classification with naive Bayes algorithm | order of variables is not an issue.I guess you are using the actual tokens as variables then randomforest or svm or any other model can understand that using variable names .THe issue can be when you dont have certain tokens in test data you might need to introduce dummy values | Document classification with naive Bayes algorithm | order of variables is not an issue.I guess you are using the actual tokens as variables then randomforest or svm or any other model can understand that using variable names .THe issue can be when you | Document classification with naive Bayes algorithm
The order of variables is not an issue. I guess you are using the actual tokens as variables, so random forest or SVM or any other model can keep track of them using variable names. The issue can be when you don't have certain tokens in the test data; you might need to introduce dummy values. | Document classification with naive Bayes algorithm
order of variables is not an issue.I guess you are using the actual tokens as variables then randomforest or svm or any other model can understand that using variable names .THe issue can be when you |
50,750 | Joint distribution of sum of independent normals | Not entirely clear to me from reading the comments if the OP has solved this but there is no answer so I will write one.
The distribution of each $Y_i$ will be normal with given means and variances:
$\mu_0+\mu_1$ and $\sigma_0^2+\sigma^2_1$ for $Y_0$ and
$\mu_1+\mu_2$ and $\sigma_1^2+\sigma^2_2$ for $Y_1$. Now finally we need to
determine if there is a correlation between $Y_0$ and $Y_1$. To do this we can calculate
$$\mathbb{C}ov(Y_0,Y_1)=\mathbb{C}ov(X_0+X_1,X_1+X_2)
=\mathbb{C}ov(X_1,X_1)
=\mathbb{V}ar(X_1)
=\sigma_1^2.
$$
Now you can turn this into a correlation by dividing by the square roots of the variances
$$\rho = \frac{\sigma_1^2}{\sqrt{(\sigma_0^2+\sigma^2_1)(\sigma_1^2+\sigma^2_2)} }.$$
Now we know that the sum of two normal random variables is normally distributed so that both $Y_0$ and $Y_1$ have normal distributions with the stated means and variances and the correlation is given by $\rho$ above. So the joint density of $Y_0, Y_1$ is
$$ f(y_0,y_1) = N\left(\vec{\mu} = \begin{bmatrix}
\mu_0+\mu_1 \\
\mu_1+\mu_2 \\
\end{bmatrix}, \Sigma = \begin{bmatrix}
\sigma^2_0+\sigma^2_1 &\sigma_1^2 \\
\sigma_1^2 & \sigma^2_1+\sigma^2_2 \\
\end{bmatrix} \right).
$$ | Joint distribution of sum of independent normals | Not entirely clear to me from reading the comments if the OP has solved this but there is no answer so I will write one.
The distribution of each $Y_i$ will be normal with given means and variances: | Joint distribution of sum of independent normals
Not entirely clear to me from reading the comments if the OP has solved this but there is no answer so I will write one.
The distribution of each $Y_i$ will be normal with given means and variances:
$\mu_0+\mu_1$ and $\sigma_0^2+\sigma^2_1$ for $Y_0$ and
$\mu_1+\mu_2$ and $\sigma_1^2+\sigma^2_2$ for $Y_1$. Now finally we need to
determine if there is a correlation between $Y_0$ and $Y_1$. To do this we can calculate
$$\mathbb{C}ov(Y_0,Y_1)=\mathbb{C}ov(X_0+X_1,X_1+X_2)
=\mathbb{C}ov(X_1,X_1)
=\mathbb{V}ar(X_1)
=\sigma_1^2.
$$
Now you can turn this into a correlation by dividing by the square roots of the variances
$$\rho = \frac{\sigma_1^2}{\sqrt{(\sigma_0^2+\sigma^2_1)(\sigma_1^2+\sigma^2_2)} }.$$
Now we know that the sum of two normal random variables is normally distributed so that both $Y_0$ and $Y_1$ have normal distributions with the stated means and variances and the correlation is given by $\rho$ above. So the joint density of $Y_0, Y_1$ is
$$ f(y_0,y_1) = N\left(\vec{\mu} = \begin{bmatrix}
\mu_0+\mu_1 \\
\mu_1+\mu_2 \\
\end{bmatrix}, \Sigma = \begin{bmatrix}
\sigma^2_0+\sigma^2_1 &\sigma_1^2 \\
\sigma_1^2 & \sigma^2_1+\sigma^2_2 \\
\end{bmatrix} \right).
$$ | Joint distribution of sum of independent normals
Not entirely clear to me from reading the comments if the OP has solved this but there is no answer so I will write one.
The distribution of each $Y_i$ will be normal with given means and variances: |
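A quick base-R simulation of the result above, with arbitrary parameter values, to check that the covariance comes out near sigma_1^2 and the correlation near the stated rho.
set.seed(42)
n <- 1e5
mu <- c(1, 2, -1); s <- c(0.5, 1.5, 1.0)       # arbitrary means and sds for X0, X1, X2
X0 <- rnorm(n, mu[1], s[1]); X1 <- rnorm(n, mu[2], s[2]); X2 <- rnorm(n, mu[3], s[3])
Y0 <- X0 + X1; Y1 <- X1 + X2
c(empirical = cov(Y0, Y1), theoretical = s[2]^2)
c(empirical = cor(Y0, Y1),
  theoretical = s[2]^2 / sqrt((s[1]^2 + s[2]^2) * (s[2]^2 + s[3]^2)))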
50,751 | Measuring 'synchrony' with time series correlations | Ok interesting question. Think I know a proper answer use Ramseyer and Tsachers model/method (Nonverbal Synchrony or Random Coincidence? How to Tell the Difference).
Your data seems excellent for it! Below is a short description from memory; it might contain some mistakes, so please read the referenced papers as well.
They use Motion Energy Analysis (frame-to-frame pixel differencing, after applying some filters to exclude high-frequency lighting influences, e.g. a Butterworth filter as done by Paxton and Dale (to be published)).
This is followed by Pearson cross-correlation at different time lags (which gives you information on leading/following behaviour as well), and then by a peak-finding algorithm. Then you can do a statistical analysis by testing it against 99 fake (time-shifted) dyads. This will give you enough information to create a synchrony rating. Please send me a link if you publish it; I wonder what you will find, and it is always good to have examples of completed studies.
Problem will be on the multi-person bit. My PhD focuses on this and a real-time measure to be used in interactive entertainment systems (instead of using the 99time shifted ones), haven't found solutions for that, could use multiple comparisons for the time being. If you want more info read into Emilie Delaherche's work as well, she gives a nice overview. Boker, Grammer Ramseyer Tsacher, they all provide more info on the peak picking, cross correlation etc. | Measuring 'synchrony' with time series correlations | Ok interesting question. Think I know a proper answer use Ramseyer and Tsachers model/method (Nonverbal Synchrony or Random Coincidence? How to Tell the Difference).
Your data seems excellent for it! | Measuring 'synchrony' with time series correlations
Ok interesting question. Think I know a proper answer use Ramseyer and Tsachers model/method (Nonverbal Synchrony or Random Coincidence? How to Tell the Difference).
Your data seems excellent for it! Below is a short description from memory; it might contain some mistakes, so please read the referenced papers as well.
They use Motion Energy Analysis (frame-to-frame pixel differencing, after applying some filters to exclude high-frequency lighting influences, e.g. a Butterworth filter as done by Paxton and Dale (to be published)).
This is followed by Pearson cross-correlation at different time lags (which gives you information on leading/following behaviour as well), and then by a peak-finding algorithm. Then you can do a statistical analysis by testing it against 99 fake (time-shifted) dyads. This will give you enough information to create a synchrony rating. Please send me a link if you publish it; I wonder what you will find, and it is always good to have examples of completed studies.
Problem will be on the multi-person bit. My PhD focuses on this and a real-time measure to be used in interactive entertainment systems (instead of using the 99time shifted ones), haven't found solutions for that, could use multiple comparisons for the time being. If you want more info read into Emilie Delaherche's work as well, she gives a nice overview. Boker, Grammer Ramseyer Tsacher, they all provide more info on the peak picking, cross correlation etc. | Measuring 'synchrony' with time series correlations
Ok interesting question. Think I know a proper answer use Ramseyer and Tsachers model/method (Nonverbal Synchrony or Random Coincidence? How to Tell the Difference).
Your data seems excellent for it! |
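A rough base-R sketch of the surrogate test described in that answer, using two invented movement-energy series: the observed peak cross-correlation is compared with the same statistic for 99 time-shifted pseudo-dyads.
set.seed(7)
n <- 600
a <- as.numeric(arima.sim(list(ar = 0.8), n))            # person A movement energy (invented)
b <- c(a[-(1:3)], rnorm(3)) + rnorm(n, sd = 0.8)         # person B roughly tracks A with a 3-frame offset
peak_xcorr <- function(x, y, max_lag = 10)
  max(ccf(x, y, lag.max = max_lag, plot = FALSE)$acf)
obs <- peak_xcorr(a, b)
surr <- replicate(99, {                                  # surrogate dyads: circularly shift B in time
  k <- sample(50:(n - 50), 1)
  peak_xcorr(a, c(b[(k + 1):n], b[1:k]))
})
mean(surr >= obs)                                        # pseudo p-value: how often fake dyads beat the real one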
50,752 | Measuring 'synchrony' with time series correlations | Well, there are established measures for synchronization. There even is synchronization based clustering. Why don't you just use these measures?
Read up on ''Kuramoto model'':
http://en.wikipedia.org/wiki/Kuramoto_model | Measuring 'synchrony' with time series correlations | Well, there are established measures for synchronization. There even is synchronization based clustering. Why don't you just use these measures?
Read up on ''Kuramoto model'':
http://en.wikipedia.org/ | Measuring 'synchrony' with time series correlations
Well, there are established measures for synchronization. There even is synchronization based clustering. Why don't you just use these measures?
Read up on ''Kuramoto model'':
http://en.wikipedia.org/wiki/Kuramoto_model | Measuring 'synchrony' with time series correlations
Well, there are established measures for synchronization. There even is synchronization based clustering. Why don't you just use these measures?
Read up on ''Kuramoto model'':
http://en.wikipedia.org/ |
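For reference, a tiny base-R sketch of the Kuramoto order parameter r(t) = |mean(exp(i*theta))| computed from a matrix of phases (rows = time, columns = persons); the phases here are simulated, and extracting phases from raw movement data is a separate step.
set.seed(1)
tt <- seq(0, 20, by = 0.1)
theta <- sapply(c(1.00, 1.05, 0.95),                     # three people with similar frequencies
                function(w) w * tt + cumsum(rnorm(length(tt), sd = 0.05)))
r <- Mod(rowMeans(exp(1i * theta)))                      # 1 = perfect synchrony, near 0 = none
plot(tt, r, type = "l", ylim = c(0, 1), ylab = "Kuramoto order parameter")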
50,753 | Measuring 'synchrony' with time series correlations | Here's what I was suggesting in R code. I don't know what software you're working with, but at the very least you can download R for free and run the script pretty easily just to see what I was talking about and then create your own version. If you were in R, a lot of the loops could be replaced with the "rollapply" function in the "zoo" package. But this way the code is self contained.
What I've done is created three time series:
1. Simple signal, where the next value at point (i+1) correlates with the preceding value (i).
2. Signal based on the first so they should correlate highly but have different amplitudes
3. A random signal made with Gaussian noise
What this does leave out is phase shifts, that is, if a person a starts moving and then person b starts moving after but in time with, this method will underestimate the correlation. This can be rectified by including a number of time shifts. Or, possibly by increasing the length of time for the rolling average (acts like a low pass filter).
Of course there are other methods which might be more suitable, but those are based on oscillating signals, e.g., you could use wavelets for a time-frequency decomposition and then calculate a between person phase-locking across a number of frequencies. Then create phase and coherence maps. If you think this might be more what you're after, I have scripts for that too, but you may want to look into dedicated packages in matlab or R.
Before applying, you'd probably need to take a couple of random samples from your videos, or perhaps even a "training" video and see what parameters give you the information that you're looking for. Then apply this to your actual samples. E.g., changing the length of the rolling average, playing with phase shifts, adjusting the weighting parameter. You could even get boostrapped CIs if you wanted.
Here's the R script:
#highly correlated series
rl<-20 #rolling average length
x<-5 #Just a starting value
xvec<-3000
#1st time series, made so that the next value correlates with preceding value
for(i in 1:(xvec-1)) { x[i+1] <- x[i] +rnorm(1, 0, 0.2) }
y<-x+rnorm(xvec, 0, 0.3) #Second series based on 1st series for high correlation
xy<-(x*y)/max(x*y) #For weighting
#Calculating rolling correlation with 20 values either side
cxy<-sapply((rl+1):(xvec-rl+1), function(i) cor(x[(i-rl):(i+rl)], y[(i-rl):(i+rl)]))
#Smoothed rolling correlation by rolling average
cxym<-sapply((rl+1):(xvec-3*rl+1), function(i) mean(cxy[(i-rl):(i+rl)]))
#Smoothed weighting
xym<-sapply((2*rl+2):(xvec-2*rl+2), function(i) mean(xy[(i-rl):(i+rl)]))
par(mfcol = c(2,2)) #Create plot so that there are 4 figures per plot space
plot(1:xvec, x, type="l"); lines(1:xvec, y, col=2) #plot 1st and 2nd time series
#Plot correlations
plot((rl+1):(xvec-rl+1), cxy, type="l", xlim=c(0, xvec), ylim=c(-1,1))
lines((2*rl+2):(xvec-2*rl+2), cxym, col=2) #Smoothed rolling correlation
lines((2*rl+2):(xvec-2*rl+2), cxym*xym, col=3) #Smoothed weighted correlation
#No correlation between series and plot
y<-rnorm(xvec, 5, 1)
xy<-(x*y)/max(x*y)
cxy<-sapply((rl+1):(xvec-rl+1), function(i) cor(x[(i-rl):(i+rl)], y[(i-rl):(i+rl)]))
cxym<-sapply((rl+1):(xvec-3*rl+1), function(i) mean(cxy[(i-rl):(i+rl)]))
xym<-sapply((2*rl+2):(xvec-2*rl+2), function(i) mean(xy[(i-rl):(i+rl)]))
plot(1:xvec, x, type="l"); lines(1:xvec, y, col=2)
plot((rl+1):(xvec-rl+1), cxy, type="l", xlim=c(0, xvec), ylim=c(-1,1))
lines((2*rl+2):(xvec-2*rl+2), cxym, col=2)
lines((2*rl+2):(xvec-2*rl+2), cxym*xym, col=3) | Measuring 'synchrony' with time series correlations | Here's what I was suggesting in R code. I don't know what software you're working with, but at the very least you can download R for free and run the script pretty easily just to see what I was talkin | Measuring 'synchrony' with time series correlations
Here's what I was suggesting in R code. I don't know what software you're working with, but at the very least you can download R for free and run the script pretty easily just to see what I was talking about and then create your own version. If you were in R, a lot of the loops could be replaced with the "rollapply" function in the "zoo" package. But this way the code is self contained.
What I've done is created three time series:
1. Simple signal, where the next value at point (i+1) correlates with the preceding value (i).
2. Signal based on the first so they should correlate highly but have different amplitudes
3. A random signal made with Gaussian noise
What this does leave out is phase shifts, that is, if a person a starts moving and then person b starts moving after but in time with, this method will underestimate the correlation. This can be rectified by including a number of time shifts. Or, possibly by increasing the length of time for the rolling average (acts like a low pass filter).
Of course there are other methods which might be more suitable, but those are based on oscillating signals, e.g., you could use wavelets for a time-frequency decomposition and then calculate a between person phase-locking across a number of frequencies. Then create phase and coherence maps. If you think this might be more what you're after, I have scripts for that too, but you may want to look into dedicated packages in matlab or R.
Before applying, you'd probably need to take a couple of random samples from your videos, or perhaps even a "training" video and see what parameters give you the information that you're looking for. Then apply this to your actual samples. E.g., changing the length of the rolling average, playing with phase shifts, adjusting the weighting parameter. You could even get bootstrapped CIs if you wanted.
Here's the R script:
#highly correlated series
rl<-20 #rolling average length
x<-5 #Just a starting value
xvec<-3000
#1st time series, made so that the next value correlates with preceding value
for(i in 1:(xvec-1)) { x[i+1] <- x[i] +rnorm(1, 0, 0.2) }
y<-x+rnorm(xvec, 0, 0.3) #Second series based on 1st series for high correlation
xy<-(x*y)/max(x*y) #For weighting
#Calculating rolling correlation with 20 values either side
cxy<-sapply((rl+1):(xvec-rl+1), function(i) cor(x[(i-rl):(i+rl)], y[(i-rl):(i+rl)]))
#Smoothed rolling correlation by rolling average
cxym<-sapply((rl+1):(xvec-3*rl+1), function(i) mean(cxy[(i-rl):(i+rl)]))
#Smoothed weighting
xym<-sapply((2*rl+2):(xvec-2*rl+2), function(i) mean(xy[(i-rl):(i+rl)]))
par(mfcol = c(2,2)) #Create plot so that there are 4 figures per plot space
plot(1:xvec, x, type="l"); lines(1:xvec, y, col=2) #plot 1st and 2nd time series
#Plot correlations
plot((rl+1):(xvec-rl+1), cxy, type="l", xlim=c(0, xvec), ylim=c(-1,1))
lines((2*rl+2):(xvec-2*rl+2), cxym, col=2) #Smoothed rolling correlation
lines((2*rl+2):(xvec-2*rl+2), cxym*xym, col=3) #Smoothed weighted correlation
#No correlation between series and plot
y<-rnorm(xvec, 5, 1)
xy<-(x*y)/max(x*y)
cxy<-sapply((rl+1):(xvec-rl+1), function(i) cor(x[(i-rl):(i+rl)], y[(i-rl):(i+rl)]))
cxym<-sapply((rl+1):(xvec-3*rl+1), function(i) mean(cxy[(i-rl):(i+rl)]))
xym<-sapply((2*rl+2):(xvec-2*rl+2), function(i) mean(xy[(i-rl):(i+rl)]))
plot(1:xvec, x, type="l"); lines(1:xvec, y, col=2)
plot((rl+1):(xvec-rl+1), cxy, type="l", xlim=c(0, xvec), ylim=c(-1,1))
lines((2*rl+2):(xvec-2*rl+2), cxym, col=2)
lines((2*rl+2):(xvec-2*rl+2), cxym*xym, col=3) | Measuring 'synchrony' with time series correlations
Here's what I was suggesting in R code. I don't know what software you're working with, but at the very least you can download R for free and run the script pretty easily just to see what I was talkin |
50,754 | Measuring 'synchrony' with time series correlations | A dynamic spontaneous synchronization type of visual would be useful here. Please see the example of firefly flashing simulation using star logo.
http://skyeome.net/wordpress/?p=56
http://education.mit.edu/starlogo/
You could use some measure of a member's pixel difference between frames (mean of difference in intensities?) as the sequential step measurement of an individual's movement; then taking a sequence of training samples on the first few minutes of footage, find a reasonable estimate of the absolute range of min and max movements among all individuals.
From there, you could quantize the range into some set of levels with a visual intensity dot corresponding to an individual's movements over time. All individual member dots would be plotted in a cluster as in the attached image.
From there you could generate the rolling synchronisation plot
as in Figure 2. Possibly using some type of kernel density related to the frequency of each of the quantized levels per each time slice on the rolling plot. If the level of all members were all aligned then the bin width would be minimized and intensity maximized; any lower correlation would result in larger dispersion and lower max intensity of the density snapshots. | Measuring 'synchrony' with time series correlations | A dynamic spontaneous synchronization type of visual would be useful here. Please see the example of firefly flashing simulation using star logo.
http://skyeome.net/wordpress/?p=56
http://education. | Measuring 'synchrony' with time series correlations
A dynamic spontaneous synchronization type of visual would be useful here. Please see the example of firefly flashing simulation using star logo.
http://skyeome.net/wordpress/?p=56
http://education.mit.edu/starlogo/
You could use some measure of a member's pixel difference between frames (mean of difference in intensities?) as the sequential step measurement of an individual's movement; then taking a sequence of training samples on the first few minutes of footage, find a reasonable estimate of the absolute range of min and max movements among all individuals.
From there, you could quantize the range into some set of levels with a visual intensity dot corresponding to an individual's movements over time. All individual member dots would be plotted in a cluster as in the attached image.
From there you could generate the rolling synchronisation plot
as in Figure 2. Possibly using some type of kernel density related to the frequency of each of the quantized levels per each time slice on the rolling plot. If the level of all members were all aligned then the bin width would be minimized and intensity maximized; any lower correlation would result in larger dispersion and lower max intensity of the density snapshots. | Measuring 'synchrony' with time series correlations
A dynamic spontaneous synchronization type of visual would be useful here. Please see the example of firefly flashing simulation using star logo.
http://skyeome.net/wordpress/?p=56
http://education. |
50,755 | Measuring 'synchrony' with time series correlations | I think it could be as simple as plotting the median activity level for the three participants. It wouldn't go up much just because one participant became active, but would go up much more if two or three participants were active. | Measuring 'synchrony' with time series correlations | I think it could be as simple as plotting the median activity level for the three participants. It wouldn't go up much just because one participant became active, but would go up much more if two or t | Measuring 'synchrony' with time series correlations
I think it could be as simple as plotting the median activity level for the three participants. It wouldn't go up much just because one participant became active, but would go up much more if two or three participants were active. | Measuring 'synchrony' with time series correlations
I think it could be as simple as plotting the median activity level for the three participants. It wouldn't go up much just because one participant became active, but would go up much more if two or t |
50,756 | Measuring 'synchrony' with time series correlations | You should use running correlations for pairs of individuals.
Here's an example:
Corbetta, D., & Thelen, E. (1996). The developmental origins of bimanual coordination: a dynamic perspective. Journal of Experimental Psychology: Human Perception and Performance, 22(2), 502-522.
You can do it easily in Excel. Mail me if you have difficulties doing this.
For multiple oscillators (i.e., persons), use the Kuramoto model with the cluster-phase method.
It is a lot more complicated. Probably you will need a Matlab routine for this. | Measuring 'synchrony' with time series correlations | You should use running correlations for pairs of individuals.
Here's an example:
Corbetta, D., & Thelen, E. (1996). The developmental origins of bimanual coordination: a dynamic perspective. Journal | Measuring 'synchrony' with time series correlations
You should use running correlations for pairs of individuals.
Here's an example:
Corbetta, D., & Thelen, E. (1996). The developmental origins of bimanual coordination: a dynamic perspective. Journal of Experimental Psychology: Human Perception and Performance, 22(2), 502-522.
You can do it easily in Excel. Mail me if you have difficulties doing this.
For multiple oscillators (i.e., persons), use the Kuramoto model with the cluster-phase method.
It is a lot more complicated. Probably you will need a Matlab routine for this. | Measuring 'synchrony' with time series correlations
You should use running correlations for pairs of individuals.
Here's an example:
Corbetta, D., & Thelen, E. (1996). The developmental origins of bimanual coordination: a dynamic perspective. Journal |
50,757 | Simple distance measure for financial time series | Consider calculating the squared difference of each daily return and taking the mean over all returns (mean square error). You could conisder each daily return to be an axis in a high dimensional space and user standard clustering techniques, e.g. k-means is the easiest to understand and implement and it may be sufficient for what you want although I was advised at one time that k-means may not be a good choice for high dimensional spaces (but I have no data to back this up - I personally would try it as a first step). | Simple distance measure for financial time series | Consider calculating the squared difference of each daily return and taking the mean over all returns (mean square error). You could conisder each daily return to be an axis in a high dimensional spac | Simple distance measure for financial time series
Consider calculating the squared difference of each daily return and taking the mean over all returns (mean squared error). You could consider each daily return to be an axis in a high-dimensional space and use standard clustering techniques; e.g. k-means is the easiest to understand and implement, and it may be sufficient for what you want, although I was advised at one time that k-means may not be a good choice for high-dimensional spaces (but I have no data to back this up - I personally would try it as a first step). | Simple distance measure for financial time series
Consider calculating the squared difference of each daily return and taking the mean over all returns (mean square error). You could conisder each daily return to be an axis in a high dimensional spac |
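A minimal base-R sketch of this, with a made-up system-by-day returns matrix: the mean-squared-difference distance plus a plain k-means run on the same returns.
set.seed(3)
R <- matrix(rnorm(40 * 250, sd = 0.01), nrow = 40)   # 40 systems x 250 daily returns (fake)
mse_dist <- as.matrix(dist(R))^2 / ncol(R)           # mean squared difference between systems
km <- kmeans(R, centers = 5, nstart = 20)            # Euclidean k-means on the raw returns
table(km$cluster)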
50,758 | Simple distance measure for financial time series | For the record, this is an ongoing research topic. Here is a recent review on this question and some methods from the academic literature. | Simple distance measure for financial time series | For the record, this is an ongoing research topic. Here is a recent review on this question and some methods from the academic literature. | Simple distance measure for financial time series
For the record, this is an ongoing research topic. Here is a recent review on this question and some methods from the academic literature. | Simple distance measure for financial time series
For the record, this is an ongoing research topic. Here is a recent review on this question and some methods from the academic literature. |
50,759 | Simple distance measure for financial time series | You could take a look at cluster analysis. Essentially treat each strategy+system as an object and your goal is to cluster objects that are similar to one another in the same cluster.
A similarity metric that you could use is a distance computed on the returns that a given strategy+system would give. Thus, the distance between two objects would be the Euclidean distance (or, in the one-dimensional case, simply the absolute difference) between the returns corresponding to the two objects.
As a first-cut you may want to use hierarchical clustering. The wiki (see link) has not only has a description of how this clustering approach works but also some suggestions regarding procedures to use using various software tools. | Simple distance measure for financial time series | You could take a look at cluster analysis. Essentially treat each strategy+system as an object and your goal is to cluster objects that are similar to one another in the same cluster.
A similarity met | Simple distance measure for financial time series
You could take a look at cluster analysis. Essentially treat each strategy+system as an object and your goal is to cluster objects that are similar to one another in the same cluster.
A similarity metric that you could use is a distance computed on the returns that a given strategy+system would give. Thus, the distance between two objects would be the Euclidean distance (or, in the one-dimensional case, simply the absolute difference) between the returns corresponding to the two objects.
As a first cut you may want to use hierarchical clustering. The wiki (see link) not only has a description of how this clustering approach works but also some suggestions regarding procedures to use in various software tools. | Simple distance measure for financial time series
You could take a look at cluster analysis. Essentially treat each strategy+system as an object and your goal is to cluster objects that are similar to one another in the same cluster.
A similarity met |
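A short base-R sketch along these lines, again with a made-up returns matrix: Euclidean distances on returns, average-linkage hierarchical clustering, and a cut into k groups.
set.seed(4)
R <- matrix(rnorm(30 * 250, sd = 0.01), nrow = 30)   # 30 strategy+system objects x 250 daily returns (fake)
hc <- hclust(dist(R), method = "average")
plot(hc, labels = FALSE)                             # dendrogram of object similarity
table(cutree(hc, k = 4))                             # membership after cutting into 4 clusters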
50,760 | Simple distance measure for financial time series | Please forgive me if I am not understanding the question, but I believe your "systems" are "strategies" that are being backtested or implemented. I cannot directly answer your question because I am not certain exactly what it is, so I will try and answer the one I think you are asking.
First, let me give you some observations. You have a massive model set if you are looking at 100k x 100k. I am assuming you did some form of combinatoric solution if that is the case. Ignoring the computational issues, this is problematic at many levels.
I have done extensive research on the capital markets and the data set is quite small because of the fact that the data points are not independent of each other. They share an extensive amount of information. Indeed, because of the competitive nature of the market actors must be updating relative valuations on a constant basis. Any attempt at a strategy that ignores the underlying non-price information is highly suspect and will result in a high false discovery rate.
The second problem with this is that your best choice for model selection is Bayesian model selection, but, in this case, your strategy size exceeds your degrees of freedom, to borrow a Frequentist idea. If a corporation is thought of as an information stream, then you cannot have more strategies than your smallest number of separate companies at any one time in your set. Indeed, due to nuisance parameters, you need even less.
An important problem you face is that you cannot use squared distance. It can be shown that the integrals diverge over the probability distribution for each conceptual portfolio. You can use mean absolute deviation. It has theoretical support under Theil's regression as well.
Your final challenge will be the cost of liquidity. If your data is not real portfolios that have had the cost of liquidity taken out by a market maker, then you need to model those costs. I would use Ashok Abbott's chapter in The Valuation Handbook to model these. This will separate your portfolios as well.
I was thinking about how I would do an exploratory analysis to differentiate portfolios. With that many, speed is important and Bayesian methods are slow. I would start by regressing portfolio values against their prior values, making adjustments for market closures. I would probably regress $\log(v_{t+1}^i)$ on $\log(v_t^i)$ using ordinary least squares, adjusting for days closed. I would ignore $\alpha$ because analysis of $\alpha$ in least squares style algorithms is at best problematic.
I would then find the portfolio with the median slope and if a tie, then with the median $\alpha$ among the ties. I would use this portfolio as my standard portfolio. I would then use this portfolio as a predictor for the remaining portfolios. I would regress $\log(v_{t+1}^k)$ on $\log(v_t^i)$. Any portfolio that can be significantly predicted by this standard portfolio should be in that cluster and any that cannot be predicted by this standard portfolio should be in another cluster.
I would then take those without significant prediction and repeat the process, creating new clusters.
I would not use returns in my regression, only portfolio values. Returns are not data, they are transformations of data.
If for some reason, you choose not to take the log of the value data, then you will need to use Theil's method of regression, otherwise you will get incorrect results with ordinary least squares.
This method differs from simply looking at the final value in that the portfolios do not need to start on the same date, though your standard portfolios do need to be long lived, and it better accounts for single idiosyncratic shocks.
This is not a canonical solution.
This should allow you to create a small set of segregated portfolios that you can then analyze separately using other analysis.
Do note that I have a lot of reservations regarding this method and I am hoping it will get a lot of criticism because I didn't spend a lot of time thinking about this. Your problem is that $\frac{v_{t+1}}{v_t}-1$ is the translation of a ratio, so you have a ratio distribution. If you assume normality for the appraisals of the underlying prices, then you have a Cauchy distribution, which would have to be truncated at -100%.
This creates no mean or variance, ruling out most solutions. The log solution gives you a biased solution, but the bias is probably consistent across portfolios and it is faster than Theil's regression.
Another concern is that your cut-off point for statistical significance will determine your number of clusters and that you cannot determine your false discovery rate.
With luck, someone will tear this answer apart. | Simple distance measure for financial time series | Please forgive me if I am not understanding the question, but I believe your "systems" are "strategies" that are being backtested or implemented. I cannot directly answer your question because I am n | Simple distance measure for financial time series
Please forgive me if I am not understanding the question, but I believe your "systems" are "strategies" that are being backtested or implemented. I cannot directly answer your question because I am not certain exactly what it is, so I will try and answer the one I think you are asking.
First, let me give you some observations. You have a massive model set if you are looking at 100k x 100k. I am assuming you did some form of combinatoric solution if that is the case. Ignoring the computational issues, this is problematic at many levels.
I have done extensive research on the capital markets and the data set is quite small because of the fact that the data points are not independent of each other. They share an extensive amount of information. Indeed, because of the competitive nature of the market actors must be updating relative valuations on a constant basis. Any attempt at a strategy that ignores the underlying non-price information is highly suspect and will result in a high false discovery rate.
The second problem with this is that your best choice for model selection is Bayesian model selection, but, in this case, your strategy size exceeds your degrees of freedom, to borrow a Frequentist idea. If a corporation is thought of as an information stream, then you cannot have more strategies than your smallest number of separate companies at any one time in your set. Indeed, due to nuisance parameters, you need even less.
An important problem you face is that you cannot use squared distance. It can be shown that the integrals diverge over the probability distribution for each conceptual portfolio. You can use mean absolute deviation. It has theoretical support under Theil's regression as well.
Your final challenge will be the cost of liquidity. If your data is not real portfolios that have had the cost of liquidity taken out by a market maker, then you need to model those costs. I would use Ashok Abbott's chapter in The Valuation Handbook to model these. This will separate your portfolios as well.
I was thinking about how I would do an exploratory analysis to differentiate portfolios. With that many, speed is important and Bayesian methods are slow. I would start by regressing portfolio values against their prior values, making adjustments for market closures. I would probably regress $\log(v_{t+1}^i)$ on $\log(v_t^i)$ using ordinary least squares, adjusting for days closed. I would ignore $\alpha$ because analysis of $\alpha$ in least squares style algorithms is at best problematic.
I would then find the portfolio with the median slope and if a tie, then with the median $\alpha$ among the ties. I would use this portfolio as my standard portfolio. I would then use this portfolio as a predictor for the remaining portfolios. I would regress $\log(v_{t+1}^k)$ on $\log(v_t^i)$. Any portfolio that can be significantly predicted by this standard portfolio should be in that cluster and any that cannot be predicted by this standard portfolio should be in another cluster.
I would then take those without significant prediction and repeat the process, creating new clusters.
I would not use returns in my regression, only portfolio values. Returns are not data, they are transformations of data.
If for some reason, you choose not to take the log of the value data, then you will need to use Theil's method of regression, otherwise you will get incorrect results with ordinary least squares.
This method differs from simply looking at the final value in that the portfolios do not need to start on the same date, though your standard portfolios do need to be long lived, and it better accounts for single idiosyncratic shocks.
This is not a canonical solution.
This should allow you to create a small set of segregated portfolios that you can then analyze separately using other analysis.
Do note that I have a lot of reservations regarding this method and I am hoping it will get a lot of criticism because I didn't spend a lot of time thinking about this. Your problem is that $\frac{v_{t+1}}{v_t}-1$ is the translation of a ratio, so you have a ratio distribution. If you assume normality for the appraisals of the underlying prices, then you have a Cauchy distribution, which would have to be truncated at -100%.
This creates no mean or variance, ruling out most solutions. The log solution gives you a biased solution, but the bias is probably consistent across portfolios and it is faster than Theil's regression.
Another concern is that your cut-off point for statistical significance will determine your number of clusters and that you cannot determine your false discovery rate.
With luck, someone will tear this answer apart. | Simple distance measure for financial time series
Please forgive me if I am not understanding the question, but I believe your "systems" are "strategies" that are being backtested or implemented. I cannot directly answer your question because I am n |
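A rough base-R sketch of the exploratory procedure sketched in that answer, run on simulated portfolio value paths; the 5% significance cut-off and the simulation settings are arbitrary choices of mine.
set.seed(5)
n_days <- 500; n_port <- 20
V <- sapply(1:n_port, function(i) cumprod(c(100, 1 + rnorm(n_days - 1, 0.0003, 0.01))))
lag_slope <- function(v) coef(lm(log(v[-1]) ~ log(v[-length(v)])))[2]
slopes <- apply(V, 2, lag_slope)
std <- which.min(abs(slopes - median(slopes)))       # the "standard" (median-slope) portfolio
p_value <- function(k) {                             # can the standard predict portfolio k?
  fit <- lm(log(V[-1, k]) ~ log(V[-n_days, std]))
  summary(fit)$coefficients[2, 4]
}
others <- setdiff(1:n_port, std)
pvals <- sapply(others, p_value)
cluster1 <- c(std, others[pvals < 0.05])             # portfolios predicted by the standard
leftover <- setdiff(1:n_port, cluster1)              # repeat the procedure on these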
50,761 | Intervention analysis in time-series regression with seasonal ARIMA errors | The differencing implied by the denominator of your error term must be applied to $Y_t$, $S_t$ and $P_t$. That is, your model is equivalent to
$$
\nabla\nabla_{12}Y_t=\frac{\omega \nabla\nabla_{12}S_t}{1-\delta B}+\frac{\omega \nabla\nabla_{12}P_t}{1-\delta B}+\frac{\Theta(B)}{\Phi(B)} \eta_t,
$$
where $\nabla\nabla_{12} = (1-B)(1-B^{12})$. This is a transfer function model with ARMA errors which is how it would actually be estimated.
If you intended that the pulse and step apply to the differenced $Y$ series, then you need to doubly integrate $S$ and $P$ in the model (as suggested by @IrishStat). That is
$$
Y_t=\frac{\omega S_t}{\nabla\nabla_{12}(1-\delta B)}+\frac{\omega P_t}{\nabla\nabla_{12}(1-\delta B)}+\frac{\Theta(B)}{\nabla\nabla_{12}\Phi(B)} \eta_t.
$$ | Intervention analysis in time-series regression with seasonal ARIMA errors | The differencing implied by the denominator of your error term must be applied to $Y_t$, $S_t$ and $P_t$. That is, your model is equivalent to
$$
\nabla\nabla_{12}Y_t=\frac{\omega \nabla\nabla_{12}S_ | Intervention analysis in time-series regression with seasonal ARIMA errors
The differencing implied by the denominator of your error term must be applied to $Y_t$, $S_t$ and $P_t$. That is, your model is equivalent to
$$
\nabla\nabla_{12}Y_t=\frac{\omega \nabla\nabla_{12}S_t}{1-\delta B}+\frac{\omega \nabla\nabla_{12}P_t}{1-\delta B}+\frac{\Theta(B)}{\Phi(B)} \eta_t,
$$
where $\nabla\nabla_{12} = (1-B)(1-B^{12})$. This is a transfer function model with ARMA errors which is how it would actually be estimated.
If you intended that the pulse and step apply to the differenced $Y$ series, then you need to doubly integrate $S$ and $P$ in the model (as suggested by @IrishStat). That is
$$
Y_t=\frac{\omega S_t}{\nabla\nabla_{12}(1-\delta B)}+\frac{\omega P_t}{\nabla\nabla_{12}(1-\delta B)}+\frac{\Theta(B)}{\nabla\nabla_{12}\Phi(B)} \eta_t.
$$ | Intervention analysis in time-series regression with seasonal ARIMA errors
The differencing implied by the denominator of your error term must be applied to $Y_t$, $S_t$ and $P_t$. That is, your model is equivalent to
$$
\nabla\nabla_{12}Y_t=\frac{\omega \nabla\nabla_{12}S_ |
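A hedged R sketch of the simpler special case with no denominator dynamics (delta = 0), i.e. a regression on the step and pulse with seasonal ARIMA(1,1,1)(0,1,1)[12] errors via the forecast package; the series and the intervention date are placeholders, and the full delta != 0 transfer function needs software that supports rational lag structures.
library(forecast)
y <- ts(rnorm(120, 100, 5), frequency = 12)            # placeholder monthly series
S <- as.numeric(seq_along(y) >= 61)                    # step starting at observation 61
P <- as.numeric(seq_along(y) == 61)                    # pulse at observation 61
fit <- Arima(y, order = c(1, 1, 1), seasonal = c(0, 1, 1),
             xreg = cbind(step = S, pulse = P))        # the differencing applies to y and the regressors alike
summary(fit)                                           # xreg coefficients estimate the step and pulse effects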
50,762 | Intervention analysis in time-series regression with seasonal ARIMA errors | If you wish to estimate the model that you specified you would specify regular and seasonal differencing on Y and provide the two doubly integrated intervention series. It appears you are doing intervention modelling and not intervention detection prior to intervention modelling. The differencing operator in the noise component will essentially return your two indicators to the required status. | Intervention analysis in time-series regression with seasonal ARIMA errors | If you wish to estimate the model that you specified you would specify regular and seasonal differencing on Y and provide the two doubly integrated intervention series. It appears you are doing interv | Intervention analysis in time-series regression with seasonal ARIMA errors
If you wish to estimate the model that you specified you would specify regular and seasonal differencing on Y and provide the two doubly integrated intervention series. It appears you are doing intervention modelling and not intervention detection prior to intervention modelling. The differencing operator in the noise component will essentially return your two indicators to the required status. | Intervention analysis in time-series regression with seasonal ARIMA errors
If you wish to estimate the model that you specified you would specify regular and seasonal differencing on Y and provide the two doubly integrated intervention series. It appears you are doing interv |
50,763 | Intervention analysis in time-series regression with seasonal ARIMA errors | I don't know enough to totally parse your question, but I believe the usual practice is to first do a regression with indicator variables $S_t$ and $P_t$, then do your ARIMA on the residuals. | Intervention analysis in time-series regression with seasonal ARIMA errors | I don't know enough to totally parse your question, but I believe the usual practice is to first do a regression with indicator variables $S_t$ and $P_t$, then do your ARIMA on the residuals. | Intervention analysis in time-series regression with seasonal ARIMA errors
I don't know enough to totally parse your question, but I believe the usual practice is to first do a regression with indicator variables $S_t$ and $P_t$, then do your ARIMA on the residuals. | Intervention analysis in time-series regression with seasonal ARIMA errors
I don't know enough to totally parse your question, but I believe the usual practice is to first do a regression with indicator variables $S_t$ and $P_t$, then do your ARIMA on the residuals. |
50,764 | How to test unit root in a timeseries with unknown structural change? | Zivot Andrews tests the alternative of a one time structural break against a null of a unit root process. Variations in the ZA paper test for a change in the intercept, in the trend, or in the intercept and the trend.
ZA endogenously selects the break point based on the point in time that gives the most weight to the alternative (ie that is most against the unit root null).
It is the prominent (only?) test in the literature for an endogenously determined structural break against a unit root null.
The approach has been extended to 2 breaks (both in the intercept, both in the trend, or one of each): Lumsdaine, R. L., & Papell, D. H. (1997). Multiple trend breaks and the unit-root hypothesis. Review of Economics and Statistics, 79(2), 212-218.
Last time I looked at the R programs available addressing unit root testing they did not address serial correlation in the way that the original papers did.
Your addendum says you want to check if the series is constant without changes. That is different from saying you want to test for a structural break against a unit root null, as a unit root process need not have an expected value that is constant. There are other tests for structural breaks that do not have a unit root process as the null hypothesis if that is what you are really after. | How to test unit root in a timeseries with unknown structural change? | Zivot Andrews tests the alternative of a one time structural break against a null of a unit root process. Variations in the ZA paper test for a change in the intercept, in the trend, or in the interce | How to test unit root in a timeseries with unknown structural change?
Zivot Andrews tests the alternative of a one time structural break against a null of a unit root process. Variations in the ZA paper test for a change in the intercept, in the trend, or in the intercept and the trend.
ZA endogenously selects the break point based on the point in time that gives the most weight to the alternative (ie that is most against the unit root null).
It is the prominent (only?) test in the literature for an endogenously determined structural break against a unit root null.
The approach has been extended to 2 breaks (both in the intercept, both in the trend, or one of each): Lumsdaine, R. L., & Papell, D. H. (1997). Multiple trend breaks and the unit-root hypothesis. Review of Economics and Statistics, 79(2), 212-218.
Last time I looked at the R programs available addressing unit root testing they did not address serial correlation in the way that the original papers did.
Your addendum says you want to check if the series is constant without changes. That is different from saying you want to test for a structural break against a unit root null, as a unit root process need not have an expected value that is constant. There are other tests for structural breaks that do not have a unit root process as the null hypothesis if that is what you are really after. | How to test unit root in a timeseries with unknown structural change?
Zivot Andrews tests the alternative of a one time structural break against a null of a unit root process. Variations in the ZA paper test for a change in the intercept, in the trend, or in the interce |
50,765 | Weighted regression for categorical variables | You should not define weights by hand. Use the gls function from nlme (see help, you probably want option weights = varIdent(form = ~ 1 | group) ) to estimate the weights, and then use Pearson residuals (which divide the raw residual by the fitted standard deviation) to check the model. | Weighted regression for categorical variables | You should not define weights by hand. Use the gls function from nlme (see help, you probably want option weights = varIdent(form = ~ 1 | group) ) to estimate the weights, and then use Pearson residua | Weighted regression for categorical variables
You should not define weights by hand. Use the gls function from nlme (see help, you probably want option weights = varIdent(form = ~ 1 | group) ) to estimate the weights, and then use Pearson residuals (which divide the raw residual by the expected / fitted variance) to check the model. | Weighted regression for categorical variables
You should not define weights by hand. Use the gls function from nlme (see help, you probably want option weights = varIdent(form = ~ 1 | group) ) to estimate the weights, and then use Pearson residua |
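A minimal sketch of that workflow (the data frame and variable names are hypothetical):
library(nlme)
fit <- gls(y ~ x, data = dat, weights = varIdent(form = ~ 1 | group))
plot(fitted(fit), residuals(fit, type = "pearson"))   # residuals scaled by the fitted standard deviation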
50,766 | Using the sde package in R to simulate a SV model with leverage | Hull-White/Vasicek Model: dX(t) = 3*(2-x)*dt+ 2*dw(t)
> library(Sim.DiffProc)                        # numerical tools for simulating SDEs
> drift <- expression( (3*(2-x)) )             # drift theta*(mu - x) with theta = 3, mu = 2
> diffusion <- expression( (2) )               # constant diffusion coefficient sigma = 2
> snssde(N=1000,M=1,T=1,t0=0,x0=10,Dt=0.001,drift,diffusion,Output=FALSE)    # one simulated path
Multiple trajectories of the OU process by Euler Scheme
> snssde(N=1000,M=50,T=1,t0=0,x0=10,Dt=0.001,drift,diffusion,Output=FALSE)   # M = 50 paths
You can also use the package Sim.DiffProcGUI (Graphical User Interface for Simulation of Diffusion Processes). | Using the sde package in R to simulate a SV model with leverage | Hull-White/Vasicek Model: dX(t) = 3*(2-x)*dt+ 2*dw(t)
> library(Sim.DiffProc)
> drift <- expression( (3*(2-x)) )
> diffusion <- expression( (2) )
> snssde(N=1000,M=1,T=1,t0=0,x0=10,Dt=0.001,drift,diff | Using the sde package in R to simulate a SV model with leverage
Hull-White/Vasicek Model: dX(t) = 3*(2-x)*dt+ 2*dw(t)
> library(Sim.DiffProc)
> drift <- expression( (3*(2-x)) )
> diffusion <- expression( (2) )
> snssde(N=1000,M=1,T=1,t0=0,x0=10,Dt=0.001,drift,diffusion,Output=FALSE)
Multiple trajectories of the OU process by Euler Scheme
> snssde(N=1000,M=50,T=1,t0=0,x0=10,Dt=0.001,drift,diffusion,Output=FALSE)
You can also use the package Sim.DiffProcGUI (Graphical User Interface for Simulation of Diffusion Processes). | Using the sde package in R to simulate a SV model with leverage
Hull-White/Vasicek Model: dX(t) = 3*(2-x)*dt+ 2*dw(t)
> library(Sim.DiffProc)
> drift <- expression( (3*(2-x)) )
> diffusion <- expression( (2) )
> snssde(N=1000,M=1,T=1,t0=0,x0=10,Dt=0.001,drift,diff |
50,767 | Weighted spatial clustering | For anybody who wants to know the answer, this is what I finally did:
I implemented a normal K-Means algorithm, but with some modifications:
The calculation of the centroid is site = Sum(p * weight^alpha) / Sum(weight^alpha) for all the points that belong to that site.
The calculation of the squared distance between point p and site s is squareDistance(p,s)*weight^alpha where alpha is some constant > 0.
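For illustration, a compact R sketch of these two modifications (the function, the empty-cluster guard and all names are my own; X is an n x 2 coordinate matrix, w a vector of positive weights):
weighted_kmeans <- function(X, w, k, alpha = 1, iters = 25) {
  wa <- w^alpha
  centers <- X[sample(nrow(X), k), , drop = FALSE]
  for (it in seq_len(iters)) {
    d2 <- outer(rowSums(X^2), rowSums(centers^2), "+") - 2 * X %*% t(centers)  # squared distances
    cl <- max.col(-(d2 * wa))                       # assign by weighted squared distance
    for (j in seq_len(k)) {
      idx <- cl == j
      if (any(idx))                                 # guard against empty clusters
        centers[j, ] <- colSums(X[idx, , drop = FALSE] * wa[idx]) / sum(wa[idx])
    }
  }
  list(centers = centers, cluster = cl)
}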
The only problem is that my implementation is very slow :( | Weighted spatial clustering | For anybody who wants to know the answer, this is what I finally did:
I implemented a normal K-Means algorithm, but with some modifications:
The calculation of the centroid is site = Sum(p * weight^a | Weighted spatial clustering
For anybody who wants to know the answer, this is what I finally did:
I implemented a normal K-Means algorithm, but with some modifications:
The calculation of the centroid is site = Sum(p * weight^alpha) / Sum(weight^alpha) for all the points that belong to that site.
The calculation of the squared distance between point p and site s is squareDistance(p,s)*weight^alpha where alpha is some constant > 0.
The only problem is that my implementation is very slow :( | Weighted spatial clustering
For anybody who wants to know the answer, this is what I finally did:
I implemented a normal K-Means algorithm, but with some modifications:
The calculation of the centroid is site = Sum(p * weight^a |
50,768 | Time series modeling in R on a weekly basis over multiple years featuring different number of weeks in each year | Packages zoo and xts handle arbitrary time indices. Pick a day of the week that will reflect the discrepancy (late enough to already be in the first week of 2009 yet early enough to be in the last week of 2009) and add it to your date. Functions zoo() or xts() will then accept the date argument after as.Date() is applied to the index with a proper format argument. | Time series modeling in R on a weekly basis over multiple years featuring different number of weeks | Packages zoo and xts handle arbitrary time indices. Pick a day of the week that will reflect the discrepancy (late enough to already be in the first week of 2009 yet early enough to be in the last wee | Time series modeling in R on a weekly basis over multiple years featuring different number of weeks in each year
Packages zoo and xts handle arbitrary time indices. Pick a day of the week that will reflect the discrepancy (late enough to already be in the first week of 2009 yet early enough to be in the last week of 2009) and add it to your date. Functions zoo() or xts() will then accept the date argument after as.Date() is applied to the index with a proper format argument. | Time series modeling in R on a weekly basis over multiple years featuring different number of weeks
Packages zoo and xts handle arbitrary time indices. Pick a day of the week that will reflect the discrepancy (late enough to already be in the first week of 2009 yet early enough to be in the last wee |
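A small sketch of that idea (d is a hypothetical data frame in which a weekday has already been attached to each year-week to give a full date string):
library(zoo)
z <- zoo(d$value, order.by = as.Date(d$week_date, format = "%Y-%m-%d"))
head(z)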
50,769 | Time series modeling in R on a weekly basis over multiple years featuring different number of weeks in each year | I struggled with this for a while with a problem I was working on, and in the end decided that it was better to aggregate into monthly data. It (mostly) solves the number-of-weeks problem and it helped smooth out the noise so the results were better anyhow.
An added benefit is that people have a lot of context for months, but not much for weeks of the year, so the data is more meaningful. (E.g., if I say "April", you have ideas about the weather, holidays, etc., but if I say "week 14", who knows what to associate with that week.)
I say "mostly solves" because each month can have a different number of business days in it, and that does have to be compensated for. (Maybe something as simple as dividing the monthly total by the number of business days and work with a total per day number.)
Of course, you may need weekly data... | Time series modeling in R on a weekly basis over multiple years featuring different number of weeks | I struggled with this for a while with a problem I was working on, and in the end decided that it was better to aggregate into monthly data. It (mostly) solves the number-of-weeks problem and it helpe | Time series modeling in R on a weekly basis over multiple years featuring different number of weeks in each year
I struggled with this for a while with a problem I was working on, and in the end decided that it was better to aggregate into monthly data. It (mostly) solves the number-of-weeks problem and it helped smooth out the noise so the results were better anyhow.
An added benefit is that people have a lot of context for months, but not much for weeks of the year, so the data is more meaningful. (I.e. if I say, "April", you have ideas about the weather, holidays, etc, but if I say "week 14", who knows what to associate with that week.)
I say "mostly solves" because each month can have a different number of business days in it, and that does have to be compensated for. (Maybe something as simple as dividing the monthly total by the number of business days and work with a total per day number.)
Of course, you may need weekly data... | Time series modeling in R on a weekly basis over multiple years featuring different number of weeks
I struggled with this for a while with a problem I was working on, and in the end decided that it was better to aggregate into monthly data. It (mostly) solves the number-of-weeks problem and it helpe |
50,770 | Problem in evaluating naive Bayes | Well: naive Bayes is called naive for a reason: the assumed conditional independence is often doubtful, even though it turns out to work well in a lot of practical cases.
Besides that: you have "chosen" your conditional probabilities so that it turns out this way. There is no (a priori) reason why P(tennis|News) and P(tennis|Sports) should sum to 1, but in this case this leads to the counterintuitive results. | Problem in evaluating naive Bayes | Well: naive Bayes is called naive for a reason: the assumed conditional independence is often doubtful, even though it turns out to work well in a lot of practical cases.
Besides that: you have "chose | Problem in evaluating naive Bayes
Well: naive Bayes is called naive for a reason: the assumed conditional independence is often doubtful, even though it turns out to work well in a lot of practical cases.
Besides that: you have "chosen" your conditional probabilities so that it turns out this way. There is no (a priori) reason why P(tennis|News) and P(tennis|Sports) should sum to 1, but in this case this leads to the counterintuitive results. | Problem in evaluating naive Bayes
Well: naive Bayes is called naive for a reason: the assumed conditional independence is often doubtful, even though it turns out to work well in a lot of practical cases.
Besides that: you have "chose |
50,771 | Problem in evaluating naive Bayes | A naive Bayes classifier, as the name suggests, is a simple application of Bayes' Theorem. Basically, it calculates the probabilities of quantities of interest (generally unobserved, called parameters or latent classes) based on the observed data. In your case the observed data are: news, football, and tennis. The quantities of interest for which you want to calculate the probabilities are: News and Sports. Seems like you are interested in calculating: $P(\text{News}|\text{news}, \text{football}, \text{tennis}), P(\text{Sports}|\text{news}, \text{football}, \text{tennis})$.
Now we will use Bayes theorem to get:
$$
P(\text{News}|\text{news}, \text{football}, \text{tennis}) = \frac{P(\text{news}, \text{football}, \text{tennis}|\text{News})P(\text{News})}{P(\text{news}, \text{football}, \text{tennis})}
$$
The first term in the numerator is calculated using the fact that, given the latent class (here, News), the observed data (news, football, and tennis) are conditionally independent (this may be a questionable assumption, but the answer depends on subject matter). You can therefore use the multiplication rule for probabilities of independent events.
$$
P(\text{news}, \text{football}, \text{tennis}|\text{News})=P(\text{news}|\text{News})P( \text{football}|\text{News})P(\text{tennis}|\text{News})
$$
Proceeding similarly for Sports, we get:
$$
P(\text{Sports}|\text{news}, \text{football}, \text{tennis}) = \frac{P(\text{news}, \text{football}, \text{tennis}|\text{Sports})P(\text{Sports})}{P(\text{news}, \text{football}, \text{tennis})}
$$
$$
P(\text{news}, \text{football}, \text{tennis}|\text{Sports})=P(\text{news}|\text{Sports})P( \text{football}|\text{Sports})P(\text{tennis}|\text{Sports})
$$
The denominator for both cases can be calculated by using the Law of total probability.
$$
P(\text{news}, \text{football}, \text{tennis}) =P(\text{news}, \text{football}, \text{tennis}|\text{News})P(\text{News})+ P(\text{news}, \text{football}, \text{tennis}|\text{Sports})P(\text{Sports})
$$
We are now left with only one probability in each case, that is,$P(\text{News})$ and $P(\text{Sports})$, respectively. If we know these, every probability until now can be calculated. This can be determined based on prior knowledge, or in your case it might be already provided to you.
Plugging in all the probabilities gives you the probabilities of interest.
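To make the plugging-in concrete, a small R sketch (the numbers are made-up illustrations, not taken from the question):
prior <- c(News = 0.5, Sports = 0.5)
lik   <- rbind(News   = c(news = 0.7, football = 0.1, tennis = 0.2),
               Sports = c(news = 0.2, football = 0.6, tennis = 0.5))
joint <- apply(lik, 1, prod) * prior   # P(news, football, tennis | class) * P(class)
post  <- joint / sum(joint)            # denominator from the law of total probability
post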
A high probability value for a specific class implies that the observed document belongs to that class (News or Sports). But how you decide "how high is high" depends, again, on subject matter and a lot of other issues. | Problem in evaluating naive Bayes | A naive Bayes classifier, as the names suggests, is a simple application of Bayes' Theorem. Basically, it calculates the probabilities of quantities of interest (generally unobserved, called parameter | Problem in evaluating naive Bayes
A naive Bayes classifier, as the names suggests, is a simple application of Bayes' Theorem. Basically, it calculates the probabilities of quantities of interest (generally unobserved, called parameters or latent classes) based on the observed data. In your case the observed data are: news, football, and tennis. The quantities of interest for which you want to calculate the probabilities are: News and Sports. Seems like you are interested in calculating: $P(\text{News}|\text{news}, \text{football}, \text{tennis}), P(\text{News}|\text{news}, \text{football}, \text{tennis})$.
Now we will use Bayes theorem to get:
$$
P(\text{News}|\text{news}, \text{football}, \text{tennis}) = \frac{P(\text{news}, \text{football}, \text{tennis}|\text{News})P(\text{News})}{P(\text{news}, \text{football}, \text{tennis})}
$$
The first term in the numerator is calculated using the fact that given you observe the latent class, that is, News, the observed data, that is news, football, and tennis probabilities are independent (this may be a questionable assumption, but the answer depends on subject matter). You can use the law for calculating the probabilties of independent event.
$$
P(\text{news}, \text{football}, \text{tennis}|\text{News})=P(\text{news}|\text{News})P( \text{football}|\text{News})P(\text{tennis}|\text{News})
$$
Proceeding similarly for Sports, we get:
$$
P(\text{Sports}|\text{news}, \text{football}, \text{tennis}) = \frac{P(\text{news}, \text{football}, \text{tennis}|\text{Sports})P(\text{Sports})}{P(\text{news}, \text{football}, \text{tennis})}
$$
$$
P(\text{news}, \text{football}, \text{tennis}|\text{Sports})=P(\text{news}|\text{Sports})P( \text{football}|\text{Sports})P(\text{tennis}|\text{Sports})
$$
The denominator for both cases can be calculated by using the Law of total probability.
$$
P(\text{news}, \text{football}, \text{tennis}) =P(\text{news}, \text{football}, \text{tennis}|\text{News})P(\text{News})+ P(\text{news}, \text{football}, \text{tennis}|\text{Sports})P(\text{Sports})
$$
We are now left with only one probability in each case, that is,$P(\text{News})$ and $P(\text{Sports})$, respectively. If we know these, every probability until now can be calculated. This can be determined based on prior knowledge, or in your case it might be already provided to you.
Plugging in all the probabilities gives you the probabilities of interest.
A high probability value for a specific class implies that the observed document belongs to that class (News or Sports). But how do you decided "how high is high", depends, again, on subject matter and a lot of other issues. | Problem in evaluating naive Bayes
A naive Bayes classifier, as the names suggests, is a simple application of Bayes' Theorem. Basically, it calculates the probabilities of quantities of interest (generally unobserved, called parameter |
50,772 | Is there a generic term for measures of correctness like "precision" and "recall"? | I don't know if there is a generally accepted generic term, but I think you might say "classifier performance metrics/measures" (like in the R package ROCR), or "measures of predictive/classification performance".
The widely cited paper by Fawcett, for example, talks about "common performance metrics" and lists true positive rate (tpr), fpr, sensitivity, speficitiy, precision, and recall. | Is there a generic term for measures of correctness like "precision" and "recall"? | I don't know if there is a generally accepted generic term, but I think you might say "classifier performance metrics/measures" (like in the R package ROCR), or "measures of predictive/classification | Is there a generic term for measures of correctness like "precision" and "recall"?
I don't know if there is a generally accepted generic term, but I think you might say "classifier performance metrics/measures" (like in the R package ROCR), or "measures of predictive/classification performance".
The widely cited paper by Fawcett, for example, talks about "common performance metrics" and lists true positive rate (tpr), fpr, sensitivity, speficitiy, precision, and recall. | Is there a generic term for measures of correctness like "precision" and "recall"?
I don't know if there is a generally accepted generic term, but I think you might say "classifier performance metrics/measures" (like in the R package ROCR), or "measures of predictive/classification |
50,773 | Is there a generic term for measures of correctness like "precision" and "recall"? | I would use precision and recall but explain it with a dart board analogy if necessary. | Is there a generic term for measures of correctness like "precision" and "recall"? | I would use precision and recall but explain it with a dart board analogy if necessary. | Is there a generic term for measures of correctness like "precision" and "recall"?
I would use precision and recall but explain it with a dart board analogy if necessary. | Is there a generic term for measures of correctness like "precision" and "recall"?
I would use precision and recall but explain it with a dart board analogy if necessary. |
50,774 | Estimating correlated parameters with multi-level model | Have you tried to use Bugs or Jags, calling one of them from R? The model you seem to be estimating is a simple varying slope model, with predictors at the second level.
I'd rewrite your model as:
Let $i = 1, \dots, n$ index students and $k = 1, \dots, K$ index classes. Assuming your data is in student-class form (i.e. repeated measures), your model is:
$y_{i} \sim N(\beta_{[k]}\,x_{1,i} + \delta_{[k]}\,x_{2,i} + \dots,\ \sigma^{2})$
$\beta_{[k]} \sim N(\gamma_{1}\,Z_{1,k},\ \sigma_{\beta1}^{2})$
...
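In JAGS/BUGS syntax, a sketch of this kind of model (the data names, priors and the second slope are my own assumptions):
model_string <- "
model {
  for (i in 1:n) {
    y[i] ~ dnorm(beta[class[i]] * x1[i] + delta[class[i]] * x2[i], tau.y)
  }
  for (k in 1:K) {
    beta[k]  ~ dnorm(gamma1 * Z1[k], tau.beta)
    delta[k] ~ dnorm(gamma2 * Z2[k], tau.delta)
  }
  gamma1 ~ dnorm(0, 0.0001)
  gamma2 ~ dnorm(0, 0.0001)
  tau.y ~ dgamma(0.001, 0.001)
  tau.beta ~ dgamma(0.001, 0.001)
  tau.delta ~ dgamma(0.001, 0.001)
}"
library(R2jags)
mf <- tempfile(fileext = ".bug"); writeLines(model_string, mf)
fit <- jags(data = list(y = y, x1 = x1, x2 = x2, Z1 = Z1, Z2 = Z2, class = class_id,
                        n = length(y), K = max(class_id)),
            inits = NULL, parameters.to.save = c("beta", "delta", "gamma1", "gamma2"),
            model.file = mf, n.chains = 3, n.iter = 2000)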
This model is quite easy to estimate using BUGS or JAGS, and you can call them from R with the jags() function (package R2jags) or the bugs() function (package R2WinBUGS); here is a simple example of fitting a multilevel model (with WinBUGS) from R. | Estimating correlated parameters with multi-level model | Have you tried to use Bugs or Jags, calling one of them from R? The model you seem to be estimating is a simple varying slope model, with predictors at the second level.
I'd rewrite your model as:
Be | Estimating correlated parameters with multi-level model
Have you tried to use Bugs or Jags, calling one of them from R? The model you seem to be estimating is a simple varying slope model, with predictors at the second level.
I'd rewrite your model as:
Be $i = 1, ...n$ students and $k = 1, ... K$ classes. Assuming your data is in the form student-class (i.e. repeated measures), then your model is:
$y_{i} \sim N(\beta_{[k]}*x_{1,i} + \delta_{[k]}*x_{2,i} +..., \sigma^{2})$
$ \beta_{[k]} \sim N(\gamma_{1}*Z_{1,k}, \sigma_{\beta1}^{2})$
...
This model is quite easy to estimate using Bugs or jags and you can call them with function rjags or bugs. They're in package R2jags and here is a simple example o fitting a multilevel model (with winBugs) on R. | Estimating correlated parameters with multi-level model
Have you tried to use Bugs or Jags, calling one of them from R? The model you seem to be estimating is a simple varying slope model, with predictors at the second level.
I'd rewrite your model as:
Be |
50,775 | Estimating correlated parameters with multi-level model | How about just writing out the likelihood function and maximizing? | Estimating correlated parameters with multi-level model | How about just writing out the likelihood function and maximizing? | Estimating correlated parameters with multi-level model
How about just writing out the likelihood function and maximizing? | Estimating correlated parameters with multi-level model
How about just writing out the likelihood function and maximizing? |
50,776 | Estimating correlated parameters with multi-level model | How is this advantageous over a normal varying coefficient model such as:
fit<-lmer(score~1+vector of class_attributes+vector of student attributes
+(1+vector of class attributes+vector of student attributes)
+(1+vector of student attributes|class)
+(1+vector of class attributes|student))
?
In this example, there is an overall intercept and overall attribute effects, but each class can have a different coefficient, which can be viewed by typing ranef(fit).
Section 3.2 of the Bates book on lme4 seems exactly analogous to your situation.
https://r-forge.r-project.org/scm/viewvc.php/*checkout*/www/lMMwR/lrgprt.pdf?revision=656&root=lme4&pathrev=656
Update (I updated the line of code above):
I also ran these lines to try to simulate your situation, but without any student attributes
library(lme4)
n<-100 #class size
pool<-200 #student pool size
class=c(rep(1,n), rep(2,n), rep(3,n))
min_in_class=c(rep(45,n), rep(60,n), rep(90,n))
min_hw=c(rep(90,n), rep(60,n), rep(60,n))
student_id=c(sample(1:pool,n), sample(1:pool,n), sample(1:pool,n))
performance=55+10*class +.1*min_in_class +.2*min_hw+ -.001*min_in_class*min_hw +rnorm(3*n, 0,10)
df<-data.frame(class=as.factor(class), min_in_class, min_hw, student_id=as.factor(student_id), performance)
library(reshape2)
melted<-melt(df, id.vars=c('student_id', 'class'))
casted<-dcast(melted, student_id~class+variable)
casted$score<-rowMeans(casted[,c(4,7,10)],na.rm=T)+rnorm(nrow(casted),0,5)
df$score<-casted$score[match(df$student_id, casted$student_id)]
I thought what you were trying to do was this:
fit<-lmer(score~1+min_in_class+min_hw+(1|class)+(1+min_in_class+min_hw|student_id), data=df)
I ran it with various class sizes and pools and didn't get the results I was expecting; but perhaps with more than a few classes, things will look better. | Estimating correlated parameters with multi-level model | How is this advantageous over a normal varying coefficient model such as:
fit<-lmer(score~1+vector of class_attributes+vector of student attributes
+(1+vector of class attributes+vector of student att | Estimating correlated parameters with multi-level model
How is this advantageous over a normal varying coefficient model such as:
fit<-lmer(score~1+vector of class_attributes+vector of student attributes
+(1+vector of class attributes+vector of student attributes)
+(1+vector of student attributes|class)
+(1+vector of class attributes|student))
?
In this example, there is an overall intercept and attribute effect, but each class has a different coefficient possible which can be viewed by typing ranef(fit)
Section 3.2 of the Bates book on lme4 seems exactly analogous to your situation.
https://r-forge.r-project.org/scm/viewvc.php/*checkout*/www/lMMwR/lrgprt.pdf?revision=656&root=lme4&pathrev=656
Update (I updated the line of code above):
I also ran these lines to try to simulate your situation, but without any student attributes
library(lme4)
n<-100 #class size
pool<-200 #student pool size
class=c(rep(1,n), rep(2,n), rep(3,n))
min_in_class=c(rep(45,n), rep(60,n), rep(90,n))
min_hw=c(rep(90,n), rep(60,n), rep(60,n))
student_id=c(sample(1:pool,n), sample(1:pool,n), sample(1:pool,n))
performance=55+10*class +.1*min_in_class +.2*min_hw+ -.001*min_in_class*min_hw +rnorm(3*n, 0,10)
df<-data.frame(class=as.factor(class), min_in_class, min_hw, student_id=as.factor(student_id), performance)
library(reshape2)
melted<-melt(df, id.vars=c('student_id', 'class'))
casted<-dcast(melted, student_id~class+variable)
casted$score<-rowMeans(casted[,c(4,7,10)],na.rm=T)+rnorm(nrow(casted),0,5)
df$score<-casted$score[match(df$student_id, casted$student_id)]
I thought what you needed trying to do was this:
fit<-lmer(score~1+min_in_class+min_hw+(1|class)+(1+min_in_class+min_hw|student_id), data=df)
I ran it with various class sizes and pools and didn't get the results I was expecting; but perhaps with more than a few classes, things will look better. | Estimating correlated parameters with multi-level model
How is this advantageous over a normal varying coefficient model such as:
fit<-lmer(score~1+vector of class_attributes+vector of student attributes
+(1+vector of class attributes+vector of student att |
50,777 | Which statistical test should I use for my experiment on aggressive interactions in killifish? | Sophie and I discussed this earlier (she is a student at my university) and I am still not satisfied with any of my suggestions so far. Here are two possibilities for the winner/loser data (assuming you always have a winner).
1) Compete each yellow against each red (64 competitions) and record which colour won. Test whether the proportion of fights won by yellow males is significantly different from that you'd expect if colour has no effect on competitive ability (i.e. significantly different from a binomial distribution with p=q=0.5). This is very simple and ignores weight.
2) Compete each fish against every other fish, regardless of colour (120 competitions). Construct a dominance hierarchy (see, for example, Bang et al. 2009 Anim. Behav. 79:631). Test either a) whether there is a significant difference in median dominance rank between the two colour morphs (e.g. Mann-Whitney test) or b) whether red and yellow are randomly dispersed through the hierarchy using a randomisation test. Better still, see if you can find a bespoke test for effects of phenotypic variables on dominance in the literature. | Which statistical test should I use for my experiment on aggressive interactions in killifish? | Sophie and I discussed this earlier (she is a student at my university) and I am still not satisfied with any of my suggestions so far. Here are two possibilities for the winner/loser data (assuming | Which statistical test should I use for my experiment on aggressive interactions in killifish?
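For reference, both checks above are essentially one-liners in R (the count and the rank vectors are hypothetical placeholders):
binom.test(x = 40, n = 64, p = 0.5)   # option 1: e.g. yellow won 40 of the 64 fights
wilcox.test(rank_yellow, rank_red)    # option 2a: Mann-Whitney test on dominance ranks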
Sophie and I discussed this earlier (she is a student at my university) and I am still not satisfied with any of my suggestions so far. Here are two possibilities for the winner/loser data (assuming you always have a winner).
1) Compete each yellow against each red (64 competitions) and record which colour won. Test whether the proportion of fights won by yellow males is significantly different from that you'd expect if colour has no effect on competitive ability (i.e. significantly different from a binomial distribution with p=q=0.5). This is very simple and ignores weight.
2) Compete each fish against every other fish, regardless of colour (120 competitions). Construct a dominance hierarchy (see, for example, Bang et al. 2009 Anim. Behav. 79:631). Test either a) whether there is a significant difference in median dominance rank between the two colour morphs (e.g. Mann-Whitney test) or b) whether red and yellow are randomly dispersed through the hierarchy using a randomisation test. Better still, see if you can find a bespoke test for effects of phenotypic variables on dominance in the literature. | Which statistical test should I use for my experiment on aggressive interactions in killifish?
Sophie and I discussed this earlier (she is a student at my university) and I am still not satisfied with any of my suggestions so far. Here are two possibilities for the winner/loser data (assuming |
50,778 | Which statistical test should I use for my experiment on aggressive interactions in killifish? | You might consider doing a round robin tournament and then estimating the effect of color controlling for weight within a hierarchical paired comparison model. With 120 comparisons, you still will not have much power, but you'll have more than the non-parametric techniques. You can get a little bit more power by having them interact more often, but not much more since you are just improving your estimate of the difference between the same fish.
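To make that concrete, a non-hierarchical sketch of the paired-comparison regression (the data frame 'bouts' and its columns are hypothetical; one row per fight, colour coded 1 = yellow, 0 = red):
bouts$col_diff <- bouts$colour_A - bouts$colour_B
bouts$wt_diff  <- bouts$weight_A - bouts$weight_B
fit <- glm(A_won ~ 0 + col_diff + wt_diff, family = binomial, data = bouts)
summary(fit)   # the col_diff coefficient is the colour effect on winning, controlling for weight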
See Ulf Böckenholt's "Hierarchical Modeling of Paired Comparison Data" and H.A. David's The Method of Paired Comparisons for a discussion of different types of designs.
Also, I might worry that your experiment could be changing the behavior you are trying to measure, particularly if the fish are unused to interacting. It might be sound to have more than 120 interactions to evaluate whether there is a habituation or learning effect. | Which statistical test should I use for my experiment on aggressive interactions in killifish? | You might consider doing a round robin tournament and then estimating the effect of color controlling for weight within a hierarchical paired comparison model. With 120 comparisons, you still will not | Which statistical test should I use for my experiment on aggressive interactions in killifish?
You might consider doing a round robin tournament and then estimating the effect of color controlling for weight within a hierarchical paired comparison model. With 120 comparisons, you still will not have much power, but you'll have more than the non-parametric techniques. You can get a little bit more power by having them interact more often, but not much more since you are just improving your estimate of the difference between the same fish.
See Ulf Böckenholt's "Hierarchical Modeling of Paired Comparison Data" also H.A. David's The Method of Paired Comparisons for discussion of different types of designs.
Also, I might worry that your experiment could be changing the behavior you are trying to measure, particularly if the fish are unused to interacting. It might be sound to have more than 120 interactions to evaluate whether there is a habituation or learning effect. | Which statistical test should I use for my experiment on aggressive interactions in killifish?
You might consider doing a round robin tournament and then estimating the effect of color controlling for weight within a hierarchical paired comparison model. With 120 comparisons, you still will not |
50,779 | GLM for proportional data | Logistic regression, like this one, assumes a binomial distribution or, as I prefer, a Bernoulli distribution per event. I know of no case or reason where this should not be safely assumed by itself (either it happens or it doesn't, and in a population you can always assign a probability to this). There is no reason the upper limit on your number of events per nest should influence this.
That distribution is assumed conditionally on the year, with the log-odds assumed linear in year. This could be faulty, but that has nothing to do with the possible number of events, just the fact that any model can be wrong.
You can (with predict(type="response")) get the probability of an egg hatching, conditional on the year from this type of model (technically that is not exactly the same as a rate, but for most practical purposes, it is). | GLM for proportional data | Logistic regression, like this is, assumes a binomial distribution, or, as I prefer, a Bernoulli distribution per event. I know of no case nor reason where this should not be safely assumed by itself | GLM for proportional data
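A minimal sketch of such a model and its predicted probabilities (data frame and column names are hypothetical):
fit <- glm(cbind(hatched, eggs - hatched) ~ year, family = binomial, data = nests)
predict(fit, type = "response")   # fitted probability of an egg hatching, by year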
Logistic regression, like this is, assumes a binomial distribution, or, as I prefer, a Bernoulli distribution per event. I know of no case nor reason where this should not be safely assumed by itself (either it happens or it doesn't, and in a population you can always assign a probability to this). There is no reason the upper limit on your number of events per nest should influence this.
That distribution, by linearity, is assumed conditionally on the year, where the logodds are linear in year. This could be faulty, but that has nothing to do with the possible number of events, just the fact that any model can be wrong.
You can (with predict(type="response")) get the probability of an egg hatching, conditional on the year from this type of model (technically that is not exactly the same as a rate, but for most practical purposes, it is). | GLM for proportional data
Logistic regression, like this is, assumes a binomial distribution, or, as I prefer, a Bernoulli distribution per event. I know of no case nor reason where this should not be safely assumed by itself |
50,780 | Probability distribution of fragment lengths | Let the rod have length $L$ and fix a segment of length $x$. The chance that any single breakpoint misses the segment equals the proportion of the rod not occupied by the segment, $1 - x/L$. Because the breakpoints are independent, the chance that all of them miss it is the product of $n$ such chances, $(1 - x/L)^n$.
From comments following the question, it appears that $x$ is intended to be small compared to the rod's length: $x/L \ll 1$. Let $\xi = L/x$ (assumed to be large) and rewrite $n = \xi(n/\xi)$, leading (purely via substitutions) to
$$\Pr(\text{all miss}) = (1 - x/L)^n = (1 - 1/\xi)^{\xi(n/\xi)} = \left((1-1/\xi)^\xi\right)^{n/\xi}\text{.}$$
Asymptotically $\xi \to \infty$. If we assume that $n$ varies in a way that makes $n/\xi$ converge to a constant, this probability approaches a computable limit. Let this constant be some value $\lambda$ times $x$. It is the limiting value of $n/\xi/x = n/L$: notice how the length of the rod is involved here and effectively is incorporated in $\lambda$. Because $\exp(-1) = 1/e$ is the limiting value of $(1-1/\xi)^\xi$ and raising to (positive) powers is a continuous function, it follows readily that the limit is
$$\Pr(\text{all miss}) \to e^{-\lambda x}.$$
One application is when $n$ is a constant, entailing $\lambda = n/L$, and $x \ll L$. We obtain $$e^{-nx/L}$$ as a good approximation for the probability that all breaks miss the segment. This analysis shows that the approximation fails as $x$ grows large: the approximation is only as good as the approximation $1/e \sim (1-1/\xi)^\xi$. Finally, if you set $x = L$, the approximation is clearly wrong because it gives $e^{-n}$ instead of the correct answer, $0$. | Probability distribution of fragment lengths | Let the rod have length $L$ and fix a segment of length $x$. The chance that any single breakpoint misses the segment equals the proportion of the rod not occupied by the segment, $1−x/L$. Because t | Probability distribution of fragment lengths
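A quick Monte Carlo check of the approximation (illustrative numbers only):
set.seed(1)
L <- 1; n <- 20; x <- 0.05; s <- 0.3; reps <- 1e5
miss <- replicate(reps, {
  b <- runif(n, 0, L)          # the n independent breakpoints
  all(b < s | b > s + x)       # do all of them miss the fixed segment [s, s + x]?
})
c(simulated = mean(miss), exact = (1 - x/L)^n, approx = exp(-n * x / L))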
Let the rod have length $L$ and fix a segment of length $x$. The chance that any single breakpoint misses the segment equals the proportion of the rod not occupied by the segment, $1−x/L$. Because the breakpoints are independent, the chance that all of them miss it is the product of $n$ such chances, $(1 - x/L)^n$.
From comments following the question, it appears that $x$ is intended to be small compared to the rod's length: $x/L \ll 1$. Let $\xi = L/x$ (assumed to be large) and rewrite $n = \xi(n/\xi)$, leading (purely via substitutions) to
$$\Pr(\text{all miss}) = (1 - x/L)^n = (1 - 1/\xi)^{\xi(n/\xi)} = \left((1-1/\xi)^\xi\right)^{n/\xi}\text{.}$$
Asymptotically $\xi \to \infty$. If we assume that $n$ varies in a way that makes $n/\xi$ converge to a constant, this probability approaches a computable limit. Let this constant be some value $\lambda$ times $x$. It is the limiting value of $n/\xi/x = n/L$: notice how the length of the rod is involved here and effectively is incorporated in $\lambda$. Because $\exp(-1) = 1/e$ is the limiting value of $(1-1/\xi)^\xi$ and raising to (positive) powers is a continuous function, it follows readily that the limit is
$$\Pr(\text{all miss}) \to e^{-\lambda x}.$$
One application is when $n$ is a constant, entailing $\lambda = n/L$, and $x \ll L$. We obtain $$e^{-nx/L}$$ as a good approximation for the probability that all breaks miss the segment. This analysis shows that the approximation fails as $x$ grows large: the approximation is only as good as the approximation $1/e \sim (1-1/\xi)^\xi$. Finally, if you set $x = L$, the approximation is clearly wrong because it gives $e^{-n}$ instead of the correct answer, $0$. | Probability distribution of fragment lengths
Let the rod have length $L$ and fix a segment of length $x$. The chance that any single breakpoint misses the segment equals the proportion of the rod not occupied by the segment, $1−x/L$. Because t |
50,781 | Probability distribution of fragment lengths | Let $\{X_i\}$ be the locations of the cuts.
I'd approach this problem by finding the order statistics $\{Y_i\}$ so that $Y_1$ would be the location of the leftmost cut. Then I'd calculate the probability distributions of the differences between the variables $Y_i-Y_{i-1}$. Don't forget to also calculate $Y_1-0$ and $L-Y_n$.
Can anyone think of a better way? | Probability distribution of fragment lengths | Let $\{X_i\}$ be the locations of the cuts.
I'd approach this problem by finding the order statistics $\{Y_i\}$ so that $Y_1$ would be the location of the leftmost cut. Then I'd calculate the probabil | Probability distribution of fragment lengths
Let $\{X_i\}$ be the locations of the cuts.
I'd approach this problem by finding the order statistics $\{Y_i\}$ so that $Y_1$ would be the location of the leftmost cut. Then I'd calculate the probability distributions of the differences between the variables $Y_i-Y_{i-1}$. Don't forget to also calculate $Y_1-0$ and $L-Y_n$.
Can anyone think of a better way? | Probability distribution of fragment lengths
Let $\{X_i\}$ be the locations of the cuts.
I'd approach this problem by finding the order statistics $\{Y_i\}$ so that $Y_1$ would be the location of the leftmost cut. Then I'd calculate the probabil |
50,782 | Statistical test for a series of data over time | As GaBorgulya pointed out, one needs a model to detect the potential anomaly. This model needs to generate a "white noise" error series, or at least be sufficient to separate signal and noise. With this model in hand, based upon older data, one could then compare the new value with the prediction interval. This is the classical, albeit limited, approach called an "out-of-model test". A more comprehensive approach is to include a "pulse variable" (zeros, with a 1 for the new data point) and to estimate coefficients for the augmented model using all of the data. The probability of observing what you observed before you observed it (i.e. the new value) is then available from the "t value" of the "pulse variable" in this augmented model. In general this approach is referred to as Intervention Detection, which scans (data mines) the time periods to detect the points where pulses, level shifts, seasonal pulses and local time trends are significantly evidenced. In your case you are not searching over all time points but simply asking whether there is a potential change point at the last observation, i.e. the last "1" period. Your question also suggests solutions we have seen which detect a significant change in the mean of the last K periods, alerting the analyst to the innovation. | Statistical test for a series of data over time | As GaBorgulya pointed out one needs to have a model to detect the potential anomaly. This model needs to generate a "white noise" error series or be sufficient to separate signal and noise. With this | Statistical test for a series of data over time
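A hedged sketch of that pulse-variable idea (the ARIMA order is an arbitrary placeholder for whatever model fits your series y):
pulse <- cbind(pulse = as.numeric(seq_along(y) == length(y)))   # 0,...,0,1 marking the new point
fit <- arima(y, order = c(1, 0, 0), xreg = pulse)
fit$coef["pulse"] / sqrt(fit$var.coef["pulse", "pulse"])        # its "t value"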
As GaBorgulya pointed out one needs to have a model to detect the potential anomaly. This model needs to generate a "white noise" error series or be sufficient to separate signal and noise. With this model in hand based upon older data one could then compare the new value with the prediction interval. This is the classical , albeit limited approach called an "out off model test". A more comprehensive approach is to to include a "pulse variable" i.e. zeros and a 1 for the new data point and to estimate coefficients for the augmented model using all of the data. The probability of observing what you observed before you observed it ( i.e. the new value" ) is then available from the "t value" of the "pulse variable" in this augmented model. In general this approach is referred to as Intervention Detection which scans ( data mines ) the time periods to detect the points where Pulses , Level Shifts , Seasonal Pulses and Local Time Trends have been significantly evidented. In your case you are not searching for the null hypothesis but rather simply is there a potential change point at the last observation i.e. the last "1" period. Your question also suggests solutions that we have seen which detect a significant change in the mean of the last K periods alerting the analyst to the innovation. | Statistical test for a series of data over time
As GaBorgulya pointed out one needs to have a model to detect the potential anomaly. This model needs to generate a "white noise" error series or be sufficient to separate signal and noise. With this |
50,783 | Statistical test for a series of data over time | With less than a year of data, it'll be impossible to account for any kind of yearly seasonal effect. (For example, if your data was shopping-related, you would have things like annual holidays, perhaps two sales a year, etc.)
You might want to look at Statistical Process Control tools like http://en.wikipedia.org/wiki/Control_chart perhaps? | Statistical test for a series of data over time | With less than a year of data, it'll be impossible to account for any kind of yearly seasonal effect. (For example, if your data was shopping-related, you would have things like annual holidays, perha | Statistical test for a series of data over time
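For instance, with the qcc package (y standing for a hypothetical vector of daily values):
library(qcc)
qcc(y, type = "xbar.one")   # individuals chart with the usual 3-sigma control limits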
With less than a year of data, it'll be impossible to account for any kind of yearly seasonal effect. (For example, if your data was shopping-related, you would have things like annual holidays, perhaps two sales a year, etc.)
You might want to look at Statistical Process Control tools like http://en.wikipedia.org/wiki/Control_chart perhaps? | Statistical test for a series of data over time
With less than a year of data, it'll be impossible to account for any kind of yearly seasonal effect. (For example, if your data was shopping-related, you would have things like annual holidays, perha |
50,784 | When is a randomised controlled trial (RCT) balanced? | I have always seen "balance" for a clinical trial described as you suggested - that there is some difference in the covariate patterns between the treatment and control arm. Note however, that there are ways this can arise beyond just misfortune during randomization. Two that come to mind quickly are:
Time-varying confounders. If confounding arises after randomization, randomization does not protect against it.
Informative censoring. If one set of people are more likely due to treatment to drop out - say a particular subgroup tends to have trouble complying with trial protocol when in the treatment arm - the treatment and control arms will be unbalanced even under situations of perfect randomization. | When is a randomised controlled trial (RCT) balanced? | I have always seen "balance" for a clinical trial described as you suggested - that there is some difference in the covariate patterns between the treatment and control arm. Note however, that there a | When is a randomised controlled trial (RCT) balanced?
I have always seen "balance" for a clinical trial described as you suggested - that there is some difference in the covariate patterns between the treatment and control arm. Note however, that there are ways this can arise beyond just misfortune during randomization. Two that come to mind quickly are:
Time-varying confounders. If confounding arises after randomization, randomization does not protect against it.
Informative censoring. If one set of people are more likely due to treatment to drop out - say a particular subgroup tends to have trouble complying with trial protocol when in the treatment arm - the treatment and control arms will be unbalanced even under situations of perfect randomization. | When is a randomised controlled trial (RCT) balanced?
I have always seen "balance" for a clinical trial described as you suggested - that there is some difference in the covariate patterns between the treatment and control arm. Note however, that there a |
50,785 | When is a randomised controlled trial (RCT) balanced? | Balanced designs have really just one goal, orthogonal treatment effects. Orthogonal design lowers the risk of unobservables sneaking into your effect estimates in an uneven way. See: http://www1.umn.edu/statsoft/doc/statnotes/stat06.txt for an excellent discussion of this topic. | When is a randomised controlled trial (RCT) balanced? | Balanced designs have really just one goal, orthogonal treatment effects. Orthogonal design lowers the risk of unobservables sneaking into your effect estimates in an uneven way. See: http://www1.umn. | When is a randomised controlled trial (RCT) balanced?
Balanced designs have really just one goal, orthogonal treatment effects. Orthogonal design lowers the risk of unobservables sneaking into your effect estimates in an uneven way. See: http://www1.umn.edu/statsoft/doc/statnotes/stat06.txt for an excellent discussion of this topic. | When is a randomised controlled trial (RCT) balanced?
Balanced designs have really just one goal, orthogonal treatment effects. Orthogonal design lowers the risk of unobservables sneaking into your effect estimates in an uneven way. See: http://www1.umn. |
50,786 | Estimating event probability from historical time series with clear seasonality | I think the joint distribution of temperature data on successive days could be reasonably modelled using a multi-variate Gaussian (Gaussian distributions are often used in statistical downscaling of temperature). What I would try would be to regress the mean and covariance matrix of the temperature time series on sine and cosine components of the day of year (to deal with the seasonality). The details of how to do that are given in a paper by Peter Williams; Williams uses a neural network, but I would start off with just a linear model. This will give you what climatologists would call a "weather generator" (of sorts). Using this you could generate as many synthetic time series as you want with the appropriate statistical properties, from which you could estimate the probabilities you require directly. You would need to estimate the window over which temperatures were usefully correlated - which may be quite high in winter due to blocking patterns (for the U.K. anyway). A bit baroque I suppose, but it would be the thing I would try! | Estimating event probability from historical time series with clear seasonality | I think the joint distribution of temperature data on successive days could be reasonably modelled using a multi-variate Gaussian (Gaussian distributions are often used in statistical downscaling of t | Estimating event probability from historical time series with clear seasonality
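A sketch of the seasonal-mean part of such a generator (temp and doy, the day of year, are hypothetical vectors; more harmonics can be added the same way):
fit <- lm(temp ~ sin(2 * pi * doy / 365.25) + cos(2 * pi * doy / 365.25))
summary(fit)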
I think the joint distribution of temperature data on successive days could be reasonably modelled using a multi-variate Gaussian (Gaussian distributions are often used in statistical downscaling of temperature). What I would try would be to regress the mean and covariance matrix of the temperature time series on sine and cosine components of the day of year (to deal with the seasonality). The details on how to do that are given in a paper by Peter Williams, Williams uses a neural network, but I would start off with just a linear model. This will give you what climatologists would call a "weather generator" (of sorts). Using this you could generate as many synthetic time series as you want with the appropriate statistical properties, from which you could estimate the probabilities you require directly. You would need to estimate the window over which temperatures were usefully correllated - which may be quite high in winter due to blocking patterns (for the U.K. anyway). A bit baroque I suppose, but it would be the thing I would try! | Estimating event probability from historical time series with clear seasonality
I think the joint distribution of temperature data on successive days could be reasonably modelled using a multi-variate Gaussian (Gaussian distributions are often used in statistical downscaling of t |
50,787 | Estimating event probability from historical time series with clear seasonality | I know little about meteorology, so my following assumptions may be wrong: today's temperature is similar to yesterday's and the day before yesterday's (maybe more days going back), and also similar to temperature a year ago, two years ago, three years ago, etc.
If these assumptions were supported, I would use an ARMA model with days -1, -2, … and -365, -365*2, -365*3, … as predictors of today's temperature, and maybe a few days looking back in the moving average terms. (You can imagine many variants of this model.)
After fitting the model I would make a large number of model based simulations predicting the temperatures for each of the following 365 days, and count the cases satisfying the two conditions. | Estimating event probability from historical time series with clear seasonality | I know little about meteorology, so my following assumptions may be wrong: today's temperature is similar to yesterday's and the day before yesterday's (maybe more days going back), and also similar t | Estimating event probability from historical time series with clear seasonality
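A sketch of that simulate-and-count step, assuming the forecast package and some already fitted model 'fit' for the temperature series (the 30 and -10 thresholds are placeholders for your two conditions):
library(forecast)
sims <- replicate(1000, simulate(fit, nsim = 365))              # 1000 synthetic years of daily values
mean(apply(sims, 2, function(x) any(x > 30) && any(x < -10)))   # share of years meeting both conditions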
I know little about meteorology, so my following assumptions may be wrong: today's temperature is similar to yesterday's and the day before yesterday's (maybe more days going back), and also similar to temperature a year age, two years ago, three years ago, etc.
If these assumptions got reinforcement I would use an ARMA model using days -1, -2, … and -365, -365*2, -365*3, … as predictors of today's temperature, and maybe a few days looking back in the moving average terms. (You can imagine many variants of this model.)
After fitting the model I would make a large number of model based simulations predicting the temperatures for each of the following 365 days, and count the cases satisfying the two conditions. | Estimating event probability from historical time series with clear seasonality
I know little about meteorology, so my following assumptions may be wrong: today's temperature is similar to yesterday's and the day before yesterday's (maybe more days going back), and also similar t |
50,788 | Is there a classification of physical measurements according to their statistical distribution? | Some people have started to look at this issue in the chemometrics literature. For instance, about 20 years ago Robert Gibbons started to do statistical analyses suggesting instrument responses (for low-level measurement of chemicals) were nonlinear, heteroscedastic, and had non-normal (perhaps lognormal) error distributions. I found an abstract of one of those papers on Springer's site, Some statistical and conceptual issues in the detection of low-level environmental pollutants (JEES 1995). | Is there a classification of physical measurements according to their statistical distribution? | Some people have started to look at this issue in the chemometrics literature. For instance, about 20 years ago Robert Gibbons started to do statistical analyses suggesting instrument responses (for | Is there a classification of physical measurements according to their statistical distribution?
Some people have started to look at this issue in the chemometrics literature. For instance, about 20 years ago Robert Gibbons started to do statistical analyses suggesting instrument responses (for low-level measurement of chemicals) were nonlinear, heteroscedastic, and had non-normal (perhaps lognormal) error distributions. I found an abstract of one of those papers on Springer's site, Some statistical and conceptual issues in the detection of low-level environmental pollutants (JEES 1995). | Is there a classification of physical measurements according to their statistical distribution?
Some people have started to look at this issue in the chemometrics literature. For instance, about 20 years ago Robert Gibbons started to do statistical analyses suggesting instrument responses (for |
50,789 | Correlation between two nodes of a single layer MLP for joint-Gaussian input | The question really concerns pairs of normal variates. Let's call them $x_1$ and $x_2$ with means $\mu_i$, standard deviations $\sigma_i$, and correlation $\rho$. Whence their joint pdf is
$$\frac{1}{2 \pi \sqrt{1 - \rho^2} \sigma_1 \sigma_2}
e^{-\frac{1}{1-\rho^2} \left(\frac{(x_1 - \mu_1)^2}{2 \sigma_1^2} + \frac{(x_2 - \mu_2)^2}{2 \sigma_2^2} - \frac{\rho (x_1 - \mu_1)(x_2 - \mu_2)}{\sigma_1 \sigma_2}\right)} dx_1 dx_2\text{.}$$
Let $f(x_1,x_2)$ be the product of this with the $y_i$ (as functions of the $x_i$). The first component of the gradient of $\log(f)$ is
$$\frac{\partial \log(f)}{\partial x_1}
= \frac{1}{1 + e^{x_1}} + \frac{\rho(\mu_2 - x_2) \sigma_1 + (x_1 - \mu_1)\sigma_2}{(\rho^2-1)\sigma_1^2 \sigma_2},$$
with a similar expression for the second component (via the symmetry achieved by exchanging the subscripts 1 and 2). There will be a unique global maximum, which we can detect by setting the gradient to zero. This pair of nonlinear equations has no closed form solution. It is rapidly found by a few Newton-Raphson iterations. Alternatively, we can linearize these equations. Indeed, through second order, the first component equals
$$\frac{1}{2} + x_1\left(\frac{-1}{4} + \frac{1}{(\rho^2-1)\sigma_1^2}\right) + \frac{-\rho x_2 \sigma_1 + \rho \mu_2 \sigma_1 - \mu_1 \sigma_2}{(\rho^2 -1)\sigma_1^2 \sigma_2}.$$
This gives a pair of linear equations in $(x_1, x_2)$, which therefore do have a closed form solution, say $\hat{x}_i(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$, which obviously are rational polynomials.
The Jacobian at this critical point has 1,1 coefficient
$$\frac{e^\hat{x_1}\left(2 - (\rho^2-1)\sigma_1^2 + 2\cosh(\hat{x_1})\right)}{(1+e^\hat{x_1})^2(\rho^2-1)\sigma_1^2},$$
1,2 and 2,1 coefficients
$$\frac{\rho}{\sigma_1 \sigma_2(1 - \rho^2)},$$
and 2,2 coefficient obtained from the 1,1 coefficient by symmetry. Because this is a critical point (at least approximately), we can substitute
$$e^\hat{x_1} = \frac{(\rho^2-1)\sigma_1^2 \sigma_2}{(\mu_2 - \hat{x_2})\rho \sigma_1 + (\hat{x_1} - \mu_1)\sigma_2} - 1$$
and use that also to compute $\cosh(\hat{x_1}) = \frac{e^\hat{x_1} + e^{-\hat{x_1}}}{2}$, with a similar manipulation for $e^\hat{x_2}$ and $\cosh(\hat{x_2})$. This enables evaluation of the Hessian (the determinant of the Jacobian) as a rational function of the parameters.
The rest is routine: the Hessian tells us how to approximate the integral as a binormal integral (a saddlepoint approximation). The answer equals $\frac{1}{2\pi}$ times a rational function of the five parameters: that's your closed form (for what it's worth!). | Correlation between two nodes of a single layer MLP for joint-Gaussian input | The question really concerns pairs of normal variates. Let's call them $x_1$ and $x_2$ with means $\mu_i$, standard deviations $\sigma_i$, and correlation $\rho$. Whence their joint pdf is
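A quick Monte Carlo reference value for the integral being approximated (made-up parameters; here $y_i$ is the logistic function of $x_i$):
library(MASS)
Sigma <- matrix(c(1, 0.5, 0.5, 1), 2)              # sigma1 = sigma2 = 1, rho = 0.5
xy <- mvrnorm(1e6, mu = c(0, 0), Sigma = Sigma)
mean(plogis(xy[, 1]) * plogis(xy[, 2]))            # E[y1 y2] to compare against the closed form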
50,790 | Measuring and analyzing sample complexity | Let's say we want to bound the empirical risk of a model. Given an arbitrary $(\epsilon, \delta)$, the sample complexity is the smallest $n(\epsilon, \delta)$ such that for $n\geq n(\epsilon, \delta)$
$$
P(|\hat{L}(f) - L(f) | \geq \epsilon ) \leq \delta
$$
The function $\delta(n,\epsilon)$ bounds the probability that the empirical risk deviates from the true (unknown) risk (loss) by more than $\epsilon$.
As a higher-level intuition: sample complexity is the smallest number of samples for which we can be confident (with probability at least $1-\delta$) that we are close enough to the correct model.
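For a concrete instance (my own addition, not part of the original answer): if the loss is bounded in $[0,1]$ and $f$ is fixed, Hoeffding's inequality gives $\delta(n,\epsilon) = 2e^{-2n\epsilon^2}$, so $n(\epsilon,\delta) = \lceil \log(2/\delta)/(2\epsilon^2)\rceil$ samples suffice. In R:
sample_complexity <- function(eps, delta) ceiling(log(2 / delta) / (2 * eps^2))
sample_complexity(0.05, 0.05)   # n needed for an eps = 0.05 bound that holds with probability 0.95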
50,791 | Kolmogorov-Smirnov and lattice paths | To add to @Cardinal's answers in the comments, I think there is work that addresses both the claim that the null distribution of the Kolmogorov-Smirnov statistic maps onto another lattice path problem that could be solved by a "two-sided ballot theorem", and the question "is there a general framework around all of this? The two-sample KS test?":
This paper (preprint) is concerned with r-sample Kolmogorov-Smirnov tests; the authors derive the exact null distribution by counting lattice paths, using a generalization of the classical reflection principle. In Section 2 they lay out nicely how lattice path counting comes into play when deriving the null distribution.
In the introduction the paper also features a discussion on how lattice path counting and the reflection principle tie in here by reviewing ideas started with Kiefer 1959 and David 1958. They also briefly discuss how it can be seen as an r-ballot counting problem, referring to Filaseta 1985.
They provide a lattice path counting framework for KS type tests for any number of samples. From the paper:
We consider the problem of testing whether $r \ge 2$ samples are drawn from the same continuous distribution $F(x)$. As a test statistic we will use the circular differences $\delta_r(n) = \max[\delta_{1,2}(n), \delta_{2,3}(n), \ldots, \delta_{r-1,r}(n), \delta_{r,1}(n)]$, where $\delta_{ij}(n) = \sup_x [F_{n,i}(x) - F_{n,j}(x)]$, and $F_{n,i}(x), i = 1, 2, \ldots, r$ denote the empirical distribution functions of these samples. We derive the null distribution of $\delta_r(n)$ by considering lattice paths in $r$-dimensional space with standard steps in the positive direction, i.e., steps are given by the unit vectors $e_i, i = 1, 2, \ldots, r$. By a simple transformation we show that for some positive integer $k$ the number of ways the event $\{n\delta_r(n) < k\}$ can occur is just the number of paths $X$ with the property that for each point $X_m$ on the path there holds the chain of inequalities $x_{1,m} > x_{2,m} > \cdots > x_{r,m} > x_{1,m-rk}$. Indeed, the enumeration of such paths is a well studied problem in combinatorics. Again the reflection principle comes into play as we have to count paths in alcoves of affine (and therefore infinite) Weyl groups; for references on the technical background of this topic see Gessel and Zeilberger 1992, Grabiner 2002 and Krattenthaler 2007.
Hopefully this is a good starting point for further investigations into the Kuiper and Anderson-Darling (AD) statistics, for example.
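To see the lattice-path picture concretely in the two-sample case, here is a small R sketch (my own illustration, not from the paper): every interleaving of two samples is a monotone lattice path, and enumerating all of them gives the exact null distribution of the two-sample KS statistic for small sample sizes.
n <- 4; m <- 4                            # small enough to enumerate all C(n+m, n) paths
paths <- combn(n + m, n)                  # positions in the pooled ordering taken by sample 1
D <- apply(paths, 2, function(pos) {
  step <- rep(-1 / m, n + m)              # a sample-2 observation steps the path down by 1/m
  step[pos] <- 1 / n                      # a sample-1 observation steps it up by 1/n
  max(abs(cumsum(step)))                  # KS distance = largest vertical deviation of the path
})
table(D) / ncol(paths)                    # exact null distribution of D
mean(D >= 0.75)                           # e.g. exact P(D >= 0.75) under the null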
50,792 | Geostatistical analysis using spatial.exp in WinBugs | I have worked this out myself.
The lower bound for phi can be estimated from
-ln(0.5)/(max separating distance between points)
To find the max separating distance I used the following code in R. My data are in a flat file with x and y coords renamed to long and lat respectively:
data <- read.csv(file="file.csv", header=T, sep=",")   # read the point data
coords <- data.frame(data$long, data$lat)               # keep just the coordinate columns
library(sp)
# distances from each point to all other points (great-circle, in km, because longlat=TRUE)
pointDist <- apply(coords, 1, function(eachPoint) spDistsN1(as.matrix(coords), eachPoint, longlat=TRUE))
distances <- as.vector(pointDist)
max(distances)                                          # maximum separating distance
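The lower bound itself then follows directly from the rule quoted above (natural log):
phi.lower <- -log(0.5) / max(distances)   # -ln(0.5) / (max separating distance between points)
phi.lower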
50,793 | How to do a repeated measures multinomial logistic regression using SPSS? | One way is to build an SPSS PLUM or NOMREG model that checks for an interaction between each predictor and a binary predictor, “time.” In that scenario you'd use just a single column for all the values of your outcome variable. For 1/2 the data set, time would be marked 0, and for the other half it'd be marked 1. Essentially you’d be treating time as if it were like gender or any other binary predictor that potentially could interact with other predictors.
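As a rough analogue outside SPSS (my own sketch, not part of the original answer), the same "time as a binary predictor" idea can be expressed as a multinomial model with predictor-by-time interactions. In R with nnet::multinom, assuming a hypothetical long-format data frame dat with columns outcome, x and time (0/1):
library(nnet)
dat$time <- factor(dat$time)                     # 0 = first measurement occasion, 1 = second
fit <- multinom(outcome ~ x * time, data = dat)  # the x:time interaction asks whether the effect of x changes over time
summary(fit)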
50,794 | Non-parametric regression | If your response variable is ordinal, you may want to consider an "ordered logistic regression". This is basically where you model the cumulative probabilities {in the simple example, you would model $Pr(Y\leq 1),Pr(Y\leq 2),Pr(Y\leq 3)$}. This incorporates the ordering of the response into the model, without the need for an arbitrary assumption which transforms the ordered response into a numerical one (although having said that, this can be a useful first step in exploratory analysis, or in selecting which $X$ and $Z$ variables are not necessary).
There is a way that you can get the glm() function in R to give you the MLEs for this model (otherwise you would need to write your own algorithm to get them). You define a new set of variables, say $W$, as follows:
$$W_{1jk} = \frac{Y_{1jk}}{\sum_{i=1}^{i=I} Y_{ijk}}$$
$$W_{2jk} = \frac{Y_{2jk}}{\sum_{i=2}^{i=I} Y_{ijk}}$$
$$...$$
$$W_{I-1,jk} = \frac{Y_{I-1,jk}}{\sum_{i=I-1}^{i=I} Y_{ijk}}$$
Where $i=1,..,I$ indexes the $Y$ categories, $j=1,..,J$ indexes the $X$ categories, and $k=1,..,K$ indexes the $Z$ categories. Then fit a glm() of W on X and Z using the complementary log-log link function. Denoting $\theta_{ijk}=Pr(Y_{ijk}\leq i)$ as the cumulative probability, the MLEs of the thetas (assuming a multinomial distribution for the $Y_{ijk}$ values) are then
$$\hat{\theta}_{ijk}=\hat{W}_{ijk}+\hat{\theta}_{(i-1)jk}(1-\hat{W}_{ijk}) \ \ \ i=1,\dots ,I-1$$
Where $\hat{\theta}_{0jk}=0$ and $\hat{\theta}_{Ijk}=1$ and $\hat{W}_{ijk}$ are the fitted values from the glm.
You can then use the deviance table (use the anova() function on the glm object) to assess the significance of the regressor variables.
EDIT: one thing I forgot to mention in my original answer was that in the glm() function, you need to specify weights when fitting the model to $W$, which are equal to the denominators in the respective fractions defining each $W$.
You could also try a Bayesian approach, but you would most likely need to use sampling techniques to get your posterior, and using the multinomial likelihood (but parameterised with respect to $\theta_{ijk}$, so the likelihood function will have differences of the form $\theta_{ijk}-\theta_{i-1,jk}$), the MLE's are a good "first crack" at genuinely fitting the model, and give an approximate Bayesian solution (as you may have noticed, I prefer Bayesian inference)
This method is in my lecture notes, so I'm not really sure how to reference it (there are no references given in the notes) apart from what I've just said.
Just another note, I won't harp on it, but p-values are not all they are cracked up to be. A good post discussing this can be found here. I like Harold Jeffreys' quote about p-values (from his book Theory of Probability): "A null hypothesis may be rejected because it did not predict something that was not observed" (this is because p-values ask for the probability of events more extreme than what was observed).
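A minimal R sketch of the construction above (my own illustration, using simulated counts; $I=3$ outcome levels and binary $X$ and $Z$, so only $i = 1, 2$ contribute a $W$):
set.seed(1)
Y <- array(rpois(3 * 2 * 2, lambda = 10), dim = c(3, 2, 2))  # Y[i, j, k]: count for outcome level i, X level j, Z level k
grid <- expand.grid(i = 1:2, j = 1:2, k = 1:2)
W <- denom <- numeric(nrow(grid))
for (r in seq_len(nrow(grid))) {
  i <- grid$i[r]; j <- grid$j[r]; k <- grid$k[r]
  denom[r] <- sum(Y[i:3, j, k])       # counts at level i or above: the denominator of W_{ijk}
  W[r]     <- Y[i, j, k] / denom[r]
}
dat <- data.frame(W = W, denom = denom, X = factor(grid$j), Z = factor(grid$k))
fit <- glm(W ~ X + Z, family = binomial(link = "cloglog"), weights = denom, data = dat)
anova(fit, test = "Chisq")            # deviance table for assessing X and Z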
50,795 | Experiment design for proportion | Thank you, whuber, for making me aware of Wald's Sequential Probability Ratio Test (SPRT). At your recommendation, I will relist this Quantitative Skills site. They will give you an out-of-the-box table to determine whether to continue or stop testing.
I also took the time to research that site's references, and was directed toward a comprehensive article that is intended for medical testing, but is easily transferable to other domains. It is Increasing Efficiency in Evaluation Research: The Use of Sequential Analysis (Howe, Holly L., American Journal of Public Health July 1982, Vol. 72, No. 7, pp. 690-697.) This article may be downloaded in its entirety.
Since I have not seen SPRT in my stats courses, I will provide a cookbook that I hope will be helpful for the stackexchange community.
For my null hypothesis, I tested for a level of 95% correct. If, however, the level was below 80%, it would be a cause for concern. So I have
$p_1 = .95$ (null hypothesis), and $p_2 = .80$ (alternative hypothesis)
I will use $\alpha = 0.05$ and $\beta = 0.10$.
Howe shows a graph with two parallel lines, with plots of the cumulative errors. Testing continues while the cumulative errors (and in my case, cumulative count of correct data points) lie between the two lines.
If the cumulative error count crosses either line, then either:
accept the null hypothesis (if cumulative error count falls below the bottom line, $d_1$), or
reject the null hypothesis (if cumulative error count exceeds the top line, $d_2$).
Here are the equations. I first define a denominator term because it is used several times; with the numbers below, the logs are base 10.
$denom = log\left [ \left ( \frac{p_2}{p_1}\right )(\frac{1 - p_1}{1 - p_2}) \right ]$
The slopes of the lines are the same, and represented by s.
$s = \frac{log\left ( \frac{1 - p_1}{1 - p_2} \right )}{denom}$
The intercepts, $h_1$ and $h_2$, are computed as follows:
$h_1 = \frac{log\left ( \frac{1 - \alpha }{\beta }\right )}{denom}$
$h_2 = \frac{log\left ( \frac{1 - \beta }{\alpha }\right )}{denom}$
I set up a spreadsheet with data point N going from 1 to 50. Then I added two columns for acceptance threshold ($d_1$) and rejection threshold ($d_2$).
$d_1 = -h_1 + sN$
$d_2 = h_2 + sN$
In my experiment,
$denom = -0.67669$
$h_1 = -1.44485$
$h_2 = -1.85501$
The values of $d_1$ at N=2, N=5, N=10 are 3.224, 5.893, 10.342.
I then added columns for success and cumSuccess. I picked data points until the cumulative number exceeded the acceptance threshold, and I accepted the null hypothesis.
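The cookbook is easy to reproduce; a small R sketch (base-10 logs, matching the numbers reported above):
p1 <- 0.95; p2 <- 0.80; alpha <- 0.05; beta <- 0.10
denom <- log10((p2 / p1) * ((1 - p1) / (1 - p2)))   # -0.67669
s  <- log10((1 - p1) / (1 - p2)) / denom            # common slope of the two lines
h1 <- log10((1 - alpha) / beta) / denom             # -1.44485
h2 <- log10((1 - beta) / alpha) / denom             # -1.85501
N  <- 1:50
d1 <- -h1 + s * N                                   # acceptance threshold
d2 <-  h2 + s * N                                   # rejection threshold
round(d1[c(2, 5, 10)], 3)                           # 3.224 5.893 10.342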
50,796 | Nonparametric sign test for correlated variables | Under one interpretation of your situation there is no need to modify the p values at all.
For example, let's posit that a sequence of (unknown) bivariate distributions $p_i(x,y)$ govern $A$ and $B$ for each organism $i$. That is, $\Pr(A=x, B=y) = p_i(x,y)$ for all possible outcomes $(x,y)$ of $(A,B)$. To test whether the measurement procedures $A$ and $B$ differ, a reasonable null hypothesis is that these distributions are all symmetric:
$$H_0: p_i(x,y) = p_i(y,x) \text{ for all } i, x, y.$$
The sign statistic (difference between number of $+$ and number of $-$ results) is still a reasonable one to use in this test. (It actually tests the null hypothesis $H_0: \Pr(A<B) = \Pr(B<A)$.) Its distribution depends on the chances of ties; namely on the values $t_i = \sum_{x}p_i(x,x)$ (one for each organism $i$). The question, which appears not to contemplate the possibility of ties at all, suggests their chances are fairly small. In any case, the symmetry assumption in the null implies the chance of organism $i$ yielding a $+$ sign equals the chance of organism $i$ yielding a $-$ sign and the assumption that ties are unlikely implies both these chances are close to $1/2$. This implies the distribution of the sign statistic is binomial, as usual, despite any correlation (or lack thereof) between $A$ and $B$.
If there is a substantial chance of ties, it looks like you cannot make any progress towards quantitative bounds until you specify something about those chances. For example, if you provide an upper bound for the $t_i$ you can say something about the distribution of the sign statistic.
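In practice the test itself is one line in R; a small sketch (a and b are hypothetical vectors of paired measurements, with ties dropped as discussed above):
d <- a - b                             # one paired difference per organism
n_plus  <- sum(d > 0)
n_trial <- sum(d != 0)                 # drop ties
binom.test(n_plus, n_trial, p = 0.5)   # sign test of Pr(A > B) = Pr(B > A)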
50,797 | Is there any relation between Power Law and Negative Binomial distribution? | There are many power-law distributions, so you have a lot of possible models. You might start by trying to fit a log-series distribution, which is a limiting case of the negative binomial.
Don't think a priori that you have a mixture distribution as suggested by whuber until you've estimated model parameters and done at least a goodness of fit test. Long-tail distributions, like power-law, log-series, Zipf, etc., typically have what look like outliers in the right-hand tail; their separation from the bulk of the observations is just an artifact of (relatively) small sample size. Mixtures are a pain in the butt to estimate, since some regions overlap. You can often avoid that sort of problem by stepping up your modeling one level with something like Poisson regression, assuming you have some covariate (predictor) data about each user -- this basically does the mixing for you.
The Johnson, Kemp, and Kotz reference given at the end of the referenced Wikipedia article has everything you'd ever want to know about all these distributions, including many methods of parameter estimation.
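For instance, a maximum-likelihood fit of the log-series distribution takes only a few lines of R (my own sketch, using the usual parameterization $P(k) = -\theta^k / (k \log(1-\theta))$ for $k = 1, 2, \dots$; counts is a hypothetical vector of per-user counts, all at least 1):
logser_ll <- function(theta, k)        # log-likelihood of the logarithmic series distribution
  -length(k) * log(-log(1 - theta)) + sum(k) * log(theta) - sum(log(k))
fit <- optimize(logser_ll, interval = c(1e-6, 1 - 1e-6), k = counts, maximum = TRUE)
fit$maximum                            # ML estimate of theta; follow up with a goodness-of-fit test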
50,798 | Stochastic coordinate descent for $\ell_1$ regularization | I believe that in the specific case of L2 loss (ordinary linear regression), the convergence rate of coordinate descent will depend on the correlation structure of the predictors ($X_i$’s). Consider the case where they are uncorrelated. Then cyclic coordinate descent converges after one cycle.
Another heuristic that has had more empirical evidence in its favor is the idea of active set convergence. Rather than cycling through all coordinates, only cycle through the ones that are active ($i$'s where $\beta_i$ is non-zero) until convergence, then sweep through all the coordinates to update the active set. Convergence occurs when the active set does not change.
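A bare-bones R sketch of cyclic coordinate descent for the lasso objective $\frac{1}{2n}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1$ (my own illustration; the active-set variant described above simply restricts the inner loop to which(beta != 0) between full sweeps):
soft <- function(z, g) sign(z) * pmax(abs(z) - g, 0)       # soft-thresholding operator
lasso_cd <- function(X, y, lambda, tol = 1e-8, maxit = 1000) {
  n <- nrow(X); p <- ncol(X)
  beta <- rep(0, p)
  r <- y - X %*% beta                                      # current residual
  for (it in seq_len(maxit)) {
    beta_old <- beta
    for (j in seq_len(p)) {
      r_j <- r + X[, j] * beta[j]                          # partial residual with feature j removed
      z <- crossprod(X[, j], r_j) / n
      beta[j] <- soft(z, lambda) / (sum(X[, j]^2) / n)     # exact coordinate-wise minimizer
      r <- r_j - X[, j] * beta[j]
    }
    if (max(abs(beta - beta_old)) < tol) break             # a single cycle suffices when the columns are orthogonal
  }
  beta
}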
50,799 | How to check that a sample suits multi-dimensional uniform distribution? | For the 1D continuous uniform distribution U(a,b) the uniformly minimum variance unbiased (UMVU) estimates of a and b can be obtained in closed form by a straightforward example of maximum spacing estimation. I can't see any reason that applying this separately for each dimension wouldn't give you UMVU estimates of all parameters of your multivariate uniform distribution.
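For reference (my own addition), the usual closed-form estimates in one dimension are $\hat a = x_{(1)} - \frac{x_{(n)}-x_{(1)}}{n-1}$ and $\hat b = x_{(n)} + \frac{x_{(n)}-x_{(1)}}{n-1}$; applied column-wise in R (X is a hypothetical n-by-d sample matrix):
umvu_uniform <- function(x) {                  # x: a univariate sample assumed from U(a, b)
  n <- length(x); r <- max(x) - min(x)
  c(a = min(x) - r / (n - 1), b = max(x) + r / (n - 1))
}
apply(X, 2, umvu_uniform)                      # one (a, b) estimate per dimension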
50,800 | How can I control the false positives rate? | Does this make sense: To me, mostly yes... although I think you might be doing something I don't expect (see below).
What is this method called and where can I find more about it: You are building up an empirical reference distribution through permutation of your genome labels. There may be fancier terms. I don't know what a good citation might be, consider: Good, P. (2005) Permutation, Parametric, and Bootstrap Tests of Hypotheses, Springer-Verlag, NY, 3rd edition.
How should I scan the CDFs to find the right x? Sometimes for low x's CDF_simulations(x) > CDF_realdata(x): This is the part that makes less sense to me. I'm not sure what you are doing here exactly. Maybe the thing to do is to find the 90th percentile for the CDF_simulations and use that as your cutoff for saying there might be something interesting going on in CDF_realdata?
Where does the number of simulations come into play? Does it make sense to simply build an averaged CDF as I did?: The number of simulations you run will produce a larger and more reliable reference distribution. Your averaged CDF approach seems a little odd to me.
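A generic sketch of the permutation recipe in R (my own illustration; stat() and permute_labels() are hypothetical placeholders for the statistic computed on the real genome labels and for one random relabelling, respectively):
obs  <- stat(data)                                       # statistic on the real labels
null <- replicate(1000, stat(permute_labels(data)))      # empirical reference distribution
quantile(null, 0.95)                                     # cutoff giving a 5% false-positive rate under the null
(sum(null >= obs) + 1) / (length(null) + 1)              # permutation p-value for the observed statistic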