Id: string (1–6 chars)
PostTypeId: string (7 classes)
AcceptedAnswerId: string (1–6 chars)
ParentId: string (1–6 chars)
Score: string (1–4 chars)
ViewCount: string (1–7 chars)
Body: string (0–38.7k chars)
Title: string (15–150 chars)
ContentLicense: string (3 classes)
FavoriteCount: string (3 classes)
CreationDate: string (23 chars)
LastActivityDate: string (23 chars)
LastEditDate: string (23 chars)
LastEditorUserId: string (1–6 chars)
OwnerUserId: string (1–6 chars)
Tags: list
7240
1
null
null
21
23580
I do not know whether this has been asked before, but I have not found anything about it. My question is whether anyone can provide a good reference for learning how to obtain the proportion of variance explained by each of the fixed and random factors in a mixed-effects model.
Proportion of explained variance in a mixed-effects model
CC BY-SA 2.5
null
2011-02-15T09:11:10.857
2022-04-03T03:14:27.767
null
null
221
[ "mixed-model", "variance" ]
7241
2
null
7240
13
null
I can provide some references: Xu, R. (2003). Measuring explained variation in linear mixed effects models. Statistics in Medicine, 22, 3527-3541. [DOI:10.1002/sim.1572](http://dx.doi.org/10.1002/sim.1572) Edwards, L. J., Muller, K. E., Wolfinger, R. D., Qaqish, B. F., & Schabenberger, O. (2008). An $R^2$ statistic for fixed effects in the linear mixed model. Statistics in Medicine, 27, 6137-6157. [DOI:10.1002/sim.3429](http://dx.doi.org/10.1002/sim.3429) Hössjer, O. (2008). On the coefficient of determination for mixed regression models. Journal of Statistical Planning and Inference, 138, 3022-3038. [DOI:10.1016/j.jspi.2007.11.010](http://dx.doi.org/10.1016/j.jspi.2007.11.010) Nakagawa, S., & Schielzeth, H. (2013). A general and simple method for obtaining $R^2$ from generalized linear mixed-effects models. Methods in Ecology and Evolution, 4, 133-142. [DOI:10.1111/j.2041-210x.2012.00261.x](http://dx.doi.org/10.1111/j.2041-210x.2012.00261.x) Happy reading!
null
CC BY-SA 3.0
null
2011-02-15T10:27:16.157
2017-03-28T21:01:05.427
2017-03-28T21:01:05.427
1934
1934
null
7243
2
null
7233
5
null
As @suncooolsu has already pointed out, we need more information. Residuals are often used for identifying outliers and for gaining a better understanding of the appropriateness of a certain model (here: FEM vs. REM). Given your "Std Residual", I would say that a REM seems more appropriate. However, you definitely might want to check other heterogeneity statistics (e.g., $Q$, $I^2$). Viechtbauer and Cheung (2010) recently published a really nice article on "[Outlier and influence diagnostics for meta-analysis](http://onlinelibrary.wiley.com/doi/10.1002/jrsm.11/abstract)" (see esp. section 3).
null
CC BY-SA 2.5
null
2011-02-15T13:01:56.817
2011-02-15T14:29:12.967
2011-02-15T14:29:12.967
449
307
null
7244
1
7245
null
6
5620
Let's say I have a website which gets 100 hits per day (mu = 100). Yesterday my website got 130 hits (x = 130). If I assume a Poisson distribution, then the probability of getting 130 hits is: ``` > dpois(130, 100) [1] 0.0005752527 # about 0.06% ``` So this tells me that getting 130 hits is quite unusual for my website, given the low probability. My understanding of statistical significance is that it is used to determine whether the outcome of an experiment is due to chance or to some kind of deterministic relationship. - How would I apply that in this situation? - What test should one use? (and is it in R?) Many thanks in advance for your time. Note: I saw someone at a business talk ask something very similar to this and I had no idea what they meant by it, so now I'm just trying to educate myself. I'm new to R, but it seems like the software most used for this kind of question, hence my request.
Poisson distribution and statistical significance
CC BY-SA 2.5
null
2011-02-15T13:48:07.757
2011-02-15T15:35:20.957
null
null
3253
[ "r", "distributions", "statistical-significance", "poisson-distribution" ]
7245
2
null
7244
8
null
There are two points to make: - It is not the specific value of 130 that is unusual, but that it is much larger than 100. If you had gotten more than 130 hits, that would have been even more surprising. So we usually look at P(X>=130), not just P(X=130). By your logic even 100 hits would be unusual, because dpois(100,100)=0.04. So a more correct calculation is to look at ppois(129, 100, lower=F)=0.00228. This is still small, but not as extreme as your value. And this does not even take into account that an unusually low number of hits might also surprise you. We often multiply the probability of exceeding the observed count by 2 to account for this. - If you keep checking your hits every day, sooner or later even rare events will occur. For example, P(X>=130) happens to be close to 1/365, so such an event would be expected to occur about once a year.
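For concreteness, here is a minimal R sketch of that calculation; the doubling of the one-sided tail is a rough two-sided correction, and `poisson.test` (R's exact Poisson test, with an hypothesised rate of 100) should give a similar answer:

```r
lambda0 <- 100                       # hypothesised daily rate
x       <- 130                       # observed hits

p_upper <- ppois(x - 1, lambda0, lower.tail = FALSE)  # P(X >= 130)
p_two   <- min(1, 2 * p_upper)                        # crude two-sided p-value

# built-in exact Poisson test, two-sided
poisson.test(x, T = 1, r = lambda0, alternative = "two.sided")
```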
null
CC BY-SA 2.5
null
2011-02-15T14:15:43.147
2011-02-15T14:15:43.147
null
null
279
null
7246
1
null
null
7
884
It seems the non-convexity of the loss function is not such a problem for [boosting with a normalized sigmoid loss function](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.19.2603&rep=rep1&type=pdf). Do you know of any further work showing better results with this kind of boosting than with [AdaBoost](http://en.wikipedia.org/wiki/AdaBoost) (exponential loss function)? Here are the graphs of different loss functions, adapted from the graphs shown in [A note on margin-based loss functions in classification](http://scholar.google.fr/scholar?cluster=5316650282096740506&hl=fr&as_sdt=0,5&as_vis=1) by Yi Lin: ![enter image description here](https://i.stack.imgur.com/ATixh.jpg) --- Edit: The first Google Scholar link now points to: Boosting algorithms as gradient descent in function space by Llew Mason, Jonathan Baxter, Peter Bartlett, and Marcus Frean (1999).
Normalized sigmoid loss function for boosting?
CC BY-SA 4.0
null
2011-02-15T14:22:27.763
2021-01-11T15:18:55.843
2021-01-11T15:18:55.843
22452
1351
[ "boosting", "loss-functions" ]
7247
2
null
7244
3
null
First, note that `dpois(130, 100)` will give you the probability of exactly 130 hits if you are assuming that the true rate is 100. That probability is indeed very low. However, in the usual hypothesis testing framework, what we calculate is the probability of the observed outcome or an even more extreme outcome. You can obtain this for the Poisson distribution with: ``` > ppois(129, lambda=100, lower.tail=FALSE) [1] 0.002282093 ``` So, there is a ~.2% probability of observing the 130 hits or even more hits if you are assuming a true rate of 100. By convention, if this value is below .025 (which it is), we would consider this finding "statistically significant" at $\alpha = .05$ (two-sided). What this means is that you are willing to take a 5% risk that your decision (calling the deviation statistically significant and rejecting the hypothesis that the true rate is 100 for that observation) is wrong. That is, if the true rate is indeed 100 for that day, then in 2.5% of the cases, the observed rate will in fact be 120 or larger (`qpois(.975, lambda=100)`) and in 2.5% of the cases, the observed rate will be 81 or lower (`qpois(.025, lambda=100)`). So, if you are using $\alpha = .05$, then in 5% of the cases, your decision will be false.
null
CC BY-SA 2.5
null
2011-02-15T14:24:58.077
2011-02-15T15:35:20.957
2011-02-15T15:35:20.957
1934
1934
null
7249
1
11361
null
7
1918
I was wondering if there is a free tool to build a decision tree in an interactive fashion, as in SAS Enterprise Miner. I'm used to working with Weka, but nothing fits my needs. I would like the program, before splitting each node, to ask the user which attribute to choose (maybe from a list of the "best" attributes). I saw that this is implemented in SAS. Should I write some code to get what I want? Thanks
Interactive decision trees
CC BY-SA 2.5
null
2011-02-15T15:52:09.717
2013-08-11T21:16:09.673
2011-02-15T17:21:45.207
null
2719
[ "sas", "cart", "weka" ]
7250
1
7273
null
10
9158
I'm having difficulty understanding one or two aspects of the cluster package. I'm following the example from [Quick-R](http://www.statmethods.net/advstats/cluster.html) closely, but don't understand one or two aspects of the analysis. I've included the code that I am using for this particular example. ``` ## Libraries library(stats) library(fpc) ## Data mydata = structure(list(a = c(461.4210925, 1549.524107, 936.42856, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 131.4349206, 0, 762.6110846, 3837.850406), b = c(19578.64174, 2233.308842, 4714.514274, 0, 2760.510002, 1225.392118, 3706.428246, 2693.353714, 2674.126613, 592.7384164, 1820.976961, 1318.654162, 1075.854792, 1211.248996, 1851.363623, 3245.540062, 1711.817955, 2127.285272, 2186.671242 ), c = c(1101.899095, 3.166506463, 0, 0, 0, 1130.890295, 0, 654.5054857, 100.9491289, 0, 0, 0, 0, 0, 789.091922, 0, 0, 0, 0), d = c(33184.53871, 11777.47447, 15961.71874, 10951.32402, 12840.14983, 13305.26424, 12193.16597, 14873.26461, 11129.10269, 11642.93146, 9684.238583, 15946.48195, 11025.08607, 11686.32213, 10608.82649, 8635.844964, 10837.96219, 10772.53223, 14844.76478), e = c(13252.50358, 2509.5037, 1418.364947, 2217.952853, 166.92007, 3585.488983, 1776.410835, 3445.14319, 1675.722506, 1902.396338, 945.5376228, 1205.456943, 2048.880329, 2883.497101, 1253.020175, 1507.442736, 0, 1686.548559, 5662.704559), f = c(44.24828759, 0, 485.9617601, 372.108855, 0, 509.4916263, 0, 0, 0, 212.9541122, 80.62920455, 0, 0, 30.16525587, 135.0501384, 68.38023073, 0, 21.9317122, 65.09052886), g = c(415.8909649, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 637.2629479, 0, 0, 0), h = c(583.2213618, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), i = c(68206.47387, 18072.97762, 23516.98828, 13541.38572, 15767.5799, 19756.52726, 17676.00505, 21666.267, 15579.90094, 14351.02033, 12531.38237, 18470.59306, 14149.82119, 15811.23348, 14637.35235, 13588.64291, 12549.78014, 15370.90886, 26597.08152)), .Names = c("a", "b", "c", "d", "e", "f", "g", "h", "i"), row.names = c(NA, -19L), class = "data.frame") ``` Then I standardize the variables: ``` # standardize variables mydata <- scale(mydata) ## K-means Clustering # Determine number of clusters wss <- (nrow(mydata)-1)*sum(apply(mydata,2,var)) for (i in 2:15) wss[i] <- sum(kmeans(mydata, centers=i)$withinss) # Q1 plot(1:15, wss, type="b", xlab="Number of Clusters", ylab="Within groups sum of squares") # K-Means Cluster Analysis fit <- kmeans(mydata, 3) # number of values in cluster solution # get cluster means aggregate(mydata,by=list(fit$cluster),FUN=mean) # append cluster assignment mydata <- data.frame(mydata, cluster = fit$cluster) # Cluster Plot against 1st 2 principal components - vary parameters for most readable graph clusplot(mydata, fit$cluster, color=TRUE, shade=TRUE, labels=0, lines=0) # Q2 # Centroid Plot against 1st 2 discriminant functions plotcluster(mydata, fit$cluster) ``` My question is, how can the plot which shows the number of clusters (marked `Q1` in my code) be related to the actual values (cluster number and variable name) ? Update: I now understand that the `clusplot()` function is a bivariate plot, with PCA1 and PCA2. However, I don't understand the link between the PCA components and the cluster groups. What is the relationship between the PCA values and the clustering groups? I've read elsewhere about the link between kmeans and PCA, but I still don't understand how they can be displayed on the same bivariate graph.
Using the stats package in R for kmeans clustering
CC BY-SA 2.5
null
2011-02-15T16:02:30.703
2016-02-16T08:46:41.250
2011-02-22T19:05:52.250
2635
2635
[ "r", "clustering" ]
7251
1
null
null
6
1081
Framework. Fix $\alpha\in ]0,1[$. Imagine you have $n$ $\alpha$-quantile forecast methodologies that give you, at time $t$, an estimation of the quantile of wind power for look-ahead time $t+h$. Formally, for $i=1,\dots,n$, you know how to produce the estimation $\hat{q}_{t+h|t}^{(i)}$ at time $t$ for look-ahead time $t+h$. Each methodology is based on a different modeling and estimation approach, and its performance can depend, for example, on the weather situation. Question. How do you construct a weighting scheme to combine the quantile estimations (say with a linear combination) that can adapt over time $t$? Formally, how do you best construct weights $\lambda_1(t,h),\dots,\lambda_n(t,h)$ such that $$\hat{q}_{t+h|t}=\sum_{i=1}^n \lambda_i(t,h) \hat{q}_{t+h|t}^{(i)}$$ is a very good quantile forecast? Side Note. For MSc students interested in proposing and elaborating their ideas with real data, I propose an internship on this subject for summer 2011 (see [here](http://www-cep.cma.fr/Public/recrutement/proposition_de_stage/stage_prevision_eol/); it's in French but I can translate for those interested).
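A minimal R sketch of one possible (certainly not the only) weighting scheme: weight each method by the inverse of its exponentially discounted pinball (quantile) loss on past forecasts. The function names and the discount factor are illustrative assumptions, not part of the question:

```r
pinball <- function(y, q, alpha) ifelse(y >= q, alpha * (y - q), (1 - alpha) * (q - y))

# qhat: T x n matrix of past quantile forecasts from the n methods; y: length-T realised values
combine_quantiles <- function(qhat, y, alpha, discount = 0.95) {
  n    <- ncol(qhat)
  loss <- sapply(1:n, function(i) pinball(y, qhat[, i], alpha))   # T x n losses
  w_time   <- discount^((length(y) - 1):0)                        # recent errors count more
  avg_loss <- colSums(w_time * loss) / sum(w_time)
  lambda   <- (1 / avg_loss) / sum(1 / avg_loss)                  # inverse-loss weights, sum to 1
  list(weights = lambda,
       combine = function(q_new) sum(lambda * q_new))             # apply to a new vector of n forecasts
}
```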
How to make a combination (aggregation) of quantile forecast?
CC BY-SA 2.5
null
2011-02-15T16:09:10.773
2017-05-11T17:39:13.173
2017-05-11T17:39:13.173
53690
223
[ "time-series", "forecasting", "quantiles", "aggregation", "forecast-combination" ]
7252
2
null
7236
0
null
I think no extra package is needed for the task; just use the basic `sample` function, sampling row indices rather than the data frame itself, e.g.: Get a sample from the first group: ``` rows <- which(data$"Care Type" == "Acute Care") samp <- data[sample(rows, size = 25), ] ``` Remove the chosen IDs from the original dataset (making a backup before that could be a good idea): ``` data <- data[!(data$pat_id %in% samp$pat_id), ] ``` Get a sample from the second group (replace "?" with its care type) in the rest of the dataset and concatenate it to the sample: ``` rows <- which(data$"Care Type" == "?") samp <- rbind(samp, data[sample(rows, size = 25), ]) ``` Repeat those two steps for each remaining segment. Sorry, not tested, but I think the point can be seen. And also: I am sure the above code could be improved and minified.
null
CC BY-SA 2.5
null
2011-02-15T16:12:24.807
2011-02-15T16:12:24.807
null
null
2714
null
7253
2
null
7249
6
null
Try the examples under dendrogram. You can make it as interactive as you want. ``` require(graphics); require(utils) hc <- hclust(dist(USArrests), "ave") (dend1 <- as.dendrogram(hc)) # "print()" method str(dend1) # "str()" method str(dend1, max = 2) # only the first two sub-levels op <- par(mfrow= c(2,2), mar = c(5,2,1,4)) plot(dend1) ## "triangle" type and show inner nodes: plot(dend1, nodePar=list(pch = c(1,NA), cex=0.8, lab.cex = 0.8), type = "t", center=TRUE) plot(dend1, edgePar=list(col = 1:2, lty = 2:3), dLeaf=1, edge.root = TRUE) plot(dend1, nodePar=list(pch = 2:1,cex=.4*2:1, col = 2:3), horiz=TRUE) ``` Edit 1 ==================================== The interactivity depends on what you want to do. It all comes down to the structure of the data that goes to `plot`. To make it easier to see what's going on, I'll only use the first 3 lines of data from the above example: ``` #Use only the first 3 lines from USArrests (df <- USArrests[1:3,]) #Perform the hc analysis (hcdf <- hclust(dist(df), "ave")) #Plot the results plot(hcdf) #Look at the names of hcdf names(hcdf) #Look at the structure of hcdf dput(hcdf) ``` The next segment is the output of the above `dput` statement. This structure tells `plot` how to draw the tree. ``` structure(list(merge = structure(c(-1L, -3L, -2L, 1L), .Dim = c(2L, 2L)), height = c(37.1770090243957, 54.8004107236398), order = c(3L, 1L, 2L), labels = c("Alabama", "Alaska", "Arizona"), method = "average", call = hclust(d = dist(df), method = "ave"), dist.method = "euclidean"), .Names = c("merge", "height", "order", "labels", "method", "call", "dist.method"), class = "hclust") ``` ![enter image description here](https://i.stack.imgur.com/UgagD.jpg) You can easily change the data and see what `plot` does. Just copy/paste the `structure` statement from your screen and assign it to a new variable, make your changes, and plot it. ``` newvar <- structure(list(merge = structure(c(-1L, -3L, -2L, 1L), .Dim = c(2L, 2L)), height = c(37.1770090243957, 54.8004107236398), order = c(3L, 1L, 2L), labels = c("Alabama", "Alaska", "Arizona"), method = "average", call = hclust(d = dist(df), method = "ave"), dist.method = "euclidean"), .Names = c("merge", "height", "order", "labels", "method", "call", "dist.method"), class = "hclust") plot(newvar) ``` As far as making the clustering more interactive, you'll have to explore the different methods and determine what you want to do. [http://cran.cnr.berkeley.edu/web/views/Cluster.html](http://cran.cnr.berkeley.edu/web/views/Cluster.html) [http://wiki.math.yorku.ca/index.php/R:_Cluster_analysis](http://wiki.math.yorku.ca/index.php/R:_Cluster_analysis) [http://www.statmethods.net/advstats/cluster.html](http://www.statmethods.net/advstats/cluster.html) [http://www.statmethods.net/advstats/cart.html](http://www.statmethods.net/advstats/cart.html)
null
CC BY-SA 3.0
null
2011-02-15T16:41:24.487
2011-05-31T19:42:30.857
2011-05-31T19:42:30.857
2775
2775
null
7255
2
null
4364
10
null
This is not a characterization but a conjecture, which dates back from 1917 and is due to Cantelli: > If $f$ is a positive function on $\mathbb{R}$ and $X$ and $Y$ are $N(0,1)$ independent random variables such that $X+f(X)Y$ is normal, then $f$ is a constant almost everywhere. Mentioned by Gérard Letac [here](https://mathoverflow.net/questions/37151/what-are-the-big-problems-in-probability-theory/39999#39999).
null
CC BY-SA 2.5
null
2011-02-15T17:43:50.737
2011-02-15T18:07:59.233
2017-04-13T12:58:32.177
-1
2592
null
7256
1
7283
null
8
2746
I am working on a binary classification problem. The data set is very large and highly imbalanced, and its dimensionality is also very high. I want to balance the data by under-sampling the majority class, and I also want to reduce the dimensionality by applying PCA, etc. So my question is: which one should be applied first, data sampling or dimensionality reduction? Please also give an argument in favor of your answer. Thanks in advance
Which one should be applied first: data sampling or dimensionality reduction?
CC BY-SA 2.5
null
2011-02-15T18:20:56.520
2011-02-16T20:14:40.750
2011-02-15T19:03:27.827
null
2534
[ "classification", "sampling", "dataset" ]
7257
1
null
null
14
2049
I'm trying to understand how Boltzmann machines work, but I'm not quite sure how weights are learned, and haven't been able to find a clear description. Is the following correct? (Pointers to any good Boltzmann machine explanations would also be great.) We have a set of visible units (e.g., corresponding to black/white pixels in an image) and a set of hidden units. Weights are initialized somehow (e.g., uniformly from [-0.5, 0.5]), and then we alternate between the following two phases until some stopping rule is reached: - Clamped phase - In this phase, all the values of the visible units are fixed, so we only update the states of the hidden units (according to the Boltzmann stochastic activation rule). We update until the network has reached equilibrium. Once we reach equilibrium, we continue updating $N$ more times (for some predefined $N$), keeping track of the average of $x_i x_j$ (where $x_i, x_j$ are the states of nodes $i$ and $j$). After those $N$ equilibrium updates, we update $w_{ij} = w_{ij} + \frac{1}{C} Average(x_i x_j)$, where $C$ is some learning rate. (Or, instead of doing a batch update at the end, do we update after each equilibrium step?) - Free phase - In this phase, the states of all units are updated. Once we reach equilibrium, we similarly continue updating $N'$ more times, but instead of adding correlations at the end, we subtract: $w_{ij} = w_{ij} - \frac{1}{C} Average(x_i x_j)$. So my main questions are: - Whenever we're in the clamped phase, do we reset the visible units to one of the patterns we want to learn (with some frequency that represents the importance of that pattern), or do we leave the visible units in the state they were in at the end of the free phase? - Do we do a batch update of the weights at the end of each phase, or update the weights at each equilibrium step in the phase? (Or, is either one fine?)
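For concreteness, a minimal R sketch of the two-phase update described above, under the batch-update reading (no biases, no annealing schedule; the function names, burn-in lengths and learning rate are illustrative assumptions, not a canonical implementation):

```r
sigmoid <- function(z) 1 / (1 + exp(-z))

# one Gibbs sweep over the units listed in update_idx,
# given a symmetric weight matrix W with zero diagonal
gibbs_sweep <- function(x, W, update_idx) {
  for (i in update_idx) {
    p <- sigmoid(sum(W[i, ] * x))   # Boltzmann stochastic activation rule
    x[i] <- rbinom(1, 1, p)
  }
  x
}

# average co-activation <x_i x_j> after a burn-in, collected over n_collect sweeps
mean_coactivation <- function(x, W, update_idx, n_burn = 50, n_collect = 20) {
  for (k in seq_len(n_burn)) x <- gibbs_sweep(x, W, update_idx)
  S <- matrix(0, length(x), length(x))
  for (k in seq_len(n_collect)) {
    x <- gibbs_sweep(x, W, update_idx)
    S <- S + outer(x, x)
  }
  S / n_collect
}

# one learning step for a single 0/1 training pattern v of visible states;
# W is the (n_vis + n_hidden) square symmetric weight matrix
bm_update <- function(W, v, n_hidden, lr = 0.05) {
  n_vis <- length(v)
  n     <- n_vis + n_hidden
  x <- c(v, rbinom(n_hidden, 1, 0.5))                              # clamp visibles, random hiddens
  clamped <- mean_coactivation(x, W, update_idx = (n_vis + 1):n)   # clamped phase: hiddens only
  free    <- mean_coactivation(x, W, update_idx = 1:n)             # free phase: all units
  W <- W + lr * (clamped - free)                                   # add clamped, subtract free stats
  diag(W) <- 0
  W
}
```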
Learning weights in a Boltzmann machine
CC BY-SA 2.5
null
2011-02-15T18:39:29.390
2015-01-07T14:19:29.467
2011-02-15T20:35:20.667
1106
1106
[ "neural-networks" ]
7258
1
null
null
5
160
As I have written in my question "[How much undersampling should be done?](https://stats.stackexchange.com/questions/7209/how-much-undersampling-should-be-done)", I want to predict defaults, where a default is per se really unlikely (average ~ 0.3 percent). My models are not affected by the unequal distribution: it's all about saving computing time. Undersampling the majority class to a ratio [defaulting/non-defaulting examples] of 1:1 is the same as expressing the belief that the examples are equally important in increasing the prediction quality. Does anyone know a reason why/when equal importance might not be the case? Is there literature on this specific topic (I could not find sampling literature that is modeling/computation-oriented)? Thanks a lot for your help!
Information content of examples and undersampling
CC BY-SA 2.5
null
2011-02-15T18:50:27.707
2011-03-10T09:26:59.513
2017-04-13T12:44:33.310
-1
2549
[ "sampling" ]
7259
1
7309
null
4
3845
I have a data-table that has about 26000 rows and about 35 columns. The columns are paired, so the values in columns 6 and 7 (for example) are related to each other, so are 8 and 9 and so on. There are 23 different types of annotations in the table, which I have read in as "factor". The ratio of these pairs of columns gives me a meaningful number, that I have to plot for each of the annotation. I was wondering if there is any way to have a lattice plot that will have say 15 boxplots in each panel, and 23 panels one for each annotation? UPDATE: Sample table. ``` structure(list(chromosome = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("chr1", "chr2", "chr3"), class = "factor"), start = c(1, 1, 1, 5663, 5726, 6360, 7548, 7619, 11027, 12158 ), end = c(5662, 7265, 5579133, 7265, 6331, 6755, 12710, 9274, 11556, 12994), strand = structure(c(1L, 1L, 3L, 1L, 1L, 1L, 3L, 3L, 1L, 3L), .Label = c("-", ".", "+"), class = "factor"), annotation = structure(c(4L, 13L, 8L, 2L, 13L, 18L, 18L, 13L, 12L, 13L), .Label = c("3'-UTR", "5'-UTR", "BLASTN_HIT", "CDS", "CDS_motif", "CDS_parts", "conflict", "Contig", "intron", "LTR", "misc_feature", "misc_RNA", "mRNA", "polyA_site", "promoter", "real_mRNA", "rep_origin", "repeat_region", "repeat_unit", "rRNA", "snoRNA", "snRNA", "tRNA"), class = "factor"), Abp1D.sense = c(274.043090077, 222.027002967, 273.083037487, 38.3559401569, 80.7384755736, 15.9496926371, 54.9087080745, 127.744117176, 11.7165833969, 96.1925577965), Abp1D.antisense = c(125.681512904, 151.232091139, 254.813202986, 241.034453038, 84.3769908653, 199.467664241, 54.1912835565, 94.2017362521, 66.5142677515, 63.28607875), Iki3D.sense = c(1214.1686727, 969.99693773, 261.416187303, 107.770848316, 151.518863438, 55.9449713698, 66.0800496533, 144.470307921, 21.9708783825, 52.6163190329), Iki3D.antisense = c(786.364743311, 728.647444388, 248.288893165, 523.636519401, 263.419180997, 351.558399018, 73.754086788, 130.973198864, 93.7873464478, 30.858803946), Iki3D.Rrp6D.sense = c(3068.90441567, 2486.4012139, 278.274812147, 428.928792511, 639.682546716, 134.968168726, 223.376134645, 491.4747595, 72.255001742, 201.429779476), Iki3D.Rrp6D.antisense = c(1928.37423684, 1764.06364622, 271.050084744, 1181.76403142, 1276.54960008, 990.571280057, 196.88970278, 398.206798139, 62.7937319455, 111.92795268), Rdp1D.sense = c(197.403527744, 168.849473212, 399.588620598, 68.0531849874, 128.833494553, 30.8082175235, 59.9086910765, 134.404417978, 24.2425410143, 85.4825519212), Rdp1D.antisense = c(86.097230688, 254.128565899, 388.725581635, 846.769716459, 82.1986385122, 281.872704472, 49.97022677, 77.2892621321, 44.6799202033, 1.60870068737), Wt.sense = c(150.835381912, 132.061554165, 607.58955888, 65.8027665102, 89.3919476073, 83.4968237124, 7.90112304898, 10.714546021, 5e-04, 5e-04), Wt.antisense = c(150.374084859, 131.8668254, 659.887826114, 65.7197527173, 45.4289405873, 40.4019469576, 7.40733410843, 8.83958796731, 43.5756796108, 12.3289419357), Rdp1D.Rrp6D.sense = c(278.940777843, 227.050371919, 266.352999304, 43.8265653895, 86.2348572529, 5.1007112686, 63.5315969071, 138.590379851, 17.1377883364, 47.2571674648 ), Rdp1D.Rrp6D.antisense = c(122.812370852, 165.478532861, 262.217884557, 315.685821866, 196.899101029, 181.217276367, 64.9492021228, 111.77461648, 62.2771817975, 20.3596716974 ), Dcr1D.sense = c(5e-04, 120.491414743, 1325.93762159, 546.346320658, 5e-04, 5e-04, 66.3486618734, 5e-04, 5e-04, 5e-04), Dcr1D.antisense = c(5e-04, 8346.5035927, 1479.42139464, 37845.8172699, 5e-04, 28845.1503745, 
1194.26663745, 5e-04, 647.428121154, 5e-04), Er1D.sense = c(387.657094655, 332.176880363, 570.413411676, 136.333361806, 228.023187499, 5e-04, 24.0778502632, 62.6341480521, 32.1717485621, 5e-04 ), Er1D.antisense = c(382.664804454, 343.714717963, 618.13806355, 205.325286003, 162.81296098, 145.575708252, 15.3360737154, 30.5382985528, 5e-04, 13.8803856753), Rrp6D.sense = c(716.001844534, 605.02996247, 444.912126049, 213.265421331, 398.7252034, 73.8307932225, 90.5802807096, 172.093792998, 5e-04, 135.365316918 ), Rrp6D.antisense = c(690.534019176, 592.944889017, 409.413915909, 247.869927895, 160.655498164, 371.504850116, 56.7600331059, 119.421944835, 16.7787329876, 20.0208426702), Mlo3D.Ago1D.sense = c(119.466474712, 329.741829677, 993.941348153, 1072.99933641, 5e-04, 377.539482989, 113.878508361, 50.428609435, 5e-04, 5e-04), Mlo3D.Ago1D.antisense = c(120.543892198, 2711.8968975, 1257.1652648, 11870.674213, 125.725150183, 8902.64920707, 206.72008398, 37.8215820763, 5e-04, 5e-04), Ago1D.Clr3D.sense = c(184.712264891, 179.831117561, 444.487152139, 162.69482267, 202.293495599, 5.61159966339, 63.6233691066, 90.544306737, 5e-04, 170.284591079), Ago1D.Clr3D.antisense = c(57.5740294693, 67.5638155026, 386.644572497, 102.906975334, 79.4664091704, 2.1204925561, 14.4184581702, 35.3125846275, 5e-04, 5e-04), Dcr1D.Rrp6D.sense = c(45.8846113251, 63.7325750806, 360.192351832, 126.841847799, 277.614908589, 54.2822292313, 33.9452752392, 83.1313557186, 5e-04, 12.8242338794), Dcr1D.Rrp6D.antisense = c(19.3160147626, 55.5834301591, 363.594792664, 183.776577157, 18.3768674716, 322.564097746, 17.907465048, 33.1088927537, 5e-04, 5e-04), Ago1D.sense = c(29.0628360487, 31.9691923002, 387.82120669, 42.2593617334, 64.0004397647, 68.0567121551, 65.0088334947, 189.345502766, 5e-04, 26.5639424914), Ago1D.antisense = c(10.918535798, 84.6095118936, 373.635073395, 345.064708329, 40.1150042497, 266.756186351, 4.38085691952, 5e-04, 5e-04, 5e-04), Mlo3D.sense = c(2798.34040679, 2353.07409522, 330.364494647, 781.101862885, 1312.81871554, 376.811874795, 124.564566466, 353.76677093, 5e-04, 31.5118039429 ), Mlo3D.antisense = c(2532.2553647, 2248.78653802, 292.881120203, 1246.84984213, 1981.14439149, 564.070923014, 164.753382721, 449.669663275, 5e-04, 5e-04), Ago1D.Rrp6D.sense = c(86.379996345, 90.4014346003, 468.105009795, 104.668452639, 203.155350014, 62.3955638527, 44.5603393841, 84.3076975857, 16.0419716595, 42.5345756816), Ago1D.Rrp6D.antisense = c(45.0506816078, 80.7182081997, 481.700138654, 206.646370214, 67.1332741403, 129.669542952, 23.7209335341, 26.0270063646, 28.9823086155, 16.4901597751)), .Names = c("chromosome", "start", "end", "strand", "annotation", "Abp1D.sense", "Abp1D.antisense", "Iki3D.sense", "Iki3D.antisense", "Iki3D.Rrp6D.sense", "Iki3D.Rrp6D.antisense", "Rdp1D.sense", "Rdp1D.antisense", "Wt.sense", "Wt.antisense", "Rdp1D.Rrp6D.sense", "Rdp1D.Rrp6D.antisense", "Dcr1D.sense", "Dcr1D.antisense", "Er1D.sense", "Er1D.antisense", "Rrp6D.sense", "Rrp6D.antisense", "Mlo3D.Ago1D.sense", "Mlo3D.Ago1D.antisense", "Ago1D.Clr3D.sense", "Ago1D.Clr3D.antisense", "Dcr1D.Rrp6D.sense", "Dcr1D.Rrp6D.antisense", "Ago1D.sense", "Ago1D.antisense", "Mlo3D.sense", "Mlo3D.antisense", "Ago1D.Rrp6D.sense", "Ago1D.Rrp6D.antisense" ), row.names = c(NA, 10L), class = "data.frame") ``` The question asked above is when you have a `data.frame` with all the data. What if I now want to create a `list` so that each entry in the list is actually a `data.frame` with a structure similar to one given above. 
How do I combine the boxplots in lattice? Does `ggplot2` have a solution for this? Can someone guide me to such a solution?
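A minimal sketch of one way to do this with `ggplot2` (the `mydata` name, the column positions, and the sense/antisense pairing are assumptions based on the sample table above; `lattice::bwplot` with a conditioning formula would work similarly):

```r
library(reshape2)
library(ggplot2)

# ratio of each sense column to the antisense column that follows it
sense_cols <- seq(6, ncol(mydata) - 1, by = 2)
ratios <- as.data.frame(
  sapply(sense_cols, function(s) mydata[[s]] / mydata[[s + 1]])
)
names(ratios) <- sub("\\.sense$", "", names(mydata)[sense_cols])
ratios$annotation <- mydata$annotation

# long format: one row per (annotation, column pair, ratio)
long <- melt(ratios, id.vars = "annotation",
             variable.name = "pair", value.name = "ratio")

# one panel per annotation, one boxplot per column pair within each panel
ggplot(long, aes(x = pair, y = ratio)) +
  geom_boxplot() +
  facet_wrap(~ annotation, scales = "free_y") +
  coord_flip()
```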
Combine multiple boxplots in a lattice
CC BY-SA 2.5
null
2011-02-15T18:54:10.333
2011-02-22T07:57:34.973
2011-02-22T07:57:34.973
2116
3263
[ "r", "data-visualization", "boxplot" ]
7260
2
null
7256
5
null
Do the dimensionality reduction first: Your error in estimating the principal components will be smaller due to the larger sample (your Corr/Cov-matrix used in PCA has to be estimated!). The other way around only makes sense for computational reasons.
null
CC BY-SA 2.5
null
2011-02-15T18:55:09.617
2011-02-15T18:55:09.617
null
null
2549
null
7261
1
7262
null
26
29480
My question relates mostly around the practical differences between General Linear Modeling (GLM) and Generalized Linear Modelling (GZLM). In my case it would be a few continuous variables as covariates and a few factors in an ANCOVA, versus GZLM. I want to examine the main effects of each variable, as well as one three-way interaction that I will outline in the model. I can see this hypothesis being tested in an ANCOVA, or using GZLM. To some extent I understand the math processes and reasoning behind running a General Linear Model like an ANCOVA, and I somewhat understand that GZLMs allow for a link function connecting the linear model and the dependent variable (ok, I lied, maybe I don't really understand the math). What I really don't understand are the practical differences or reasons for running one analysis and not the other when the probability distribution used in the GZLM is normal (i.e., identity link function?). I get very different results when I run one over the other. Could I run either? My data is somewhat non-normal, but works to some extent both in the ANCOVA and the GZLM. In both cases my hypothesis is supported, but in the GZLM the p value is "better". My thought was that an ANCOVA is a linear model with a normally distributed dependent variable using an identity link function, which is exactly what I can input in a GZLM, but these are still different. Please shed some light on these questions for me, if you can! --- Based on the first answer I have the additional question: If they are identical except for the significance test that it utilized (i.e., F test vs. Wald Chi Square), which would be most appropriate to use? ANCOVA is the "go-to method", but I am unsure why the F test would be preferable. Can someone shed some light on this question for me?
General Linear Model vs. Generalized Linear Model (with an identity link function?)
CC BY-SA 4.0
null
2011-02-15T19:28:22.333
2020-06-28T15:49:34.997
2020-06-28T15:49:34.997
154402
3262
[ "generalized-linear-model", "modeling", "linear-model" ]
7262
2
null
7261
24
null
A generalized linear model specifying an identity link function and a normal family distribution is exactly equivalent to a (general) linear model. If you're getting noticeably different results from each, you're doing something wrong. Note that specifying an identity link is not the same thing as specifying a normal distribution. The distribution and the link function are two different components of the generalized linear model, and each can be chosen independently of the other (although certain links work better with certain distributions, so most software packages specify the choice of links allowed for each distribution). Some software packages may report noticeably different $p$-values when the residual degrees of freedom are small if it calculates these using the asymptotic normal and chi-square distributions for all generalized linear models. All software will report $p$-values based on Student's $t$- and Fisher's $F$-distributions for general linear models, as these are more accurate for small residual degrees of freedom as they do not rely on asymptotics. Student's $t$- and Fisher's $F$-distributions are strictly valid for the normal family only, although some other software for generalized linear models may also use these as approximations when fitting other families with a scale parameter that is estimated from the data.
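A quick R sketch illustrating the equivalence on simulated data (the only differences you should see are in how the tests are reported, not in the estimates):

```r
set.seed(123)
x1 <- rnorm(100); x2 <- rnorm(100)
g  <- factor(sample(c("a", "b"), 100, replace = TRUE))
y  <- 1 + 2 * x1 - 0.5 * x2 + (g == "b") + rnorm(100)

fit_lm  <- lm(y ~ x1 * x2 * g)                                     # general linear model (ANCOVA-style)
fit_glm <- glm(y ~ x1 * x2 * g, family = gaussian(link = "identity"))

all.equal(coef(fit_lm), coef(fit_glm))   # identical point estimates
anova(fit_lm)                            # F tests from the linear model
anova(fit_glm, test = "F")               # the same F tests from the GLM fit
```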
null
CC BY-SA 2.5
null
2011-02-15T20:10:32.780
2011-02-18T10:23:28.347
2011-02-18T10:23:28.347
449
449
null
7263
1
7269
null
16
7284
I am writing about using a 'joint probability distribution' for an audience that would be more likely to understand 'multivariate distribution', so I am considering using the latter. However, I do not want to lose meaning while doing this. [Wikipedia](http://en.wikipedia.org/wiki/Joint_probability_distribution) seems to indicate that these are synonyms. Are they? If not, why not?
Difference between the terms 'joint distribution' and 'multivariate distribution'?
CC BY-SA 2.5
null
2011-02-15T20:21:36.447
2017-12-01T13:01:50.277
2013-07-12T13:03:29.237
22468
1381
[ "probability", "terminology", "joint-distribution", "definition" ]
7264
2
null
7239
4
null
Disclaimer: I still don't fully understand your model, but without at least a reproducible example, this is the best I can offer. It is not clear exactly what you are doing here. For example, how are `pvr` and `pir` calculated? Would it make sense to calculate them inside the same model? Answer: I am assuming that your data includes observations for `mu[]` but not `pmu[]` and you want to estimate `pmu[j]` given `j` values of `pvr` and `pir`. Append `pir` and `pvr` to the `ir` and `vr` columns, get rid of the second for loop, and then consider the values of `mu[]` estimated using `pir` and `pvr` to be the posterior predictive estimates of `mu`. Then replace the two `for` loops with this: ``` for (i in 1:(length(ri)+length(pri))){ ri[i] ~ dnorm(mu[i],tau) mu[i] <- alpha + b.vr*vr[i] + b.ir*ir[i] } ``` I have done something similar, but without predicted regressors, similar to the example given by [Gelman et al. in 'Bayesian Data Analysis'](http://www.stat.columbia.edu/~gelman/bugsR/software.pdf) (pp. 598-599, starting under posterior predictive simulations).
null
CC BY-SA 2.5
null
2011-02-15T20:30:35.940
2011-02-18T21:58:49.337
2011-02-18T21:58:49.337
1381
1381
null
7265
2
null
7263
2
null
I'd be inclined to say that "multivariate" describes the random variable, i.e., it is a vector, and that the components of a multivariate random variable have a joint distribution. "Multivariate random variable" sounds a bit strange, though; I'd call it a random vector.
null
CC BY-SA 2.5
null
2011-02-15T20:39:58.100
2011-02-15T20:39:58.100
null
null
401
null
7266
2
null
7263
0
null
I think they are mostly synonyms, and that if there is any difference, it lies in details that are likely irrelevant to your audience.
null
CC BY-SA 2.5
null
2011-02-15T20:45:55.713
2011-02-15T20:45:55.713
null
null
2044
null
7267
2
null
7263
1
null
The [canonical textbooks describing properties of the various probability distributions by Johnson & Kotz](http://www.google.com/search?hl=en&tbs=bks%3A1&q=inauthor%3Ajohnson+inauthor%3Akotz+intitle%3Adistributions) and later co-authors are entitled Univariate Discrete Distributions, Continuous Univariate Distributions, Continuous Multivariate Distributions and Discrete Multivariate Distributions. So I think you're on safe ground describing a distribution as 'multivariate' rather than 'joint'. Conflict of interest statement: The author is a member of [Wikipedia:WikiProject Statistics](http://en.wikipedia.org/wiki/Wikipedia%3aWikiProject_Statistics).
null
CC BY-SA 2.5
null
2011-02-15T20:56:41.460
2011-02-15T20:56:41.460
null
null
449
null
7268
1
7306
null
15
42334
How would you get hourly means for multiple data columns, for a daily period, and show results for twelve "Hosts" in the same graph? That is, I'd like to graph what a 24 hour period looks like, for a weeks worth of data. The eventual goal would be to compare two sets of this data, before and after samplings. ``` dates Hos CPUIOWait CPUUser CPUSys 1 2011-02-11 23:55:12 db 0 14 8 2 2011-02-11 23:55:10 app1 0 6 1 3 2011-02-11 23:55:09 app2 0 4 1 ``` I've been able to run `xyplot(CPUUser ~ dates | Host)` with good effect. However, rather than showing each date in the week, I'd like the X axis to be the hours of the day. Trying to get this data into an xts object results in errors such as: > "order.by requires an appropriate time-based object" Here is a `str()` of the data frame: ``` 'data.frame': 19720 obs. of 5 variables: $ dates : POSIXct, format: "2011-02-11 23:55:12" "2011-02-11 23:55:10" ... $ Host : Factor w/ 14 levels "app1","app2",..: 9 7 5 4 3 10 6 8 2 1 ... $ CPUIOWait: int 0 0 0 0 0 0 0 0 0 0 ... $ CPUUser : int 14 6 4 4 3 10 4 3 4 4 ... $ CPUSys : int 8 1 1 1 1 3 1 1 1 1 ... ``` UPDATE: Just for future reference, I decided to go with a boxplot, to show both the median, and the 'outliers'. Essentially: ``` Data$hour <- as.POSIXlt(dates)$hour # extract hour of the day boxplot(Data$CPUUser ~ Data$hour) # for a subset with one host or for all hosts xyplot(Data$CPUUser ~ Data$hour | Data$Host, panel=panel.bwplot, horizontal=FALSE) ```
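A minimal sketch of the hourly-mean aggregation itself, using the column names from the `str()` output above (whether you then plot means or boxplots is a separate choice):

```r
library(lattice)

# hour of day for every observation
Data$hour <- as.POSIXlt(Data$dates)$hour

# mean of each CPU column per host and hour of day, pooled over the week
hourly <- aggregate(cbind(CPUIOWait, CPUUser, CPUSys) ~ Host + hour,
                    data = Data, FUN = mean)

# one panel per host, hour of day on the x axis
xyplot(CPUUser ~ hour | Host, data = hourly, type = "b",
       xlab = "Hour of day", ylab = "Mean CPUUser")
```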
How to aggregate by minute data for a week into hourly means?
CC BY-SA 4.0
null
2011-02-15T21:00:59.150
2020-05-01T09:42:41.837
2020-05-01T09:42:41.837
18417
2770
[ "r", "time-series", "aggregation" ]
7269
2
null
7263
14
null
The terms are basically synonyms, but the usages are slightly different. Think about the univariate case: you may talk about "distributions" in general, you might more specifically refer to "univariate distributions", and you refer to "the distribution of $X$". You don't normally say "the univariate distribution of $X$". Similarly, in the multivariate case you may talk about "distributions" in general, you might more specifically refer to "multivariate distribution", and you refer to "the distribution of $(X,Y)$" or "the joint distribution of $X$ and $Y$". Thus the joint distribution of $X$ and $Y$ is a multivariate distribution, but you don't normally say "the multivariate distribution of $(X,Y)$" or "the multivariate distribution of $X$ and $Y$".
null
CC BY-SA 2.5
null
2011-02-15T21:05:16.413
2011-02-15T21:43:16.040
2011-02-15T21:43:16.040
89
89
null
7270
1
7274
null
9
3230
I have a CSV file with 4 million edges of a directed network representing people communicating with each other (e.g. John sends a message to Mary, Mary sends a message to Ann, John sends another message to Mary, etc.). I would like to do two things: - Find degree, betweenness and (maybe) eigenvector centrality measures for each person. - Get a visualization of the network. I would like to do this on the command line on a Linux server, since my laptop does not have much power. I have R installed on that server, along with the statnet library. I found [this 2009 post](http://www.cybaea.net/Blogs/Data/SNA-with-R-Loading-your-network-data.html) of someone more competent than me trying to do the same thing and having problems with it. So I was wondering if anyone else has any pointers on how to do this, preferably taking me step by step, since I only know how to load the CSV file and nothing else. Just to give you an idea, this is what my CSV file looks like: ``` $ head comments.csv "src","dest" "6493","139" "406705","369798" $ wc -l comments.csv 4210369 comments.csv ```
How to calculate centrality measures in a 4 million edge network using R?
CC BY-SA 2.5
null
2011-02-15T22:11:35.660
2011-10-08T21:14:42.427
2011-02-16T16:25:07.737
1762
1762
[ "r", "data-visualization", "networks" ]
7271
1
7284
null
6
2840
Hellwig's method is a method for selecting variables in a linear model. It is widely used in Poland, and probably only in Poland, because it is really hard to find it in any scientific paper written in English. Description of the method: $m_{k}$ - set of variables in the k'th combination (there are $2^{p}-1$ combinations, where p is the number of variables) $r_{j}$ - correlation between $Y$ and $X_{j}$ $r_{ij}$ - correlation between $X_{i}$ and $X_{j}$ $H_{k}=\sum\limits_{j \in m_{k}}\frac{r_{j}^2}{\sum\limits_{i \in m_{k}}|r_{ij}|}$ Choose the combination of variables with the highest $H_{k}$. Question Is this method used anywhere outside Poland? Does it have any scientific background? It seems to be based only on the intuition that variables in a model should be highly correlated with $Y$ and poorly correlated with each other.
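For reference, a brute-force R sketch of the criterion exactly as written above (fine for small $p$, since it enumerates all $2^p-1$ subsets):

```r
hellwig <- function(y, X) {
  r <- as.vector(cor(X, y))     # r_j: correlation of each predictor with Y
  R <- abs(cor(X))              # |r_ij|
  p <- ncol(X)
  combos <- unlist(lapply(1:p, function(k) combn(p, k, simplify = FALSE)),
                   recursive = FALSE)
  H <- sapply(combos, function(m) {
    Rm <- R[m, m, drop = FALSE]
    sum(r[m]^2 / colSums(Rm))   # denominator: sum of |r_ij| over i in the subset
  })
  list(best = colnames(X)[combos[[which.max(H)]]], H = max(H))
}
```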
Hellwig's method of selection of variables
CC BY-SA 2.5
null
2011-02-15T22:30:47.803
2011-02-16T08:49:24.693
null
null
1643
[ "model-selection" ]
7272
2
null
7256
1
null
Devil's advocate: I could imagine the principal components differing depending on who's sampled. I'd think this validity issue would take precedence over the precision issue Richard points out.
null
CC BY-SA 2.5
null
2011-02-15T23:43:27.807
2011-02-15T23:43:27.807
null
null
2669
null
7273
2
null
7250
8
null
I did not grasp question 1 completely, but I'll attempt an answer. The plot of Q1 shows how the within-groups sum of squares (WSS) changes as the number of clusters changes. In this kind of plot you must look for kinks in the graph; a kink at 5 indicates that it is a good idea to use 5 clusters. WSS has a relationship with your variables in the following sense: the formula for WSS is $\sum_{j} \sum_{x_i \in C_j} ||x_i - \mu_j||^2$ where $\mu_j$ is the mean point for cluster $j$ and $x_i$ is the $i$-th observation. We denote cluster $j$ as $C_j$. WSS is sometimes interpreted as "how similar are the points inside each cluster". This similarity refers to the variables. The answer to question 2 is this. What you are actually seeing in `clusplot()` is a plot of your observations in the principal plane. What this function does is calculate the principal component scores for each of your observations, plot those scores and color them by cluster. Principal component analysis (PCA) is a dimension reduction technique; it "summarizes" the information of all variables into a couple of "new" variables called components. Each component is responsible for explaining a certain percentage of the total variability. In the example you read "These two components explain 73.95% of the total variability". The function `clusplot()` is used to assess the effectiveness of the clustering. If the clustering is successful, you will see that the clusters are clearly separated in the principal plane; on the other hand, you will see the clusters merged in the principal plane when the clustering is unsuccessful. For further reference on principal component analysis you may read the Wikipedia article. If you want a book, I suggest Modern Multivariate Techniques by Izenmann; there you will find PCA and k-means. Hope this helps :)
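To make the WSS/variable link concrete, here is a small R sketch that recomputes the within-groups sum of squares by hand and checks it against what `kmeans` reports (assuming `mydata` is the scaled matrix from the question):

```r
set.seed(1)
fit <- kmeans(mydata, centers = 3)

# within-cluster sum of squared distances to each cluster centre, computed by hand
wss_by_hand <- sum(sapply(seq_len(nrow(fit$centers)), function(j) {
  pts <- mydata[fit$cluster == j, , drop = FALSE]
  sum(sweep(pts, 2, fit$centers[j, ])^2)   # subtract centre j from every row, square, sum
}))

all.equal(wss_by_hand, sum(fit$withinss))  # should be TRUE
```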
null
CC BY-SA 2.5
null
2011-02-15T23:48:05.417
2011-02-15T23:48:05.417
null
null
2902
null
7274
2
null
7270
7
null
What you have is an edge list, which can be converted to a network object using the network library. Here is an example using fictitious data. ``` library(network) src <- c("A", "B", "C", "D", "E", "B", "A", "F") dst <- c("B", "E", "A", "B", "B", "A", "F", "A") edges <- cbind(src, dst) Net <- as.network(edges, matrix.type = "edgelist") summary(Net) plot(Net) ``` However, a warning is in order: you have a very large network and I am not sure a plot will be all that informative. It will probably look like a big ball of yarn. I am also not sure how well these libraries deal with such large datasets. I suggest you take a look at the documentation for the network, statnet, and ergm libraries. The Journal of Statistical Software (v24/3) offers several articles covering these libraries. The issue can be found here: [http://www.jstatsoft.org/v24](http://www.jstatsoft.org/v24)
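If the `network`/`statnet` route struggles at this size, an alternative sketch with the `igraph` package (the centrality calls are standard, but betweenness on 4 million edges will still be slow and memory-hungry):

```r
library(igraph)

edges <- read.csv("comments.csv", colClasses = "character")
g <- graph_from_data_frame(edges, directed = TRUE)

deg <- degree(g, mode = "all")                      # in + out degree per person
btw <- betweenness(g, directed = TRUE)              # expensive on a graph this large
eig <- eigen_centrality(g, directed = TRUE)$vector  # eigenvector centrality

head(sort(deg, decreasing = TRUE))                  # most connected people
```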
null
CC BY-SA 2.5
null
2011-02-16T00:40:01.187
2011-02-16T00:40:01.187
null
null
3265
null
7276
2
null
4999
2
null
I believe that in the specific case of L2 loss (ordinary linear regression), the convergence rate of coordinate descent will depend on the correlation structure of the predictors ($X_i$’s). Consider the case where they are uncorrelated. Then cyclic coordinate descent converges after one cycle. Another heuristic that has had more empirical evidence in its favor is the idea of active set convergence. Rather than cycling through all coordinates, only cycle through the ones that are active ($i$’s where $\beta_i$ is non-zero) until convergence, then sweep through all the coordinates to update the active set. Convergence occurs when the active set does not change.
null
CC BY-SA 2.5
null
2011-02-16T02:07:31.860
2011-02-16T02:07:31.860
null
null
1670
null
7277
2
null
7261
5
null
I would like to include my experience in this discussion. I have seen that a generalized linear model (specifying an identity link function and a normal family distribution) is identical to a general linear model only when you use the maximum likelihood estimate as the scale parameter method. Otherwise, if "fixed value = 1" is chosen as the scale parameter method, you get very different p values. My experience suggests that "fixed value = 1" should usually be avoided. I'm curious to know if someone knows when it is appropriate to choose "fixed value = 1" as the scale parameter method. Thanks in advance. Mark
null
CC BY-SA 2.5
null
2011-02-16T03:22:22.480
2011-02-16T03:22:22.480
null
null
null
null
7278
1
7280
null
8
35909
I'm tasked with deriving the MGF of a $\chi^2$ random variable. I think the way to do it is by using the fact that $\Sigma_{j=1}^{m} Z^2_j$ is a $\chi^2$ R.V. and that the MGF of a sum is the product of the MGFs of the individual terms. Although that may not be right, and it may be the direct $E(e^{tX})$ way. I don't really need it solved; I just need to get down the track a little further.
Finding the Moment Generating Function of chi-squared distribution
CC BY-SA 4.0
null
2011-02-16T04:02:25.177
2021-02-05T12:52:40.250
2021-02-05T12:52:40.250
11887
2387
[ "mathematical-statistics", "moments", "moment-generating-function", "chi-squared-distribution" ]
7279
1
null
null
18
21889
What is a paired t-test, and under which circumstances should I use a paired t-test? Is there any difference between a paired t-test and a pairwise t-test?
Is there any difference between the terms "paired t-test" and "pairwise t-test"?
CC BY-SA 3.0
null
2011-02-16T04:34:46.397
2017-06-29T04:48:37.377
2011-09-23T05:43:12.140
183
3269
[ "hypothesis-testing", "anova", "t-test" ]
7280
2
null
7278
9
null
Yes, since $\chi^2$ is a sum of the $Z_i^2$, its MGF is the product of the MGFs of the individual summands. But then you need the MGF of $Z_i^2$, which is $\chi^2$ with 1 degree of freedom. The obvious way of calculating the MGF of $\chi^2$ is by integrating. It is not that hard: $$Ee^{tX}=\frac{1}{2^{k/2}\Gamma(k/2)}\int_0^\infty x^{k/2-1}e^{-x(1/2-t)}dx$$ Now do the change of variables $y=x(1/2-t)$, then note that you get a [Gamma](http://en.wikipedia.org/wiki/Gamma_function) function and the result is yours. If you want deeper insights (if there are any) try asking at [http://math.stackexchange.com](http://math.stackexchange.com).
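Carrying out that substitution (valid for $t<1/2$) gives the standard result, sketched here for completeness: $$Ee^{tX}=\frac{1}{2^{k/2}\Gamma(k/2)}\int_0^\infty \Big(\frac{y}{1/2-t}\Big)^{k/2-1}e^{-y}\,\frac{dy}{1/2-t}=\frac{(1/2-t)^{-k/2}}{2^{k/2}\Gamma(k/2)}\,\Gamma(k/2)=(1-2t)^{-k/2},\qquad t<\tfrac12.$$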
null
CC BY-SA 2.5
null
2011-02-16T05:46:50.920
2011-02-16T16:58:31.657
2011-02-16T16:58:31.657
279
2116
null
7281
2
null
7279
15
null
Roughly, a [paired t-test](http://en.wikipedia.org/wiki/Paired_difference_test) is a t-test in which each subject is compared with itself; in [other words](http://mathworld.wolfram.com/Pairedt-Test.html), it determines whether the paired measurements differ from each other in a significant way, under the assumption that the paired differences are independent and identically normally distributed. A pairwise t-test, on the other hand, is a function in R which performs all possible pairwise comparisons. See [this](https://stat.ethz.ch/pipermail/r-help/2004-August/056136.html) discussion for more information.
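A small R sketch contrasting the two (simulated before/after scores; the Bonferroni adjustment is just one possible choice):

```r
set.seed(7)
before <- rnorm(20, mean = 100, sd = 15)
after  <- before + rnorm(20, mean = 5, sd = 10)

# paired t-test: each subject compared with itself
t.test(after, before, paired = TRUE)

# pairwise.t.test: all pairwise comparisons between groups of one variable
score <- c(before, after)
group <- rep(c("before", "after"), each = 20)
pairwise.t.test(score, group, p.adjust.method = "bonferroni")
```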
null
CC BY-SA 2.5
null
2011-02-16T05:57:30.550
2011-02-16T05:57:30.550
null
null
1496
null
7282
2
null
7270
3
null
I don't think that R is the first choice here (maybe I'm wrong). You will need huge arrays to index and prepare your network files in the appropriate data format. First of all, I would try Jure's (Rob mentions him in the post above) [SNAP](http://goo.gl/L4jSO) library; it's written in C++ and works very well on large networks.
null
CC BY-SA 2.5
null
2011-02-16T06:11:07.573
2011-02-16T06:11:07.573
null
null
609
null
7283
2
null
7256
4
null
Generally, you want your training and validation data sets to be as separate as possible. Ideally, the validation data would have been obtained only after the model has been trained. If you perform dimensionality reduction before splitting your data into separate sets, you break this isolation between training and validation, and you won't be sure whether the dimensionality reduction process was over-fitted until your model is tested in real life. Having said that, there are cases where efficient separation into training, testing and validation sets is not feasible and other sampling techniques, such as cross-validation, leave-k-out, etc., are used. In these cases reducing the dimensionality before the sampling might be the right approach.
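A minimal R sketch of the isolation idea: fit the PCA on the training rows only, then project the held-out rows with the same rotation (simulated data and an arbitrary split, purely for illustration):

```r
set.seed(42)
X <- matrix(rnorm(500 * 50), nrow = 500)   # 500 cases, 50 features
train_idx <- sample(500, 400)

pca <- prcomp(X[train_idx, ], center = TRUE, scale. = TRUE)          # fitted on training data only
train_scores <- pca$x[, 1:10]
test_scores  <- predict(pca, newdata = X[-train_idx, ])[, 1:10]      # same rotation applied to held-out rows

# any under-sampling / model fitting then happens on train_scores,
# and the model is evaluated on test_scores
```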
null
CC BY-SA 2.5
null
2011-02-16T06:21:29.233
2011-02-16T06:21:29.233
null
null
1496
null
7284
2
null
7271
5
null
After spending too long on web research, I'm pretty sure the source of 'Hellwig's method' is: Hellwig, Zdzisław. [On the optimal choice of predictors.](http://www.worldcat.org/title/on-the-optimal-choice-of-predictors/oclc/217223232) Study VI in Z. Gostkowski (ed.): Toward a system of quantitative indicators of components of human resources development; Paris: UNESCO, 1968; 23 pages. [[pdf](http://unesdoc.unesco.org/images/0015/001585/158559eb.pdf)] [Google Scholar finds 3 papers that have cited it](http://scholar.google.com/scholar?cites=16684078648545295641&as_sdt=2005&sciodt=0,5&hl=en). None of them appear particularly noteworthy. So I think the answer to your first question is 'No'. As for your second question, I'll leave you to study the paper as i've spent far too long on this already. But from a skim, it appears the motivation behind his method was to avoid calculations that were very tedious without an electronic computer: "... generally speaking one has to compute $2^n-1$ times the inverse matrices, which is of course an extremely dull perspective. The method we are going to present in this paper does not require finding inverse matrices." (p3-4). --- Biographical note: A little further googling reveals Zdzisław Hellwig was born on 26 May 1925 in [Dobrzyca](http://en.wikipedia.org/wiki/Dobrzyca), Poland, and was for many years professor of statistics at the [Wrocław University of Economics](http://www.ue.wroc.pl/english/). There was [a scientific meeting to honor his 85th birthday in November 2010](https://secure.imstat.org/meetings/2010.htm).
null
CC BY-SA 2.5
null
2011-02-16T08:49:24.693
2011-02-16T08:49:24.693
null
null
449
null
7285
1
7303
null
10
447
### My Aim: I'd like to have a function that takes an email address and outputs a quasi-random number of 1, 2, 3, or 4. ### A little detail: By quasi-random number I mean that, given a typical population of email addresses, the probabilities of getting a value of 1, 2, 3, or 4 are roughly equal, and that obvious systematic properties of the email address, such as the domain name, do not affect the probability of getting a value of 1, 2, 3, or 4. ### A little background: I have an online experiment written in [inquisit](http://www.millisecond.com/support/docs/v1/index.htm) where participants log in on two occasions. I want to randomly assign participants to one of four groups. While this is easy to do for one session (I can just use a random number generator), I need some way of remembering the allocation across sessions. Thus, I thought that I could extract a quasi-random group allocation from the participant's email address. I'm also limited in the set of functions that I have at my disposal ([see here for the full list](http://www.millisecond.com/support/docs/v3/html/language/expressions/functions.htm)). The string functions are: tolower toupper capitalize concat search replaceall contains startswith endswith substring trim trimright trimleft length format evaluate ### Initial Thoughts: I thought about trying to extract a set of features of the email address that returned a value of 1, 2, 3, or 4 with roughly equal probabilities. Then, I could sum these properties and take the mod 4 plus 1 of that. Thus, assuming something like the central limit theorem, I might get close. Possible features that came to my mind: - length of string - position of first "a", "b", etc.
From an email address to a quasi-random number
CC BY-SA 2.5
null
2011-02-16T08:54:55.157
2018-02-15T19:26:23.593
2018-02-14T22:53:43.290
11887
183
[ "algorithms", "random-generation" ]
7286
1
7294
null
15
3290
Random walk Metropolis-Hasitings with symmetric proposal $q(x|y)= g(|y-x|)$ has the property that the acceptance probability $$P(accept\ y) = \min\{1, f(y)/f(x)\}$$ does not depend on proposal $g(\cdot)$. Does that mean that I can change the $g(\cdot)$ as a function of previous performance of the chain, without affecting the markovianity of the chain? Of particular interest to me is the adjustment of the scaling of Normal proposal as a function of acceptance rate. Would also greatly appreciate if someone can point out to the adaptation algorithms used in practice for this type of problem. Many thanks. [edit: Starting with the references given by robertsy and wok I found the following references on MH adaptive algorithms: Andrieu, Christophe, and Éric Moulines. 2006. On the Ergodicity Properties of Some Adaptive MCMC Algorithms. The Annals of Applied Probability 16, no. 3: 1462-1505. [http://www.jstor.org/stable/25442804](http://www.jstor.org/stable/25442804). Andrieu, Christophe, and Johannes Thoms. 2008. A tutorial on adaptive MCMC. Statistics and Computing 18, no. 4 (12): 343-373. doi:10.1007/s11222-008-9110-y. [Link](https://link.springer.com/article/10.1007/s11222-008-9110-y?from=SL). Atchadé, Y., G. Fort, E. Moulines, and P. Priouret. 2009. Adaptive Markov Chain Monte Carlo: Theory and Methods. Preprint. Atchadé, Yves. 2010. Limit theorems for some adaptive MCMC algorithms with subgeometric kernels. Bernoulli 16, no. 1 (February): 116-154. doi:10.3150/09-BEJ199. [Link](https://projecteuclid.org/journals/bernoulli/volume-16/issue-1/Limit-theorems-for-some-adaptive-MCMC-algorithms-with-subgeometric-kernels/10.3150/09-BEJ199.full). Cappé, O., S. J Godsill, and E. Moulines. 2007. An overview of existing methods and recent advances in sequential Monte Carlo. Proceedings of the IEEE 95, no. 5: 899-924. Giordani, Paolo. 2010. Adaptive Independent Metropolis–Hastings by Fast Estimation of Mixtures of Normals. Journal of Computational and Graphical Statistics 19, no. 2 (6): 243-259. doi:10.1198/jcgs.2009.07174. [http://pubs.amstat.org/doi/abs/10.1198/jcgs.2009.07174](http://pubs.amstat.org/doi/abs/10.1198/jcgs.2009.07174). Latuszynski, Krzysztof, Gareth O Roberts, and Jeffrey S Rosenthal. 2011. Adaptive Gibbs samplers and related MCMC methods. 1101.5838 (January 30). [http://arxiv.org/abs/1101.5838](http://arxiv.org/abs/1101.5838). Pasarica, C., and A. Gelman. 2009. Adaptively scaling the Metropolis algorithm using expected squared jumped distance. Statistica Sinica. Roberts, Gareth O. 2009. Examples of Adaptive MCMC. Journal of Computational and Graphical Statistics 18, no. 2 (6): 349-367. doi:10.1198/jcgs.2009.06134. [http://pubs.amstat.org/doi/abs/10.1198/jcgs.2009.06134](http://pubs.amstat.org/doi/abs/10.1198/jcgs.2009.06134). ]
Can I change the proposal distribution in random-walk MH MCMC without affecting Markovianity?
CC BY-SA 4.0
null
2011-02-16T10:20:43.147
2022-08-22T04:22:20.033
2022-06-30T23:49:55.560
79696
1542
[ "markov-chain-montecarlo", "metropolis-hastings" ]
7287
2
null
7285
3
null
Why not just have a look-up table of numbers for each possible character in an email? Then concatenate the numbers to form a seed. For example, ``` A 1 B 2 C 3 .... @ 27 .... ``` So abc@ccc would be converted to 12327333. This would give you a unique seed for each person. You would then use this to generate the 1, 2, 3, or 4. --- From your question, it looks like you don't mind a "quick and dirty" solution. One problem with my solution is that email addresses aren't random - for example, you will probably get very few email addresses that contain the letter "z", but all email addresses contain "@".
null
CC BY-SA 2.5
null
2011-02-16T10:59:12.323
2011-02-16T10:59:12.323
null
null
8
null
7288
2
null
7141
1
null
Ok, I've thought of two possible ways to answer to this problem using Bayesian analysis. I will assume $\sigma$ to be known throughout this answer. First start with a "baby" case, where $n=2$ (or alternatively, only using the last two observations as a first approximation). You would usually start this by assuming a "flat" prior for $\mu$, just proportion to 1. But you have additional information, so we just restrict the prior to conform to this. So the prior is: $$f(\mu_1,\mu_2|\gamma) \propto I_{|\mu_1-\mu_2|\leq\gamma}$$ (the improper prior should be fine, because you are dealing with normal RVs, and you aren't dividing them) Combining this prior with the likelihood, and integrating out $\mu_1$ gives (Writing $\phi(x)$ as standard normal pdf and $\Phi(x)$ as standard normal cdf): $$f(\mu_2 | X_1,X_2,\sigma,\gamma) \propto \phi\big(\frac{\mu_2-X_2}{\sigma}\big) \Bigg[\Phi\Big(\frac{\mu_2-X_1+\gamma}{\sigma}\Big)-\Phi\Big(\frac{\mu_2-X_1-\gamma}{\sigma}\Big)\Bigg]$$ So in order to calculate the "p-value" for the hypothesis, we need to take $Pr(\mu_2 > 0 |X_1,X_2,\sigma,\gamma)=P$. This is given by the ratio of two integrals of the posterior: $$P=\frac{\int_{-\frac{X_2}{\sigma}}^{\infty}\phi\big(y\big) \Bigg[\Phi\Big(y+\frac{X_2-X_1+\gamma}{\sigma}\Big)-\Phi\Big(y+\frac{X_2-X_1-\gamma}{\sigma}\Big)\Bigg]dy}{\int_{-\infty}^{\infty}\phi\big(z\big) \Bigg[\Phi\Big(z+\frac{X_2-X_1+\gamma}{\sigma}\Big)-\Phi\Big(z+\frac{X_2-X_1-\gamma}{\sigma}\Big)\Bigg]dz}$$ It is beyond my abilities to do either of these integrals exactly, and even if it was possible, you probably would learn anything intuitive about the problem (except that the integral was friggin hard! you'd think it could be derived using something to do with convolutions, but I couldn't work it out). So I would just numerically evaluate these two integrals. For the whole data set, you will almost surely need some kind of numerical technique, or analytic approximation. This is a rather quick numerical technique. Okay, so it basically goes like this: if you knew $\mu_1$, then you could generate a sample of the remaining $\mu_i$ values sequentially, using the uniform distribution $(\mu_{i}|\mu_{i-1}) \sim U(\mu_{i-1}-\gamma,\mu_{i-1}+\gamma)$. An obvious way to sample $\mu_1$ is from a gaussian with large variance $\mu_1 \sim N(0,\delta^2)$ ("large" meaning relative to your data, say $\delta\approx 10\sigma$). Use the notation $\mu_{i}^{(b)}$ for the $b$th sample of means $b=1,\dots,B$. Now you calculate the total likelihood for each iteration. This will be used as a weight: $$w^{(b)}=\prod_{i=1}^{n} \phi \Big(\frac{\mu_{i}^{(b)}-X_i}{\sigma}\Big)$$ Then you take a "weighted probability" of the alternative hypothesis: $$\hat{P}=\frac{\sum_{b=1}^{B}w^{(b)} I(\mu_{n}^{(b)}>0)}{\sum_{b=1}^{B}w^{(b)}}$$ If $P$ is too big (in either case), then you reject the null hypothesis. A standard value would be $P>0.95$.
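A rough R sketch of the importance-sampling scheme in the second part (the data `X`, `sigma` and `gamma` are placeholders, not from the question; log-weights are used to avoid underflow):

```r
set.seed(1)
X     <- c(0.2, 0.5, -0.1, 0.8, 1.1)   # placeholder observations
sigma <- 1                              # assumed known
gamma <- 0.5                            # maximum allowed jump between consecutive means
B     <- 1e5                            # number of simulated mean paths
n     <- length(X)

mu <- matrix(NA_real_, B, n)
mu[, 1] <- rnorm(B, 0, 10 * sigma)                          # diffuse start for mu_1
for (i in 2:n)
  mu[, i] <- runif(B, mu[, i - 1] - gamma, mu[, i - 1] + gamma)

# total log-likelihood of each simulated path, used as an (unnormalised) weight
logw <- rowSums(sapply(1:n, function(i) dnorm(X[i], mu[, i], sigma, log = TRUE)))
w    <- exp(logw - max(logw))

P_hat <- sum(w * (mu[, n] > 0)) / sum(w)   # weighted probability that the last mean is positive
P_hat
```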
null
CC BY-SA 2.5
null
2011-02-16T12:31:45.973
2011-03-15T07:16:37.997
2011-03-15T07:16:37.997
2392
2392
null
7289
2
null
7223
2
null
Regarding 1) Yes, you do lose this. See e.g. Harrell Regression Modeling Strategies, a book published by Wiley or a paper I presented with David Cassell called "Stopping Stepwise" available e.g. www.nesug.org/proceedings/nesug07/sa/sa07.pdf
null
CC BY-SA 2.5
null
2011-02-16T13:18:20.643
2011-02-16T13:18:20.643
null
null
686
null
7290
2
null
7225
4
null
My first reaction to Jelle's comments is "bias-schmias". You have to be careful about what you mean by "large amount of predictors". This could be "large" with respect to: - The number of data points ("big p small n") - The amount of time you have to investigate the variables - The computational cost of inverting a giant matrix My reaction was based on "large" with respect to point 1. This is because in this case it is usually worth the trade-off in bias for the reduction in variance that you get. Bias is only important "in-the-long-run". So if you have a small sample, then who cares about "the-long-run"? Having said all that, $R^2$ is probably not a particularly good quantity to calculate, especially when you have lots of variables (because that's pretty much all $R^2$ tells you: you have lots of variables). I would calculate something more like a "prediction error" using cross validation. Ideally this "prediction error" should be based on the context of your modeling situation. You basically want to answer the question "How well does my model reproduce the data?". The context of your situation should be able to tell you what "how well" means in the real world. You then need to translate this into some sort of mathematical equation. However, I have no obvious context to go on from the question. So a "default" would be something like PRESS: $$PRESS=\sum_{i=1}^{N} (Y_{i}-\hat{Y}_{i,-i})^2$$ where $\hat{Y}_{i,-i}$ is the predicted value for $Y_{i}$ for a model fitted without the ith data point ($Y_i$ doesn't influence the model parameters). The terms in the summation are also known as "deletion residuals". If this is too computationally expensive to do $N$ model fits (although most programs usually give you something like this with the standard output), then I would suggest grouping the data. So you set the amount of time you are prepared to wait, $T$ (preferably not 0 ^_^), and then divide this by the time it takes to fit your model, $M$. This will give a total of $G=\frac{T}{M}$ re-fits, with a sample size of $N_{g}=\frac{N\times M}{T}$. $$PRESS=\sum_{g=1}^{G}\sum_{i=1}^{N_{g}} (Y_{ig}-\hat{Y}_{ig,-g})^2$$ One way you can get an idea of how important each variable is, is to re-fit an ordinary regression (variables in the same order). Then check proportionately how much each estimator has been shrunk towards zero, $\frac{\beta_{LASSO}}{\beta_{UNCONSTRAINED}}$. Lasso, and other constrained regressions, can be seen as "smooth variable selection", because rather than adopt a binary "in-or-out" approach, each estimate is brought closer to zero, depending on how important it is for the model (as measured by the errors).
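For concreteness, here is a hedged R sketch of PRESS; the helper `press()`, the toy data frame and its column names are all assumptions made just for illustration:

```
## Leave-one-out PRESS by explicit refitting (model-agnostic but slow).
press <- function(data, fit_fun) {
  n <- nrow(data)
  e <- numeric(n)
  for (i in 1:n) {
    fit  <- fit_fun(data[-i, ])                               # fit without point i
    e[i] <- data$y[i] - predict(fit, newdata = data[i, , drop = FALSE])
  }
  sum(e^2)
}

set.seed(1)
d <- data.frame(x = rnorm(50), z = rnorm(50))
d$y <- 1 + 2 * d$x - d$z + rnorm(50)
press(d, function(dd) lm(y ~ x + z, data = dd))

## For lm fits specifically there is a well-known shortcut via the hat values:
fit <- lm(y ~ x + z, data = d)
sum((residuals(fit) / (1 - hatvalues(fit)))^2)   # same PRESS value, one fit
```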
null
CC BY-SA 2.5
null
2011-02-16T14:10:31.810
2011-02-16T14:10:31.810
null
null
2392
null
7291
2
null
7285
0
null
You could try converting each character to an ASCII number, multiplying them all together to force overflow, and then performing a modulus operation on the least significant digits. If this is not pseudo-random enough, you can also bit-shift the numbers a bit... -Ralph Winters
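In R this idea might look something like the sketch below; the modulus $2^{31}-1$ is an arbitrary choice, and the running product avoids relying on actual overflow (R's doubles do not overflow the way fixed-width integers do):

```
ascii_seed <- function(email, m = 2147483647) {   # modulus 2^31 - 1, arbitrary choice
  codes <- utf8ToInt(email)                        # character -> integer code
  s <- 1
  for (k in codes) s <- (s * k) %% m               # running product, reduced mod m
  s
}
ascii_seed("someone@example.com")
```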
null
CC BY-SA 2.5
null
2011-02-16T14:25:46.130
2011-02-16T14:25:46.130
null
null
3489
null
7292
1
7327
null
14
4297
If you can measure a time series of observations at any level of precision in time, and the goal of your study is to identify a relationship between X and Y, is there any empirical justification for choosing a specific level of aggregation over another, or should the choice simply be based on theory and/or practical limitations? I have three sub-questions to this main one: - Is any non-random variation in X or Y within a larger level sufficient reasoning to choose a smaller level of aggregation (where non-random is any temporal pattern of the observations)? - Is any variation in the relationship between X and Y at a smaller level of aggregation sufficient reasoning to justify the smaller unit of analysis? If some variation is acceptable, how does one decide how much variation is too much? - Can people cite arguments they feel are compelling/well defined for one unit of analysis over another, either for empirical reasons or for theoretical reasons? I am well aware of the [modifiable areal unit problem](http://en.wikipedia.org/wiki/Modifiable_areal_unit_problem) in spatial analysis ([Openshaw 1984](http://qmrg.org.uk/files/2008/11/38-maup-openshaw.pdf)). I don't claim to be an expert on the material, but all I can think of it so far is that a smaller unit of analysis is always better, as one is less likely to commit an ecological fallacy ([Robinson 1950](http://dx.doi.org/10.2307/2087176)). If one has a directly pertinent reference or answer concerning aggregating geographical units I would appreciate that answer as well.
How do you choose a unit of analysis (level of aggregation) in a time series?
CC BY-SA 2.5
null
2011-02-16T14:47:53.653
2019-02-07T02:10:00.637
2019-02-07T02:10:00.637
11887
1036
[ "time-series", "aggregation", "disaggregation" ]
7293
1
7296
null
5
247
The following problem comes from a maximum likelihood calculation for Gaussian families, but is of independent interest. Is it possible to find a closed-form approximation, for small values of $x$, of $\text{det}(B + xI)$, where $I$ is the identity matrix and $B$ is Hermitian, rank-deficient, and positive semidefinite?
Determinant perturbation approximation
CC BY-SA 2.5
null
2011-02-16T14:57:58.397
2011-02-16T18:07:22.083
2011-02-16T18:07:22.083
null
30
[ "maximum-likelihood", "matrix" ]
7294
2
null
7286
7
null
I think that this [paper](https://projecteuclid.org/journals/bernoulli/volume-7/issue-2/An-adaptive-Metropolis-algorithm/bj/1080222083.full) from Heikki Haario et al. will give you the answer you need. The Markov property of the chain is affected by the adaptation of the proposal density, because then a new proposed value depends not only on the previous one but on the whole chain. But it seems that the sequence still has the good properties if great care is taken.
null
CC BY-SA 4.0
null
2011-02-16T15:15:42.040
2022-08-22T04:22:20.033
2022-08-22T04:22:20.033
79696
3108
null
7295
1
7299
null
15
20512
Currently I am using the RF toolbox in MATLAB for a binary classification problem. Data set: 50,000 samples and more than 250 features. So what should the number of trees be, and how many randomly selected features should be tried at each split, to grow the trees? Can any other parameter greatly affect the results?
What should be the optimal parameters for Random Forest classifier?
CC BY-SA 2.5
null
2011-02-16T15:20:21.510
2017-05-31T08:44:39.347
2011-02-16T17:39:26.287
null
2534
[ "machine-learning", "classification", "random-forest" ]
7296
2
null
7293
7
null
I'll assume that you already know the eigenvalues of $B$. Since $B$ is symmetric positive semidefinite, it can be decomposed as $$ B = U D U^T $$ where $U$ is an orthogonal matrix and $D$ is the diagonal of nonnegative eigenvalues (some of which may be exactly zero). Now $$ B+xI = U D U^T + x U U^T = U (D + x I) U^T $$ and since the determinant of a matrix is the product of its eigenvalues and the determinant is distributive over matrix products, then $$ |B+xI| = |D+xI| = \prod_n (d_n + x) $$ where $d_n$ is the $n$th diagonal entry of $D$.
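A quick numerical check of this identity in R, using a randomly generated rank-deficient positive semidefinite $B$:

```
set.seed(42)
A <- matrix(rnorm(5 * 3), 5, 3)
B <- A %*% t(A)                  # 5 x 5, rank 3: two eigenvalues are (numerically) zero
x <- 0.01
d <- eigen(B, symmetric = TRUE)$values
prod(d + x)                      # product of (d_n + x)
det(B + x * diag(5))             # agrees up to floating-point error
```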
null
CC BY-SA 2.5
null
2011-02-16T15:23:49.660
2011-02-16T15:23:49.660
null
null
2970
null
7297
1
null
null
4
1654
I have a smooth but rather complex curve, sampled with a good frequency. I apply the discrete Fourier transform to it using the fast Fourier transform (FFT) algorithm and get its Fourier image. I need to find peaks on the resulting Fourier transform curve, but the image I get contains a substantial amount of noise, which is a real problem because peaks can't be clearly seen. So, my question is as follows: What are the sources of noise in the Fourier transform, and how can it be reduced? Typical plots look like this: Initial function ![Initial function](https://i.stack.imgur.com/nCb7B.png) Initial function, narrow range ![Initial function, narrow range](https://i.stack.imgur.com/vk3ld.png) Fourier transform ![Fourier transform](https://i.stack.imgur.com/3HpdW.png) Fourier transform, narrow range ![Fourier transform, narrow range](https://i.stack.imgur.com/DuwNL.png) Before transformation a constant was subtracted from the function so that it goes to zero.
Noise in the Fourier transform
CC BY-SA 2.5
null
2011-02-16T15:58:10.347
2011-02-17T09:11:52.597
2011-02-16T22:57:59.120
3272
3272
[ "data-transformation", "fourier-transform" ]
7298
2
null
7286
3
null
You can improve the acceptance rate using delayed rejection as described in [Tierney, Mira (1999)](http://scholar.google.fr/scholar?cluster=8248856686379090018). It is based on a second proposal function and a second acceptance probability, which guarantees the Markov chain is still reversible with the same invariant distribution: you have to be cautious since "it is easy to construct adaptive methods that might seem to work but in fact sample from the wrong distribution".
null
CC BY-SA 2.5
null
2011-02-16T16:18:35.263
2011-02-16T16:18:35.263
null
null
1351
null
7299
2
null
7295
8
null
Pick a large number of trees, say 100. From what I have read on the Internet, pick $\sqrt{250}$ randomly selected features. However, in [the original paper](http://www.stat.berkeley.edu/~breiman/randomforest2001.pdf), Breiman used about the closest integer to $\frac{\log{M}}{\log{2}}$. I would say cross-validation is usually the key to finding optimal parameters, but I do not know enough about random forests.
null
CC BY-SA 3.0
null
2011-02-16T16:26:18.673
2017-05-31T08:44:39.347
2017-05-31T08:44:39.347
1351
1351
null
7300
2
null
7270
3
null
Gephi ( [http://gephi.org/](http://gephi.org/) ) might be an easy way to explore the data. You can almost certainly visualize it, and perform some calculations (though I have not used it for some time so I can't remember all the functions).
null
CC BY-SA 2.5
null
2011-02-16T16:52:02.890
2011-02-16T16:52:02.890
null
null
2635
null
7301
2
null
7259
0
null
Are you familiar with [ggplot2](http://had.co.nz/ggplot2)? I'm not certain I understand the question, but you can look at histograms colored by a certain parameter ([http://had.co.nz/ggplot2/geom_histogram.html](http://had.co.nz/ggplot2/geom_histogram.html)), and it also has a useful faceting function ([http://had.co.nz/ggplot2/facet_grid.html](http://had.co.nz/ggplot2/facet_grid.html)). There is a GUI called Deducer ([http://www.deducer.org/pmwiki/pmwiki.php?n=Main.DeducerManual](http://www.deducer.org/pmwiki/pmwiki.php?n=Main.DeducerManual)) that is very useful for exploring data using ggplot2. Once familiar with the structure of ggplot2 the GUI is no longer required, but it can be great for exploring data. --- Here is an example of the code, assuming your data frame is called 'dataframename': ``` p0 <- ggplot(dataframename, aes(factor(Iki3D.Rrp6D.sense), Iki3D.Rrp6D.antisense)) + facet_grid(.~Ago1D.antisense) p0 <- p0 + geom_boxplot() + xlab('x axis') + ylab('y axis') ```
null
CC BY-SA 2.5
null
2011-02-16T16:55:04.833
2011-02-16T19:50:51.380
2011-02-16T19:50:51.380
2635
2635
null
7302
1
null
null
5
257
Suppose you could buy some software to be used in an ML/NLP/DM laboratory. What software packages would you ask for? Let's say: MATLAB (with some toolboxes), SPSS, and what else? I know that there is a lot of free software one can use, like R, Weka, RapidMiner, Python packages, and so on. However, the above question targets commercial software. Thanks.
Software for ML/NLP/DM laboratory
CC BY-SA 2.5
null
2011-02-16T17:29:10.427
2012-06-05T17:17:26.633
2011-02-16T18:00:10.717
null
976
[ "machine-learning", "data-mining", "software" ]
7303
2
null
7285
10
null
Look up hash functions, for example at [http://en.wikipedia.org/wiki/Hash_function](http://en.wikipedia.org/wiki/Hash_function)
null
CC BY-SA 2.5
null
2011-02-16T17:31:19.580
2011-02-16T17:31:19.580
null
null
247
null
7304
2
null
7295
12
null
Number of trees: the bigger, the better. You almost can't overshoot with this parameter, but of course the upper limit depends on the computational time you want to spend on RF. A good idea is to grow a long forest first and then see (I hope it is available in the MATLAB implementation) when the OOB accuracy converges. Number of tried attributes: the default is the square root of the total number of attributes, yet usually the forest is not very sensitive to the value of this parameter -- in fact it is rarely optimized, especially because the stochastic aspect of RF may introduce larger variation.
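The question uses MATLAB, so purely as an illustration of the "watch the OOB error converge" idea, here is how it might look with the randomForest package in R on toy data standing in for the real problem:

```
library(randomForest)
set.seed(1)
x <- matrix(rnorm(1000 * 20), 1000, 20)                 # toy features
y <- factor(rbinom(1000, 1, plogis(x[, 1] - x[, 2])))   # toy binary outcome
rf <- randomForest(x, y, ntree = 1000)
plot(rf)               # OOB error vs. number of trees; stop growing once it flattens
rf$err.rate[1000, ]    # OOB (and per-class) error after the full forest
```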
null
CC BY-SA 2.5
null
2011-02-16T17:39:07.300
2011-02-16T17:39:07.300
null
null
null
null
7305
2
null
7293
4
null
I second @cardinal's answer, but provide a simple trick: If $p(z)$ is a polynomial (with integer powers), and $\mathbf{v}, \lambda$ are eigenvector and corresponding eigenvalue of matrix $M$, then $\mathbf{v}, p(\lambda)$ are eigenvector and corresponding eigenvalue of $p(M)$. The proof is a simple exercise. The polynomial $p$ may contain negative powers of $z$ and a constant term, which in the case of $p(M)$ corresponds to adding the constant times the identity matrix. Since the determinant is the product of the eigenvalues, the determinant of $A = p(B)$, where $p(z) = z^1 + x$ is the product $\prod_i \left(\lambda_i + x\right)$, where $\lambda_i$ are the eigenvalues of $B$. You can also use this trick to find the trace of $p(B)$, of course, but it is overkill! This polynomial trick is a classic in numerical analysis, used, for example, to prove convergence of the Gauss-Seidel method. See Cheney & Kincaid, or [my answer](https://stats.stackexchange.com/questions/2615/analytical-solutions-to-limits-of-correlation-stress-testing/2814#2814) to another question involving this trick.
null
CC BY-SA 2.5
null
2011-02-16T18:01:53.177
2011-02-16T18:01:53.177
2017-04-13T12:44:39.283
-1
795
null
7306
2
null
7268
14
null
Here is one approach using cut() to create the appropriate hourly factors and ddply() from the plyr library for calculating the means. ``` library(lattice) library(plyr) ## Create a record and some random data for every 5 seconds ## over two days for two hosts. dates <- seq(as.POSIXct("2011-01-01 00:00:00", tz = "GMT"), as.POSIXct("2011-01-02 23:59:55", tz = "GMT"), by = 5) hosts <- c(rep("host1", length(dates)), rep("host2", length(dates))) x1 <- sample(0:20, 2*length(dates), replace = TRUE) x2 <- rpois(2*length(dates), 2) Data <- data.frame(dates = dates, hosts = hosts, x1 = x1, x2 = x2) ## Calculate the mean for every hour using cut() to define ## the factors and ddply() to calculate the means. ## getmeans() is applied for each unique combination of the ## hosts and hour factors. getmeans <- function(Df) c(x1 = mean(Df$x1), x2 = mean(Df$x2)) Data$hour <- cut(Data$dates, breaks = "hour") Means <- ddply(Data, .(hosts, hour), getmeans) Means$hour <- as.POSIXct(Means$hour, tz = "GMT") ## A plot for each host. xyplot(x1 ~ hour | hosts, data = Means, type = "o", scales = list(x = list(relation = "free", rot = 90))) ```
null
CC BY-SA 2.5
null
2011-02-16T19:21:35.107
2011-02-16T19:21:35.107
null
null
3265
null
7307
1
114363
null
21
27201
Can somebody explain to me clearly the mathematical logic that would link the two statements (a) and (b) together? Let us have a set of values (some distribution). Now, a) The median does not depend on every value [it just depends on one or two middle values]; b) The median is the locus of minimal sum-of-absolute-deviations from it. And likewise, and in contrast, a) The (arithmetic) mean depends on every value; b) The mean is the locus of minimal sum-of-squared-deviations from it. My grasp of this is intuitive so far.
Mean and Median properties
CC BY-SA 3.0
null
2011-02-16T19:33:34.640
2022-01-25T15:20:16.673
2020-07-25T12:12:51.043
7290
3277
[ "mean", "median", "robust", "sensitivity-analysis", "types-of-averages" ]
7308
1
7629
null
18
3930
Jeffrey Wooldridge in his Econometric Analysis of Cross Section and Panel Data (page 357) says that the empirical Hessian "is not guaranteed to be positive definite, or even positive semidefinite, for the particular sample we are working with." This seems wrong to me as (numerical problems apart) the Hessian must be positive semidefinite as a result of the definition of the M-estimator as the value of the parameter which minimizes the objective function for the given sample and the well-known fact that at a (local) minimum the Hessian is positive semidefinite. Is my argument right? [EDIT: The statement has been removed in the 2nd ed. of the book. See comment.] BACKGROUND Suppose that $\widehat \theta_N$ is an estimator obtained by minimizing $${1 \over N}\sum_{i=1}^N q(w_i,\theta),$$ where $w_i$ denotes the $i$-th observation. Let's denote the Hessian of $q$ by $H$, $$H(q,\theta)_{ij}=\frac{\partial^2 q}{\partial \theta_i \partial \theta_j}$$ The asymptotic covariance of $\widehat \theta_n$ involves $E[H(q,\theta_0)]$ where $\theta_0$ is the true parameter value. One way to estimate it is to use the empirical Hessian $$\widehat H=\frac{1}{N}\sum_{i=1}^N H(w_i,\widehat \theta_n)$$ It is the definiteness of $\widehat H$ which is in question.
Can the empirical Hessian of an M-estimator be indefinite?
CC BY-SA 2.5
null
2011-02-16T19:52:33.287
2018-07-07T19:08:49.507
2011-02-28T04:09:22.650
1393
1393
[ "estimation", "maximum-likelihood", "econometrics", "asymptotics" ]
7309
2
null
7259
4
null
Sam, I think I understood what you are after, so let me know if I've misinterpreted anything: - You want a separate box_plot for the ratio of each pairs of columns. There are 15 ratios we are interested in...(column 6 / column 7, column 8 / column 9, etc.) - This plot should have a separate "window" or facet for each annotation, for which there are 23 different annotations. Assuming both of those are right, I think this will give you what you are after. First, we will make the 15 new ratio columns with a for-loop and some indexing. After we make these 15 new columns, we will `melt` the data into long format for easy plotting with `ggplot2`. Since we are only interested in the columns `annotation` and the new ratio columns, we'll specify those in the call to `melt`. Then it is a relatively straight forward call to `ggplot` to specify the axes and faceting variable. These plots don't make much sense with 10 rows of data, but I think it will look better with your full dataset. ``` library(ggplot2) #EDIT: this removes the call to cbind which should improve performance. for (i in seq(6, ncol(df), by = 2)) { df[, paste(i, i+1, sep = "_", collapse = "")] <- df[, i ] / df[, i + 1 ] } df.m <- melt(df, id.vars = "annotation", measure.vars = 36:ncol(df)) #Note that we use the column name for the id.vars and the column order for #the measure.vars. In the case of the latter, this is simply to save on #typing. ggplot(data = df.m, aes(x = variable, y = value)) + geom_boxplot() + facet_wrap(~ annotation) + coord_flip() ```
null
CC BY-SA 2.5
null
2011-02-16T20:01:30.047
2011-02-16T22:11:00.050
2011-02-16T22:11:00.050
696
696
null
7311
2
null
7256
0
null
You should perform sampling and dimensionality reduction in combination. The best way to do this is to undersample the majority class and run a decision tree. It is the best variable selector you can imagine. Perform this a number of times (each time with another sample). The result will be a number of lists of candidate predictors. And ... yes: the combination of your decision trees is already a great model. Find out why decision trees are the best data mining algorithm at [http://bit.ly/a2qDWJ](http://bit.ly/a2qDWJ)
null
CC BY-SA 2.5
null
2011-02-16T20:14:40.750
2011-02-16T20:14:40.750
null
null
null
null
7312
2
null
7307
3
null
- Roughly speaking, the median is the "middle value". Now, if you change the highest value (which is supposed to be positive here) from $x_{(n)}$ to $2 * x_{(n)}$, say, it does not change the median. But it does change the arithmetic mean. This shows, in simple terms, that the median does not depend on every value while the mean does. Actually, the median only depends on the ranks. The mathematical logic behind this simply arises from the mathematical definitions of the median and the mean. - Now, it can be shown that, for any $ a \in \mathbb{R}$ $\sum_{i=1}^{n} |x_{i} - median| \leq \sum_{i=1}^{n} |x_{i} - a|$ and $\sum_{i=1}^{n} (x_{i} - mean)^{2} \leq \sum_{i=1}^{n} (x_{i} - a)^{2}$
null
CC BY-SA 2.5
null
2011-02-16T20:19:38.270
2011-02-16T20:26:15.590
2011-02-16T20:26:15.590
3019
3019
null
7313
1
31835
null
4
1063
Per my [earlier question](https://stats.stackexchange.com/q/7115/1026) I'm trying to find a reasonable metric for the semantic distance between two short text strings. One metric mentioned in the answers of that question was to use shortest hypernym path to create a metric for phrases. So for instance, if I was to find the semantic distance between pig and dog, I could ask [WordNet](http://wordnet.princeton.edu/) for all of their hypernyms: pig=> swine=> even-toed ungulate=> hoofed mammal=> placental mammal=> mammal=> vertebrate=> chordate=> animal=> organism=> living thing=> object=> physical entity=> entity dog=> canine=> carnivore=> placental mammal=> mammal=> vertebrate=> chordate=> animal=> organism=> living thing=> object=> physical entity=> entity and I would find that the shortest path between pig and dog is 8 jumps - so semantic distance = 8. If I wanted to extend this concept to entire phrases, then perhaps I could (naively) find the average distance between all word pairs in the phrases. (Obviously, one should be able to find something much better than this.) My question: I'm sure someone has thought of this before. Where should I look in literature to find more information. And what are the hidden gotchas when using such an approach.
Closest distance in hypernym tree as measure of semantic distance between phrases
CC BY-SA 2.5
null
2011-02-16T21:18:23.097
2012-07-07T10:51:14.693
2017-04-13T12:44:33.977
-1
1026
[ "text-mining", "distance-functions" ]
7314
1
null
null
2
1305
WinBUGS seems to support either stochastic or deterministic relationships between variables. However, many Bayesian networks represent relationships between variables using conditional probability tables. The "visit to Asia", "burglar alarm", and "smoking & cancer" examples are classic introductory material. However, the conditional probability tables used in this approach do not correspond to stochastic or deterministic relationships in WinBUGS. Google searches bring up examples of Gibbs sampling on Bayesian networks, but these are mostly algorithms that I'd have to implement in either R or another programming language (for example, see: [http://www-users.cselabs.umn.edu/classes/Spring-2010/csci5512/notes/gibbs.pdf](http://www-users.cselabs.umn.edu/classes/Spring-2010/csci5512/notes/gibbs.pdf)). Is there a way of using WinBUGS for complex Bayesian network inference? I need to express causal relationships between different variables (a continuous variable and a discrete one, a categorical one, etc.), so I need to perform MCMC-based inference on hybrid Bayesian networks. I am not so sure if I can use WinBUGS for this. Finally, would it make sense to try to transform the conditional probability table based, or causal, relationships into stochastic relationships? If I turn CPT-based relationships between variables into regressions, for example, would that be an acceptable way of performing inference on Bayesian networks using WinBUGS?
How can I represent Conditional Probability Table based Bayesian Networks in Winbugs?
CC BY-SA 3.0
null
2011-02-16T22:30:17.503
2014-08-30T20:13:48.720
2014-08-30T20:13:48.720
3280
3280
[ "bayesian", "causality", "bugs" ]
7315
2
null
7307
12
null
For the computation of the median, let $x_1,x_2,\ldots,x_n$ be the data. Assume, for simplicity, that $n$ is even, and the points are distinct! Let $y$ be some number. Let $f(y)$ be the 'sum-of-absolute deviations' of $y$ to the points $x_i$. This means that $f(y) = |x_1 - y| + |x_2 - y| + \ldots + |x_n - y|$. Your goal is to find the $y$ that minimizes $f(y)$. Let $l$ be the number of the $x_i$ that are less than or exactly equal to $y$ at a given point in time, and let $r = n - l$ be the number that are strictly greater than $y$. Pretend you are 'moving $y$ to the right', that is, increase $y$ slightly. What happens to $f(y)$? Suppose you add an amount of $\Delta y$ to $y$. For those $x_i$ which are less than or equal to $y$, we have $|x_i - y|$ increases by $\Delta y$. And for those greater than $y$, we have $|x_i - y|$ decreases by $\Delta y$. (This assumes $\Delta y$ is so small that $y$ does not cross any of the points). Thus the change in $f(y)$ is $l\Delta y - r \Delta y = (l-r)\Delta y$. Note that this change in $f(y)$ does not depend on the values of the $x_i$ but only on the number to the left and right of $y$. By definition, $y$ is a median value when moving it to the left or right does not increase or decrease $f(y)$. This would mean that $l-r = 0$, and thus the number of $x_i$ to the left of $y$ is equal to the number to the right of $y$. And thus the median does not depend on the values of $x_i$, just their locations. edit For the mean: the function $f(y)$ becomes $f(y) = (x_1 - y)^2 + \ldots + (x_n - y)^2$. Clearly the change in $f(y)$ for a small change in $y$ now depends on the magnitudes of the $x_i$, not just the number to the left and right of $y$. Note that this business about the 'small change' is just covert talk for the derivative of $f(y)$...
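A small numerical illustration of both minimisation properties in R (an odd sample size is used so the absolute-deviation minimiser is unique; with an even $n$, as derived above, any point between the two middle values minimises it):

```
set.seed(7)
x <- rnorm(21)                       # odd n, so the minimiser of f.abs is unique
f.abs <- function(y) sum(abs(x - y))
f.sq  <- function(y) sum((x - y)^2)
optimize(f.abs, range(x))$minimum    # essentially median(x)
median(x)
optimize(f.sq, range(x))$minimum     # essentially mean(x)
mean(x)
```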
null
CC BY-SA 2.5
null
2011-02-16T23:13:35.653
2011-02-17T00:59:26.753
2011-02-17T00:59:26.753
795
795
null
7316
1
7317
null
27
36779
I have data with many correlated features, and I want to start by reducing the features with a smooth basis function, before running an LDA. I'm trying to use natural cubic splines in the `splines` package with the `ns` function. How do I go about assigning the knots? Here's the basic R code: ``` library(splines) lda.pred <- lda(y ~ ns(x, knots=5)) ``` But I have no idea about how to choose the knots in `ns`.
Setting knots in natural cubic splines in R
CC BY-SA 3.0
null
2011-02-17T03:01:41.173
2018-05-28T01:54:47.840
2017-08-18T16:54:06.147
7290
988
[ "r", "splines" ]
7317
2
null
7316
45
null
How to specify the knots in R The `ns` function generates a natural regression spline basis given an input vector. The knots can be specified either via a degrees-of-freedom argument `df` which takes an integer or via a knots argument `knots` which takes a vector giving the desired placement of the knots. Note that in the code you've written ``` library(splines) lda.pred <- lda(y ~ ns(x, knots=5)) ``` you have not requested five knots, but rather have requested a single (interior) knot at location 5. If you use the `df` argument, then the interior knots will be selected based on quantiles of the vector `x`. For example, if you make the call ``` ns(x, df=5) ``` Then the basis will include two boundary knots and 4 internal knots, placed at the 20th, 40th, 60th, and 80th quantiles of `x`, respectively. The boundary knots, by default, are placed at the min and max of `x`. Here is an example to specify the locations of the knots ``` x <- 0:100 ns(x, knots=c(20,35,50)) ``` If you were to instead call `ns(x, df=4)`, you would end up with 3 internal knots at locations 25, 50, and 75, respectively. You can also specify whether you want an intercept term. Normally this isn't specified since `ns` is most often used in conjunction with `lm`, which includes an intercept implicitly (unless forced not to). If you use `intercept=TRUE` in your call to `ns`, make sure you know why you're doing so, since if you do this and then call `lm` naively, the design matrix will end up being rank deficient. Strategies for placing knots Knots are most commonly placed at quantiles, like the default behavior of `ns`. The intuition is that if you have lots of data clustered close together, then you might want more knots there to model any potential nonlinearities in that region. But, that doesn't mean this is either (a) the only choice or (b) the best choice. Other choices can obviously be made and are domain-specific. Looking at histograms and density estimates of your predictors may provide clues as to where knots are needed, unless there is some "canonical" choice given your data. In terms of interpreting regressions, I would note that, while you can certainly "play around" with knot placement, you should realize that you incur a model-selection penalty for this that you should be careful to evaluate and should adjust any inferences as a result.
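For example, one convenient way to see exactly which knots `ns()` has chosen is to inspect the attributes of the returned basis matrix:

```
library(splines)
x <- 0:100
attr(ns(x, df = 5), "knots")                 # internal knots: 20/40/60/80% quantiles of x
attr(ns(x, df = 5), "Boundary.knots")        # boundary knots: min(x) and max(x)
attr(ns(x, knots = c(20, 35, 50)), "knots")  # exactly the knots you supplied
```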
null
CC BY-SA 3.0
null
2011-02-17T04:00:30.850
2011-09-27T00:25:53.663
2011-09-27T00:25:53.663
2970
2970
null
7318
1
null
null
6
4606
I have an experiment producing results (dependent variables) that don't pass tests of normality, thus I am testing hypotheses using non-parametric tests. My DVs are continuous, while my factors (independent variables) are ordinal or nominal. I've been using the Kruskal-Wallis test and Friedman test (using Matlab). Most of the time I am only interested in testing 2 IVs for significant effects, though sometimes I test 3. I would like to know whether there are any significant interaction effects on the DV between my IVs. Normally I'd use a 2-way ANOVA to do this; however, that's not appropriate given the non-normal distributions. I don't wish to use transformation of my IVs, nor go ahead with ANOVA despite non-normality. How can I find which interaction effects are significant? What non-parametric test could I use? Hope someone can help. Nick
Which non-parametric test can I use to identify significant interactions of independent variables?
CC BY-SA 2.5
null
2011-02-17T04:34:41.540
2017-05-28T16:55:43.337
null
null
3285
[ "hypothesis-testing", "anova", "statistical-significance", "nonparametric", "interaction" ]
7319
1
7331
null
5
4049
I am new to statistics, so pardon any mistakes in my question. I have two time series $X_i$ and $Y_i$. Assuming that they're stationary AR(1) processes with possibly different means, how do I test for a difference of means? I found this link (but it tests only one sample) which discusses using gls: [https://stat.ethz.ch/pipermail/r-help/2006-May/105495.html](https://stat.ethz.ch/pipermail/r-help/2006-May/105495.html) So I am wondering how to extend this to multiple samples (use a dummy variable?) or if there are any other methods (preferably in R).
Two sample t-test for data (maybe time series) with autocorrelation?
CC BY-SA 2.5
null
2011-02-17T04:50:17.777
2011-08-16T12:18:34.573
2011-02-17T08:49:27.607
null
null
[ "hypothesis-testing", "mean", "autocorrelation" ]
7320
2
null
5430
1
null
You could simply smooth the data and find the peaks. Since there are presumably several pertinent, distinct (larger) objects amongst many irrelevant, indistinct (smaller) objects providing the noisy distance environment, you could probably assume that the distance distribution of pertinent objects is likely to be uniform, no? If you can't rely on any particular distance distribution for pertinent objects, then fitting distribution functions won't help at all. Thus you're left with identifying real peaks amongst false peaks. A low-pass filter can help with that - even as simple as a moving average filter. You could tune the filter using the likely range of distances of each pertinent object (e.g. a non-uniform object of about 2 metres in size might give peaks that vary within a 2m range). There may also be further machine learning approaches(?)
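As a rough sketch of the "smooth then pick local maxima" idea in R (the toy signal, the window width and the height threshold are all assumptions for illustration; in practice the window would reflect the likely object size):

```
x <- seq(0, 20, length.out = 500)                        # distance axis
y <- dnorm(x, 5, 0.5) + 0.7 * dnorm(x, 12, 0.7) +        # two "real" objects
     rnorm(500, 0, 0.05)                                 # diffuse clutter/noise
k  <- 11                                                 # moving-average window (odd)
sm <- as.numeric(stats::filter(y, rep(1 / k, k), sides = 2))
peaks <- which(diff(sign(diff(sm))) == -2) + 1           # local maxima of smoothed curve
peaks <- peaks[sm[peaks] > 0.1]                          # crude height threshold
x[peaks]                                                 # candidate object distances
```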
null
CC BY-SA 2.5
null
2011-02-17T04:51:40.427
2011-02-17T04:51:40.427
null
null
3285
null
7321
1
null
null
5
313
Consider a single x axis representing points along a line in space. I've got a set of data (in this case the receptive fields of hippocampal place cells recorded as a rat runs along a linear track), which are themselves spatially extensive, i.e. if displayed graphically against the axis they would look like a series of horizontal bars, distributed at different points along the axis: ``` ____ ____ ____ fields ____________________ axis ``` What I want to do is to define a point of interest on the line, and determine whether there are significantly more fields overlapping this point than elsewhere. I've achieved a simpler version of this by finding the centre of the fields and then measuring distances from this point and another arbitrarily defined point, but this approach loses all of the information that comes from the fact that they are spatially extensive. Can anyone point me in the right general direction? Even knowing the broad class of statistical test I should use for this kind of situation would be a big help! Thanks!
Determining whether a group of spatially extensive data is centered around one point in space
CC BY-SA 3.0
null
2011-02-17T05:29:19.383
2011-07-25T10:29:10.680
2011-06-25T09:39:29.480
null
null
[ "data-visualization", "spatial", "continuous-data" ]
7322
1
7329
null
5
1369
I am working with a dataset of some heights and weights at different ages. My professor wants me to plot the residuals from the regression `soma.WT9` against the residuals from the regression `HT9.WT9` (don't mind the notation; it's just two columns, where soma is regressed on `WT9` and `HT9` is regressed on `WT9`). What is the purpose of this plot?
What does plotting residuals from one regression against the residuals from another regression give us?
CC BY-SA 2.5
null
2011-02-17T06:46:02.780
2011-04-29T00:51:09.930
2011-04-29T00:51:09.930
3911
3008
[ "regression", "residuals" ]
7323
2
null
7322
6
null
Judging by the details and variable names, soma.WT9 and HT9.WT9, you are obtaining the residuals by first regressing soma on WT9 and HT9 on WT9 (right?). If I understood you correctly, the scatter plot between soma.WT9 and HT9.WT9 will tell you whether, after removing the effects of WT9 (possibly linear effects in your case) from HT9 and soma, there is a relationship between HT9 and soma. This is beneficial because, in the case when WT9 explains all the sources of variation in soma, the scatter plot between soma.WT9 and HT9.WT9 will not show any particular (recognizable/standard) pattern. Such plots may also be called partial residual plots.
null
CC BY-SA 2.5
null
2011-02-17T07:00:28.667
2011-02-17T07:00:28.667
null
null
1307
null
7324
2
null
7297
5
null
This seems like an $e^{-ax}\sin(bx)$ function -- the FT of such a function is two Dirac deltas, so it is not surprising at all that they appear as noisy peaks after the DFT (this is a variation of the ultraviolet catastrophe). So, well, don't worry -- you can do nothing wise about it; at least smooth the transform (for instance with a moving mean) to find the peak locations more easily (but better not to report the smoothed curve). On the other hand, if you are interested in the later signal rather than this initial "bang", it is better just to cut it off -- this will clear those major peaks and show the more subtle details.
null
CC BY-SA 2.5
null
2011-02-17T09:11:52.597
2011-02-17T09:11:52.597
null
null
null
null
7325
2
null
4451
1
null
[This paper](http://arxiv.org/abs/1102.2166) uses a [facebook dataset](http://people.maths.ox.ac.uk/~porterm/data/facebook100.zip) that is available here. Here is the description from the authors: > The data includes the complete set of nodes and links (and some demographic information) from 100 US colleges and universities from a single-time snapshot in September 2005.
null
CC BY-SA 2.5
null
2011-02-17T10:55:37.620
2011-02-17T10:55:37.620
null
null
3291
null
7326
1
7363
null
4
454
I am new here, so I might have missed a similar question. I am trying to do things properly in my research and to use proper statistical approaches, but in computer science we had quite weak training on statistical methods applied to our research. So, what I am asking is probably trivial. Here is the problem: We have a dataset of del.icio.us (the social bookmarking website) and want to work on a subset of this dataset for a manual study of the usage of tags. In this dataset, we have urls, users and tags that are linked by triplets defining an association of a tag by a user to a url. We have 401 970 328 such triplets in our dataset, so we can't look at all of them and want to choose a sample. Naively, at the beginning, we started with purely random sampling to choose 500 user-bookmark pairs and all their associated tags. Because some tags are more popular on del.icio.us, our sample has a long-tail distribution of tag usage, similar to the one from the whole dataset. What I would like to check is that the actual distribution of each tag is similar in the sample to that in the original dataset. That is, if "java" appears 10 times more often than "indonesia" in the dataset, it should appear with a similar relative frequency in the sample. How would I go about doing this? One of the issues is that in the sample, some classes (tags) present in the full dataset will never appear. The second question is: how to decide on a good size for the sample?
Checking the distribution of classes in a sample of a dataset
CC BY-SA 2.5
null
2011-02-17T11:12:27.553
2011-02-18T06:07:14.057
null
null
3291
[ "distributions", "sampling", "dataset" ]
7327
2
null
7292
11
null
Introduction My interest in the topic goes back about 7 years and resulted in the PhD thesis [Time series: aggregation, disaggregation and long memory](http://uosis.mif.vu.lt/~celov/DC/dcelov%20thesis%2009_05.pdf), where attention was paid to the specific question of the cross-sectional disaggregation problem for the AR(1) scheme. Data When working with different approaches to aggregation, the first question you need to clarify is what type of data you deal with (my guess is spatial, the most thrilling one). In practice you may consider temporal aggregation (see [Silvestrini, A. and Veridas, D. (2008)](http://ideas.repec.org/p/bdi/wptemi/td_685_08.html)), cross-sectional aggregation (I loved the article by [Granger, C. W. J. (1990)](http://ideas.repec.org/p/fip/fedmem/1.html)) or both time and space (spatial aggregation is nicely surveyed in [Giacomini, R. and Granger, C. W. J. (2004)](http://www.homepages.ucl.ac.uk/~uctprgi/Files/GiacominiGranger04.pdf)). Answers (lengthy) Now, answering your questions, I put some rough intuition first. Since the problems I meet in practice are often based on inexact data (Andy's assumption > you can measure a time series of observations at any level of precision in time seems too strong for macro-econometrics, but good for financial and micro-econometrics or any experimental field, where you do control the precision quite well), I have to bear in mind that my monthly time series are less precise than when I work with yearly data. Besides, more frequent time series, at least in macroeconomics, do have seasonal patterns, which may lead to spurious results (the seasonal parts correlate, not the series), so you need to seasonally adjust your data - another source of lower precision for higher-frequency data. Working with cross-sectional data revealed that a high level of disaggregation brings more problems, probably with lots of zeroes to deal with. For instance, a particular household in a data panel may purchase a car once per 5-10 years, but the aggregated demand for new (used) cars is much smoother (even for a small town or region). The weakest point: aggregation always results in a loss of information; you may have the GDP produced by the cross-section of EU countries during a whole decade (say the period 2001-2010), but you will lose all the dynamic features that may be present in an analysis based on a detailed panel data set. Large-scale cross-sectional aggregation may turn out to be even more interesting: you, roughly, take simple things (short-memory AR(1)), average them over a quite large population, and get a "representative" long-memory agent that resembles none of the micro units (one more stone thrown at the representative agent's concept). So aggregation ~ loss of information ~ different properties of the objects, and you would like to take control over the level of this loss and/or the new properties. In my opinion, it is better to have precise micro-level data at as high a frequency as possible, but... there is the usual measurement trade-off; you can't be everywhere perfect and precise :) Technically, producing any regression analysis you do need more room (degrees of freedom) to be more or less confident that (at least) statistically your results are not junk, though they still may be a-theoretical and junk :) So I put equal weights on questions 1 and 2 (and usually choose quarterly data for macro-analysis). Answering the 3rd sub-question, in practical applications you decide what is more important to you: more precise data or degrees of freedom.
If you take the mentioned assumption into account, the more detailed (or higher-frequency) data is preferable. The answer will probably be edited later after some sort of discussion, if any.
null
CC BY-SA 2.5
null
2011-02-17T11:23:36.643
2011-02-17T11:23:36.643
null
null
2645
null
7329
2
null
7322
4
null
This sounds like what I call an "added variable" plot. The idea behind these is to provide a visual way of seeing whether adding a variable to a model (ht9 in your case) is likely to add anything to the model (soma on wt9 in your case). It was explained to me like this. When you fit a linear regression, the order of the variables matters. It's kind of like imagining the variance in the soma variable as an "island". The first variable "claims" a portion of the variance on the island, and the second variable "claims" what it can from what is left over. So basically this plot will show you if "what is left to explain" in soma's variation (residuals from soma.wt9) can be explained by "the capacity of ht9 to explain anything over and above wt9" (residuals from ht9.wt9). You can also show mathematically what is going on. Residuals from soma.wt9 are calculated as: $$e_{i}=soma-\beta_{0}-\beta_{1}wt9$$ residuals from ht9.wt9 are: $$f_{i}=ht9-\alpha_{0}-\alpha_{1}wt9$$ Regression of $e_i$ on $f_i$ through the origin (because $\overline{e}=\overline{f}=0$, so the line will pass through the origin) gives $$e_{i}=\delta f_{i}$$ Substituting the residual equations into this one gives: $$soma-\beta_{0}-\beta_{1}wt9=\delta (ht9-\alpha_{0}-\alpha_{1}wt9)$$ Re-arranging terms gives: $$soma=(\beta_{0}-\delta\alpha_{0})+(\beta_{1}-\delta\alpha_{1})wt9+\delta ht9$$ Hence, the estimated slope (using OLS regression) will be the same in the model $soma = \beta_0+\beta_{wt9}wt9 + \beta_{ht9}ht9$ as in the model $resid.soma=\beta_{ht9} resid.ht9$. This also shows explicitly why having correlated regressor variables ($\alpha_{1}$ is a rescaled correlation) will make the estimated slopes change, and possibly be the "opposite sign" to what is expected. I think this method was actually how multiple regression was carried out before computers were able to invert large matrices. It may be quicker to invert lots of $2\times 2$ matrices than it is to invert one huge matrix.
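A quick numerical check of this claim in R, with simulated data standing in for soma, wt9 and ht9 (the variable names and coefficients are invented purely for illustration):

```
set.seed(10)
wt9  <- rnorm(100, 30, 4)
ht9  <- 20 + 0.8 * wt9 + rnorm(100, 0, 2)            # correlated with wt9
soma <- 1 + 0.05 * wt9 + 0.10 * ht9 + rnorm(100, 0, 0.5)

full <- lm(soma ~ wt9 + ht9)
e <- resid(lm(soma ~ wt9))    # "what is left to explain" in soma
f <- resid(lm(ht9 ~ wt9))     # what ht9 adds over and above wt9
coef(full)["ht9"]
coef(lm(e ~ f - 1))           # identical slope, as the algebra shows
```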
null
CC BY-SA 2.5
null
2011-02-17T13:03:10.287
2011-02-17T13:03:10.287
null
null
2392
null
7330
1
7332
null
4
389
I have a question about a rotation matrix, which can be represented in 2 dimensions as: $$R_{2}(\theta)=\begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}$$ For some arbitrary angle $\theta$. This can be extended to an arbitrary number of dimensions by adding an identity matrix: $$R_{n}(\theta)=\begin{bmatrix} R_{2}(\theta) & 0 \\ 0 & I_{n-2}\end{bmatrix}$$ I have found some "invariance" properties of a n-dimensional prior distribution when rotated in 2 arbitrary dimensions. My question is: can any rotation in arbitrary dimensions be represented by a sequence of 2-D rotations? It doesn't matter if the sequence is unique or not for my purposes. Or perhaps a better question is: if a prior distribution is invariant when rotated about 2 arbitrary dimensions, is it invariant when rotated about an arbitrary number of dimensions?
Rotation matrices and prior invariance for arbitrary dimensions
CC BY-SA 2.5
null
2011-02-17T13:55:51.813
2011-02-17T22:00:09.717
2011-02-17T19:33:00.870
2970
2392
[ "prior", "rotation" ]
7331
2
null
7319
4
null
Since $X_i$ and $Y_i$ are AR(1) processes, we can treat $(X_i,Y_i)$ as a VAR(1) process. Then testing the difference of means means testing the restrictions on VAR coefficients. For this simple case there is a [formula](http://books.google.com/books?id=B8_1UBmqVUoC&lpg=PP1&dq=hamilton%20time%20series%20analysis&hl=fr&pg=PA300#v=onepage&q&f=false) derived in Hamilton's book "Time series analysis". The formula in the link is the one before the start of new section. Unfortunately it is not implemented in R, as far as I know. I checked only vars package, but I can say with some confidence that if it is not implemented there, it is not implemented in other packages. Here is some code I've cobbled to calculate this statistic for simulated AR(1) processes: ``` n <- 1000 e1 <- rnorm(n,2,sd=1) e2 <- rnorm(n,2.5,sd=1) xm <- arima.sim(n,model=list(ar=0.5),innov=e1) ym <- arima.sim(n,model=list(ar=0.6),innov=e2) eq1<-dynlm(xm~L(xm)+L(ym)) eq2<-dynlm(ym~L(xm)+L(ym)) res<-rbind(residuals(eq1),residuals(eq2)) omega <- 1/n*tcrossprod(res,res) xx <- rbind(1,xm[-1000],ym[-1000]) xxinv <- solve(tcrossprod(xx,xx)) c1 <- coef(eq1)[1] c2 <- coef(eq2)[1] chi <- (c1-c2)^2/(xxinv[1,1]*(omega[1,1]-2*omega[1,2]+omega[2,2])) 1-pchisq(chi,1) ``` After playing with different values of means, it seems that statistic is working more or less. Of course the results should be tested for different values of variance of errors and different AR specifications and different sample sizes. On the other hand from theoretical point of view statistic is quite nice, since it allows unequal variances. Update Out of curiosity I did some MC simulations. For this I wrote a bit more efficient version of test: ``` ham.test.lm2 <- function(xm,ym) { n <- length(xm) df <- cbind(x=xm[-1],y=ym[-1],lx=xm[-n],ly=ym[-n]) eq1<-lsfit(df[,3:4],df[,1]) eq2<-lsfit(df[,3:4],df[,2]) res<-rbind(residuals(eq1),residuals(eq2)) omega <- 1/n*tcrossprod(res,res) xx <- rbind(1,xm[-n],ym[-n]) ch <- chol(tcrossprod(xx,xx)) xxinv <- chol2inv(ch) c1 <- coef(eq1)[1] c2 <- coef(eq2)[1] chi <- (c1-c2)^2/(xxinv[1,1]*(omega[1,1]-2*omega[1,2]+omega[2,2])) res<-1-pchisq(chi,1) names(res) <-NULL res } ``` This runs about 60 times faster than the original. Then I wrote special function for generating two AR(1) processes given their parameters. ``` gen.arima <-function(n,N,mn1,mn2,rho1,rho2,sd1=1,sd2=2,burn.in=50){ foreach(i=1:N,.combine=c) %dopar% { e1 <- rnorm(n+burn.in,mn1,sd=sd1) e2 <- rnorm(n+burn.in,mn2,sd=sd2) xm <- arima.sim(n+burn.in,model=list(ar=rho1),innov=e1) ym <- arima.sim(n+burn.in,model=list(ar=rho2),innov=e2) list(window(cbind(xm,ym),start=burn.in+1)) } } ``` This function generates `N` pairs of `n` sized sample of two AR(1) processes. Then I ran the following simulation. 
``` library("iterators") library("foreach") library("multicore") library("doMC") registerDoMC(16) ##the simulations ran on server with 2 quad-core Xeon processors with HT rho <- seq(0.1,0.9,by=0.1) nr <- length(rho) rr <- numeric() for(i in 1:nr) { for(j in (i):nr) rr <- rbind(rr,cbind(rho[i],rho[j])) } mu <- seq(0,2,by=0.1) rrm <- foreach(m=mu,.combine=rbind) %do% { cbind(rr,m) } nn <- c(10,30,100,500,1000) rrmn <- foreach(n=nn,.combine=rbind) %do% { cbind(rrm,n) } rrmns <- foreach(s=c(0.1,0.5,1),.combine=rbind) %do% { cbind(rrmn,s) } colnames(rrmns) <- c("rho1","rho2","mudiff","slen","sigma") cc <- system.time(res <- foreach(i=1:nrow(rrmns),.combine=rbind) %dopar% { a <- rrmns[i,] aa <- gen.arima(a[4],500,2,2+a[3],a[1],a[2],a[5],a[5]) hm<-sapply(aa,function(x)ham.test.lm2(x[,1],x[,2])) tt<-sapply(aa,function(x)t.test(x[,1],x[,2])$p.value) h1<-sum(hm<0.05)/length(hm) t1<-sum(tt<0.05)/length(tt) c(a,h1,t1) }) write.csv(file="hammc.csv",res) ``` This simulation calculates the number of times the statistic rejects the null hypothesis that the difference of the means is zero, for various sample sizes and various combinations of parameters of AR(1) processes. I used the maximum burn-in for $\rho=0.9$ when generating AR(1) processes. I also compare the performance of the statistic to a simple t-test. The resulting data.frame can be downloaded [here](http://mif.vu.lt/~zemlys/download/source/hammc.csv.gz). Here are a few graphs illustrating the performance. ![enter image description here](https://i.stack.imgur.com/vweYF.jpg) This is for AR(1) processes with $\rho=0.1$; 0.1 and 1 indicate the variance of the errors. `mudiff` is the true difference of the means. Here is a similar graph for $\rho=0.9$: ![enter image description here](https://i.stack.imgur.com/4GWFc.png) Now the same graphs for t.test ($\rho=0.1$): ![enter image description here](https://i.stack.imgur.com/rxn25.png) And $\rho=0.9$: ![enter image description here](https://i.stack.imgur.com/c6DW6.png) Here is the code which produced the graphs: ``` mc <- read.csv("hammc.csv") colnames(mc)[7] <- "Hamilton" colnames(mc)[8] <- "t.test" colnames(mc)[5] <- "sample.size" qplot(x=sample.size,y=Hamilton,data=mc[mc$sigma!=0.5 & mc$rho1==0.1 & mc$rho2==0.1,],group=mudiff,geom="line",color=mudiff)+facet_wrap(~sigma) qplot(x=sample.size,y=Hamilton,data=mc[mc$sigma!=0.5 & mc$rho1==0.9 & mc$rho2==0.9,],group=mudiff,geom="line",color=mudiff)+facet_wrap(~sigma) qplot(x=sample.size,y=t.test,data=mc[mc$sigma!=0.5 & mc$rho1==0.1 & mc$rho2==0.1,],group=mudiff,geom="line",color=mudiff)+facet_wrap(~sigma) qplot(x=sample.size,y=t.test,data=mc[mc$sigma!=0.5 & mc$rho1==0.9 & mc$rho2==0.9,],group=mudiff,geom="line",color=mudiff)+facet_wrap(~sigma) ``` Conclusions Note that the graphs measure the [power](http://en.wikipedia.org/wiki/Statistical_power) of the statistic for `mudiff` greater than zero and the size of the test for zero `mudiff`, so ideally the blue line for `mudiff=0` should be a straight line at 0.05, and all the other lines should be straight lines at 0.95. Since the statistic is asymptotic this does not happen. What is clear is that the t-test is not suitable for higher values of $\rho$: it performs quite nicely for $\rho=0.1$, but very badly for $\rho=0.9$. Hamilton's test on the other hand has low power even for larger sample sizes when the true difference in means is not very large. Also the results are the same for error variance 0.1 and 1, so I do not rule out an error in my code.
null
CC BY-SA 2.5
null
2011-02-17T14:33:45.927
2011-02-20T12:25:06.973
2011-02-20T12:25:06.973
2116
2116
null
7332
2
null
7330
7
null
The answer, I believe, to your first question is "yes". This can be accomplished with [Givens rotations](http://en.wikipedia.org/wiki/Givens_rotation), which allow for the annihilation of arbitrary elements of a matrix via a $2\times 2$ rotation matrix. The implication is that if you start with a rotation matrix, then you can reduce it to a diagonal matrix via Givens rotations. But, since orthogonality of a matrix is preserved by multiplication with another orthogonal matrix, this means that the diagonal matrix must be orthogonal, and hence, must contain only 1's and -1's. Additional rotations then reduce this matrix to the identity. The affirmative answer to your first question follows immediately. Thus the space of $n\times n$ orthogonal matrices is spanned by Givens rotations with respect to matrix multiplication. If this doesn't give you enough detail, let me know and I'll fill it in.
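To make the construction concrete, here is a small R sketch of a Givens-type rotation embedded in an $n\times n$ identity matrix, using the same $R_2(\theta)$ layout as in the question; products of such matrices remain orthogonal:

```
givens <- function(n, i, j, theta) {
  G <- diag(n)
  G[i, i] <- cos(theta); G[j, j] <- cos(theta)
  G[i, j] <- sin(theta); G[j, i] <- -sin(theta)
  G
}
R <- givens(4, 1, 2, 0.3) %*% givens(4, 2, 4, 1.1) %*% givens(4, 1, 3, -0.7)
max(abs(t(R) %*% R - diag(4)))   # numerically zero: the composition is still orthogonal
```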
null
CC BY-SA 2.5
null
2011-02-17T14:41:10.700
2011-02-17T19:44:10.390
2011-02-17T19:44:10.390
449
2970
null
7333
2
null
7084
3
null
The best argument I have heard about using the normal distribution is one of "sufficient statistics", given by Larry Bretthorst in his PhD thesis on spectral estimation. Basically, because the normal distribution has a set of sufficient statistics (the mean and covariance matrix), this is the only thing that matters when fitting your data. So if the "true" distribution is not normal, but it gives you the same set of sufficient statistics, then you will get exactly the same answer as if it was normal, whenever the parameters are linear in these sufficient statistics (if non-linear, then it's a projection onto the error space). This is in line with "BLUE" type estimators. So in this sense, testing for normality is "stupid". But checking the residuals is certainly a good idea. This is because it can help improve your model, by looking for systematic patterns in them. The normality tests are usually designed to pick up these systematic patterns, so in this sense they are good. The problem with them is that they don't "give you a clue" about how to fix the "non-normality". Looking at the residuals does, though. E.g. if you see curvature in the residuals, put a quadratic term in; if you see "harmonic" behaviour, put a sine or cosine function in; etc. Another justification comes from the MaxEnt world. The normal distribution is the one with the largest entropy (i.e. uncertainty) among all continuous distributions on the reals with given mean and variance. So this means that the normal "assumes the least", in a sense, for a given mean and variance. This means that "Nature" has to work incredibly hard to move to a part of the "distribution space" which is non-normal and keep the mean and variance fixed. The normal can be realised in a massively larger number of ways compared to any other distribution with fixed mean and fixed variance. So a departure usually means there is some "unspecified" constraint operating on the data - which could be thought of as a "missed covariate". It is because the normal distribution is non-robust that it allows you to "learn from your mistakes". If you are wrong, the normal distribution will pick this up by producing large residuals. When you use "robust" or "non-parametric" methods, it becomes more difficult to see where a model can be improved. And as a final note: it matters not one iota if the normal distribution is an awful fit, unless you have something to replace it with!
null
CC BY-SA 2.5
null
2011-02-17T15:47:56.670
2011-02-17T15:47:56.670
null
null
2392
null
7334
2
null
4978
3
null
It is true that in the past, MaxEnt and Bayes have dealt with different types or forms of information. I would say that Bayes uses "hard" constraints as well though, the likelihood. In any case, it is not an issue anymore as Bayes Rule (not the product rule) can be obtained from Maximum relative Entropy (MrE), and not in an ambiguous way: - Updating Probabilities with Data and Moments - From Physics to Economics: An Econometric Example Using Maximum Relative Entropy It's a new world...
null
CC BY-SA 2.5
null
2011-02-17T16:03:19.943
2011-02-17T16:03:19.943
null
null
null
null
7335
2
null
7270
3
null
From past experience with a network of 7 million nodes, I think visualizing your complete network will give you an uninterpretable image. I might suggest different visualizations using subsets of your data such as just using the top 10 nodes with the most inbound or outbound links. I second celenius's suggestion on using gephi.
null
CC BY-SA 2.5
null
2011-02-17T16:06:24.620
2011-02-17T16:06:24.620
null
null
3298
null
7336
1
7341
null
10
754
I'm running a simulation in R on a cluster of computers and have the following problem. On each of the X computers I run: ``` fxT2 <- function(i) runif(10) nessay <- 100 c(mclapply(1:nessay, fxT2), recursive=TRUE) ``` There are 32 computers, each with 16 cores. However, around 2% of the random numbers are identical. What strategies would you adopt to avoid this? I've been able to avoid this problem for fxT2 by setting a latency (i.e. delaying by a second the time at which each job is sent to each of the X computers). But that seems a very ad-hoc fix for fxT2. The problem is that in reality fxT2 is a long task involving pseudo-random numbers. At the end of the process, I expect to get X*nessay reproductions of the same statistical experiment, not nessay reproductions. How can I make sure that this is indeed the case, and is there a way to check this?
RNG, R, mclapply and cluster of computers
CC BY-SA 2.5
null
2011-02-17T16:18:29.617
2011-02-17T18:27:08.447
2011-02-17T17:52:48.100
8
603
[ "r", "random-generation", "parallel-computing", "multicore" ]
7337
2
null
7336
3
null
You need to use a RNG specifically designed for parallel computing. See the "Parallel computing: Random numbers" section of the [High Performance Computing Task View](http://cran.r-project.org/web/views/HighPerformanceComputing.html).
null
CC BY-SA 2.5
null
2011-02-17T16:39:34.187
2011-02-17T16:39:34.187
null
null
1657
null
7338
2
null
7208
4
null
The "chance correction" in Cohen's $\kappa$ estimates the probabilities with which each rater chooses the existing categories. The estimation comes from the marginal frequencies of the categories. When you only have 1 judgement for each rater, this means that $\kappa$ assumes the category chosen for this single judgement in general has a probability of 1. This obviously makes no sense, since the number of judgements (1) is too small to reliably estimate the base rates of all categories. An alternative might be a simple binomial model: without additional information, we might assume that the probability of agreement between two raters for one judgement is 0.5, since judgements are binary. This means that we implicitly assume that both raters pick each category with probability 0.5 for all criteria. The number of agreements expected by chance over all criteria then follows a binomial distribution with $p=0.5$.
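To make that binomial model concrete, here is a small R sketch (the number of criteria and the observed agreement count are invented for illustration):

```
n.criteria <- 20   # hypothetical number of binary criteria judged by both raters
observed   <- 16   # hypothetical number of criteria on which the raters agreed

n.criteria * 0.5   # agreements expected by chance under the binomial model

# Probability of at least this much agreement arising purely by chance
pbinom(observed - 1, size = n.criteria, prob = 0.5, lower.tail = FALSE)

# Equivalently, an exact binomial test against chance-level agreement of 0.5
binom.test(observed, n.criteria, p = 0.5, alternative = "greater")
```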
null
CC BY-SA 2.5
null
2011-02-17T17:14:48.070
2011-02-17T17:14:48.070
null
null
1909
null
7340
2
null
7326
0
null
What you might want to try is taking many bootstrap samples of 500 pairs and then building a confidence interval from the resulting distribution, to see whether the population mean is included. However, you really first need to go through your tags to correct spelling, merge synonyms, etc. www.kamalnigam.com/papers/bootstrap-ijcaiws99.pdf -Ralph Winters
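A minimal sketch of that bootstrap in R (the vector scores is a stand-in for whatever measure you computed on your 500 pairs; the numbers are invented):

```
set.seed(1)
scores <- rnorm(500, mean = 0.7, sd = 0.2)   # placeholder for the 500 observed pair scores

B <- 2000
boot.means <- replicate(B, mean(sample(scores, replace = TRUE)))

quantile(boot.means, c(0.025, 0.975))   # 95% percentile confidence interval for the mean
```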
null
CC BY-SA 2.5
null
2011-02-17T18:25:00.827
2011-02-17T18:25:00.827
null
null
3489
null
7341
2
null
7336
6
null
The [snow](http://cran.r-project.org/package=snow) package has explicit support for initialising the given number of RNG streams in a cluster computation. It can employ one of two RNG implementations:

- rsprng
- rlecuyer

Otherwise you have to do the coordination by hand.
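A hedged sketch of what that looks like in practice (the worker count and host setup are placeholders; clusterSetupRNG() is snow's helper for handing each worker its own stream):

```
library(snow)

cl <- makeCluster(4, type = "SOCK")        # or a vector of host names for your 32 machines
clusterSetupRNG(cl, type = "RNGstream")    # rlecuyer-backed: one independent stream per worker

res <- clusterApply(cl, 1:100, function(i) runif(10))
stopCluster(cl)
```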
null
CC BY-SA 2.5
null
2011-02-17T18:27:08.447
2011-02-17T18:27:08.447
null
null
334
null
7342
2
null
7268
2
null
You might check out the `aggregate.zoo` function from the `zoo` package: [http://cran.r-project.org/web/packages/zoo/zoo.pdf](http://cran.r-project.org/web/packages/zoo/zoo.pdf) Charlie
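For instance, a small made-up example of collapsing a daily zoo series to monthly means (the data and the monthly grouping are only assumptions about what you are after):

```
library(zoo)

z <- zoo(rnorm(90), seq(as.Date("2011-01-01"), by = "day", length.out = 90))  # made-up daily series
aggregate(z, as.yearmon, mean)   # monthly means; aggregate() dispatches to aggregate.zoo()
```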
null
CC BY-SA 2.5
null
2011-02-17T20:19:02.183
2011-02-17T20:19:02.183
null
null
401
null
7343
1
7345
null
5
1519
Suppose I have a function `f`, and I want to sample it at 100 points in the interval `[0, 100]`. For some reason (that seemed smart to me at the time), I decided not to sample at equidistant intervals, but rather to use the following function to determine the sample points:

```
log2(x)*(100/log2(100))
```

This gives me a sequence of sample points that becomes denser as it approaches 100. The problem is that now I need to calculate the mean over the values I have sampled, but due to the uneven sampling that mean would be heavily biased. I cannot resample the data; this would take way too long (several days), and I am on a very tight schedule. So the solution that comes to mind is to calculate a weighted average to correct the error. My question is: how do I determine the weights?
How to correct uneven sampling distribution when calculating the mean?
CC BY-SA 2.5
null
2011-02-17T20:44:05.473
2011-02-17T21:41:38.403
null
null
977
[ "sampling", "error", "mean" ]
7344
1
7369
null
6
16190
I'm trying to write a function to graphically display predicted vs. actual relationships in a linear regression. What I have so far works well for linear models, but I'd like to extend it in a few ways.

- Handle glm models
- Deal with NAs in the predicted values

Does what I have so far seem like a good solution, or is there an existing package somewhere that has already implemented this?

```
DF <- as.data.frame(na.exclude(airquality))
DF$Month <- as.factor(DF$Month)
DF$Day <- as.factor(DF$Day)
my_model <- lm(Ozone ~ Solar.R + Wind + Temp + Month + Day, DF)

PvA <- function(model, varlist = NULL, smooth = .5) {
  # Plot predicted vs. actual values, one panel per predictor
  indvars <- attr(terms(model), "term.labels")
  if (is.null(varlist)) {
    varlist <- indvars
  }
  Y   <- as.character(as.list(attr(terms(model), "variables"))[2])  # response name
  P.Y <- paste('P', Y, sep = '.')                                   # column name for predictions
  DF  <- as.data.frame(get(as.character(model$call$data)))
  DF[, P.Y] <- predict.lm(model)
  par(ask = TRUE)
  for (X in varlist) {
    print(X)
    A <- na.omit(DF[, c(X, Y)])     # actual values against this predictor
    P <- na.omit(DF[, c(X, P.Y)])   # predicted values against this predictor
    plot(A)
    points(P, col = 2)
    lines(lowess(A, f = smooth), col = 1)
    lines(lowess(P, f = smooth), col = 2)
  }
}

PvA(my_model)
```
How to graphically compare predicted and actual values from multivariate regression in R?
CC BY-SA 2.5
null
2011-02-17T20:44:25.817
2011-02-18T10:04:53.950
2011-02-17T23:03:57.857
2817
2817
[ "r", "regression", "multivariate-analysis", "multiple-regression" ]
7345
2
null
7343
7
null
The answer depends on the characteristics of $f$. Regardless, its average (by definition) is $\frac{1}{100}\int_0^{100}{f(x)dx}$, so your problem is one of estimating that integral from values at a discrete set of points. For a highly discontinuous function you can use any [Riemann sum](http://en.wikipedia.org/wiki/Riemann_sum); for a differentiable function use the [Trapezoidal Rule](http://en.wikipedia.org/wiki/Trapezoidal_rule), for which you can estimate the error in terms of derivatives of $f$; for a thrice differentiable function use [Simpson's Rule](http://en.wikipedia.org/wiki/Simpson%27s_rule), etc. If necessary, you can combine the error estimates of these rules with statistical estimates of the error in computing each value of $f$ to obtain an estimate of the error in the average.
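For instance, a small R sketch of the trapezoidal rule on unevenly spaced points (here I reuse the questioner's sampling scheme with a made-up test function, just to show the mechanics):

```
# x: sorted, unevenly spaced sample points; y: the sampled values f(x)
trapezoid_mean <- function(x, y) {
  area <- sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)  # trapezoidal estimate of the integral
  area / (max(x) - min(x))                                # divide by interval length to get the mean
}

x <- log2(1:100) * (100 / log2(100))  # the questioner's sample points on [0, 100]
y <- sin(x / 10)                      # a made-up test function
trapezoid_mean(x, y)
```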
null
CC BY-SA 2.5
null
2011-02-17T21:41:38.403
2011-02-17T21:41:38.403
null
null
919
null
7346
6
null
null
0
null
I did not participate in the beta, but I have been very happy to contribute (Q&A, edits, votes, etc.) over the last six months. Although I am also very happy with our current moderators, I would like to put myself forward for this task if one of them feels the need to take some time off. Needless to say, I will vote for them if they nominate themselves for a second round.
null
CC BY-SA 2.5
null
2011-02-17T21:56:24.370
2011-02-17T21:56:24.370
2011-02-17T21:56:24.370
930
930
null