<p>I ran a data set containing ages through SAS PROC NPAR1WAY first, but a reviewer wanted standard errors, so that motivated a smooth bootstrap. Since I could not use PROC SURVEYSELECT, I had to code a DATA step, which I think is set up correctly. I ran 10,000, 20,000 and 100,000 replicates. The results I got weren't entirely satisfactory, but I don't have a lot of experience with bootstrapping, so feedback is appreciated. I have two groups, so I first split them, then drew my samples, combined everything and used BY processing to obtain the results.</p> <pre><code>data sasdata.group1 sasdata.group2;
  set sasdata.have(keep=group ga birthwt);
  if group=1 then output sasdata.group1;
  if group=2 then output sasdata.group2;
run;

sasfile sasdata.group1 load;
data sasdata.outboot1(drop=__i);
  do Replicate = 1 to 100000;
    do __i = 1 to numrecs;
      p = int(1 + numrecs*(ranuni(62353006)));
      set sasdata.group1 point=p nobs=numrecs;
      a = a + rannor(623530)/sqrt(numrecs); /* 1 */
      output;
    end;
  end;
  stop;
run;
sasfile sasdata.group1 close;
ods listing close;

sasfile sasdata.group2 load;
data sasdata.outboot3(drop=__i);
  do Replicate = 1 to 100000;
    do __i = 1 to numrecs;
      p = int(1 + numrecs*(ranuni(17036255)));
      set sasdata.group2 point=p nobs=numrecs;
      a = a + rannor(170362)/sqrt(numrecs); /* 1 */
      output;
    end;
  end;
  stop;
run;
sasfile sasdata.group2 close;
ods listing close;

data sasdata.a;
  set sasdata.outboot1 sasdata.outboot3;
run;

proc sort data=sasdata.a;
  by replicate group;
run;

ods listing close;
proc univariate data=sasdata.a;
  var a;
  by replicate group;
  output out=sasdata.aresults median=medianx;
run;
ods listing;

proc transpose data=sasdata.aresults (keep=replicate group medianx)
               out=sasdata.t_results prefix=grp;
  by replicate;
  var medianx;
  id group;
run;

proc univariate data=sasdata.t_results;
  var grp1 grp2;
run;

data sasdata.median_diff;
  set sasdata.t_results;
  median_diff=grp1-grp2;
run;

proc univariate data=sasdata.median_diff /*noprint*/;
  var median_diff;
  output out=sasdata.percentiles pctlpre=P_ pctlpts=2.5, 97.5;
run;
</code></pre> <p>The medians just don't seem to match what I get from NPAR1WAY. They're close, but slightly off; thus the difference in medians is slightly larger in the bootstrap case. It doesn't seem right to report the NPAR1WAY medians together with the bootstrap SEs given that the bootstrap does not converge to the actual medians. Any ideas as to what's going on here?</p>
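<p>To make clear what I intend the DATA step to do, here is the same smooth-bootstrap scheme sketched in Python (the two groups here are simulated stand-ins, not my real ages; numpy only):</p>

```python
import numpy as np

rng = np.random.default_rng(62353006)

def smooth_bootstrap_median_diff(g1, g2, n_rep=1000):
    """Smooth bootstrap: resample each group with replacement, then add
    Gaussian kernel noise with bandwidth 1/sqrt(n) to every draw."""
    diffs = np.empty(n_rep)
    for r in range(n_rep):
        s1 = rng.choice(g1, size=g1.size, replace=True)
        s1 = s1 + rng.standard_normal(g1.size) / np.sqrt(g1.size)
        s2 = rng.choice(g2, size=g2.size, replace=True)
        s2 = s2 + rng.standard_normal(g2.size) / np.sqrt(g2.size)
        diffs[r] = np.median(s1) - np.median(s2)
    return diffs

# Simulated stand-ins for the two groups of ages.
g1 = rng.normal(40.0, 2.0, size=200)
g2 = rng.normal(39.0, 2.0, size=180)

diffs = smooth_bootstrap_median_diff(g1, g2)
se = diffs.std(ddof=1)                      # bootstrap SE of the median difference
lo, hi = np.percentile(diffs, [2.5, 97.5])  # percentile interval
print(se, lo, hi)
```

The kernel noise added to every resampled record is what distinguishes this from an ordinary bootstrap, and also why the bootstrap medians need not match the plain sample medians exactly.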
36,275
<p>I am currently skimming through a couple of papers in well-established journals. I became curious when I found papers with linear regression models using the Herfindahl index as the dependent variable. I thought such a continuous but bounded variable had to be transformed first, e.g. with the natural log. Could any of you explain whether either procedure is right or wrong?</p>
36,276
<p>I am getting slightly confused by the presentation of regression models.</p> <p>What would be the difference between these two:</p> <p>$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2$</p> <p>$y = \gamma_0 + \gamma_1 x_1 + \gamma_2 x_2$</p> <p>When is each notation to be used?</p> <p>I'm unable to search this concept well, so it's quite confusing, as the literature seems to use the symbols on similar occasions. </p> <p>Is it just a different symbol for the same thing, or is there a specific explanation as to when each notation should be used? </p>
73,790
<p>How would one go about testing an implementation of a Bayes Factor calculation? The analogue in Frequentist hypothesis testing is fairly straightforward: generate data according to the null hypothesis, use the code to generate a p-value, repeat thousands of times with different random seeds, and look for uniformity of the computed p-values. To test an implementation of some Bayes Factor code, however, I am not sure how to proceed. Do I choose from models $M_1$ and $M_2$ with equal probability, generate the data, and test whether the $K$ values are reasonably near 1? Also is there an analogue of Frequentist power testing for Bayes Factors along the same lines (choose from the models with a biased coin flip)?</p>
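<p>For concreteness, this is the frequentist calibration check I have in mind, sketched in Python for a toy z-test (the test, sample sizes and seed are all illustrative):</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 2000, 50
pvals = np.empty(n_sims)
for i in range(n_sims):
    x = rng.standard_normal(n)          # data generated under H0: mu = 0
    z = x.mean() / (1 / np.sqrt(n))     # z-test with known sigma = 1
    pvals[i] = 2 * stats.norm.sf(abs(z))

# Under H0 the p-values should be Uniform(0, 1); check with a KS test.
res = stats.kstest(pvals, "uniform")
print(res.statistic, res.pvalue)
```

My question is what the analogous check looks like when the output of the code under test is a Bayes factor rather than a p-value.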
73,791
<p>I have data that includes the number of students in a class and the percentage of that group who achieved a preset pass level in a standard test. I have this data for a number of different schools in two population samples, about 30 schools in each. The class sizes differ considerably, so it seems to make sense to use the percentage already given when calculating the t-test.</p> <p>But I also know that percentages shouldn't be averaged. I could calculate the number of students who passed from the data given, but this does not reflect class size, which seems important. The percentage "automatically" reflects the weighting of class size. Any advice or thoughts about this problem appreciated.</p> <p>Example data to illustrate the problem:</p> <pre><code>No. students   percent passed   calculated no. passed
28             7%                2
79             7%                6
28             51%              14
58             50%              29
</code></pre> <p>Thanks Tim</p>
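<p>To see why averaging the percentages differs from pooling the counts, here is a minimal Python check using the example rows from my table (counts recovered by rounding, as in the last column):</p>

```python
import numpy as np

# Example rows from above: class size and percent passed.
n_students = np.array([28, 79, 28, 58])
pct_passed = np.array([0.07, 0.07, 0.51, 0.50])
n_passed = np.round(n_students * pct_passed)   # recover pass counts

# Unweighted mean of the class percentages vs. the pooled
# (size-weighted) pass rate across all students:
unweighted = pct_passed.mean()
pooled = n_passed.sum() / n_students.sum()
print(unweighted, pooled)
```

The two summaries disagree whenever the pass rate is related to class size, which is exactly my worry.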
36,278
<p>I am modelling eyetracking data where people can look at one of two objects on the screen. Our experimental manipulation is meant to increase the likelihood that they look at object A over object B. However, the effect isn't likely to be linear and what we actually hypothesize is that our control participants will have a greater fit to a quadratic or cubic curve (indicating more alternation between the two objects) than our experimental condition. It's a between-subjects manipulation.</p> <p>So, I am using orthogonal polynomial codes to model with <code>lmer()</code>, with a model like:</p> <p><code>elog(proportion-A) ~ ot1 + ot2 + ot3 + Condition + ot1:Condition + ot2:Condition + ot3:Condition + (1 | Subject) + (0 + ot1 | Subject) + (0 + ot2 | Subject) + (0 + ot3 |Subject)</code></p> <p>Am I right in entering the random slopes as non-correlated (i.e., as distinct error terms) because orthogonal polynomial codes should not correlate with each other and should have unique effects on the predicted variable (looking proportion)? This has an added benefit of being able to use Markov Chain Monte Carlo sampling with <code>languageR</code>'s <code>pvals.fnc()</code>, but we do get different results than if I enter the random terms like <code>(1 + ot1 + ot2 + ot3 | Subject)</code> and so I want to make sure this is OK.</p>
47,846
<p>I would like to regress the influence of income, education, marital status etc. on life satisfaction. The data I use is from the SHARE survey – life satisfaction can take values of 1–10, and most values are around 6–8.</p> <p>OLS regression seems a poor choice to me, as it might produce predicted values outside the 1–10 interval.</p> <p>My colleagues have suggested that I take a look at truncated/censored analysis, such as tobit regression. However, I do not believe my data is censored in the way tobit regression assumes, which would be the case if only part of the real spectrum of values could be observed. </p> <p>Most researchers use ordered logistic regression. This seems valid to me, but 10 might be quite a high number of possible outcomes for ologit (I have usually used it with fewer outcomes, though I am not sure whether this is an issue at all), and I believe ologit does not assume the intervals between the categories to be of equal size (<a href="http://www.ats.ucla.edu/stat/stata/dae/ologit.htm" rel="nofollow">stated here</a>), which I however believe is the case in my scenario (why would the difference between 3 and 4 be any different from that between 7 and 8?)</p> <p>I wonder if <a href="http://www.ats.ucla.edu/stat/stata/dae/intreg.htm" rel="nofollow">interval regression</a> is what I need. I think it is, but I need proof :)</p> <p>So, which statistical analysis would you recommend?</p>
36,279
<p>I have two time-series:</p> <ol> <li>A proxy for the market risk premium (ERP; red line)</li> <li>The risk-free rate, proxied by a government bond (blue line)</li> </ol> <p><img src="http://i.stack.imgur.com/evTDC.png" alt="Risk premium proxy and risk-free rate over time"></p> <p>I want to test if the risk-free rate can explain the ERP. Hereby, I basically followed the advice of Tsay (2010, 3rd edition, p. 96): Financial Time Series:</p> <ol> <li>Fit the linear regression model and check serial correlations of the residuals.</li> <li>If the residual series is unit-root nonstationarity, take the first difference of both the dependent and explanatory variables.</li> </ol> <p>Doing the first step, I get the following results:</p> <pre><code>Coefficients: Estimate Std. Error t value Pr(&gt;|t|) (Intercept) 6.77019 0.25103 26.97 &lt;2e-16 *** Risk_Free_Rate -0.65320 0.04123 -15.84 &lt;2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 </code></pre> <p>As expected from the figure, the relation is negative and significant. However, the residuals are serially correlated:</p> <p><img src="http://i.stack.imgur.com/GfdWn.png" alt="ACF function of the residuals of the regression of risk-free rate on ERP"></p> <p>Therefore, I first difference both the dependent and explanatory variable. Here is what I get:</p> <pre><code>Coefficients: Estimate Std. Error t value Pr(&gt;|t|) (Intercept) -0.002077 0.016497 -0.126 0.9 Risk_Free_Rate -0.958267 0.053731 -17.834 &lt;2e-16 *** --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 </code></pre> <p>And the ACF of the residuals looks like:</p> <p><img src="http://i.stack.imgur.com/J25Gn.png" alt="ACF function of the residuals of the regression of risk-free rate on ERP (differenced)"></p> <p>This result looks great: First, the residuals are now uncorrelated. Second, the relation seems to be more negative now. 
</p> <p>Here are my questions (you probably wondered by now ;-)): the first regression I would have interpreted as (econometric problems aside) "if the risk-free rate rises by one percentage point, the ERP falls by 0.65 percentage points." Actually, after pondering this for a while, I would interpret the second regression just the same (now with a 0.96 percentage-point fall, though). Is this interpretation correct? It just feels weird that I transform my variables but don't have to change my interpretation. If this is correct, however, why do the results change? Is this just the result of econometric problems? If so, does anyone have an idea why my second regression seems to be even "better"? Normally, I read that you can have spurious correlations that vanish once you do it correctly. Here, it seems to be the other way round.</p>
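<p>For what it's worth, here is a small Python simulation of the two-step procedure with made-up series (a random-walk "risk-free rate" and an ERP constructed to load negatively on it; the coefficients are invented, not my real data), just to convince myself that the sign survives differencing:</p>

```python
import numpy as np

rng = np.random.default_rng(42)
T = 500
# Simulated stand-ins: a random-walk rate and an ERP that loads
# negatively on it plus stationary noise.
rf = np.cumsum(rng.standard_normal(T)) * 0.1
erp = 6.8 - 0.65 * rf + rng.standard_normal(T) * 0.3

def ols_slope(y, x):
    """Slope from an OLS regression of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

b_levels = ols_slope(erp, rf)                   # step 1: regression in levels
b_diff = ols_slope(np.diff(erp), np.diff(rf))   # step 2: first differences
print(b_levels, b_diff)
```

In this toy setup both slopes come out negative, so the sign of the relation is not an artifact of the differencing itself.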
47,851
<p>I did a one-way ANOVA followed by a Tukey's test to compare the means of different treatments.</p> <p>Let's say the treatments are A, B and C.</p> <p>The table of multiple comparisons tells me there is a significant difference between B and C. However, these two are not significantly different from A, and therefore they are in the same subset when we order the results. </p> <p>Can I say there is a significant difference between B and C, or is that not possible?</p>
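<p>For context, a minimal sketch of the situation in Python using SciPy's <code>tukey_hsd</code> (available in recent SciPy versions; the data are fabricated so that A sits between B and C):</p>

```python
import numpy as np
from scipy.stats import tukey_hsd

rng = np.random.default_rng(1)
# Fabricated treatments: A's mean lies between B's and C's.
A = rng.normal(10.0, 2.0, 30)
B = rng.normal(9.0, 2.0, 30)
C = rng.normal(11.5, 2.0, 30)

res = tukey_hsd(A, B, C)
# res.pvalue[i, j] is the Tukey-adjusted p-value for groups i and j;
# here B vs C (indices 1, 2) is significant while A vs B and A vs C
# may not be, reproducing the "same subset" situation I describe.
print(res.pvalue)
```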
3,017
<p>I'm trying to use the ridge.cv() function in R. The <a href="http://www.inside-r.org/packages/cran/parcor/docs/ridge.cv" rel="nofollow">documentation</a> says that the input y is the "vector of responses".</p> <p>What exactly does that mean?</p>
36,284
<p>What do I need to measure interaction between variables in a particular equation?</p> <p>For example: just taking 50 grams of protein every day will help me health-wise. Just doing exercise for 1 hour every day will help me health-wise. Just stretching for 1 hour every day will help me health-wise. Etc.</p> <p>But combining two of the above things will help me more health-wise than doing just one, and combining all three will help even more than doing just two of them.</p> <p>For example, taking 50 g of protein and doing 1 hour of exercise every day is more beneficial than doing just one or the other. How does taking 50 grams of protein complement exercising 1 hour every day, and by how much?</p> <p>What I want is a way of measuring the interaction between these things. How can I quantify the relationship between each of the above?</p> <p>What type of data set would I need to quantify how they complement each other?</p> <p>Note: I don't know what tags to use for my question. Please suggest.</p> <p>Any suggestions are much appreciated.</p>
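<p>To make the question concrete, here is a toy Python sketch of the kind of quantification I imagine: a regression with an interaction term, on fabricated data (all effect sizes are invented):</p>

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
protein = rng.integers(0, 2, n)    # 1 = takes 50 g protein daily
exercise = rng.integers(0, 2, n)   # 1 = exercises 1 h daily

# Fabricated "health score": two main effects plus a synergy
# (interaction) term of +1.5 for doing both.
health = 1.0 + 2.0 * protein + 3.0 * exercise \
         + 1.5 * protein * exercise + rng.standard_normal(n)

X = np.column_stack([np.ones(n), protein, exercise, protein * exercise])
beta, *_ = np.linalg.lstsq(X, health, rcond=None)
print(beta)   # beta[3] estimates the extra benefit of combining both
```

If something like this is the right framing, then the data set I would need has each habit recorded per person together with the outcome, with all combinations of habits represented.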
73,792
<p>I sometimes have the situation where I have from several dozen to over 100 linear models to perform hypothesis tests on. They have the same predictor variables, but different response variables. </p> <p>Let's say I have 100 models and each model has four p-values: one for the intercept, one for each of two main effects, and one for the interaction effect. If I want to calculate false discovery rates, should I calculate one set of FDRs based on the 400 p-values, or should I calculate a separate set of FDRs for each term in the model, based on the 100 p-values for that one term? I've been told by a more experienced colleague that it is the latter, but I don't understand why.</p> <p>In case it matters, usually one of the main effects and its interactions with the other effects is of primary interest, and the other terms are included because they might influence the response and therefore must be taken into account.</p>
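<p>To illustrate the two options, here is a small Python sketch with a hand-rolled Benjamini–Hochberg adjustment (<code>bh_fdr</code> is my own helper, not a library function, and the p-values are simulated nulls):</p>

```python
import numpy as np

def bh_fdr(p):
    """Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest p-value downward
    q = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.minimum(q, 1.0)
    return out

rng = np.random.default_rng(3)
pvals = rng.uniform(size=(100, 4))   # 100 models x 4 terms, all null here

# Option 1: one correction over all 400 p-values.
q_pooled = bh_fdr(pvals.ravel()).reshape(100, 4)
# Option 2: a separate correction per term, over 100 p-values each.
q_per_term = np.column_stack([bh_fdr(pvals[:, j]) for j in range(4)])
print(q_pooled[0], q_per_term[0])
```

The two options generally give different q-values for the same raw p-value, which is exactly why I want to understand which family is the right one to correct over.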
30,600
<p>Suppose $U_{(1)}, \dots, U_{(n)}$ are the order statistics of a random sample from $U(0,1)$. How can I find the joint probability density function of the limiting distribution of the statistic $T_n=(nU_{(1)},nU_{(2)})$?</p>
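<p>Here is a quick simulation I ran to get a feel for the limit (the sizes are arbitrary); it appears consistent with $(nU_{(1)}, nU_{(2)})$ converging to the first two arrival times of a unit-rate Poisson process:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 10_000
u = np.sort(rng.uniform(size=(reps, n)), axis=1)
t1 = n * u[:, 0]   # n U_(1)
t2 = n * u[:, 1]   # n U_(2)

# If (T1, T2) -> the first two arrival times of a unit-rate Poisson
# process, then marginally T1 ~ Exp(1) and T2 ~ Gamma(2, 1) in the
# limit, so the means should be near 1 and 2.
print(t1.mean(), t2.mean())
```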
36,288
<p>My question is exactly the title: to whom can we report a problem with SAS?</p> <p>Below is an example. This problem is not really severe but somewhat dangerous (in fact I have just updated my example below after Aniko's comment; there was a confusion in the first version of this post). </p> <p>Consider such a dataset:</p> <pre><code>&gt; dat
   tube position           y
1     1      top  0.25602779
2     1      top  2.99327392
3     1      top  0.03673459
4     1      top -0.94515391
5     1   bottom  9.12947343
6     1   bottom  5.96666893
7     1   bottom  6.65291454
8     2      top -2.32616858
9     2      top -1.61491564
10    2      top -2.88930533
11    2      top -1.48685691
12    2   bottom  0.03474644
13    2   bottom  4.23073725
14    2   bottom  1.43776713
15    3      top  3.04525229
16    3      top -1.06611380
17    3      top  0.64097731
18    3   bottom  5.63571519
19    3   bottom  5.96779074
20    3   bottom  2.14091389
21    3   bottom  5.46937089
22    4      top  7.00724734
23    4      top  4.33632991
24    4      top  1.90765886
25    4      top  1.91688415
26    4   bottom  9.54251973
27    4   bottom  6.88220097
28    4   bottom  3.62175779
29    5      top  6.38900310
30    5      top  7.19216388
31    5      top  8.29793550
32    5   bottom  9.46722783
33    5   bottom  9.11261143
34    5   bottom 11.08097843
35    6      top -1.05244281
36    6      top -0.86450352
37    6      top -0.66251724
38    6      top -1.29278055
39    6   bottom  4.99175539
40    6   bottom  3.92459045
41    6   bottom  6.90398638
</code></pre> <p>This SAS model</p> <pre><code>PROC MIXED DATA=dat;
  CLASS POSITION TUBE;
  MODEL y = POSITION / cl;
  RANDOM POSITION / type=CS subject=TUBE;
RUN; QUIT;
</code></pre> <p>is theoretically equivalent to this other SAS model (the marginal models are the same):</p> <pre><code>PROC MIXED DATA=dat;
  CLASS POSITION TUBE;
  MODEL y = POSITION / cl;
  RANDOM TUBE TUBE*POSITION;
RUN; QUIT;
</code></pre> <p>However, while the two models yield the same estimates and standard errors, they yield completely different degrees of freedom for the estimates (with the default options). </p>
73,793
<p>I have a dataframe called "cleaned", which consists of about 300,000 rows and 13 variables. Except the dependent variable, all variables are categorical and have multiple levels ($\geq2$). The dependent variable is numeric and takes values ranging from -1,500 to 3,296, mostly positive. Here is a summary of the dataframe:</p> <pre><code>&gt; summary(cleaned$SumOf1st.Yr.Cash) Min. 1st Qu. Median Mean 3rd Qu. Max. -1574.00 37.37 101.50 155.60 204.60 3296.00 &gt; dput(head(cleaned)) structure(list(Submit.Qtr = structure(c(2L, 2L, 2L, 2L, 2L, 2L ), .Label = c("1Q11", "2Q11", "3Q11", "4Q11"), class = "factor"), SUBMIT_CAL_MONTH = c(201104L, 201104L, 201104L, 201104L, 201104L, 201104L), SumOf1st.Yr.Cash = c(-221.81, 127.86, 662.09, 77.24, 370.4, 176), CARRIER_NAME = structure(c(56L, 116L, 4L, 116L, 82L, 114L), .Label = c("AARP-branded plans, insured by Aetna", "Aetna", "Aetna Life Insurance Company", "Altius Health Plans", "Altius One", "American Family Life Assurance Company of Columbus (Aflac)", "American Family Life Assurance Company of New York (Aflac New York)", "AmeriHealth - New Jersey", "Ameritas Life Insurance Corp.", "Anthem BCBS. Serving residents of Indiana", "Anthem BCBS. Serving residents of Kentucky", "Anthem BCBS. 
Serving residents of Ohio", "Anthem Blue Cross", "Anthem Blue Cross and Blue Shield", "Anthem Blue Cross and Blue Shield of CT", "Anthem Blue Cross and Blue Shield of NH", "Anthem Blue Cross and Blue Shield of VA", "Anthem Blue Cross Blue Shield", "Anthem Blue Cross Blue Shield Indiana", "Anthem Blue Cross Blue Shield Kentucky", "Anthem Blue Cross Blue Shield of Connecticut", "Anthem Blue Cross Blue Shield of Missouri", "Anthem Blue Cross Blue Shield of Wisconsin", "Anthem Blue Cross Blue Shield Ohio", "Anthem BlueCross BlueShield", "Anthem Health Plans of Kentucky Inc.", "Anthem Health Plans of New Hampshire Inc.", "Argus", "Arise Health Plan", "Arkansas Blue Cross and Blue Shield", "Assurant", "Assurant Employee Benefits", "Assurant Health", "Asuris Northwest Health", "Avera Health Plans", "AvMed Health Plans", "Bay Dental", "BCBS of GA", "Blue Cross and Blue Shield of GA", "Blue Cross and Blue Shield of Georgia", "Blue Cross and Blue Shield of Illinois", "Blue Cross and Blue Shield of Kansas City", "Blue Cross and Blue Shield of Minnesota", "Blue Cross and Blue Shield of South Carolina", "Blue Cross and Blue Shield of Texas", "Blue Cross Blue Shield", "Blue Cross Blue Shield of Arizona", "Blue Cross Blue Shield of Delaware", "Blue Cross Blue Shield of Florida", "Blue Cross Blue Shield of Georgia", "Blue Cross Blue Shield of Michigan", "Blue Cross Blue Shield of North Dakota", "Blue Cross of Idaho", "Blue Cross of Northeastern Pennsylvania through its subsidiary First Priority Life Insurance Company", "Blue Shield of California", "BlueCross BlueShield of Louisiana", "BlueCross BlueShield of Montana", "BlueCross BlueShield of Nebraska", "BlueCross BlueShield of Tennessee", "BlueCross BlueShield of Wyoming", "Capital Blue Cross", "Care Improvement Plus", "CareFirst BlueCross BlueShield", "Celtic Ins. Co.", "CeltiCare Healthplan of MA, Inc.", "Cigna", "Clear One Health Plans", "ConnectiCare Inc.", "Coventry", "Coventry Health and Life Insurance Co. 
FL", "Coventry Health and Life Insurance Company", "Coventry Health Care of Delaware, Inc", "Coventry Health Care of Georgia, Inc.", "Coventry Health Care of Illinois, Inc.", "Coventry Health Care of Iowa, Inc.", "Coventry Health Care of Kansas Inc.", "Coventry Health Care of Louisiana, Inc.", "Coventry Health Care of Missouri, Inc.", "Coventry Health Care of Oklahoma Inc.", "Coventry Health Care of the Carolinas, Inc.", "Coventry Health Plan of Florida, Inc.", "Dean Health Plan, Inc.", "Delta Dental Insurance Company (Delta Dental)", "Delta Dental of California", "Delta Dental of Colorado", "Delta Dental of Iowa", "Delta Dental of Minnesota", "Delta Dental of North Carolina", "Dentegra Insurance Company", "Dominion Dental Services, Inc", "Easy Choice Health Plan of New York", "EmblemHealth", "Empire", "Empire BlueCross", "Evercare by UnitedHealthcare", "Everest Dental Plan", "Fallon Community Health Plan", "Geisinger Choice", "Generic Medicare Carrier", "Group Health", "HCC Life Insurance Company", "HCC Medical Insurance Services", "Health Alliance Plan", "Health Insurance Innovations", "Health Net", "Health Net of Arizona", "Health Net of Oregon", "Health Plan of Nevada", "HealthAmerica", "HealthPartners", "HealthPlus Insurance Company", "Highmark Blue Cross Blue Shield Delaware", "Highmark Blue Cross Blue Shield West Virginia", "Horizon Blue Cross Blue Shield of New Jersey", "Humana", "Humana CompBenefits", "Humana Health Benefit Plan of Louisiana Inc.", "Humana Health Insurance Company of Florida", "Humana Insurance Company of Kentucky", "IHC Group", "IMG Global", "Independence Blue Cross", "Kaiser Foundation Health Plan of the NW", "Kaiser Mid-Atlantic", "Kaiser Permanente CO", "Kaiser Permanente GA", "Kaiser Permanente of CA", "Kaiser Permanente of HI", "Kaiser Permanente of Ohio", "KPS Health Plans", "LifeWise Health Plan of Oregon", "LifeWise Health Plan of Washington", "Lovelace Health Plans", "Madison National Life Insurance Company", "Medica", "Medica of 
Minnesota", "Medical Mutual", "Mercy Health Plans", "Mutual of Omaha", "Mutual Of Omaha", "Mutual of Omaha Insurance Company", "MVP", "My Health Alliance", "Nationwide Life Insurance Company", "Next Generation Insurance Group", "ODS Alaska", "ODS Health Plan, Inc.", "Optima Health Insurance Company", "Oxford NJ", "Oxford NY", "PacifiCare", "PacificSource Health Plans", "PacificSource Health Plans of Idaho", "Physicians Health Plan of Northern Indiana, Inc.", "Physicians Plus", "PreferredOne Insurance Company", "Premera Blue Cross", "Premera Blue Cross Blue Shield of Alaska", "Presbyterian", "Providence Health Plan", "QCA Health Plan Inc", "Regence Blue Cross Blue Shield of Oregon", "Regence Blue Cross Blue Shield of Utah", "Regence Blue Shield of Idaho", "Regence BlueCross BlueShield of Oregon", "Regence BlueCross BlueShield of Utah", "Regence BlueShield", "Regence BlueShield of Idaho", "Regence Life and Health", "Regence Life and Health Insurance Company", "RegenceBCBS", "RegenceBS", "Rocky Mountain Health Plans", "Scott &amp; White Health Plan", "SecureHorizons by UnitedHealthcare", "Security Health Plan", "Security Life Insurance Company of America", "SelectHealth", "Seven Corners", "Sierra Health and Life", "Standard Security Life", "Standard Security Life Insurance Company", "SummaCare Inc of Ohio", "Symetra Life Insurance Company", "Total Dental Administrators Health Plan, Inc.", "UniCare", "United Concordia Dental", "United of Omaha", "United World", "UnitedHealthcare", "UnitedHealthcare Community Plan", "UnitedHealthOne", "Unity Health Insurance", "Vision Plan of America", "VSP", "WellCare", "WellCare Health Plans of New Jersey, Inc.", "WellCare of Florida, Inc.", "WellCare of New York, Inc.", "WellCare of Ohio, Inc.", "WellCare of Texas, Inc.", "WellCare Prescription Insurance, Inc.", "Wellmark Blue Cross and Blue Shield of Iowa", "Wellmark Blue Cross and Blue Shield of South Dakota", "WellPath Select, Inc.", "WINhealth Partners", "WPS", "WPS Health 
Insurance" ), class = "factor"), GENDER = structure(c(2L, 2L, 1L, 1L, 1L, 1L), .Label = c("F", "M", "U"), class = "factor"), FAMILY_TYPE = structure(c(1L, 1L, 1L, 1L, 2L, 2L), .Label = c("FAMILY", "INDIVIDUAL", "Unknown" ), class = "factor"), EXECUTIVE_AGE_GROUP = structure(c(5L, 5L, 4L, 3L, 3L, 6L), .Label = c("0-18 (&gt;=0 AND &lt;19)", "19-25 (&gt;=19 AND &lt;26)", "26-29 (&gt;=26 AND &lt;30)", "30-39 (&gt;=30 AND &lt;40)", "40-49 (&gt;=40 AND &lt;50)", "50-64 (&gt;=50 AND &lt;65)", "65+", "Unknown"), class = "factor"), MARITAL_STATUS = structure(c(4L, 1L, 8L, 1L, 7L, 1L), .Label = c("", "D", "L", "M", "O", "P", "S", "W"), class = "factor"), SELECTED_RIDERS = structure(c(11L, 1L, 1L, 10L, 1L, 10L), .Label = c("", "/Dental_Vision/", "/Dental/", "/Dental/Other/", "/Life/", "/Life/Dental_Vision/", "/Life/Dental/", "/Life/Dental/Other/", "/Life/Other/", "/None/", "/Other/", "/Vision/", "/Vision/Dental/", "/Vision/Dental/Other/", "/Vision/Life/", "/Vision/Life/Dental/", "/Vision/Life/Dental/Other/", "/Vision/Life/Other/", "/Vision/Other/"), class = "factor"), STATE_ABBR = structure(c(19L, 44L, 45L, 10L, 49L, 32L), .Label = c("AK", "AL", "AR", "AZ", "CA", "CO", "CT", "DC", "DE", "FL", "GA", "HI", "IA", "ID", "IL", "IN", "KS", "KY", "LA", "MA", "MD", "ME", "MI", "MN", "MO", "MS", "MT", "NC", "ND", "NE", "NH", "NJ", "NM", "NV", "NY", "OH", "OK", "OR", "PA", "RI", "SC", "SD", "TN", "TX", "UT", "VA", "VT", "WA", "WI", "WV", "WY" ), class = "factor"), SumOfAPPROVED.MEMBERS = c(7L, 6L, 4L, 2L, 1L, 1L), PRODUCTLINE_TYPE = structure(c(4L, 3L, 4L, 3L, 4L, 4L), .Label = c("ACC", "CAN", "DT", "IFP", "MA", "MAPD", "MD", "MS", "PDC", "ST", "STU", "TRV", "VSP"), class = "factor"), CHANNEL = structure(c(1L, 1L, 2L, 3L, 2L, 3L), .Label = c("Direct", "Online Advertising", "Performance Partners"), class = "factor")), .Names = c("Submit.Qtr", "SUBMIT_CAL_MONTH", "SumOf1st.Yr.Cash", "CARRIER_NAME", "GENDER", "FAMILY_TYPE", "EXECUTIVE_AGE_GROUP", "MARITAL_STATUS", "SELECTED_RIDERS", 
"STATE_ABBR", "SumOfAPPROVED.MEMBERS", "PRODUCTLINE_TYPE", "CHANNEL" ), row.names = c(2L, 4L, 5L, 6L, 7L, 8L), class = "data.frame") </code></pre> <p>I was trying to run a mixed-effects model on these data:</p> <pre><code>lme(SumOf1st.Yr.Cash ~ GENDER + FAMILY_TYPE + EXECUTIVE_AGE_GROUP +
      MARITAL_STATUS + SELECTED_RIDERS + PRODUCTLINE_TYPE + CHANNEL,
    data = cleaned, random = ~1 | STATE_ABBR)
</code></pre> <p>But I got an error message that I couldn't understand:</p> <pre><code>Error in MEEM(object, conLin, control$niterEM) : 
  Singularity in backsolve at level 0, block 1
</code></pre> <p>I could run it when there is only one variable in the formula, but even with two I got the same error message:</p> <pre><code>&gt; lme(SumOf1st.Yr.Cash ~ GENDER + EXECUTIVE_AGE_GROUP,
      data = cleaned, random = ~1 | STATE_ABBR)
Error in MEEM(object, conLin, control$niterEM) : 
  Singularity in backsolve at level 0, block 1

&gt; lme(SumOf1st.Yr.Cash ~ GENDER, data = cleaned, random = ~1 | STATE_ABBR)
Linear mixed-effects model fit by REML
  Data: cleaned 
  Log-restricted-likelihood: -1877607
  Fixed: SumOf1st.Yr.Cash ~ GENDER 
(Intercept)     GENDERM     GENDERU 
  146.53490    12.27621   494.77983 

Random effects:
 Formula: ~1 | STATE_ABBR
        (Intercept) Residual
StdDev:    39.21808 177.8023

Number of Observations: 284485
Number of Groups: 51 
</code></pre>
36,289
<p>I am trying to develop a predictive model using high-dimensional clinical data including laboratory values. The data space is sparse, with 5k samples and 200 variables. The idea is to rank the variables using a feature selection method (IG, RF, etc.) and use the top-ranking features for developing a predictive model. </p> <p>While feature selection is going well with a Naïve Bayes approach, I am now hitting an issue in implementing the predictive model due to missing data (NA) in my variable space. Is there any machine learning algorithm that can carefully handle samples with missing data? </p> <p>Any examples? </p> <p>Thanks for your help in advance! </p>
33,717
<p>I do a GLM containing 8 predictors on a multivariate data set. Six of these predictors encode effects that have actually been manipulated in my experiment (effects of interest), the other two predictors are noise-predictors of physiological side-effects that influence the data, but are not meant to. </p> <p>It turns out that both noise-predictors have a high load in the GLM. Now I was wondering whether it might be possible to use that knowledge in order to correct for these undesired distortions in the data. Is there generally a way to use the beta weights of the GLM to regularize the data post-hoc, so that when re-estimating the GLM these predictors would be zero?</p> <p>Thanks a lot.</p>
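<p>To clarify what I mean by "regularize post-hoc", here is a toy Python sketch (fabricated data; 6 predictors of interest plus 2 nuisance regressors) of subtracting the fitted nuisance contribution and re-estimating:</p>

```python
import numpy as np

rng = np.random.default_rng(9)
n = 300
X = rng.standard_normal((n, 6))     # predictors of interest
N = rng.standard_normal((n, 2))     # nuisance (noise) predictors
y = X @ np.ones(6) + N @ np.array([3.0, -2.0]) + rng.standard_normal(n)

# Fit the full GLM, then subtract the fitted nuisance contribution.
D = np.column_stack([X, N])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
y_clean = y - N @ beta[6:]

# Re-estimating on the cleaned data: the nuisance betas are ~0 now,
# while the betas of interest are unchanged.
beta2, *_ = np.linalg.lstsq(D, y_clean, rcond=None)
print(beta2[6:])
```

My question is essentially whether this kind of correction is legitimate in my setting, or whether it introduces problems I am not seeing.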
73,794
<p>I am conducting an analysis on 'Multi-Attribute Decision Making (MADM)', where I have two attributes (a1, a2) to characterize the quality of m alternative approaches. The first attribute, a1, is measured as a percentage (hence has no unit), whereas the other has a unit. Some alternatives have zero as their first attribute value. </p> <p>What I am aiming to do is to compare the different alternatives and determine the 'best' one, assuming equal weighting of the attributes. I am aware that Multi-Attribute Decision Making methods may suffer from the problem called 'rank reversal' (i.e. the ranking of the alternatives may change when new alternatives are added). However, as far as I understood (and tested), the 'Weighted Product Model (1)' does not suffer from this issue. The issue that arises, however, is that I cannot use this method directly, as it requires division of the scores of different alternatives.</p> <p>I thought of increasing all the a1 values by a certain amount, and it seems to work fine. However, if I increase the a1 values beyond a certain point, the rankings start to change. Do you know of any other way to use the Weighted Product Model to evaluate alternatives in the presence of zeros?</p> <p>Reference: (1) Triantaphyllou, E. &amp; Mann, S.H., 1989. An examination of the effectiveness of multi-dimensional decision-making methods: A decision-making paradox. Decision Support Systems, 5(3), pp.303–312.</p>
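<p>To show the problem concretely, here is a toy Python sketch of the Weighted Product Model ratio (the decision matrix is made up):</p>

```python
import numpy as np

# Decision matrix: rows = alternatives, columns = attributes (a1, a2).
scores = np.array([
    [0.30, 12.0],
    [0.00, 15.0],   # a zero in a1 is where the method breaks down
    [0.45,  8.0],
])
weights = np.array([0.5, 0.5])   # equal weighting of the attributes

def wpm_ratio(i, j, scores, weights):
    """Weighted Product Model: P(A_i/A_j) = prod_k (x_ik / x_jk)^w_k."""
    return np.prod((scores[i] / scores[j]) ** weights)

print(wpm_ratio(0, 2, scores, weights))   # a well-defined comparison
print(wpm_ratio(1, 2, scores, weights))   # 0: the zero wipes out all of a2's information
```

A zero in any attribute drives the whole product to zero (or, in the other direction, produces a division by zero), regardless of how good the alternative is on the remaining attributes; this is the behaviour I am trying to work around.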
73,795
<p>We have an iid sequence of random variables $X_1, X_2, \dots, X_n$, where $E(X_i) = \mu$ and $var(X_i) = \sigma^2$. The sample mean $\bar{X}$ converges to $\mu$ by the LLN, at rate $\sqrt{n}$ by the CLT. </p> <p>If we have a continuous function $f()$, the continuous mapping theorem assures that $f(\bar X)$ converges to $f(\mu)$.</p> <p>My question is the following: at what rate does $f(\bar X)$ converge to $f(\mu)$?</p> <p>Asymptotically I would say $\sqrt{n}$, given that $f()$ is continuous and hence locally close to linear. But can we have convergence rates very different from $\sqrt{n}$ in small samples?</p>
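<p>A small simulation I tried, with $f(x)=x^2$ and $\mu=1$ (so $f'(\mu)\neq 0$; purely illustrative), suggests that the error scaled by $\sqrt{n}$ stabilizes:</p>

```python
import numpy as np

rng = np.random.default_rng(11)
f = lambda x: x**2        # smooth test function, f'(1) = 2
mu, sigma = 1.0, 1.0
reps = 5000

scaled = []
for n in (100, 400, 1600):
    xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    err_sd = np.std(f(xbar) - f(mu))
    scaled.append(err_sd * np.sqrt(n))

# If f(Xbar) - f(mu) shrinks at rate sqrt(n), this is roughly constant
# across n, near |f'(mu)| * sigma = 2 (the delta-method value).
print(scaled)
```

What I am unsure about is how this behaves for functions where $f'(\mu)=0$, or in genuinely small samples.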
73,796
<p>I'm a complete newbie :)</p> <p>I'm doing a study with a sample size of 10,000 from a population of about 745,000. Each observation is a "percentage similarity". The great majority of the observations are around 97%–98%, but a few are between 60% and 90%; that is, the distribution is heavily negatively skewed. Around 0.6% of the results are 0%, but these will be treated separately from the sample.</p> <p>The mean of all 10,000 observations is 97.7%, and just in Excel, the StdDev is 3.20. I understand that the StdDev is not really applicable here because the results are not normally distributed (and because +3.20 would put you above 100%!).</p> <p>My questions are: </p> <ol> <li>Is bootstrapping (a new concept for me) appropriate?</li> <li>Am I bootstrapping correctly :)</li> <li>What is a sufficient sample size?</li> </ol> <p>What I am doing is resampling (with replacement) my 10,000 results and calculating a new mean. I do this a few thousand times and store each mean in an array. I then calculate the "mean of the means" and this is my statistical result. To work out the 99% CI, I choose the 0.5%-th value and the 99.5%-th value, and this produces a very tight range: 97.4% – 98.0%. Is this a valid result or am I doing something wrong?</p> <p>As for sample size, I am sampling only about 1.3% of the population – I have no idea if this is "enough". How do I know if my sample is representative of the population? Ideally, I'd like to be 99% confident of a mean that is +/- 0.50 percentage points (i.e. 97.2% – 98.2%). </p> <p>Thanks in advance for any tips!</p>
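<p>In code form, this is roughly what I am doing (a Python stand-in for my actual script; the data are simulated to mimic my skew, not the real similarities):</p>

```python
import numpy as np

rng = np.random.default_rng(2024)
# Simulated stand-in for my 10,000 similarity scores: mostly 97-98%,
# with a left tail reaching down toward 60%.
sample = np.clip(100 - rng.gamma(shape=0.5, scale=4.6, size=10_000), 60, 100)

n_boot = 2000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(n_boot)
])

center = boot_means.mean()                       # my "mean of the means"
lo, hi = np.percentile(boot_means, [0.5, 99.5])  # my 99% percentile CI
print(center, lo, hi)
```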
36,297
<p>Let $Z = X_1^2 + X_2^2 + \cdots + X_j^2$, where $X_i \sim \mathcal{N}(\mu_i,\sigma_i^2)$. All $X_i$'s are independent of each other, and the $\mu_i$ and $\sigma_i^2$ are all different from one another.</p> <p>My questions are:</p> <p>1) What is the distribution of $Z$?</p> <p>2) If it is a non-central chi-square distribution, what will it be in terms of the means and variances?</p> <p>Note: I'm not going to call $X^2$ or $Y^2$ a chi-square distribution, as that complicates the matter. Since in the central case $\mu_i=0$, and in the <a href="http://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution" rel="nofollow">non-central chi-square distribution</a> each term is divided by its respective variance, neither represents our case.</p>
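<p>To illustrate why the standard noncentral result only covers the equal-variance case, here is a simulation sketch in Python (parameters invented). With $\sigma_i \equiv \sigma$, $Z/\sigma^2$ is noncentral $\chi^2$ with $k$ degrees of freedom and $\lambda = \sum \mu_i^2/\sigma^2$; with unequal $\sigma_i$ (my case) it is a weighted sum with no such standard form:</p>

```python
import numpy as np

rng = np.random.default_rng(5)
mus = np.array([1.0, -2.0, 0.5])   # invented means
sigma = 1.5                         # equal variances: the textbook case
reps = 200_000

x = rng.normal(mus, sigma, size=(reps, 3))
z = (x**2).sum(axis=1)

# With equal variances, Z / sigma^2 ~ noncentral chi^2(k=3, lambda),
# lambda = sum(mu_i^2) / sigma^2, so E[Z] = sigma^2 * (3 + lambda).
lam = (mus**2).sum() / sigma**2
theory_mean = sigma**2 * (3 + lam)
print(z.mean(), theory_mean)
```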
49,816
<p>I have 2 correlation matrices $A$ and $B$ (using the Pearson's linear correlation coefficient through Matlab's <a href="http://www.mathworks.com/help/matlab/ref/corrcoef.html" rel="nofollow">corrcoef()</a>). I would like to quantify how much "more correlation" $A$ contains compared to $B$. Is there any standard metric or test for that? </p> <p>E.g. the correlation matrix</p> <p><img src="http://i.stack.imgur.com/4Lhvi.png" alt="enter image description here"></p> <p>contains "more correlation" than</p> <p><img src="http://i.stack.imgur.com/WsoUK.png" alt="enter image description here"></p> <p>I am aware of the <a href="http://www.real-statistics.com/multivariate-statistics/boxs-test-equality-covariance-matrices/boxs-test-basic-concepts/" rel="nofollow">Box’s M Test</a>, which is used to determine whether two or more covariance matrices are equal (and can be used for correlation matrices as well since the latter are the <a href="https://en.wikipedia.org/wiki/Correlation_and_dependence#Correlation_matrices" rel="nofollow">same</a> as the covariance matrices of standardized random variables). </p> <p>Right now I am comparing $A$ and $B$ via the mean of the absolute values of their non-diagonal elements, i.e. $\frac{2}{n^2-n}\sum_{1 \leq i &lt; j \leq n } \left | x_{i, j} \right |$. (I use the symmetry of the correlation matrix in this formula). I guess that there might be some cleverer metrics.</p> <hr> <p>Following Andy W's comment on the matrix determinant, I ran an experiment to compare the metrics:</p> <ul> <li><em>Mean of the absolute values of their non-diagonal elements</em>: $\text{metric}_\text{mean}()$</li> <li><em>Matrix determinant</em>: $\text{metric}_\text{determinant}()$:</li> </ul> <p>Let $A$ and $B$ two random symmetric matrix with ones on the diagonal of dimension $10 \times 10$. The upper triangle (diagonal excluded) of $A$ is populated with random floats from 0 to 1. 
The upper triangle (diagonal excluded) of $B$ is populated with random floats from 0 to 0.9. I generate 10000 such matrices and do some counting:</p> <ul> <li>$\text{metric}_\text{mean}(B) \leq \text{metric}_\text{mean}(A) $ 80.75% of the time</li> <li>$\text{metric}_\text{determinant}(B) \leq \text{metric}_\text{determinant}(A)$ 63.01% of the time</li> </ul> <p>Given the result, I would tend to think that $\text{metric}_\text{mean}$ is the better metric.</p> <p>Matlab code:</p> <pre><code>function [ ] = correlation_metric( )
%CORRELATION_METRIC Test some metric for
% http://stats.stackexchange.com/q/110416/12359 :
% I have 2 correlation matrices A and B (using the Pearson's linear
% correlation coefficient through Matlab's corrcoef()).
% I would like to quantify how much "more correlation"
% A contains compared to B. Is there any standard metric or test for that?

% Experiments' parameters
runs = 10000;
matrix_dimension = 10;

%% Experiment 1
results = zeros(runs, 3);
for i = 1:runs
    dimension = matrix_dimension;
    M = generate_random_symmetric_matrix( dimension, 0.0, 1.0 );
    results(i, 1) = abs(det(M));
    % results(i, 2) = mean(triu(M, 1));
    results(i, 2) = mean2(M);
    % results(i, 3) = results(i, 2) &lt; results(i, 2) ;
end
mean(results(:, 1))
mean(results(:, 2))

%% Experiment 2
results = zeros(runs, 6);
for i = 1:runs
    dimension = matrix_dimension;
    M = generate_random_symmetric_matrix( dimension, 0.0, 1.0 );
    results(i, 1) = abs(det(M));
    results(i, 2) = mean2(M);
    M = generate_random_symmetric_matrix( dimension, 0.0, 0.9 );
    results(i, 3) = abs(det(M));
    results(i, 4) = mean2(M);
    results(i, 5) = results(i, 1) &gt; results(i, 3);
    results(i, 6) = results(i, 2) &gt; results(i, 4);
end
mean(results(:, 5))
mean(results(:, 6))
boxplot(results(:, 1))
figure
boxplot(results(:, 2))
end

function [ random_symmetric_matrix ] = generate_random_symmetric_matrix( dimension, minimum, maximum )
% Based on http://www.mathworks.com/matlabcentral/answers/123643-how-to-create-a-symmetric-random-matrix
d = ones(dimension, 1); %rand(dimension,1); % The diagonal values
t = triu((maximum-minimum)*rand(dimension)+minimum, 1); % The upper triangular random values
random_symmetric_matrix = diag(d) + t + t.'; % Put them together in a symmetric matrix
end
</code></pre> <p>Example of a generated $10 \times 10$ random symmetric matrix with ones on the diagonal:</p> <pre><code>&gt;&gt; random_symmetric_matrix

random_symmetric_matrix =

    1.0000    0.3984    0.1375    0.4372    0.2909    0.6172    0.2105    0.1737    0.2271    0.2219
    0.3984    1.0000    0.3836    0.1954    0.5077    0.4233    0.0936    0.2957    0.5256    0.6622
    0.1375    0.3836    1.0000    0.1517    0.9585    0.8102    0.6078    0.8669    0.5290    0.7665
    0.4372    0.1954    0.1517    1.0000    0.9531    0.2349    0.6232    0.6684    0.8945    0.2290
    0.2909    0.5077    0.9585    0.9531    1.0000    0.3058    0.0330    0.0174    0.9649    0.5313
    0.6172    0.4233    0.8102    0.2349    0.3058    1.0000    0.7483    0.2014    0.2164    0.2079
    0.2105    0.0936    0.6078    0.6232    0.0330    0.7483    1.0000    0.5814    0.8470    0.6858
    0.1737    0.2957    0.8669    0.6684    0.0174    0.2014    0.5814    1.0000    0.9223    0.0760
    0.2271    0.5256    0.5290    0.8945    0.9649    0.2164    0.8470    0.9223    1.0000    0.5758
    0.2219    0.6622    0.7665    0.2290    0.5313    0.2079    0.6858    0.0760    0.5758    1.0000
</code></pre>
<p>I am working with a dataset where I need to use non-parametric stats (won't go into details). It is behavioral data on captive animals ($n=8$), where $4$ treatments were introduced $3$ different times in random order. I have tested for differences among my repeats using a Friedman test. None were found. I am looking for a way to analyze my data to see differences between the $4$ treatments while also accounting for differences between the $8$ individuals. </p> <p><strong>Questions:</strong> </p> <ol> <li>In my mind I should perform a 2-way ANOVA (i.e., its non-parametric equivalent), but most of the tests I have found do not allow for replicated data. </li> <li>Do I average the repeats, since no difference was found, and then conduct the ANOVA-equivalent test?</li> <li>Is there a test that will look for differences between the 4 treatments, keeping in mind the individual interaction, with replicated data?</li> </ol>
<p>I have quarterly unbalanced panel data and I want to de-trend my dependent variable to make it stationary. How do I do it? I don't want to take differences, as that would shorten my observations. The residual series that I get after regressing my dependent variable on a time trend does not remove the unit root. It should be noted that my independent variables are stationary. Should I transform/detrend them as well?</p> <p>Also, can I regress differences on levels in a panel data setting, or should all variables be of the same order of integration?</p> <p>Would the Hodrick-Prescott filter be a good choice for detrending quarterly observations (2002q2 to 2013q4)?</p>
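<p>For reference, here is a minimal sketch of HP detrending in Python with statsmodels (the series and seed are invented for illustration; in a panel you would apply this separately to each cross-sectional unit). $\lambda=1600$ is the conventional smoothing value for quarterly data:</p>

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Quarterly index 2002Q2-2013Q4 (47 quarters) and one simulated trending series
idx = pd.period_range("2002Q2", "2013Q4", freq="Q")
rng = np.random.default_rng(0)
y = pd.Series(np.linspace(0, 5, len(idx)) + rng.normal(0, 0.5, len(idx)), index=idx)

# hpfilter returns (cycle, trend); the cycle is the detrended, stationary part
cycle, trend = hpfilter(y, lamb=1600)  # lamb=1600 is the usual choice for quarterly data
detrended = cycle                      # equals y - trend
```

<p>The same call, grouped by panel id, handles an unbalanced panel without losing observations the way differencing does.</p>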
<p>Can someone please provide examples of labelled and unlabelled data? I have been reading definitions of semi-supervised learning, but they do not make clear what the two actually are.</p>
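<p>To make the distinction concrete, a tiny sketch (the feature values and class names are invented): labelled data pairs each input with a known target; unlabelled data has the inputs only.</p>

```python
# Labelled data: each feature vector comes with a known target (its "label").
X_labeled = [[5.1, 3.5], [6.2, 2.9], [4.7, 3.2]]   # e.g. flower measurements
y_labels  = ["setosa", "versicolor", "setosa"]      # the known class of each row

# Unlabelled data: the same kind of feature vectors, but with no targets attached.
X_unlabeled = [[5.9, 3.0], [5.0, 3.4], [6.7, 3.1]]

# Semi-supervised learning fits a model on the few labelled rows plus the
# (usually much larger) pool of unlabelled rows.
assert len(X_labeled) == len(y_labels)  # every labelled example has a label
```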
<p>I'm trying to build a Monte Carlo simulation that can revise its distribution of outcomes of a project based on observed measurements after the project has started.</p> <p><strong>I have a few questions about the best way to do this. I'm not a statistician, so please correct me if I am doing something wildly wrong.</strong></p> <p>For example, let's say I've observed that task <code>x</code> has been selected by person <code>y</code> (whose original 90% CI estimate for the task was [<code>l</code>,<code>h</code>]), and that <code>y</code> has logged <code>w</code> hours of work on the task.</p> <p>I can use that data to re-simulate the project under new constraints and compute a new, more accurate, distribution of outcomes.</p> <p>For example, if <code>w</code> > <code>l</code>, then I know that the lower bound for the time to complete x is now <code>w</code>, not <code>l</code>, and can adjust the distribution used accordingly. However, <code>w</code> is not a 5% lower bound. It's a 0% lower bound (i.e. the limit), so using [<code>w</code>, <code>h</code>] as a 90% CI didn't quite seem correct. As a result I was thinking I could just pick some arbitrarily small number for <code>p(w)</code>, say 0.0001, and continue using .05 for <code>p(h)</code> and then generate a new distribution for [<code>w</code>, <code>h</code>] (of course, I would just use the number of deviations for <code>h</code> and <code>w</code> rather than the probabilities).</p> <p><strong>Is that sound?</strong></p> <p>What's not immediately clear is what I would do in the case where <code>w</code> > <code>h</code>. I have calibrated estimates with a 90% CI, so I should expect to see this 5% of the time. 
If I ask: "what do I know in that case", I come up with the following:</p> <ol> <li>I know <code>w</code> and I know my arbitrarily low <code>p(w)</code></li> <li>From the original confidence interval (which assumes a normal distribution), I can determine<code>p(w + sigma)</code>.</li> <li>So, I could produce: <code>[w, w + sigma]</code> as an interval, using <code>p(w)</code> and <code>p(w + sigma)</code>, and then derive a normal distribution from that (again, just using the z-values).</li> </ol> <p><strong>Is that sound as well?</strong></p>
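<p>To make the first idea concrete, here is a sketch in Python with scipy (the numbers <code>l</code>, <code>h</code>, <code>w</code> are hypothetical). Instead of picking an arbitrary small <code>p(w)</code>, an alternative is to treat <code>w</code> as a hard truncation point of the normal implied by the original 90% CI; a truncated normal is the standard way to encode a hard lower limit, and it also works unchanged when <code>w</code> &gt; <code>h</code>:</p>

```python
from scipy import stats

# Hypothetical numbers: y's 90% CI for task x was [l, h] hours; w hours logged so far.
l, h, w = 10.0, 30.0, 14.0

# Interpret [l, h] as the central 90% interval of a normal distribution.
z = stats.norm.ppf(0.95)          # ~ 1.645
mu = (l + h) / 2.0
sigma = (h - l) / (2.0 * z)

# Condition the original normal on "time to complete >= w" via truncation,
# rather than inventing a pseudo-CI [w, h].
a = (w - mu) / sigma              # truncation point in standard units
cond = stats.truncnorm(a, float("inf"), loc=mu, scale=sigma)
new_mean = cond.mean()            # E[T | T >= w]
```

<p>Samples for the re-simulation then come from <code>cond.rvs(...)</code> instead of the original normal.</p>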
<p>I've seen in <a href="http://www.khanacademy.org/video?v=hxZ6uooEJOk" rel="nofollow">video</a> lessons that if the sample size is big enough (n>30), the standard deviation of the sampling distribution can be approximated by the sample standard deviation. How do we get the standard deviation of the sampling distribution if the sample size is small (n=10)?</p>
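<p>A concrete sketch (the sample values are made up): for small n one still estimates the standard deviation of the sampling distribution of the mean by $s/\sqrt{n}$, but interval estimates use Student's t with $n-1$ degrees of freedom instead of the normal, which widens them to account for the extra uncertainty in $s$:</p>

```python
import numpy as np
from scipy import stats

sample = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.4, 5.8, 5.0])  # n = 10
n = len(sample)

s = sample.std(ddof=1)            # sample standard deviation (n-1 denominator)
se = s / np.sqrt(n)               # estimated sd of the sampling distribution of the mean

# 95% interval for the mean, using t with n-1 df rather than 1.96:
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (sample.mean() - t_crit * se, sample.mean() + t_crit * se)
```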
<p>Related to <a href="http://stats.stackexchange.com/questions/50537/should-one-remove-highly-correlated-variables-before-doing-pca">Should one remove highly correlated variables before doing PCA?</a>, PCA is used a lot in population genetics to essentially cluster individuals into ethnic group based on their genetic markers (SNPs). These SNPs may be highly correlated (linkage disequilibrium, LD), and hence are usually thinned to make them roughly independent. They can also be regressed on each other (<a href="http://www.plosgenetics.org/article/info%3Adoi%2F10.1371%2Fjournal.pgen.0020190" rel="nofollow">http://www.plosgenetics.org/article/info%3Adoi%2F10.1371%2Fjournal.pgen.0020190</a>) to make them even more independent before doing PCA. But these regressions are slightly "suspect" since the samples are not independent, which is what we're trying to estimate in the first place!</p> <p>So this "chicken and egg" problem seems to suggest an EM-like iterative approach where you first try to estimate SNP relatedness using the samples, then sample relatedness using the SNPs, and repeat. Does this approach make sense and is it already used in some areas?</p>
<p>I was wondering whether the following mechanical selection procedure results in a possible bias. First let me introduce the procedure: we start with a model, look only at the t-values, and possibly correct them (heteroskedasticity / autocorrelation). We then only add variables to our final model that are significant. However, I am well aware this gives us a bias with respect to the F-test, since even though some variables can be individually insignificant, they can be jointly significant.</p> <p>However, if we also take that into account, checking whether adding that variable gives us a "sensible" subset to do an F-test on, and only add variables if they give a significant result (indicating joint significance), would this then actually be a pretty good automated method, or does this give some bias? Will anything happen that we do <strong>not</strong> like to have? </p> <p>Furthermore, is forward variable selection (starting with a small set and increasing it) better, or backwards variable selection? </p> <p>Thanks for your answer!</p>
<p>I am working on a prediction model in which I have several factor variables that have many levels. These factor variables have a nested structure, in the form of a Category, a Sub-Category, and a Sub-Sub-Category. For example, suppose that I had one field that was the type of device a user browsed the website with (pc, tablet, phone), which can then be sub-segmented into ((Apple, windows, linux), (kindle, iOS, android), (windows, iOS, android, RIM)), and each of those could be subdivided into version numbers.</p> <p>Is there a standard way of handling nested features like this in tree models? At an intuitive level, I don't think the tree should be splitting on one of the subgroupings until it has first split on one of the major groups (since a windows phone and a windows PC are quite different). Creating a single factor that describes the full tree path would have too many possible levels.</p>
<p>I want to know if the initiation of a state Renewable Portfolio Standard affects the level of renewable energy output in that state.</p> <p>I don't have access to Stata right now, so I'm stuck using Excel. I have data for renewable energy output for all thirty states that have initiated Renewable Portfolio Standards, with energy output levels from 1990-2011. I also know when each state initiated their respective RPS within that time frame, and I want to know whether, following the initiation, the level of renewable energy output changed significantly.</p>
<p>I am working with culture cells where one dish has been transfected with a scrambled knockdown clone and two dishes which have been transfected with two knockdown clones each knocking down the expression of a single gene. </p> <p>An example of an experiment I have performed is to measure the mitochondrial membrane potential (using a fluorescent dye) in these cells using a confocal microscope. This experiment was repeated on three independent occasions.</p> <p>On each experimental day, the intensity of the laser which I used (the laser "gain") varies therefore I cannot combine all experimental days without expressing the dye intensity of each knockdown clone as a percent of the control "scrambled" clone (e.g. control = 100% mean intensity; knockdown clone 1 = 50% mean intensity). </p> <p>Therefore, <strong>I need to test for a difference in means between my control scrambled clone and each of the knockdown clones</strong>, where my control scrambled clone is set to 100% dye intensity on each experimental day and my knockdown clones are normalised to this control. Therefore, my control has no variance (100% for all three experimental days) while my knockdown clones do have variance. </p> <p>I know an ANOVA would not be feasible given the difference in variance. I will look into the procedure suggested by Michael Lew, but <strong>would a t-test be unacceptable as well?</strong> (I have seen papers using ANOVA and t-tests in these circumstances, but in spite of this I am assuming these should not be used). Thanks in advance.</p>
<p>Can you please tell me the formula (I am more interested in the approach than the answer) to answer the following question? Let’s say I drive an unreliable car that is not expected to start 1 out of every 5 mornings. When the car doesn't start, I miss work. I work 5 days a week.<br> What is the expected number of days of work that I will miss per week? Thanks. </p>
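<p>One way to formalize it: each workday is an independent Bernoulli trial with failure probability 1/5, so the number of missed days in a week is Binomial(n = 5, p = 1/5), whose expectation is $np$. A sketch in Python (scipy used only for illustration):</p>

```python
from scipy import stats

n_days, p_miss = 5, 1 / 5           # 5 workdays, car fails to start 1 in 5 mornings
misses = stats.binom(n_days, p_miss)

expected_misses = misses.mean()     # E[X] = n * p = 5 * 0.2 = 1.0 missed day per week
```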
<p>For fantasy basketball, I want to quantify a player's output based on his Points, Rebounds, Assists, Steals, 3s, and Blocks production.</p> <p>Let's pretend that over the course of an imaginary basketball season there are a total of 245,000 Points, 100,000 Rebounds, 53,000 Assists, 18,000 Steals, 12,000 Blocks, and 16,000 3-pointers ever recorded (these are totals by all players who played).</p> <p>How do I weight each stat category so that I know what 1 block might be worth in points, assists, or in any of the other categories?</p> <p>I do not have a stats background, but this sounds like a basic stats question that people here can help me understand.</p>
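<p>One common approach (a sketch, not the only sensible weighting) is to value each category in inverse proportion to its season total, so scarcer stats are worth more points per unit. Using the totals from the question:</p>

```python
totals = {
    "points": 245_000, "rebounds": 100_000, "assists": 53_000,
    "steals": 18_000, "threes": 16_000, "blocks": 12_000,
}

# Weight each category by its scarcity relative to points: one unit of a rare
# stat is "worth" as many points as the ratio of the season totals.
weight_in_points = {cat: totals["points"] / tot for cat, tot in totals.items()}

# e.g. one block is worth about 245000 / 12000, roughly 20.4 points, under this scheme
```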
<p>I am trying to manually calculate the P-value (right-tailed) from an F-test to understand it better. I would like to learn how the value is obtained from the F-test.</p> <p>I have obtained the parameters below:</p> <pre><code>ndf = 1
ddf = 238
Β(ndf/2, ddf/2) = 0.162651
critical value, x (α=0.005) = 8.028403472
F_stat = 8983.6418
</code></pre> <p>Using the equation below, referred from [1]:</p> <blockquote> <p>P value = [1/Β(ndf/2,ddf/2)] * [(ndf*x)/(ndf*x + ddf)]^(ndf/2) * [1-(ndf*x)/(ndf*x + ddf) ]^(ddf/2) * 1/x</p> </blockquote> <p>I got <strong>P_value = 7.06096E-05</strong>.</p> <p>When comparing this with the automatic calculation by Excel using F.DIST.RT,</p> <blockquote> <p>P_value=F.DIST.RT(8983.6418,1,238)</p> </blockquote> <p>I got <strong>P_value=5.2396E-191</strong>.</p> <p>Somehow the P_value obtained via manual calculation and automatically by Excel is not the same. I have verified that the F_stat calculated by Excel and calculated manually by me from the formula in [2] is accurate. Hence the F.DIST.RT result should not be wrong; the help page also mentions it is a right-tailed test.</p> <p>Questions:</p> <ol> <li><p>Have I got the wrong formula for the P_value calculation using the F-test?</p></li> <li><p>Could you advise me where I can learn more about the right formula? Many write-ups on the internet use software/tools to calculate this. I would like to learn the fundamental mathematics behind it. I will learn it no matter how hard its mathematical complexity is. I just need some guidance.</p></li> <li><p>What is the formula used by Excel when F.DIST.RT is invoked? Excel's help page only shows how to use the command without elaborating the fundamentals. My search on the internet to find this was not successful.</p></li> </ol> <p>Thank you very much.</p> <pre><code>Reference:
[1] http://easycalculation.com/statistics/f-test-p-value.php
[2] http://www.originlab.com/forum/topic.asp?TOPIC_ID=4823
</code></pre>
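<p>For what it's worth, the right-tailed F p-value is the tail <em>integral</em> of the F density, which works out to a regularized incomplete beta function; the expression in [1] looks like a density-style formula rather than that integral, which is the likely source of the discrepancy. A sketch in Python with scipy (Excel's F.DIST.RT computes the same survival function):</p>

```python
from scipy import stats, special

d1, d2, F = 1, 238, 8983.6418      # ndf, ddf, F statistic from the question

# Library answer (the same quantity Excel's F.DIST.RT returns):
p_sf = stats.f.sf(F, d1, d2)

# The closed form behind it: the right tail of the F distribution is a
# regularized incomplete beta function,
#   P(F > x) = I_{ d2/(d2 + d1*x) }( d2/2, d1/2 )
p_beta = special.betainc(d2 / 2, d1 / 2, d2 / (d2 + d1 * F))
```

<p>Both routes agree, and at an F of nearly 9000 with 238 denominator degrees of freedom the tail probability really is astronomically small, consistent with Excel's answer.</p>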
<p>I have a group of 200 children who came to the clinic at months 0, 1, 2, 3, 6, 9, and 12 this year. At each clinic visit the children were weighed.</p> <pre><code># Set seed to create reproducible example data
set.seed(50)

# Create patient ID numbers, genders, and ages
control &lt;- NULL
control$Age_0 = round(runif(200,1,10), digits = 1)

# Create monthly weights
control$Weight_0 = ((control$Age_0 + 4) * 2)
control$Weight_1 = (control$Weight_0 * 1.1)
control$Weight_2 = (control$Weight_0 * 1.2)
control$Weight_3 = (control$Weight_0 * 1.3)
control$Weight_6 = (control$Weight_0 * 1.4)
control$Weight_9 = (control$Weight_0 * 1.6)
control$Weight_12 = (control$Weight_0 * 1.8)

# Store as data frame
control &lt;- as.data.frame(control)
</code></pre> <p>I want to study how their weights vary with time. I thought the best way to do this would be to simply plot their mean weights at every visit versus time.</p> <pre><code># Plot mean weights versus time
plot(c(0,1,2,3,6,9,12),
     c(mean(control$Weight_0), mean(control$Weight_1), mean(control$Weight_2),
       mean(control$Weight_3), mean(control$Weight_6), mean(control$Weight_9),
       mean(control$Weight_12)),
     xlab = "Month", ylab = "Weight (Kilograms)",
     main = "Weight versus time", ylim = c(0,50))
</code></pre> <p>I would like to put some vertical error bars on this plot. My questions are:</p> <ol> <li>Should I plot standard deviation, standard error, or a 95% confidence interval?</li> <li>How do I add vertical error bars to the plot?</li> </ol> <p>There is another group of children who got growth hormone injections during the year. I want to compare their growth over time to that of the children in the control group.</p> <pre><code># Create patient ID numbers, genders, and ages
growth &lt;- NULL
growth$Age_0 = round(runif(200,1,10), digits = 1)

# Create monthly weights
growth$Weight_0 = ((growth$Age_0 + 6) * 2)
growth$Weight_1 = (growth$Weight_0 * 1.3)
growth$Weight_2 = (growth$Weight_0 * 1.4)
growth$Weight_3 = (growth$Weight_0 * 1.6)
growth$Weight_6 = (growth$Weight_0 * 1.8)
growth$Weight_9 = (growth$Weight_0 * 1.9)
growth$Weight_12 = (growth$Weight_0 * 2.0)

# Store as data frame
growth &lt;- as.data.frame(growth)

plot(c(0,1,2,3,6,9,12),
     c(mean(growth$Weight_0), mean(growth$Weight_1), mean(growth$Weight_2),
       mean(growth$Weight_3), mean(growth$Weight_6), mean(growth$Weight_9),
       mean(growth$Weight_12)),
     xlab = "Month", ylab = "Weight (Kilograms)",
     main = "Weight versus time", ylim = c(0,50))
</code></pre> <ol> <li>Does this change the kind of error bars I should create (i.e., should I use confidence intervals if I want to examine whether or not there is a difference between the groups)?</li> <li>How do I plot this on the same plot as the control group?</li> </ol> <p>Am I thinking of this problem the right way? Any other suggestions?</p>
<p>I have been trying to implement a Bayesian inference procedure from scratch for a specific problem. I have implemented the procedure, but it doesn't seem to work. </p> <p>Since I can't just post the code online and ask the community to debug it, I was wondering if someone could provide a broader checklist to follow when coding up a Bayesian inference procedure (regardless of language). </p> <p><strong>EDIT: Specifics of the problem</strong></p> <p>I am trying to implement the procedure described in Section 5 of <a href="http://uai.sis.pitt.edu/papers/11/p736-wilson.pdf" rel="nofollow">this paper</a> in <strong>MATLAB</strong>. Briefly put, the procedure I've implemented is: </p> <ol> <li>I have 3 zero-mean variables (i.e., $D = 3$ time series) for $500$ timepoints. I'm using the initial $N = 350$ data points as the training sample.</li> <li>The covariance function I'm using is a squared exponential kernel with one hyperparameter, the characteristic length scale $l$. I'm assuming it to be the same for all 3 time series.</li> <li>I'm keeping the degrees of freedom constant, $\nu = D + 1$.</li> <li>$L$, the lower Cholesky factor of the scale matrix $V$, is computed from the $D \times D$ covariance matrix of the $N \times D$ training dataset.</li> <li><p>The sampling procedure essentially involves 2 steps (using Gibbs sampling):</p> <p>5.1 Sample $u$, an ($N \times D \times \nu$)-dimensional vector, assuming a Gaussian process prior (as defined in equation 19 of the paper). I've assumed a Gaussian likelihood function (as defined in equation 24). For this I'm using <a href="http://homepages.inf.ed.ac.uk/imurray2/pub/10ess/elliptical_slice.m" rel="nofollow">Elliptical Slice Sampling</a>.</p> <p>5.2 Sample the GP hyperparameter $l$, using a lognormal prior (assumption: $mean=1.5$, $var = 1$). I've used <a href="http://homepages.inf.ed.ac.uk/imurray2/teaching/09mlss/slice_sample.m" rel="nofollow">slice sampling</a> for this, with the posterior being the product of the GP prior (eq. 19) and the lognormal density.</p></li> </ol> <p>I let this Gibbs sampler run for $10000$ iterations ($5000$ burn-in), but the convergence plot of $u$ doesn't seem to converge. </p> <p>I also tried this with smaller $N$ (~$50$) and an increased number of iterations, but it didn't work.</p>
<p>I have some $d$-dimensional data points ($d \ge 2$). I want to map them to a circle such that locality is preserved as much as possible. </p> <p>I know that PCA only maps points to a line ($d'=1$) or a plane ($d'=2$), but I want them on a circle. </p>
<p>I'm wondering how to fit a multivariate linear mixed model and find the multivariate BLUPs in R. I'd appreciate it if someone could come up with an example and R code. Thanks.</p> <p><strong>Edit</strong></p> <p>I wonder how to fit a multivariate linear mixed model with <code>lme4</code>. I fitted univariate linear mixed models with the following code:</p> <pre><code>library(lme4)

lmer.m1 &lt;- lmer(Y1~A*B+(1|Block)+(1|Block:A), data=Data)
summary(lmer.m1)
anova(lmer.m1)

lmer.m2 &lt;- lmer(Y2~A*B+(1|Block)+(1|Block:A), data=Data)
summary(lmer.m2)
anova(lmer.m2)
</code></pre> <p>I'd like to know how to fit a multivariate linear mixed model with <code>lme4</code>. The data is below:</p> <pre><code>Block A B    Y1    Y2
    1 1 1 135.8 121.6
    1 1 2 149.4 142.5
    1 1 3 155.4 145.0
    1 2 1 105.9 106.6
    1 2 2 112.9 119.2
    1 2 3 121.6 126.7
    2 1 1 121.9 133.5
    2 1 2 136.5 146.1
    2 1 3 145.8 154.0
    2 2 1 102.1 116.0
    2 2 2 112.0 121.3
    2 2 3 114.6 137.3
    3 1 1 133.4 132.4
    3 1 2 139.1 141.8
    3 1 3 157.3 156.1
    3 2 1 101.2  89.0
    3 2 2 109.8 104.6
    3 2 3 111.0 107.7
    4 1 1 124.9 133.4
    4 1 2 140.3 147.7
    4 1 3 147.1 157.7
    4 2 1 110.5  99.1
    4 2 2 117.7 100.9
    4 2 3 129.5 116.2
</code></pre> <p>Thanks in advance for your time and cooperation.</p>
<p>Modularity: I understand that modularity is supposed to represent sophistication of structure on a macro level, but what does it mean on the individual level? </p> <p>Page Rank: I'm dealing with a dataset of people asking questions and answering questions. If someone answers a question, the arrow is drawn from the person who asked the question to the person who answered the question. If an individual has a high page rank value in this situation, what does that mean?</p> <p>Force Atlas 2: I have no idea how this works compared to the other layout algorithms.</p>
<p>I am a complete newbie in statistical modeling and I never got the opportunity to learn how to express a model in algebraic form and its respective matrix notation. I know how to define models in R code, but I do not understand how to write them in mathematical form, e.g. with $\beta$ as the vector of fixed effects. I am trying to find good literature on this topic, but have a hard time finding any. Does anyone know where I can learn this? Sorry if this is a basic and maybe inappropriate question here, and something that I should probably have learned long ago. However, I have not had the need to use any "advanced" models until now.</p> <p>As an example, I have a longitudinal dataset of biomass from an experiment. The subjects have been exposed to 3 different treatments (LOW, MED, HI) during 200 days, and measured repeatedly on 40 occasions. I have constructed a linear mixed model in R, with lme (from the nlme package), with an autocorrelation structure (AR1) and a random intercept and slope:</p> <pre><code>Biomass.lme &lt;- lme(Biomass~Treatment*Day, random=~Day|Subject, data=longterm,
                   na.action=na.exclude, corr=corAR1(form=~Day|Flask))
</code></pre> <p>where Treatment is a factor and Day is a continuous variable. Hence, I am interested to see if the responses of the groups depend on the exposure time (i.e. the interaction of treatment and time).</p> <p>I would be happy if someone could explain to me how this model would look in algebraic form and why, or provide me with some information on how to express the model.</p>
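<p>As a sketch of the general form (the exact design matrices depend on the contrast coding R uses, so treat the symbols as illustrative): a linear mixed model is written in matrix notation as</p> <p>$$\mathbf{y}=\mathbf{X}\boldsymbol{\beta}+\mathbf{Z}\mathbf{b}+\boldsymbol{\varepsilon},\qquad \mathbf{b}\sim\mathcal{N}(\mathbf{0},\mathbf{G}),\qquad \boldsymbol{\varepsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{R}),$$</p> <p>where $\mathbf{X}$ holds the fixed-effect regressors (intercept, treatment dummies, Day, and their interactions), $\boldsymbol{\beta}$ the fixed effects, $\mathbf{Z}$ the random-effect design (here a per-subject intercept and Day column), $\mathbf{G}$ the random-effect covariance, and $\mathbf{R}$ is block diagonal with AR(1) blocks within subjects. For subject $i$ at occasion $j$ this reads, with two dummies $M_i$ and $H_i$ indicating the MED and HI groups,</p> <p>$$\text{Biomass}_{ij}=(\beta_0+b_{0i})+(\beta_1+b_{1i})\,\text{Day}_{ij}+\beta_2 M_i+\beta_3 H_i+\beta_4 M_i\,\text{Day}_{ij}+\beta_5 H_i\,\text{Day}_{ij}+\varepsilon_{ij},$$</p> <p>with $(b_{0i},b_{1i})^\top\sim\mathcal{N}(\mathbf{0},\mathbf{G})$ and $\operatorname{corr}(\varepsilon_{ij},\varepsilon_{ij'})=\rho^{|\text{Day}_{ij}-\text{Day}_{ij'}|}$ capturing the AR(1) structure.</p>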
<p>According to <a href="http://jarrodwilcox.com/expert-investor/diversification-helps-growth/" rel="nofollow">this extract of a paper</a> posted on the web, the average return of a fair coin flip that pays 100% for heads and loses 50 percent for tails over 3 periods is 25 percent per period, while the geometric mean or compound annual return is 8.3 percent. </p> <p>To calculate the average return, you first calculate the terminal value and then calculate the equivalent per-period value. The terminal value is $0.125\cdot 8 + 0.375\cdot 2 + 0.375\cdot 0.5 + 0.125\cdot 0.125$, or $1.9531$, and the per-period value is $1.9531^{1/3}$, or $1.25$. Subtracting $1$ gets you $25\%$ per period.</p> <p>To calculate the average geometric return or mean compound return, you take the cube root of each possible outcome and then apply the probabilities to each of the results. So the geometric mean is $0.125\cdot 100\% + 0.375\cdot 26\% + 0.375\cdot (-20.6\%) + 0.125\cdot (-50\%)$, or only $8.3\%$.</p> <p>My question is how one should interpret the geometric mean return. Does it describe the expected growth in your capital, such that if I ran an experiment where I made a number of 3-year investments, the expected growth of my capital over the 3 years would be $8.3\%$? </p> <p>Why does the geometric mean return decline with period?</p>
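<p>The two calculations can be checked directly (a sketch using the numbers from the question): the arithmetic route annualizes the <em>expected</em> terminal wealth, while the geometric route annualizes each path first and then averages the rates.</p>

```python
# Terminal wealth outcomes of three double-or-halve coin flips, and their probabilities
outcomes = [8.0, 2.0, 0.5, 0.125]
probs    = [0.125, 0.375, 0.375, 0.125]

# Arithmetic route: expected terminal wealth, then the equivalent per-period rate
terminal = sum(p * o for p, o in zip(probs, outcomes))           # 1.953125
arith_per_period = terminal ** (1 / 3) - 1                       # 0.25, i.e. 25%

# Geometric route: per-period rate of each path first, then the probability-weighted mean
geo_per_period = sum(p * (o ** (1 / 3) - 1) for p, o in zip(probs, outcomes))  # ~0.083
```

<p>The gap between the two is the usual arithmetic-geometric spread driven by the volatility of the paths.</p>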
<p>Assume there are two candidate models, $\hat{f}(\beta)$ and $\hat{g}(\beta,\theta)$. If the true data generating process is $f(\beta)$, then $\hat{g}(\beta,\theta)$ is unbiased but inefficient. If, on the other hand, the true DGP is $g(\beta, \theta)$, then $\hat{f}(\beta)$ is biased.</p> <p>Under a classical model selection regime, the analyst begins with $\hat{f}$ and has to reject $h_0:\theta = 0$ in order to justify using $\hat{g}$. The consequence of Type I error is biased estimate of $\beta$, and the consequence of a Type II error is an inefficient model.</p> <p>In a new model selection regime, the analyst begins with $\hat{g}$ and must reject $h_0:\theta\neq0$. The consequences of Type I and Type II errors are now precisely reversed.</p> <p>Is there a statistical advantage to the new method? It seems somehow "safer," but is there a way to justify this statistically? In general, would you prefer risk to be on the Type I or Type II error?</p>
<p>DISCLAIMER: I don't have a lot of stats experience, so please don't laugh too hard if my question is trivial.</p> <p>I have run an experiment with 5 categorical factors. The factors have anywhere between 2 and 8 levels each. I have one response variable, which is continuous in the range of 0 to 100. All in all, I have run a fully factorial experiment with 800-something combinations. Each combination has 10 samples. In total, in R-speak, I have a data frame with 6 columns and 8607 rows.</p> <p>My goal: determine the level of each factor that results in the best performance. For example, I want to be able to say "Performance is generally the best when factor1 is level "A", factor2 is level "C", ..., and factor5 is level "E". Conclusions: always use level "A" for factor1 ....". </p> <p>How do I achieve this?</p> <p>I first thought of PCA, but this isn't quite correct, because the components that PCA finds are combinations of factors, but I need to be able to say which factor level is best, for each and every factor. I want to keep the factors intact.</p> <p>I also thought of ANOVA, which may be what I want, but I'm not sure how to use its output. For example, in R, I get:</p> <pre><code>&gt; summary(aov(...))
              Df  Sum Sq Mean Sq  F value    Pr(&gt;F)    
preprocess     7  21.430   3.061  180.771 &lt; 2.2e-16 ***
bugData        2   5.276   2.638  155.782 &lt; 2.2e-16 ***
fileData       5   6.462   1.292   76.315 &lt; 2.2e-16 ***
param1         2 255.766 127.883 7551.306 &lt; 2.2e-16 ***
param2         1  15.579  15.579  919.887 &lt; 2.2e-16 ***
Residuals   8589 145.457   0.017                       
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
</code></pre> <p>I don't know how to interpret these results. Is it that param1 has the largest effect, because its "Sum Sq" is largest? How do I know what level of param1 is best?</p> <p>So, this is my idea: For each factor, compare the "winning-percentage" of each level against every other level. 
That is, the number of times that level X "beats" level Y, given that all other factors are equal. I can compare level X and level Y a lot of times, because there are so many other factors and levels of those factors. So, I change the level of the other factors, compare level X and level Y in the current factor, and keep track of who won. Doing this, I should end up with something like "For factor1, levelX beats levelY 85% of the time, and therefore is the better choice."</p> <p>Does this approach make sense? Is there a name for it? Or is there another approach altogether that achieves what I want?</p> <p>Any help or pointers is greatly appreciated. I would prefer if my answer is implementable in R, but I can adapt. I have a very beefy machine to use (16 processors, 196G RAM), so I'm not too worried about the efficiency of the algorithm that solves my problem.</p>
<p>I use an MLP with one hidden layer (15 nodes) and one output node. I use a sigmoid activation function, atan as the error function, the error itself calculated with MSE, 5-fold cross-validation, and resilient backpropagation, for a batched binary classification task where within each batch approx. 1000 samples are available.</p> <p>My original dataset has a ratio of approx. 30/70 positive vs. negative samples. No matter what NN setup I tried (more features, more samples), the training error didn't go beneath 0.1; the f-measure I used for evaluation was between 0.3-0.5, precision 0.6-0.8, and recall only between 0.2-0.4.</p> <p>Then I tried oversampling in order to increase the positive/negative ratio to approx. 1. Now with the same setup the error decreased only to 0.09, but I get a constant f-measure of > 0.85, precision around 0.8, and recall 0.95-1 (!?).</p> <p>Now I'm really wondering if my setup is completely wrong or if I have found a way to fit my data well.</p> <p>Does anybody have some hints where I might have made a mistake, or do you think my setup is ok and my classifier, too?</p>
<p>I am using SPSS to analyse the 3-way mixed model ANOVA of my study. I have split my variables in order to interpret simple interaction effects and found that while the graph displays an interaction effect between the variables, the SPSS output (tests of within-subjects and between-subjects effects) does not display any significant main effect or interaction effect. </p> <p>How should I report this finding? </p> <p>Thanks!</p>
<p>Ryan Tibshirani once introduced a more general type of Lasso, where the regularizer is $$\parallel D \alpha \parallel_1$$ instead of $\parallel \alpha \parallel_1$. <a href="http://www.stat.cmu.edu/~ryantibs/papers/genlasso.pdf" rel="nofollow">See the paper</a>.</p> <p>However, there is nearly no discussion of this form, and I wonder why, since it's a great way to deal with derivative-smoothness regularizers.</p> <ul> <li><p>Is there an easy way I overlooked to transform a general Lasso to the standard Lasso form? </p></li> <li><p>Which algorithms can be used for the generalized Lasso? Currently I have only tested quadratic programs, but this is quite slow.</p></li> </ul> <p>Thank you!</p>
<p>Someone has surveyed a number of people and put the results in a database (Survey 1). Each observation has additional information that, for any subpopulation (men only, young only, etc.), gives a national-level estimate of the number of people in that subpopulation, as well as a confidence interval for that estimate. As expected, the sum of the estimates for mutually exclusive subgroups (number of men plus number of women) gives the estimate for the total number of people in the population. </p> <p>I don't know how the survey was conducted, the sampling method, etc. All I have is the database. All estimated counts coming from the database are assumed to be log-normally distributed. </p> <p>Someone else has done another survey (Survey 2). A lot more people were interviewed. This survey was not meant to estimate anything -- it was just meant to give information on those people who were interviewed.</p> <p>For the population as a whole, and for any subpopulation, Survey 2 gives an undercount, since not everyone in the population was interviewed. Often, the estimate based on Survey 1 is greater than the count from Survey 2, but that's not always the case.</p> <p><strong>Question:</strong> What is the best way to combine the information from the two surveys? I am fine with an approximate solution.</p> <p>If I only had Survey 1, my point estimate for the number of people in subpopulation A would be E(A). However, from Survey 2, I know that A > $min(A)$. So should I be computing E(A|A > $min(A)$)?</p> <p>Doing so leads to a contradiction. Namely, the sum of estimated counts in mutually exclusive subpopulations comes out to be greater than the estimated count for the entire population.</p> <p>Thank you for your help. I hope this is clear. If not, please ask, I will try to explain. :-)</p>
36,312
<p>Gelman &amp; Hill (2006) say:</p> <blockquote> <p>In Bugs, missing outcomes in a regression can be handled easily by simply including the data vector, NA’s and all. Bugs explicitly models the outcome variable, and so it is trivial to use this model to, in effect, impute missing values at each iteration.</p> </blockquote> <p>This sounds like an easy way to use JAGS to do prediction. But do the observations with the missing outcomes also affect parameter estimates? If so, is there an easy way to keep these observations in the dataset that JAGS sees, but to not have them affect the parameter estimates? I was thinking about the cut function, but that's only available in BUGS, not JAGS.</p>
30,695
<p>I want to compare the distance/similarity of 2D flood-frequency data maps. The maps are square with a YxY grid size, and each cell of the map stores its flood frequency. For example, in a 5x5 grid we may have these two flood-frequency maps of the same area for the past 10 years, where we observe how many times the corresponding cell/place flooded:</p> <p>0 0 0 0 0 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 0 0 0 0<br> 0 1 2 1 0 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 2 3 1 0<br> 0 4 6 2 0 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 9 9 8 7 6<br> 0 1 2 1 0 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0 2 3 1 0<br> 0 4 6 2 0 &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 9 9 8 7 6</p> <p>I can easily transform these maps into probability maps that each add up to one. So the question now is: what is the most meaningful way of comparing this kind of map to find their (dis)similarity? A distance metric taken from the information theory field like JSD or L1 (and many others), or a similarity metric taken from the image processing field like the area under the ROC curve (and many others)?</p>
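For concreteness, here is a small sketch (Python, using the two example grids above) of one of the candidates mentioned: normalize each map to a probability distribution and compute the Jensen-Shannon divergence. JSD is symmetric, bounded in $[0, 1]$ with log base 2, and well defined even when some cells have zero probability:

```python
import numpy as np

def jsd(p, q):
    """Jensen-Shannon divergence (base 2) between two frequency maps,
    each normalized internally to a probability distribution."""
    p = np.asarray(p, dtype=float).ravel()
    q = np.asarray(q, dtype=float).ravel()
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0            # 0 * log(0) contributes nothing
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

map1 = np.array([[0,0,0,0,0],[0,1,2,1,0],[0,4,6,2,0],[0,1,2,1,0],[0,4,6,2,0]])
map2 = np.array([[0,0,0,0,0],[0,2,3,1,0],[9,9,8,7,6],[0,2,3,1,0],[9,9,8,7,6]])

d = jsd(map1, map2)   # 0 for identical maps, 1 for disjoint support
```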
36,315
<p>In gaussian_kde from the scipy library there are two methods to estimate the bandwidth, "scott" and "silverman".</p> <p>Silverman's rule of thumb is explained <a href="http://en.wikipedia.org/wiki/Kernel_density_estimation#Practical_estimation_of_the_bandwidth" rel="nofollow">here</a> and the equivalent function in R is provided <a href="http://stat.ethz.ch/R-manual/R-patched/library/MASS/html/bandwidth.nrd.html" rel="nofollow">here</a>.</p> <p>So my question is: why doesn't the Silverman method in "gaussian_kde" look the same? You can see the code <a href="https://github.com/scipy/scipy/blob/v0.12.0/scipy/stats/kde.py#L432" rel="nofollow">here</a>. There is nothing about the variance or anything else; the only parameter the method uses is the dimension of the data!</p> <p>Last but not least, I want to have a different bandwidth for each dimension; each dimension should have its own bandwidth.</p> <p>Maybe I am mistaken and this "silverman_factor" is not exactly the Silverman estimate. If so, can someone show me how to calculate the bandwidth using this library or another library in Python?</p> <p><strong>UPDATE</strong></p> <p>Before going to the code, I would like to remind myself of Silverman's rule of thumb for bandwidth selection.
So basically it is equal to 1.06*std*n**(-1/5), where std is the standard deviation and n is the number of observations.</p> <p>Here is an R implementation:</p> <pre><code>library(MASS)
n.grid = 25
x = c(1, 9, 4, 10)
y = c(11, 3, 5, 7)
delta = c(bcv(x), bcv(y)) # 14.66375 11.78500
kde2d.xy &lt;- kde2d(x, y, n = n.grid, h = delta)
FXY &lt;- kde2d.xy$z + .Machine$double.eps
dx &lt;- kde2d.xy$x[2] - kde2d.xy$x[1]
dy &lt;- kde2d.xy$y[2] - kde2d.xy$y[1]
PXY &lt;- FXY/(sum(FXY)*dx*dy)
PX &lt;- rowSums(PXY)*dy
PY &lt;- colSums(PXY)*dx
HXY &lt;- -sum(PXY * log(PXY))*dx*dy # entropy
</code></pre> <p>and the equivalent Python code:</p> <pre><code>import numpy as np
import scipy.stats as stats
from scipy.stats import pearsonr

def colSum(x):
    return sum(x)

def rowSum(x):
    return sum(x.T)

a = (1, 9, 4, 10)
b = (11, 3, 5, 7)
n = 25
tt = np.array([a, b])
rvs = tt.T
kde = stats.gaussian_kde(rvs.T, bw_method='silverman')
#~ kde = stats.gaussian_kde(rvs.T)
#~ kde.set_bandwidth(11)
kde.silverman_factor()
x_flat = np.r_[rvs[:,0].min():rvs[:,0].max():25j]
y_flat = np.r_[rvs[:,1].min():rvs[:,1].max():25j]
# Regular grid to evaluate kde upon
x, y = np.meshgrid(x_flat, y_flat)
grid_coords = np.append(x.reshape(-1,1), y.reshape(-1,1), axis=1)
z = kde(grid_coords.T)
z = z.reshape(n, n)
rho = pearsonr(a, b)[0]
FXY = z + np.finfo(float).eps
dx = x[0][2] - x[0][1]
dy = y[1][0] - y[0][0]
PXY = FXY/sum(sum(FXY))*dx*dy
PX = rowSum(PXY)*dy
PY = colSum(PXY)*dx
HXY = -sum(sum(PXY * np.log(PXY)))*dx*dy
print('X : ' + str(a))
print('Y : ' + str(b))
print('bw : ' + str(kde.covariance_factor()))
print(kde.covariance)
print('entropy: ' + str(HXY))
</code></pre> <p>The results:</p> <pre><code>       bw1   bw2   H
python 11.34 7.39  0.13
R      14.66 11.78 4.32
</code></pre> <p>I also have to add, from what @rroowwllaanndd said, that the actual <em>silverman factor</em> is calculated in the <code>_compute_covariance</code> function and the bandwidth values are the diagonal of the <code>kde_object.covariance</code> matrix, even though <code>kde_object.silverman_factor()</code> does not return these
diagonals.</p> <p>It is also very confusing that when I feed the R code the bandwidth values from Python, the entropy is not comparable to Python's value.</p> <pre><code>library(MASS)
n.grid = 25
x = c(1, 9, 4, 10)
y = c(11, 3, 5, 7)
# delta = c(bcv(x), bcv(y))
delta = c(11.34, 7.39) # scipy bandwidth estimates from the Python code
kde2d.xy &lt;- kde2d(x, y, n = n.grid, h = delta)
FXY &lt;- kde2d.xy$z + .Machine$double.eps
dx &lt;- kde2d.xy$x[2] - kde2d.xy$x[1]
dy &lt;- kde2d.xy$y[2] - kde2d.xy$y[1]
PXY &lt;- FXY/(sum(FXY)*dx*dy)
PX &lt;- rowSums(PXY)*dy
PY &lt;- colSums(PXY)*dx
HXY &lt;- -sum(PXY * log(PXY))*dx*dy
</code></pre>
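For reference, scipy's Silverman rule is just a scalar factor applied to the full sample covariance; the data's scale enters only through the covariance matrix, not through the factor itself, which is why the factor code contains "nothing about the variance". A sketch with simulated data (the factor formula is the one in the scipy source linked above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(size=(2, 200))          # d = 2 dimensions, n = 200 points

kde = stats.gaussian_kde(data, bw_method='silverman')

# scipy's Silverman factor depends only on n (points) and d (dimensions):
# (n * (d + 2) / 4) ** (-1 / (d + 4))
n, d = kde.n, kde.d
factor = (n * (d + 2) / 4.0) ** (-1.0 / (d + 4.0))

# Per-dimension bandwidths: square roots of the diagonal of the kernel
# covariance, which is factor**2 times the sample covariance of the data
bandwidths = np.sqrt(np.diag(kde.covariance))
```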
73,815
<p>Let $X_1,\dots,X_m$ be i.i.d. with distribution function $F$ and $Y_1,\dots,Y_n$ be i.i.d. with distribution function $G$. Suppose that there exists an unknown function $\psi:\mathbb{R}\mapsto\mathbb{R}$ such that $\psi(X_i)\sim N(0,1)$ and $\psi(Y_j)\sim N(0,\sigma^2)$ for all $i=1,\dots,m$ and $j=1,\dots,n$. I'd like to estimate $\sigma^2$ in this problem.</p> <p>I have obtained the following facts:</p> <p>Note that $F(x)=P(X\le x)=P(\psi(X)\le \psi(x))=\Phi(\psi(x))$, where $\Phi$ is the standard normal cumulative distribution function. This implies $\psi(x)=\Phi^{-1}(F(x))$.</p> <p>Recall that $\psi(Y_j)\sim N(0,\sigma^2)$. So $$\check\sigma^2=\frac1n\sum_{j=1}^n\psi^2(Y_j)$$ is an optimal estimator for $\sigma^2$. Since $F$ is unknown, I replace it with its empirical distribution function $\hat F_m$ based on $X_1,\dots,X_m$. Hence, it is natural to replace $\psi$ with $\hat\psi=\Phi^{-1}(\hat F_m)$. Therefore, I conjecture that $$\hat\sigma^2=\frac1n\sum_{j=1}^n\hat\psi^2(Y_j)$$ is an optimal estimator. I have tried to show that $\check\sigma^2$ and $\hat\sigma^2$ are asymptotically equivalent, but I failed. Could anyone help me? Or does anyone have another approach?</p>
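Not a proof, but a simulation sketch of the plug-in estimator may be useful for checking the conjecture numerically. The choices below are mine, not from the problem: $\psi = \log$ (so $X$ and $Y$ are lognormal), and $\hat F_m$ is shrunk slightly away from 0 and 1 so that $\Phi^{-1}$ stays finite:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
m, n, sigma = 20000, 20000, 1.5

# Simulate with a known monotone transform g = exp, i.e. psi = log:
X = np.exp(rng.normal(0.0, 1.0, size=m))      # psi(X) ~ N(0, 1)
Y = np.exp(rng.normal(0.0, sigma, size=n))    # psi(Y) ~ N(0, sigma^2)

# Plug-in psi-hat = Phi^{-1}(F_hat), with F_hat kept strictly in (0, 1)
Xs = np.sort(X)
F_hat = (np.searchsorted(Xs, Y, side='right') + 0.5) / (m + 1.0)
psi_hat_Y = norm.ppf(F_hat)

sigma2_hat = np.mean(psi_hat_Y ** 2)          # target: sigma^2 = 2.25
```

The estimate lands near $\sigma^2$, with a small downward bias coming from the tail truncation of $\hat F_m$ (values of $\psi(Y_j)$ beyond the range of the $X$ sample get clipped).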
73,816
<p>I'm not used to using variables in the date format in R. I'm just wondering if it is possible to add a date variable as an explanatory variable in a linear regression model. If it's possible, how can we interpret the coefficient? Is it the effect of one day on the outcome variable? </p> <p>See my <a href="https://gist.github.com/blaquans/5966314" rel="nofollow">gist</a> with an example what I'm trying to do. </p>
73,817
<p>I want to run Levene's test to test the equality of variances between a full sample a number of sub-samples. I can't find anything about Levene's test that states whether this would violate the assumptions of the test. In other words, given the null hypothesis that $\mathrm{Var}(X_{1}) = \mathrm{Var}(X_{2})$, does Levene's test require that $X_{1} \cap X_{2} = \varnothing$?</p>
73,818
<p>Is there a way to simplify this equation? </p> <p>$$\dbinom{8}{1} + \dbinom{8}{2} + \dbinom{8}{3} + \dbinom{8}{4} + \dbinom{8}{5} + \dbinom{8}{6} + \dbinom{8}{7} + \dbinom{8}{8}$$</p> <p>Or more generally,</p> <p>$$\sum_{k=1}^{n}\dbinom{n}{k}$$</p>
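A quick numerical check of the closed form one might conjecture here (the binomial theorem at $x=y=1$ gives $\sum_{k=0}^{n}\binom{n}{k}=2^n$, so the sum starting at $k=1$ is $2^n-1$):

```python
from math import comb

n = 8
lhs = sum(comb(n, k) for k in range(1, n + 1))  # the eight-term sum above
rhs = 2 ** n - 1                                # conjectured closed form
```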
25,114
<p>Say I have below example data, where rows are observations and columns are variables, and NAs stand for missing values.</p> <pre><code> 1 2 NA 4 5 6 14 5 2 6 13 7 1 11 4 NA 9 6 15 12 3 12 NA 8 3 7 12 8 1 NA 7 8 9 4 6 1 </code></pre> <p>I want to impute the missing values by regression (I know I can impute by means, but I need to see how regression performs). There is a CRAN package named 'Amelia' for imputation by regression, but it gives an error for above data saying that #observations is smaller than #variables. 'mi' package also gives an error. I can code myself, but I do not want to reinvent the wheel since I am sure there is already a package for that which would work faster than the one I write (Speed is important since I will run this imputation for thousands of variables and hundreds of observations with lots of missing values). So, does anybody know about a package which would impute the values above by regression? Thanks.</p>
73,819
<p>I have two dependent groups that could have a disease before and after a treatment. My sample size is 12214 subjects. Before the treatment, 7 of them had the disease and after the treatment 14 of them had the disease.</p> <p>Percentages of total sample are very low although the number of patients with disease doubled. Does it make sense that McNemar gives me a significant p-value here? Why?</p> <p>Here is my R code:</p> <pre><code>mat=matrix(c(12207, 7, 12200, 14),2) mcnemar.test(mat) </code></pre> <p>Is there any size test I can run after McNemar? Have I chosen the right test?</p>
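One thing worth noting when reproducing this: McNemar's test is driven entirely by the two discordant-pair counts (subjects whose status changed), which the marginal totals 7 and 14 do not determine by themselves. A sketch of the exact (binomial) version of the test, with hypothetical discordant counts consistent with 7 diseased before and 14 after:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) McNemar p-value from the two discordant-pair
    counts: under H0 the b changes one way are Binomial(b + c, 0.5).
    Two-sided p = twice the smaller binomial tail, capped at 1."""
    n = b + c
    k = min(b, c)
    p = 2.0 * sum(comb(n, i) for i in range(k + 1)) / 2.0 ** n
    return min(p, 1.0)

# Hypothetical: 2 subjects recovered, 9 became newly diseased
p_value = mcnemar_exact(2, 9)
```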
73,820
<p>A paper I am trying to replicate used EViews to estimate its state space model (by maximizing the associated likelihood). They used the BHHH and Marquardt algorithms.</p> <p>My question is: given that the Marquardt algorithm is generally used to solve least-squares type problems, what is EViews doing to allow it to be applied to maximum likelihood problems?</p> <p>1.) Does it change the Marquardt algorithm? If so, how?</p> <p>2.) Does it reformulate the log-likelihood maximization as a least-squares problem? If so, how?</p> <p>Baz</p>
36,322
<p>I'm really new to stats and R and I suspect I'm missing something obvious. I have a set of memberships, all of which started after a point in time (six months ago). I have run my query to estimate the number of days in the membership up to today, marking those still ongoing as censored. I've done my plot from the data, so the maximum is 180 days and the survival rate drops to zero at 180 days. Is this the best way to look at this data? I'm a bit unsure of the best approach. It's the Kaplan-Meier survival estimator. Given that a lot of the memberships are ongoing/censored, should the survival probability at 180 days be zero?</p>
73,821
<p>I often hear the claim that Bayesian statistics can be highly subjective. The main argument is that inference depends on the choice of a prior (even though one could use the principle of indifference or maximum entropy to choose a prior). In comparison, the claim goes, frequentist statistics is in general more objective. How much truth is there in this statement? </p> <p>Also, this makes me wonder: </p> <ol> <li>What are concrete elements of frequentist statistics (if any) that can be particularly subjective and that are not present or are less important in Bayesian statistics?</li> <li>Is subjectivity <strong>more prevalent</strong> in Bayesian than in frequentist statistics? </li> </ol>
73,822
<p>How should one choose the starting values for the beta estimates, which we need to specify in the PARMS or PARAMETERS statement when using PROC NLIN? (PROC NLIN is used to run nonlinear regression in SAS.)</p>
30,769
<p>How can I test heteroskedasticity of a time series in R? I have heard of two tests <code>McLeod.Li.test</code> and <code>bptest</code> (Breusch-Pagan test). Can I use these two tests? and what are the differences and assumptions of these tests if I can use them?</p> <p>Thanks</p>
73,823
<p>How do I interpret this model:</p> <p>$$ Price = -7.095 - 9.471[\ln(Number Of Different Kinds Of Fruits)] + 53.942 \sqrt{Number Of Customer} + ... $$</p> <p>Is it:</p> <ul> <li>If there are no kinds of fruits nor visitors nor other variables, the expected average price will be -7.095</li> <li>If the natural logarithm of Number Of Different Kinds Of Fruits increases by one and all other things remain the same, the expected average price will decrease by 9.471</li> <li>If there is one extra customer and all other variables remain the same, the average expected price will increase by 53.942</li> </ul>
73,824
<p>I am investigating a common discrepancy between male and female self-reports on a survey about sexual experiences. Generally, women report ~2/3 higher rates than men do. I have modified the survey and now need to find out if/how the discrepancy between males' and females' reporting rates may have changed. Males and females each fill out both versions (modified and original) of their gender-specific survey. So, what I want to compare is the difference between male-and-female difference scores on the original survey, and male-and-female differences on the newer/modified survey. What specific statistical test could I use here?</p>
73,825
<p>I am trying to find a more aesthetic way to present an interaction with a quadratic term in a logistic regression (categorisation of continuous variable is not appropriate).</p> <p>For a simpler example I use a linear term.</p> <pre><code>set.seed(1) df&lt;-data.frame(y=factor(rbinom(50,1,0.5)),var1=rnorm(50),var2=factor(rbinom(50,1,0.5))) mod&lt;-glm(y ~ var2*var1 , family="binomial" , df) #plot of predicted probabilities of two levels new.df&lt;-with(df,data.frame(expand.grid(var1=seq(-2,3,by=0.01),var2=levels(var2)))) pred&lt;-predict(mod,new.df,se.fit=T,type="r") with(new.df,plot(var1,pred$fit)) #plot the difference in predicted probabilities trans.logit&lt;-function(x) exp(x)/(1+exp(x)) pp&lt;-trans.logit(coef(mod)[1] + seq(-2,3,by=0.01) * coef(mod)[3]) -trans.logit((coef(mod)[1]+coef(mod)[2]) + seq(-2,3,by=0.01) * (coef(mod)[3]+coef(mod)[4])) plot(seq(-2,3,by=0.01),pp) </code></pre> <h3>Questions</h3> <ul> <li>How can I plot the predicted probability difference between the two levels of var2 (rather than the 2 levels separately) at different values of var1?</li> <li>Is there a way to define contrasts so I can use these in the glm so I can then pass this to predict? - I need a CI for the difference in probabilities</li> </ul>
73,826
<p>I overheard a professor speak about the Cauchy meta-distribution, but I am unable to find anything about it on the web. My question is: what is the Cauchy meta-distribution and what is the theory behind it?</p>
73,827
<p>I'm trying to find the probability that two randomly-selected letters from "average" text in a language will be the same. </p> <p>For example, if my hypothetical language contains four letters which each occur on average with the following frequency:</p> <pre><code>A = 60%
B = 25%
C = 10%
D = 5%
</code></pre> <p>What is the probability that any two letters selected from a representative text will be the same?</p> <p>My intuition for solving this is first to find the chance that they're different: the sum, over each letter in the alphabet, of the probability that the letter is chosen and then some other letter is chosen next:</p> <pre><code>(0.6 * (1 - 0.6) + 0.25 * (1 - 0.25) + 0.1 * (1 - 0.1) + 0.05 * (1 - 0.05)) = 0.565 </code></pre> <p>Then the chance that they are the same: </p> <pre><code>1 - 0.565 = 0.435 </code></pre> <p>Is this reasoning sound? It seems like a very basic probability problem, but I always seem to think about these things in the wrong way, and I would appreciate a sanity check (and any pointers to materials which would help me be more confident about this kind of thing in the future!)</p>
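A tiny numeric check of both routes, the direct sum of squared frequencies and the complement computation above (they agree because $\sum_i p_i(1-p_i) = 1 - \sum_i p_i^2$ when $\sum_i p_i = 1$):

```python
freqs = {'A': 0.60, 'B': 0.25, 'C': 0.10, 'D': 0.05}

# Direct route: P(two independent draws match) = sum of p_i^2
p_same = sum(p * p for p in freqs.values())

# Complement route from the question
p_diff = sum(p * (1 - p) for p in freqs.values())
```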
73,828
<p>Please see the model below (<a href="http://i.stack.imgur.com/CspnD.png" rel="nofollow">link to bigger image</a>). The independent variables are properties of 2500 companies from 32 countries, trying to explain companies' CSR (corporate social responsibility) score. </p> <p>I am worried about the VIF scores of especially the <code>LAW_</code>, <code>GCI_</code> and <code>HOF_</code> variables but I really need them all included in the model to connect it to the theoretical background the model is built upon. All variables are discrete numeric values, except the law <code>LAW_</code> variables: they are dummies as to which legal system applies in the companies' country of origin (either english, french, german or scandinavian).</p> <p><img src="http://i.stack.imgur.com/CspnD.png" alt="enter image description here"></p> <p>Amongst other articles, I have read <a href="http://www.thejuliagroup.com/blog/?p=1405" rel="nofollow">this article</a> about dealing with collinearity. Often-suggested tips are removing the variable with highest VIF-score (in my model this would be <code>LAW_ENG</code>). But then other VIF-scores increase as a result. I do not have the proper knowledge to see through what is going on here and how I can solve this problem. </p> <p>I have uploaded the corresponding data <a href="https://www.dropbox.com/s/tw6vyz0sx89fnch/stackexchange.sav" rel="nofollow">here</a> (in SPSS <code>.sav</code> format). I would really appreciate somebody with more experience having a quick look and tell me a way to solve the collinearity problem without taking out (any or too many) variables.</p> <p>Any help is greatly appreciated.</p> <p>P.S. For reference, I am including a correlation table (<a href="http://i.stack.imgur.com/Q4eiY.png" rel="nofollow">link to bigger image</a>):</p> <p><img src="http://i.stack.imgur.com/Q4eiY.png" alt="enter image description here"></p>
36,327
<p>I want to see if there is a correlation (or any sort of relationship) between two time series I am working with.</p> <p>One of them is a times series of temperature data, the other one is concentration of a substance. I am trying to see if there is a relationship between the concentration and temperature.</p> <p>However, temperature data series is not stationary (and there are gaps in it), and neither is the concentration series. The problem is that I don't have enough information on the temperature data series since I don't have data for even one period (one year). </p> <p>What can I do?</p>
73,829
<p>Let's say I want to generate data with a particular <strong>association matrix</strong>. I am taking the <a href="http://en.wikipedia.org/wiki/Phi_coefficient" rel="nofollow">phi coefficient</a> as the measure of the degree of association.</p> <p>Here are examples using R. </p> <pre><code>require(psych)
var1 &lt;- sample(c("P", "A"), 10000, replace = TRUE)
var2 &lt;- sample(c("P", "A"), 10000, replace = TRUE)
mydf &lt;- data.frame(var1, var2)

# degree of association
require(psych)

# No association case:
# random variables, so 0 association expected
phi(table(var1, var2))
[1] -0.01

# copy of the same variable, 1 association expected
var3 &lt;- var1
phi(table(var1, var3))
</code></pre> <p>Assume that I have a 4 x 4 matrix of <strong><em>phi coefficients</em></strong> between the four categorical variables. Say the following is the <strong>association matrix</strong> (just like a correlation matrix):</p> <pre><code>amat &lt;- matrix(c(1, 0.5, 0.4, 0.3,
                 0.5, 1, 0.5, 0.3,
                 0.4, 0.5, 1, 0.2,
                 0.3, 0.3, 0.2, 1), 4)
rownames(amat) &lt;- c("VarA", "VarB", "VarC", "VarD")
colnames(amat) &lt;- c("VarA", "VarB", "VarC", "VarD")
amat
     VarA VarB VarC VarD
VarA    1  0.5  0.4  0.3
VarB  0.5    1  0.5  0.3
VarC  0.4  0.5    1  0.2
VarD  0.3  0.3  0.2    1
</code></pre> <p>Is there any way to generate data with four variables, with say 10000 observations, that approximately holds the above association? I know from <a href="http://stats.stackexchange.com/questions/100770/generating-a-correlated-data-matrix-where-both-observations-and-variables-are-co">this post</a> how we can do a similar thing with quantitative variables. The example does not need to be R-specific; I only need the idea, which can be translated into any programming language. </p>
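One approximate recipe (a sketch, not R-specific): threshold a multivariate normal at 0. For 50/50 binary margins, as produced by the `sample(c("P", "A"), ...)` call above, thresholding at latent correlation $\rho$ gives $\phi = \frac{2}{\pi}\arcsin(\rho)$, so invert it and use $\rho = \sin(\pi\phi/2)$ as the latent correlation matrix. This assumes the inverted matrix stays positive definite, which holds for the values above:

```python
import numpy as np

# Target phi (association) matrix from the question
amat = np.array([[1.0, 0.5, 0.4, 0.3],
                 [0.5, 1.0, 0.5, 0.3],
                 [0.4, 0.5, 1.0, 0.2],
                 [0.3, 0.3, 0.2, 1.0]])

# Latent normal correlation that yields the target phi after thresholding
latent = np.sin(np.pi * amat / 2.0)

rng = np.random.default_rng(42)
z = rng.multivariate_normal(np.zeros(4), latent, size=100_000)
binary = (z > 0).astype(int)     # "P"/"A" coded as 1/0, each with prob 1/2

# The Pearson correlation of 0/1 variables is exactly the phi coefficient
phi_hat = np.corrcoef(binary, rowvar=False)
```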
73,830
<p>I have a fairly basic statistics application question. Let's say I have a set of four fold-change values, representing the abundance of a factor as it passes through four consecutive time points:</p> <pre><code>x &lt;- c(1.0, 1.2, 15.3, 0.2)
</code></pre> <p>And I want to define its "trend", i.e., a single-number representation of how it acts during the entirety of the time course. </p> <p>In the example given, x has a generally increasing trend. </p> <p>I have tried using trendlines, but I get a lot of over-generalization of the trend, and my info is lost. Is there a more informative solution to defining a "trend" of values as they pass through a time series?</p>
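For what it's worth, two common single-number trend summaries disagree on exactly this example, which illustrates the over-generalization problem: an ordinary least-squares slope is pulled upward by the spike at the third time point, while the robust Theil-Sen slope (the median of all pairwise slopes) is essentially zero. A sketch in Python:

```python
import numpy as np
from scipy.stats import theilslopes

x = np.array([1.0, 1.2, 15.3, 0.2])
t = np.arange(len(x))            # time index 0, 1, 2, 3

# OLS slope: sensitive to the single spike at t = 2
ols_slope = np.polyfit(t, x, 1)[0]

# Theil-Sen slope: median of the 6 pairwise slopes, robust to the spike
theil_slope = theilslopes(x, t)[0]
```

Which summary is "right" depends on whether the spike is signal or noise, which is a modeling decision rather than a statistical one.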
36,329
<p>Suppose I have two events $B$ and $A^c$ and I wish to compute the probability of their intersection. I just want to ensure that the following proof holds (i.e., is correct -- I'm a little rusty). <strong>Updated!</strong> Assume the events are independent.</p> <p>\begin{gather*} P(A^c) = 1- P(A) \\ \end{gather*} so \begin{align*} P(B \cap A^c) &amp;= P(B) \times \Big(1-P(A)\Big) \\ &amp;= P(B) - P(B)P(A) \\ &amp;= P(B) - P(B \cap A) \end{align*}</p> <p>Another way to look at it, $A$ and $B$ are two events in some sample space $\mathcal{F}$, i.e., $A,B \in \mathcal{F}$. This means:</p> <p>\begin{align*} B = (A\cap B) \cup (B\cap A^c) \end{align*}</p> <p>so </p> <p>\begin{align*} P(B) &amp;= P(A\cap B) + P(B \cap A^c) \\ P(B \cap A^c) &amp;= P(B) - P(A \cap B) \end{align*}</p>
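Both derivations can be checked numerically. A small sketch: the partition identity from the second derivation holds for *any* events (it is exact, count for count), while the first derivation's product step needs the independence assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
u = rng.random(N)
A = u < 0.3                     # deliberately dependent events
B = (u > 0.1) & (u < 0.7)

# Partition: B = (A ∩ B) ∪ (B ∩ A^c) is disjoint, so the counts add exactly
count_B = int(np.sum(B))
count_split = int(np.sum(A & B)) + int(np.sum(B & ~A))

# Independence-based line: P(B ∩ A^c) = P(B)(1 - P(A)) = P(B) - P(A)P(B)
pA, pB = 0.3, 0.6
lhs = pB * (1 - pA)
rhs = pB - pA * pB
```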
36,332
<p>I am trying to carry out the discretized process suggested by the authors in this <a href="http://web.mit.edu/dbertsim/www/papers/Finance/Optimal%20control%20of%20execution%20costs.pdf" rel="nofollow">paper</a>, i.e. the numerical optimization, and would be grateful for some pseudo code, or better still R code, that might explain how this is carried out in practice.</p> <p>I am unsure how to practically implement this due to the path-dependent nature of the problem, while also incorporating the boundary condition that the original trade size is completed by the time $t=T$, i.e. $W_{T+1}=0$ and $W_1=\bar{S}$.</p> <p>If discretizing the state and control space, how is one able to evaluate the value function if the prices depend on the previous period's actions and states?</p> <p>Should the grid search for optimal values start from $t=T$ or $t=1$? If starting from $t=1$ to get an optimal value for $S^*_1$, which is then used for the $t=2$ optimisation, then how does one make sure that the sum of the optimal values equals the original trade size?</p> <p>If going in the reverse direction, and continually optimizing, again, how does one ensure that the sum of the optimal values equals the original trade size? And if starting from $t=T$, how does one deal with the fact that the path of trades up until $T-1$ is still unknown when carrying out this optimisation?</p> <p>Any help would be much appreciated.</p>
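As a structural illustration only (this strips out the paper's price dynamics and randomness, and uses a made-up quadratic trading cost), here is how the backward recursion is usually organized: index the value function by *remaining shares*, recurse from $t=T$ down to $t=1$, and then roll the stored policy forward. The constraint $\sum_t S_t = \bar S$ (equivalently $W_{T+1}=0$) is enforced automatically by the bookkeeping, not checked at the end:

```python
import numpy as np

T = 4          # trading periods
S_bar = 20     # total shares to execute (W_1 = S_bar)
theta = 0.1    # hypothetical quadratic trading-cost coefficient

# V[t, w] = minimal cost to liquidate w remaining shares in periods t..T
V = np.zeros((T + 1, S_bar + 1))
policy = np.zeros((T + 1, S_bar + 1), dtype=int)

# Boundary condition at t = T: whatever remains must be traded now
V[T] = theta * np.arange(S_bar + 1) ** 2
policy[T] = np.arange(S_bar + 1)

# Backward recursion from t = T-1 down to t = 1
for t in range(T - 1, 0, -1):
    for w in range(S_bar + 1):
        costs = [theta * s ** 2 + V[t + 1, w - s] for s in range(w + 1)]
        policy[t, w] = int(np.argmin(costs))
        V[t, w] = costs[policy[t, w]]

# Forward pass: because the state is "shares remaining", the schedule
# sums to S_bar by construction, so W_{T+1} = 0 is automatic
w, trades = S_bar, []
for t in range(1, T + 1):
    s = int(policy[t, w])
    trades.append(s)
    w -= s
```

In the paper's actual setting the value function also carries the price/information state, so $V$ acquires extra state dimensions, but the remaining-shares bookkeeping, and hence the boundary condition, works the same way.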
73,831
<p>I followed <a href="http://rtutorialseries.blogspot.hk/2010/01/r-tutorial-series-basic-hierarchical.html" rel="nofollow">this tutorial</a> to learn Hierarchical Linear Regression (HLR) in R, but couldn't understand how to interpret its sample output of <code>&gt;anova(model1,model2,model3)</code></p> <p><img src="http://i.stack.imgur.com/MxXIM.png" alt="enter image description here"></p> <p>The tutorial simply says </p> <blockquote> <p>each predictor added along the way is making an important contribution to the overall model.</p> </blockquote> <p>But I would like some more details to <strong>quantify</strong> the contribution of each explanatory variable, like:</p> <ol> <li><p>"UNEM" explains <code>X</code> (or <code>X%</code>) variance</p></li> <li><p>Adding the "HGRAD" variable explains <code>Y</code> (or <code>Y%</code>) more variance</p></li> <li><p>Adding the "INC" variable further explains <code>Z</code> (or <code>Z%</code>) more variance</p></li> </ol> <p>So, can I get the value of <code>X</code>, <code>Y</code>, and <code>Z</code> using the above ANOVA table? How? Specifically, what do <code>Res.Df</code>, <code>RSS</code>, <code>Sum of Sq</code> mean in this ANOVA table?</p>
36,333
<p>The following figures show examples of ROC curves:</p> <p><img src="http://i.stack.imgur.com/yPqte.png" alt="t1"></p> <p>First of all, ignoring the picture, from a logical point of view one can say: when the <strong>cutoff</strong> value <strong>decreases</strong>, more and more cases are allocated to class 1 and therefore the <strong>sensitivity</strong> will <strong>increase</strong> (true positives in relation to total actual positives). The specificity will decrease (this is true negatives in relation to total actual negatives, and it will decrease because fewer and fewer cases are allocated to class 0, so there will be fewer true negatives).</p> <p>I got that, and I think it is correct. But I got confused when I looked at the picture.</p> <p>Let's take this point here:</p> <p><img src="http://i.stack.imgur.com/j4cXT.png" alt="t2"></p> <p>Now when I decrease the cutoff probability from this point, I move "down", so I get this point as a result:</p> <p><img src="http://i.stack.imgur.com/0DHRE.png" alt="t4"></p> <p>So I decreased the cutoff value, but one can see clearly that the value of the sensitivity on the y-axis also decreased ($x_2 &lt; x_1$).</p> <p>Where is my logical error?</p>
73,832
<p>I have a series of daily log returns and I am looking to fit them to an AR() process. </p> <p>Any suggestions?</p>
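One minimal route (a sketch, not a recommendation on order selection): estimate the AR coefficient by regressing the demeaned series on its own lag. Below this is done for a simulated AR(1) so the answer is known; for real log returns you would first check stationarity and choose the order via the PACF or an information criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) with known phi to sanity-check the fit
n, phi = 5000, 0.6
x = np.empty(n)
x[0] = 0.0
eps = rng.normal(size=n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

# Lag-1 least-squares (Yule-Walker style) estimate of phi
xc = x - x.mean()
phi_hat = np.dot(xc[1:], xc[:-1]) / np.dot(xc[:-1], xc[:-1])
```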
37,761
<p>After R reads the data, say-</p> <pre><code>v1 &lt;- c(1,1,1,1,1,1,1,1,1,1,3,3,3,3,3,4,5,6)
v2 &lt;- c(1,2,1,1,1,1,2,1,2,1,3,4,3,3,3,4,6,5)
v3 &lt;- c(3,3,3,3,3,1,1,1,1,1,1,1,1,1,1,5,4,6)
v4 &lt;- c(3,3,4,3,3,1,1,2,1,1,1,1,2,1,1,5,6,4)
v5 &lt;- c(1,1,1,1,1,3,3,3,3,3,1,1,1,1,1,6,4,5)
v6 &lt;- c(1,1,1,2,1,3,3,3,4,3,1,1,1,2,1,6,5,4)
m1 &lt;- cbind(v1,v2,v3,v4,v5,v6)
</code></pre> <p>if I run-</p> <pre><code>factanal(~v1+v2+v3+v4+v5+v6, factors = 3, scores = "Bartlett")$scores
</code></pre> <p>I can get observation-wise factor scores. But when I do this-</p> <pre><code>r &lt;- cor(m1)
library(psych)
fa &lt;- fac(r, nfactors = 3, rotate = "varimax", scores = "Bartlett")
fa$score
</code></pre> <p>I can't get individual-wise factor scores. Is it possible to get individual-wise factor scores using psych? I actually need to use psych, because I need to use Spearman's rank correlation matrix as the starting point for the factor analysis. Kindly advise.</p> <p>Thank You</p>
73,833
<p>I have a training set with 4 different 3D position vectors as some of the features. I have defined a new 3D coordinate system based on the first 3 of these position vectors to achieve translational and rotational invariance. Then transformed the original points to this new system. Since the last point also originally resides in the same coordinate system, I have transformed that one also. To explain the situation better:</p> <p>Training data was before (Here assume first 3 vectors are used to describe the new coordinate system and $(x_s,y_s,z_s)$ is the last position vector which will also be converted):</p> <p>$x_1, y_1, z_1, x_2, y_2, z_2, x_3, y_3, z_3, x_s, y_s, z_s\ldots$ (and other features)</p> <p>Applying my coordinate system transformation it became</p> <p>$x_{t1}, x_{t2}, z_{t2}, x_{t3}, x_{ts}, y_{ts}, z_{ts}\ldots$ (and other features)</p> <p>Number of features decreased in the second case because in the new coordinate system some values are always zero ($y_{t1},y_{t2},z_{t1}$ etc) or equal to another value ($z_{t2} = -z_{t3}$)</p> <p>The new features are not in similar value ranges, generally $x_{ts}, y_{ts}, z_{ts}$ are much larger than the others $x_{t1}, z_{t2}$ etc. So I want to introduce normalization, but at the same time I don't want to lose semantic relation between the points described in the same new coordinate system.</p> <p>How should I do it? My initial thought is to look at min-max values of all $x_{t1}, x_{t2}, z_{t2}, x_{t3}, x_{ts}, y_{ts}, z_{ts}$ features and apply min-max normalization with same paramaters to each of them. So $z_{ts}$ for example will be much larger than $x_{t1}$ even after normalization. Does it make sense to do so?</p>
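Your initial thought can be written down directly: min-max scaling with one shared (min, max) per *group* of features, so coordinates that live in the same frame keep their relative geometry. A sketch (the grouping below is hypothetical; you could equally put all seven transformed coordinates in one group, as you suggested):

```python
import numpy as np

def groupwise_minmax(X, groups):
    """Min-max scale the columns of X, computing one shared (lo, hi)
    per group of columns so relative geometry inside a group survives."""
    X = np.asarray(X, dtype=float).copy()
    for cols in groups:
        lo, hi = X[:, cols].min(), X[:, cols].max()
        X[:, cols] = (X[:, cols] - lo) / (hi - lo)
    return X

# Hypothetical: columns 0-1 are the small frame-defining coordinates,
# column 2 stands in for the large (x_ts, y_ts, z_ts)-type coordinate
X = np.array([[0.0, 10.0, 100.0],
              [5.0, 20.0, 200.0]])
Xn = groupwise_minmax(X, groups=[[0, 1], [2]])
```

Whether to use one group or several depends on whether distances *across* groups are meaningful to your learner; within a group, ratios and orderings are preserved either way.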
43,892
<p>I am trying to come up with estimates on the outcomes from a layered-earth inversion calculation, whereby the calculation provides a posterior covariance matrix for the best-fitting model parameters given the data. In particular, I want to see how long a particular feature of the model persists from the surface (i.e., I am looking for values $m_{i} \ge x_{0},\ i=1,\dots,M$). I believe this is a conditional probability question:</p> <ol> <li>What is the probability layer 1 satisfies the condition? $P(m_1 \ge x_0|m_2,\dots,m_M, d_1,d_2,\dots,d_N)$</li> <li>What is the probability layer 2 also satisfies the condition? $P(m_2 \ge x_0|m_1 \ge x_0, m_2,\dots,m_M, d_1,d_2,\dots,d_N)$</li> </ol> <p>... and so on. In this question, my model parameters are correlated through the posterior covariance matrix, and the postulate of the inversion provides me with a Gaussian distribution of the parameters. I have been looking around for days for how to evaluate an integral of this sort, but the best I could find was for the bivariate case, where it was stated that the problem has no closed form and must be evaluated numerically. I typically program in Matlab, but am happy to use any method to solve this problem. Is there anyone who can assist with this, please?</p>
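On the numerics: the joint orthant probabilities $P(m_1 \ge x_0, \dots, m_k \ge x_0)$ under a Gaussian posterior can be evaluated directly with a multivariate normal CDF routine, and the successive conditional probabilities are then ratios of those joints. A sketch in Python (Matlab's mvncdf plays the same role); the posterior mean and covariance below are made-up placeholders:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical posterior mean and covariance for the first 3 layers
mu = np.array([1.2, 1.0, 0.8])
C = np.array([[0.10, 0.04, 0.01],
              [0.04, 0.12, 0.05],
              [0.01, 0.05, 0.15]])
x0 = 0.9

def p_all_at_least(k):
    """P(m_1 >= x0, ..., m_k >= x0) = P(-m <= -x0 componentwise),
    evaluated with the multivariate normal CDF."""
    mv = multivariate_normal(mean=-mu[:k], cov=C[:k, :k])
    return float(mv.cdf(np.full(k, -x0)))

probs = [p_all_at_least(k) for k in (1, 2, 3)]

# Conditional probability that layer 3 also satisfies the condition,
# given layers 1 and 2 do: a ratio of the joint probabilities
cond_3 = probs[2] / probs[1]
```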
73,834
<p>Say I have a set of classifier models, each generated using feature selection inside a repeated k-fold cross-validation. Each classifier model is generated using a different set of regularization parameters or hyperparameters. </p> <p>I understand that choosing the 'best' model of this set, i.e. the one that yields the best k-fold cross-validation classification estimate, could produce an optimistically biased estimate of generalized performance. However, is bias avoided if the final performance estimate is based on a separate repeated k-fold cross-validation using the features and hyperparameters selected above? </p> <p>I have found this procedure (10 folds, 10 repetitions) works well in practice (the model appears stable on genuinely unseen data) on a data set with Cases > Features; however, I wonder if any remaining bias could be considered unacceptable? I suspect this procedure is less acceptable in the case where Features >> Cases.</p> <p>My question is related to <a href="http://stats.stackexchange.com/q/11602/11030">Training with the full dataset after cross-validation?</a></p> <p>Apologies if this question appears ignorant or repeats material discussed elsewhere.</p>
36,336
<p>The CLT states, in short, that the sum/mean of iid random variables from almost any distribution approaches a normal distribution. </p> <p>I failed to find information about the asymptotic behavior of the sample variance when the sample is drawn from an unknown distribution. Do we have any reason to believe that the sample variance of iid random variables asymptotically approaches any particular distribution (like chi-squared in the normal case)?</p> <p>What about the covariance of a multivariate iid distribution? Do we have any reason to believe that the covariance matrix calculated on a sample drawn from it asymptotically approaches a Wishart distribution? (Or any other?)</p>
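There is such a result: under finite fourth moments, $\sqrt{n}\,(s_n^2 - \sigma^2) \to N(0,\ \mu_4 - \sigma^4)$, where $\mu_4$ is the fourth central moment. A quick simulation (not a proof) with a skewed parent illustrates it; for Exponential(1), $\sigma^2 = 1$ and $\mu_4 = 9$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 400, 5000

# Sampling distribution of the sample variance, Exponential(1) parent
s2 = rng.exponential(1.0, size=(reps, n)).var(axis=1, ddof=1)

# CLT-type prediction: sd of s2 is roughly sqrt((mu4 - sigma^4) / n)
predicted_sd = np.sqrt((9.0 - 1.0) / n)
observed_sd = s2.std()
```

The multivariate analogue is similar: the sample covariance matrix is asymptotically normal around $\Sigma$; it is exactly Wishart only when the parent is normal, so in general there is no reason to expect a Wishart limit beyond that special case.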
36,337
<p>The <a href="http://en.wikipedia.org/wiki/Finite_difference#Forward.2C_backward.2C_and_central_differences" rel="nofollow">central difference</a> is a method to numerically approximate the derivative of a function sampled at discrete intervals. In R, one would do:</p> <pre><code>n &lt;- 100
y &lt;- cumsum(rnorm(n))
y_p &lt;- numeric(n)
y_p[1] &lt;- diff(y[1:2])
for(i in 2:(n-1)) y_p[i] &lt;- diff(y[c(i-1,i+1)])/2
y_p[n] &lt;- diff(y[c(n-1,n)])
</code></pre> <p>My question is, what is the commonly accepted inverse of this operator? (Preferably in pseudo-code form, to avoid ambiguity.)</p> <p>Best,</p>
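For the plain forward difference the accepted inverse is an anchored cumulative sum, the discrete analogue of the fundamental theorem of calculus. The central difference averages two forward differences and couples the odd- and even-indexed samples, so it is only invertible given extra anchoring values. A sketch of the forward-difference case (in Python rather than R):

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=100))

# Forward difference, and its exact inverse: a cumulative sum
# anchored at the known initial value y[0]
d = np.diff(y)
y_rec = np.concatenate(([y[0]], y[0] + np.cumsum(d)))
```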
73,835
<p>Suppose we have a stream of sentences and we need to compare each new sentence with previously received ones, for example with sentences received in the last 30 minutes. What is the best method to do that? Can we use the Mahalanobis distance for this? How?</p>
<h2>Scenario:</h2> <p>Consider a statement, e.g. "This movie is an action movie", and let people vote on it from 1 to 5, where 1 is "Not action", 3 is "Some action" and 5 is "Pure action". </p> <h2>Question</h2> <p>How does one determine how accurate the statement is from the voting result?</p> <p>1000 votes where 900 are 5's is pretty accurate, but 1000 votes with 500 1's and 500 5's is not very accurate. Also, 3 votes that are all 5's is not so accurate.</p> <p>Basically, I want to determine from user votes the certainty with which an object belongs to a specific category. </p> <p>I'm having some problems explaining my idea, but ask any questions and I'll try to clarify.</p>
<p>I have some satellite tag time-at-depth (TAD) frequency data that I would like some help with.</p> <p>The data were transmitted via satellite as the percent time spent in each of 7 depth bins (0m, 0-1m, 1-10m, 10-50m, etc.), binned over 6-hour intervals. I categorized each row of data corresponding to a date and time into summer vs. winter, and day vs. night, and then summed and averaged the given % for each depth bin. My data look like this (for one individual, HG03):</p> <pre><code>HG03.dat
   Season  Time Depth    Sum       Avrg
1    summ   day     0   17.2  0.1702970
2    summ   day     1   23.9  0.2366337
3    summ   day    10  868.5  8.5990099
4    summ   day    50 2698.2 26.7148515
5    summ   day   100  419.7  4.1554455
6    summ   day   200  266.1  2.6346535
7    summ   day   300 1668.6 16.5207921
8    summ   day   500 4138.2 40.9722772
9    summ night     0  283.6  5.7877551
10   summ night     1  229.1  4.6755102
11   summ night    10  479.3  9.7816327
12   summ night    50  761.9 15.5489796
13   summ night   100  235.8  4.8122449
14   summ night   200   40.9  0.8346939
15   summ night   300  763.1 15.5734694
16   summ night   500 2106.1 42.9816327
17   wint   day     0    0.0  0.0000000
18   wint   day     1    0.0  0.0000000
19   wint   day    10    0.0  0.0000000
20   wint   day    50    0.0  0.0000000
21   wint   day   100    7.9  1.1285714
22   wint   day   200   92.1 13.1571429
23   wint   day   300    0.0  0.0000000
24   wint   day   500  600.0 85.7142857
25   wint night     0   43.9  1.7560000
26   wint night     1    0.3  0.0120000
27   wint night    10    0.3  0.0120000
28   wint night    50    0.8  0.0320000
29   wint night   100   10.5  0.4200000
30   wint night   200   51.6  2.0640000
31   wint night   300  411.4 16.4560000
32   wint night   500 1981.2 79.2480000
</code></pre> <p>I wanted to test whether significant differences existed between depths in summer vs. winter, and day vs. night, controlling first for season and then for time of day. 
I carried out a Cochran-Mantel-Haenszel test, using the average frequency (Avrg) as the dependent variable (a 2x2x8 contingency table).</p> <pre><code>&gt; ct &lt;- xtabs(Avrg ~ Time + Depth + Season, data = HG03.dat)
&gt; mantelhaen.test(ct)

        Cochran-Mantel-Haenszel test

data:  ct
Cochran-Mantel-Haenszel M^2 = 28.4548, df = 7, p-value = 0.0001818

&gt; ct &lt;- xtabs(Avrg ~ Season + Depth + Time, data = HG03.dat)
&gt; mantelhaen.test(ct)

        Cochran-Mantel-Haenszel test

data:  ct
Cochran-Mantel-Haenszel M^2 = 111.5986, df = 7, p-value &lt; 2.2e-16
</code></pre> <p>However, I'm not sure whether these results are valid, since my raw data are already frequencies, not counts. When I used Sum as the dependent variable, I obtained different results.</p> <p>I am at a loss on how to proceed. If anyone has any ideas, they would be greatly appreciated. </p>
<p>Since the mean of the residuals should be close to zero, and my calculations yield the following result:</p> <pre><code>&gt; mean(resid(trees.lm))
[1] -3.065293e-17
</code></pre> <p>is it correct to state that the mean is close to zero?</p> <p>My second question is as follows: while I am working with the gamble data set in my class, I need to compute the correlation of the residuals with income. How would I set up the calculation? I'm thinking of <code>cor(residuals(data.lm))</code>.</p>
<p>I am reading an article that explores the correlation between two variables, X and Y. Usually, if the scatter plot shows something like this, we can claim that there is a strong correlation between X and Y.</p> <pre><code>Y
|
|                 o
|            o  o
|           o
|       o  o
|      o
|    o
|  o
+---------------------&gt;X
</code></pre> <p>What about the following case?</p> <pre><code>Y
|    o         o
|    o         o
|    o        oo
|    o         o
|   oo         o
|    o         o
|    o         o
| o oo o  oo oo o o o oo
+---------------------&gt;X
     x1        x2
</code></pre> <p>Basically, the scatter plot shows strong clustering and spikes around a few data points along the X axis, e.g. x1 and x2.</p> <p><strong>What kind of statistical property does this imply?</strong></p>
<p>I'm having a problem using the group function in TraMineR. I have a data set that contains SPELL data, so multiple rows per case. I also have demographic data per case, at one row per case. I merge these together and end up with data that has a demographic covariate per row, so multiple rows per case. An example of this data would be:</p> <pre><code>id  startmin  stopmin  activity  educ4
 4         1       20      work       HS
 4        20       40      play       HS
 8         1       15     sleep  College
 8        15       40      work  College
</code></pre> <p>I can make sequence data from this, but when I try to run a plot using the group command</p> <pre><code>seqiplot(atus.seq, group = atus.seqdata$educ4, border=NA, withlegend="right", cex.plot=.55)
</code></pre> <p>it tells me: </p> <pre><code>&gt; "Error: group must contain one value for each row in the sequence object".
</code></pre> <p>I have gotten this to work with the example of the mvad data in the training manual, but I can't seem to get it to work with the groups, whether I link to the original demographic data, the merged data, or try to pass the covariates by seqformat and seqdef. Ideas?</p>
<p>I am trying to estimate a binomial proportion $p$ from a sample of binomials. There are $k$ subjects. Associated with each subject is a sample size $n_i$ and a count $x_i$ of items, where $x_i$ is distributed as $\mathrm{Binomial}(n_i, p)$. I can assume that the sample sizes are not a function of $p$. </p> <p>In sampling terms, the sampling unit is the subject, not the number of items $x_i$.</p> <p>I want to estimate $p$ and provide a confidence interval for it.</p> <p>Should I take $$\hat{p}=\frac{\sum_i x_i}{\sum_i n_i}$$ or should I take the average of the cluster means? $$ \frac{1}{k} \sum_i \frac{x_i}{n_i}$$</p> <p>And how should I estimate the standard deviation?</p>
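<p>For concreteness, a small sketch of the two candidate estimators (the counts below are made-up illustration data, not from any real sample):</p>

```python
import numpy as np

# hypothetical per-subject data: n_i items and x_i successes for k = 5 subjects
n = np.array([40, 25, 60, 10, 35])
x = np.array([12,  9, 20,  2, 14])

p_pooled = x.sum() / n.sum()              # item-weighted (pooled) estimate
p_means = np.mean(x / n)                  # unweighted average of the cluster means

# if the subject is the sampling unit, the between-subject spread of the
# cluster means gives a simple standard error for the mean-of-means
se_means = np.std(x / n, ddof=1) / np.sqrt(len(n))
print(p_pooled, p_means, se_means)
```

<p>The two point estimates differ whenever $x_i/n_i$ varies with $n_i$, which is exactly the weighting question posed above.</p>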
<p>I am dealing with a text classification problem where I need to assign tags to a document. The number of tags I need to assign varies from 1 to 5. I am struggling somewhat with how I should tackle this problem. What I tried was to encode every combination of tags with LabelEncoder() from scikit-learn; I then framed it as a regression problem, because this label encoding gave me too many classes. However, since I cannot fit the entire train set in memory, I can only train on a small part of it. The test set is much bigger than the part of the train set I train my regressor on. As a result, my estimator performs really poorly on the test set. In cross-validation the regressor actually gave reasonable results, which is a sign for me that framing this as a regression problem isn't the main issue. I am not sure how I should proceed. Should I frame this as a classification problem? Should I use a different encoding of my tags? Or should I simply find a way to train my classifier on more samples?</p>
<p>I have some measurement results (about 10,000 or even more). How can I check (which methods should I use, and how do I apply them) whether my data form a time series or just a statistical sample?</p>
<p>I am trying to do a small case study in 24 hours + change.</p> <p>For a dataset, I'm using <a href="http://ghtorrent.org" rel="nofollow">GHTorrent.org</a>.</p> <p>A general assumption about virtual work is that richer media lead to greater productivity. I have decided to focus on <a href="http://github.com" rel="nofollow">GitHub</a> and to examine the effects of @mentions on issue resolution.</p> <p>My hypothesis is that mentions are correlated with a shorter time to issue resolution.</p> <p>To see if this is true, I figure I can look at when an issue was opened, when it was closed, and how many mentions there were divided by how many comments there were.</p> <p>Does this sound reasonable? I am a final-year master's student and this is for a small assignment to get us familiar with writing scientific papers. Any advice is much appreciated.</p>
<p>I have an observation sequence of around 1000 samples, where each observation is a 10-dimensional vector. I am trying to learn an HMM from this. Specifically, I am using the GaussianHMM based on this example: <a href="http://scikit-learn.org/stable/auto_examples/applications/plot_hmm_stock_analysis.html#example-applications-plot-hmm-stock-analysis-py" rel="nofollow">http://scikit-learn.org/stable/auto_examples/applications/plot_hmm_stock_analysis.html#example-applications-plot-hmm-stock-analysis-py</a></p> <p>After a few iterations it looks like one of the states' covariance matrices becomes non-positive-definite. (The initial values for the covariances are symmetric and positive definite.)</p> <p>I even set <code>covars_prior=0.01</code>, which I thought should prevent this from happening. What could be missing?</p> <p>Thanks in advance.</p>
<p>What exactly is the difference between decode and score? The documentation seems pretty sparse on this.</p> <p>My guess is that decode gives the probability of the best sequence of states for an observation sequence, while score gives the sum of the probabilities of all state sequences for an observation sequence.</p> <p>Is this correct? That is, decode is the Viterbi probability while score is the forward probability.</p> <p>Thanks in advance.</p>
<p>I used radial basis functions and the pseudo-inverse to train the neurons and then tested the result. Everything works here. But I read that I can use gradient descent instead of the pseudo-inverse, which may sometimes be hard to compute (it may run out of memory, or take too long). But I don't understand a few things here.</p> <p>Say I have this data:</p> <p>$x = [-1, -0.75, -0.5, -0.25, 0, 0.25, 0.5, 0.75, 1]$</p> <p>$y = [-1, -0.75, -0.5, -0.25, 0, 0.25, 0.5, 0.75, 1]$</p> <p>$y$ is calculated using this function:</p> <p>$y_i = f(x_i) = 1/(1+x_i^2)$</p> <p>$N$ is the number of training units (elements) and $m_1$ is the number of centers.</p> <p>Then I calculate the $\Phi$ functions:</p> <p>$t_j = -1 + (j - 1)g$</p> <p>$\gamma = -m_1/4$</p> <p>$\Phi_j(x) = \exp(\gamma\,|x - t_j|)$</p> <p>Then I fill $G$ with the $\Phi$ rows (vectors), and use $G$ to calculate the weight vector</p> <p>$w = G^+ y,$</p> <p>where $G^+ = (G'G)^{-1}G'$ is the so-called pseudo-inverse.</p> <p>But <a href="http://www.ideal.ece.utexas.edu/~gjun/ee379k/html/regression/online_regression/page1.html" rel="nofollow">http://www.ideal.ece.utexas.edu/~gjun/ee379k/html/regression/online_regression/page1.html</a> suggests using gradient descent with something like:</p> <p>$E_w[t] = \frac{1}{2}\left(y - w[t]^T \Phi(x)\right)^2$</p> <p>$\nabla E_w[t] = -\left(y - w[t]^T \Phi(x)\right)\Phi(x)$</p> <p>$w[t+1] = w[t] - \eta\,\nabla E_w[t]$</p> <p>Can someone explain how I can properly replace the pseudo-inverse with gradient descent? Also, what should the initial weights be? With the pseudo-inverse you get all the weights at once, but with gradient descent I don't see what the first weights should be. Should I define some minimum error, so that learning stops when the error reaches that level? If you could provide a MATLAB example from this data, it would be even better.</p>
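<p>To make the comparison concrete, here is a minimal Python sketch (not MATLAB; the kernel width, learning rate, and iteration count are my illustrative assumptions, not values from the setup above) contrasting the pseudo-inverse solution with batch gradient descent started from zero weights:</p>

```python
import numpy as np

# toy 1-D data from the target y = 1/(1 + x^2)
x = np.linspace(-1, 1, 9)
y = 1.0 / (1.0 + x**2)

m1 = 5                                   # number of RBF centers (assumed)
t = np.linspace(-1, 1, m1)               # centers t_j
gamma = 4.0                              # kernel decay rate (illustrative)
G = np.exp(-gamma * np.abs(x[:, None] - t[None, :]))  # G[i, j] = exp(-gamma|x_i - t_j|)

# closed form via the pseudo-inverse: w = G^+ y
w_pinv = np.linalg.pinv(G) @ y

# batch gradient descent on the mean squared error, zero initial weights
w = np.zeros(m1)
eta = 0.05                               # learning rate (assumed)
for _ in range(20000):
    err = y - G @ w
    w += eta * G.T @ err / len(x)        # step along the negative gradient
    if np.mean(err**2) < 1e-10:          # optional early-stopping rule
        break

print(np.max(np.abs(G @ w - G @ w_pinv)))  # the two fits agree closely
```

<p>Zero (or small random) initial weights are a common choice; with a small enough learning rate, gradient descent converges to the same least-squares solution the pseudo-inverse gives in one shot.</p>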
<p>I have trouble interpreting interaction plots when there is an interaction between the two independent variables.</p> <p>The following graphs are from <a href="http://courses.washington.edu/smartpsy/interactions.htm">this</a> site:</p> <p>Here, $A$ and $B$ are the independent variables and $DV$ is the dependent variable.</p> <p>Question: There is an interaction and a main effect of $A$, but no main effect of $B$.</p> <p><img src="http://i.stack.imgur.com/eC58m.png" alt="enter image description here"></p> <p>I can see that the higher the value of $A$, the higher the value of $DV$, provided $B$ is at $B_1$; otherwise, $DV$ is constant regardless of the value of $A$. Therefore, there is an interaction between $A$ and $B$ and a main effect of $A$ (since higher $A$ leads to higher $DV$, holding $B$ constant at $B_1$).</p> <p>Also, I can see that different levels of $B$ will lead to different levels of $DV$, holding $A$ constant. Therefore, there is a main effect of $B$. But this apparently is not the case. So this must mean I am wrongly interpreting the interaction plot. What am I doing wrong?</p> <p>I am also wrongly interpreting plots 6-8. The logic I used to interpret them is the same as the one above, so if I know the error I am making above, I should be able to correctly interpret the rest. Otherwise, I will update this question. </p>
<p>I don't know how to grasp what this question is asking, nor how to attempt to solve it...</p> <p>The Polk Company reported that the average age of a car on US roads in a recent year was 7.5 years. Suppose the distribution of ages of cars on US roads is approximately bell-shaped. If 95% of the ages are between 1 year and 14 years, what is the standard deviation of car age?</p> <p>I could calculate the variance, but I don't know $N$. I'm not sure what the 95% part is there for, either, or what to do with it.</p> <p>This isn't technically a homework question, but it's on a practice exam. I'd really like some help on how to go about solving this; the wording is messing me up.</p>
<p>If $X_i$ is exponentially distributed $(i=1,...,n)$ with parameter $\lambda$ and the $X_i$'s are mutually independent, what is the expectation of</p> <p>$$ \left(\sum_{i=1}^n {X_i} \right)^2$$</p> <p>in terms of $n$ and $\lambda$ and possibly other constants?</p> <p><strong>Note:</strong> This question has gotten a mathematical answer on <a href="http://math.stackexchange.com/q/12068/4051">http://math.stackexchange.com/q/12068/4051</a>. Readers may want to take a look at it too.</p>
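<p>For what it's worth, since $S = \sum_i X_i \sim \mathrm{Gamma}(n, \lambda)$, the decomposition $E[S^2] = \operatorname{Var}(S) + (E[S])^2 = n/\lambda^2 + n^2/\lambda^2 = n(n+1)/\lambda^2$ can be sanity-checked by simulation (the particular $n$, $\lambda$, and replicate count below are arbitrary):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, reps = 5, 2.0, 200000             # illustrative n and lambda
s = rng.exponential(scale=1/lam, size=(reps, n)).sum(axis=1)  # S = sum of X_i
mc = np.mean(s**2)                        # Monte Carlo estimate of E[S^2]
exact = n * (n + 1) / lam**2              # Var(S) + (E S)^2
print(mc, exact)                          # the two closely agree
```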
<p>I am trying to compute the standard error of the sample <a href="http://en.wikipedia.org/wiki/Spectral_risk_measure" rel="nofollow">spectral risk measure</a>, which is used as a metric for portfolio risk. Briefly, the sample spectral risk measure is defined as $q = \sum_i w_i x_{(i)}$, where the $x_{(i)}$ are the sample order statistics, and $w_i$ is a sequence of monotonically non-increasing non-negative weights that sum to $1$. I would like to compute the standard error of $q$ (preferably not via bootstrap). I don't know much about L-estimators, but it looks to me like $q$ is a kind of L-estimator (with extra restrictions imposed on the weights $w_i$), so this should probably be an easily solved problem. </p> <p><strong>edit</strong>: per @srikant's question, I should note that the weights $w_i$ are chosen <em>a priori</em> by the user and should be considered independent of the samples $x$.</p>
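<p>To fix notation, a minimal sketch of computing $q$ itself (the returns and the particular non-increasing weight sequence below are arbitrary illustrations):</p>

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100)                  # hypothetical portfolio outcomes
k = len(x)
w = np.arange(k, 0, -1, dtype=float)      # monotonically non-increasing weights
w /= w.sum()                              # normalize so the weights sum to 1
q = np.sum(w * np.sort(x))                # weight the order statistics x_(i)
print(q)                                  # pulled below the plain sample mean
```

<p>Because the heaviest weights sit on the smallest order statistics, $q$ lands below the unweighted sample mean; the open question above is the sampling variability of this statistic.</p>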
<p>The <a href="http://en.wikipedia.org/wiki/Kernel_trick">kernel trick</a> is used in several machine learning models (e.g. <a href="http://en.wikipedia.org/wiki/Support_vector_machine">SVM</a>). It was first introduced in the "Theoretical foundations of the potential function method in pattern recognition learning" paper in 1964. </p> <p>The wikipedia definition says that it is </p> <blockquote> <p>a method for using a linear classifier algorithm to solve a non-linear problem by mapping the original non-linear observations into a higher-dimensional space, where the linear classifier is subsequently used; this makes a linear classification in the new space equivalent to non-linear classification in the original space.</p> </blockquote> <p>One example of a linear model that has been extended to non-linear problems is the <a href="http://en.wikipedia.org/wiki/Kernel_principal_component_analysis">kernel PCA</a>. Can the kernel trick be applied to any linear model, or does it have certain restrictions?</p>
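<p>As a concrete illustration of the pattern, here is kernel ridge regression, i.e. the kernel trick applied to plain (linear) ridge regression via its dual form; the data, kernel width, and ridge penalty below are arbitrary choices for the sketch:</p>

```python
import numpy as np

# dual ridge: alpha = (K + lam*I)^{-1} y, prediction f(x) = sum_i alpha_i k(x_i, x)
rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)   # nonlinear target + noise

def rbf(A, B, gamma=0.5):
    # Gaussian (RBF) kernel matrix between rows of A and rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 0.1
alpha = np.linalg.solve(rbf(X, X) + lam * np.eye(len(X)), y)

X_test = np.linspace(-3, 3, 50)[:, None]
y_hat = rbf(X_test, X) @ alpha            # tracks the nonlinear target
```

<p>The model stays linear in the (implicit) feature space yet fits a nonlinear function of the input; the same dual construction is what kernel PCA applies to ordinary PCA.</p>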