source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
22,161 | Does anyone know how to work out whether points 7, 16 and 29 are influential points or not?
I read somewhere that because Cook's distance is lower than 1, they are not. Am I right? | Some texts tell you that points for which Cook's distance is higher than 1 are to be considered as influential. Other texts give you a threshold of $4/N$ or $4/(N - k - 1)$, where $N$ is the number of observations and $k$ the number of explanatory variables. In your case the latter formula should yield a threshold around 0.1. John Fox (1), in his booklet on regression diagnostics, is rather cautious when it comes to giving numerical thresholds. He advises the use of graphics and examining in closer detail the points with "values of D that are substantially larger than the rest". According to Fox, thresholds should just be used to enhance graphical displays. In your case observations 7 and 16 could be considered as influential. Well, I would at least have a closer look at them. Observation 29 is not substantially different from a couple of other observations. (1) Fox, John. (1991). Regression Diagnostics: An Introduction. Sage Publications. | {
"source": [
"https://stats.stackexchange.com/questions/22161",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5397/"
]
} |
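A minimal R sketch of the threshold comparison described in the answer above; the model and data used here (mtcars) are hypothetical stand-ins, not taken from the original question:
fit <- lm(mpg ~ wt + hp, data = mtcars)   # hypothetical regression; replace with your own model
d <- cooks.distance(fit)                  # Cook's distance for every observation
N <- nobs(fit)                            # number of observations
k <- length(coef(fit)) - 1                # number of explanatory variables
cutoff <- 4 / (N - k - 1)                 # the rule-of-thumb threshold from the answer
plot(d, type = "h", ylab = "Cook's distance")
abline(h = cutoff, lty = 2)               # threshold used only to enhance the display
which(d > cutoff)                         # candidate points to examine more closely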
22,209 | I am trying to grasp the dynamic time warping measure for comparing time series together. I have three time series datasets like this: T1 <- structure(c(0.000213652387565, 0.000535045478866, 0, 0, 0.000219346347883,
0.000359669104424, 0.000269469145783, 0.00016051364366, 0.000181950509461,
0.000385579332948, 0.00078170803205, 0.000747244535774, 0, 0.000622858922454,
0.000689084895259, 0.000487983408564, 0.000224744353298, 0.000416449765747,
0.000308388157895, 0.000198906016907, 0.000179549331179, 9.06289650172e-05,
0.000253506844685, 0.000582896161212, 0.000386473429952, 0.000179839942451,
0, 0.000275608635737, 0.000622665006227, 0.00036075036075, 0.00029057097196,
0.000353232073472, 0.000394710874285, 0.000207555002076, 0.000402738622634,
0, 0.000309693403531, 0.000506521463847, 0.000226988991034, 0.000414164423276,
9.6590360282e-05, 0.000476689865573, 0.000377572210685, 0.000378967314069,
9.25240562546e-05, 0.000172309813044, 0.000447627573859, 0, 0.000589333071408,
0.000191699415317, 0.000362943471554, 0.000287549122975, 0.000311688311688,
0.000724112961622, 0.000434656621269, 0.00122292103424, 0.00177549812586,
0.00308008213552, 0.00164338537387, 0.00176056338028, 0.00180072028812,
0.00258939580764, 0.00217548948513, 0.00493015612161, 0.00336344416683,
0.00422716412424, 0.00313360554553, 0.00540144648906, 0.00425728829246,
0.0046828437633, 0.00397219463754, 0.00501656412683, 0.00492700729927,
0.00224424911165, 0.000634696755994, 0.00120550276557, 0.00125313283208,
0.00164551010813, 0.00143575017947, 0.00237006940918, 0.00236686390533,
0.00420336269015, 0.00329840900272, 0.00242005185825, 0.00326554846371,
0.006217237596, 0.0037103784586, 0.0038714672861, 0.00455830066551,
0.00361747518783, 0.00304147465438, 0.00476801760499, 0.00569875504121,
0.00583855136233, 0.0050566695728, 0.0042220072126, 0.00408237321963,
0.00255222610833, 0.00123507616303, 0.00178136133508, 0.00147434637311,
0.00126742712294, 0.00186590371937, 0.00177226406735, 0.00249154653853,
0.00549127279859, 0.00349072202829, 0.00348027842227, 0.00229555236729,
0.00336862367661, 0.00383477593952, 0.00273999412858, 0.00349618180145,
0.00376108175875, 0.00383351588171, 0.00368928059028, 0.00480028982882,
0.00388823582602, 0.00745054380406, 0.0103754506287, 0.00822677278011,
0.00778350981989, 0.0041831792162, 0.00537228238059, 0.00723645609231,
0.0144428396845, 0.00893333333333, 0.0106231171714, 0.0158367059652,
0.01811729548, 0.0207095263821, 0.0211700064641, 0.017604180993,
0.0165804327375, 0.0188679245283, 0.0191859923629, 0.0269251008595,
0.0351239669421, 0.0283510318573, 0.0346557651212, 0.0270022042616,
0.0260845175767, 0.0349758630112, 0.0207069247809, 0.0106362024818,
0.00981093510475, 0.00916507201128, 0.00887198986058, 0.0073929115025,
0.00659077291791, 0.00716191546131, 0.00942304513143, 0.0106886280007,
0.0123527175979, 0.0171022290546, 0.0142909490656, 0.0157642220699,
0.0265140538974, 0.0194395354708, 0.0241685144124, 0.0229897123662,
0.017921889568, 0.0155115839714, 0.0145263157895, 0.017609281127,
0.0157671315949, 0.0190258751903, 0.0138453217956, 0.00958058335108,
0.0122924304507, 0.00929741151611, 0.00885235535884, 0.00509319462505,
0.0061314863177, 0.0063104189044, 0.00729117134253, 0.010843373494,
0.0217755443886, 0.0181687353841, 0.0155402963498, 0.017310022503,
0.0214746959003, 0.026357827476, 0.0194751217195, 0.0196820590462,
0.0184317400812, 0.0130208333333, 0.0128666035951, 0.0120045731707,
0.0122374253228, 0.00874940561103, 0.0114368092263, 0.00922893718369,
0.00479041916168, 0.00644107774653, 0.00775830595108, 0.00829578041786,
0.00681348095875, 0.00573782551125, 0.00772002058672, 0.0112488083889,
0.00908907291456, 0.0157722638969, 0.00994270306707, 0.0134179772039,
0.0126050420168, 0.0113648781554, 0.0153894803415, 0.0126959699913,
0.0116655865198, 0.0112065745237, 0.0122006737686, 0.010251878038,
0.010891174691, 0.0148273273273, 0.0138516532618, 0.0136552722011,
0.00986993819758, 0.0097852677358, 0.00889011089726, 0.00816723383568,
0.00917641660931, 0.00884466556108, 0.0182179529646, 0.0183156760639,
0.0217806648835, 0.0171099125907, 0.0186579938377, 0.019360390076,
0.0144603654529, 0.0177730696798, 0.0153226598566, 0.0134016909516,
0.0126480805202, 0.0115501519757, 0.0127156322248, 0.0124326204138,
0.0240245215806, 0.0130234933606, 0.0144222706691, 0.00854005693371,
0.0053560967445, 0.00504132231405, 0.00288778877888, 0.00593526847816,
0.00455653279644, 0.00433014040152, 0.00535770564135, 0.0131095962244,
0.0126319758673, 0.0154982879798, 0.0125940464508, 0.0169948745616,
0.0257535512184, 0.0256175663312, 0.0265191262043, 0.0228974403622,
0.0193122555411, 0.0165794768612, 0.015658837248, 0.0168208578638,
0.0129912843282, 0.0119498443154, 0.0112663755459, 0.00838112042347,
0.00925767186696, 0.0113408269771, 0.0210861519924, 0.0156036134684,
0.0121687119728, 0.011006497812, 0.0107891491985, 0.0134615384615,
0.0147229755909, 0.015756893641, 0.0176257128046, 0.016776075857,
0.0169553999263, 0.0179193118984, 0.0190055672874, 0.0183088625509,
0.0155489923558, 0.0152507401094, 0.0160748342567, 0.0161532350605,
0.0139190952588, 0.0161469457497, 0.0118186629035, 0.0109259765092,
0.00950587391265, 0.00928986154533, 0.00815520645549, 0.00702576112412,
0.00709539362541, 0.00827287768869, 0.0104688211197, 0.0130375888927,
0.0160891089109, 0.0188415910677, 0.0203265044814, 0.0183175033921,
0.0139940353292, 0.0124648170487, 0.0131685758095, 0.00957428620277,
0.0119647893342, 0.00835800104475, 0.0101892285298, 0.00904207699194,
0.00772134522992, 0.00740740740741, 0.00776823249863, 0.00642254601227,
0.00484237572883, 0.00361539964823, 0.00414811817078, 0.00358072916667,
0.00433306007729, 0.00485008818342, 0.00905280804694, 0.00931847250137,
0.00779271381259, 0.00779912497622, 0.00908230842006, 0.0058152538582,
0.0102777777778, 0.00807537012113, 0.00648535564854, 0.0145492582731,
0.00694127317563, 0.00759878419453, 0.00789242911429, 0.00635050701629,
0.00785233530492, 0.00607964332759, 0.00531968282646, 0.00361944157187,
0.00305157155935, 0.00276327909119, 0.00318820364651, 0.00184464029514,
0.00412550211703, 0.00516567972786, 0.00463655399342, 0.00702897308418,
0.0100714154917, 0.00791168353266, 0.00959190791768, 0.00736,
0.00738007380074, 0.012573964497, 0.0117919562013, 0.00842919476398,
0.00778887565289, 0.00623967700496, 0.0062232955601, 0.00447815755803,
0.00511135450894, 0.00502557659517, 0.00330328263712), .Tsp = c(1,
15.9583333333333, 24), class = "ts")
T2 <- structure(c(0, 0, 0, 0, 0.000109673173942, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9.66183574879e-05, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9.43930526713e-05,
0, 0, 0, 8.95255147717e-05, 0, 0, 0, 0, 0.000191699415317, 0.000207792207792,
0, 0, 0, 0.00019727756954, 0.000205338809035, 0.000205423171734,
0.000704225352113, 0.000450180072029, 0.000493218249075, 0.000120860526952,
0.000410846343468, 0.000384393619066, 0.000643264105863, 0.000189915487608,
0.000915499404925, 0.000185099490976, 0.000936568752661, 0.000451385754266,
0.000757217226692, 0.000273722627737, 0.000187020759304, 0.000211565585331,
0.000141823854772, 9.63948332369e-05, 0.000117536436295, 0.000287150035894,
0, 0, 0.000400320256205, 0.000388048117967, 0.000345721694036,
0.000296868042155, 0.000609533097647, 0.000424043252412, 0.000290360046458,
0.000546996079861, 0.000556534644282, 0.00036866359447, 0.000275077938749,
0.000964404699281, 0.00152310035539, 0.00113339145597, 0.00061570938517,
0.000362877619523, 0.000472634464505, 0.000102923013586, 0.000187511719482,
0.000294869274622, 0.00011522064754, 0.000248787162582, 0, 0.00035593521979,
0.000392233771328, 0.000551166636046, 0.000165727543918, 0.000143472022956,
0.00012030798845, 0.000438260107374, 0.000195713866327, 0.000184009568498,
0.000537297394108, 0.000365096750639, 0.000102480016397, 0.000452857531021,
0.000180848177955, 0.000770745910765, 0.00219818869252, 0.000357685773048,
0.000362023712553, 0.000660501981506, 0.000419709560984, 0.000488949735967,
0.00177758026886, 4e-04, 0.000475661962898, 0.000879816998064,
0.0014942099365, 0.00378173960022, 0.00274725274725, 0.00192545729611,
0.0016462841016, 0.00176238855484, 0.00260780478718, 0.00447289949132,
0.0034435261708, 0.00290522941294, 0.002694416055, 0.0041329904482,
0.00729244577412, 0.0296930503689, 0.00982375036117, 0.00453023439039,
0.00327031170158, 0.00221573169503, 0.00211237853823, 0.00108719286801,
0.00131815458358, 0.000983008004494, 0.00132253265002, 0.00227790432802,
0.00247054351957, 0.00307455803228, 0.0029314767314, 0.00222755311857,
0.00492610837438, 0.00454430699318, 0.00753880266075, 0.00671845475541,
0.00590490003108, 0.00288356368698, 0.00294736842105, 0.00248601615911,
0.00197089144936, 0.00326157860404, 0.00302866414278, 0.00202256759634,
0.00258788009489, 0.00169043845747, 0.00137000737696, 0.000433463372345,
0.000908368343363, 0.000805585392052, 0.00142653352354, 0.00189328743546,
0.00558347292016, 0.00161899622234, 0.00162631008312, 0.00276960360048,
0.00585673524553, 0.00519169329073, 0.0045125282033, 0.00562344544176,
0.00322815786733, 0.00330528846154, 0.00255439924314, 0.00285823170732,
0.00240894199268, 0.00218735140276, 0.00201826045171, 0.00168701002282,
0.000460617227084, 0.00127007166833, 0.00109529025192, 0.000819336337567,
0.00158170093685, 0.000588494924231, 0.00120089209127, 0.00305052430887,
0.00161583518481, 0.00211579149837, 0.0010111223458, 0.00346270379455,
0.00228091236495, 0.00207627581685, 0.00295140718878, 0.0022121765894,
0.00240718451995, 0.00224131490474, 0.0031867431485, 0.00176756517897,
0.00233382314807, 0.00178303303303, 0.00169794459339, 0.00162778079219,
0.000737939304492, 0.00135906496331, 0.000733205022454, 0.000875060768109,
0.00114705207616, 0.000967385295744, 0.00182179529646, 0.00359130903214,
0.00420328620558, 0.00446345545843, 0.00376583361862, 0.00659687365553,
0.00433810963586, 0.00353107344633, 0.00333955407131, 0.00341788091383,
0.0024939877082, 0.00538428137212, 0.00906989151698, 0.00773778473309,
0.0210421671775, 0.00859720803541, 0.00511487506289, 0.00406669377796,
0.00117164616286, 0.00206611570248, 0.00107260726073, 0.00148381711954,
0.000741761152909, 0.00104973100643, 0.00110305704381, 0.00209753539591,
0.00452488687783, 0.00486574157506, 0.00850507033039, 0.0101159967629,
0.0163991223005, 0.0150452373691, 0.0156443766097, 0.0112310639039,
0.00635593220339, 0.00627766599598, 0.00583041812427, 0.00622371740959,
0.00624897220852, 0.00420769166036, 0.00305676855895, 0.00291133656815,
0.00120006857535, 0.00501806503412, 0.00490575781048, 0.00593119810202,
0.00226874291018, 0.00304999336958, 0.00339087546239, 0.00541958041958,
0.00445563734986, 0.00431438754455, 0.0038016243304, 0.0037928519329,
0.00491460867428, 0.00460782305959, 0.00508734881935, 0.00300725278613,
0.00390896455872, 0.00367811967345, 0.00953591862683, 0.00529614264278,
0.00243584167029, 0.00427167876976, 0.00291056623743, 0.00227624510607,
0.00439422473321, 0.00232246538633, 0.00317623830372, 0.00263466042155,
0.00180200473026, 0.00190912562047, 0.0034896070399, 0.00338638672536,
0.00548090523338, 0.00697836706211, 0.00720230473752, 0.00746268656716,
0.00367056664373, 0.0032167269803, 0.00523135203391, 0.00299196443837,
0.00299119733356, 0.00287306285913, 0.00154657933042, 0.00214861235452,
0.00163006177076, 0.00157407407407, 0.00137086455858, 0.00124616564417,
0.000790591955727, 0.00107484854407, 0.00121408336706, 0.00108506944444,
0.00105398758637, 0.000881834215168, 0.00184409052808, 0.00237529691211,
0.0013637249172, 0.00190222560396, 0.00264900662252, 0.00156564526951,
0.00263888888889, 0.00183531139117, 0.00303347280335, 0.0120768352986,
0.00365330167139, 0.00351443768997, 0.00263080970476, 0.0029703984431,
0.00265143789517, 0.0014185834431, 0.00150557061126, 0.00144777662875,
0.00111890957176, 0.000716405690308, 0.000797050911627, 0.000512400081984,
0.000868526761481, 0.00113392969636, 0.00134609632067, 0.00240013715069,
0.00128181651712, 0.00110395584177, 0.00156958493198, 0.00208,
0.00184501845018, 0.00110946745562, 0.000736997262582, 0.00208250694169,
0.00229084578026, 0.00137639933933, 0.00111462010032, 0.000822518735149,
0.00200803212851, 0.000987166831194, 0.00041291032964), .Tsp = c(1,
15.9583333333333, 24), class = "ts")
T3 <- structure(c(0.00192287148809, 0.00149812734082, 0.00192410475681,
0.00151122625216, 0.00120640491336, 0.00167845582065, 0.00121261115602,
0.000802568218299, 0.00109170305677, 0.00250626566416, 0.00273597811218,
0.00242854474127, 0.00160915430002, 0.00124571784491, 0.00192943770673,
0.00329388800781, 0.00191032700303, 0.00156168662155, 0.00174753289474,
0.0014917951268, 0.00143639464943, 0.000543773790103, 0.000929525097178,
0.00141560496294, 0.000966183574879, 0.000719359769805, 0.00190740419629,
0.00137804317869, 0.00197177251972, 0.001443001443, 0.00203399680372,
0.00158954433063, 0.00256562068285, 0.00228310502283, 0.00302053966975,
0.00227352221056, 0.00263239393001, 0.00202608585539, 0.00272386789241,
0.00269206875129, 0.0027045300879, 0.00276480122033, 0.00405890126487,
0.00341070582662, 0.00351591413768, 0.00336004135436, 0.00358102059087,
0.00257289879931, 0.00235733228563, 0.00239624269146, 0.00136103801833,
0.000862647368926, 0.00145454545455, 0.00168959691045, 0.00246305418719,
0.0020964360587, 0.00335371868219, 0.00390143737166, 0.00349219391947,
0.00334507042254, 0.00255102040816, 0.00332922318126, 0.00386753686246,
0.00246507806081, 0.00432442821449, 0.00312442565705, 0.00408318298357,
0.00375354756019, 0.00416473854697, 0.00263942103023, 0.0028888688273,
0.00321817321344, 0.00310218978102, 0.002150738732, 0.00296191819464,
0.00134732662034, 0.00221708116445, 0.00152797367184, 0.00157932519742,
0.00220077873709, 0.00207100591716, 0.00260208166533, 0.00310438494373,
0.00311149524633, 0.00385928454802, 0.00292575886871, 0.00222622707516,
0.00329074719319, 0.00282614641262, 0.00287542899545, 0.00221198156682,
0.00311754997249, 0.00315623356128, 0.00287696733796, 0.00296425457716,
0.00263875450787, 0.00208654631226, 0.00179601096512, 0.00164676821737,
0.00206262891431, 0.00235895419697, 0.00241963359834, 0.0028610523697,
0.00516910352976, 0.00160170848905, 0.00254951951363, 0.00275583318023,
0.00298309579052, 0.00286944045911, 0.00288739172281, 0.00394434096636,
0.00254428026226, 0.00285214831171, 0.0034924330617, 0.00246440306681,
0.00266448042632, 0.00389457476678, 0.00253187449136, 0.00171276869059,
0.00184647850171, 0.00134132164893, 0.00153860077835, 0.000990752972259,
0.00117518677075, 0.00312927831019, 0.00188867903566, 0.0024,
0.00269541778976, 0.00263945099419, 0.00242809114681, 0.00378173960022,
0.00274725274725, 0.00165039196809, 0.00211665098777, 0.00290275761974,
0.00149017416411, 0.00105244693913, 0.00309917355372, 0.00240432779002,
0.00297314875035, 0.0015613519471, 0.00196335078534, 0.00227707441479,
0.00279302706347, 0.00295450068938, 0.00316811446091, 0.00211501661799,
0.00168990283059, 0.00195694716243, 0.00131815458358, 0.00112343771942,
0.00214911555629, 0.00157701068863, 0.00171037628278, 0.00230591852421,
0.00183217295713, 0.00102810143934, 0.00130396986381, 0.00151476899773,
0.00188470066519, 0.00220449296662, 0.00238267895991, 0.00238639753406,
0.00147368421053, 0.00113942407292, 0.0018192844148, 0.00152207001522,
0.00151433207139, 0.00117096018735, 0.000862626698296, 0.00095087163233,
0.00137000737696, 0.00119202427395, 0.00170319064381, 0.000805585392052,
0.0012680297987, 0.00189328743546, 0.00186115764005, 0.000719553876597,
0.000903505601735, 0.000865501125151, 0.00210241778045, 0.00146432374867,
0.00130625816411, 0.0011895749973, 0.00135374362178, 0.00120192307692,
0.00160832544939, 0.0015243902439, 0.00240894199268, 0.00218735140276,
0.00230658337338, 0.00188548179022, 0.0016582220175, 0.00263086274154,
0.00155166119022, 0.00204834084392, 0.00194670884536, 0.00308959835221,
0.00154400411734, 0.00152526215443, 0.00343364976772, 0.00269282554337,
0.00235928547354, 0.00230846919636, 0.00300120048019, 0.00327833023713,
0.00347844418678, 0.00259690295277, 0.00157392833997, 0.00345536047815,
0.00336884275699, 0.0023862129916, 0.00216094735932, 0.00478603603604,
0.00330652368186, 0.00551636824019, 0.00313624204409, 0.00253692126484,
0.00201631381175, 0.00243072435586, 0.00229410415233, 0.00386954118297,
0.00298111957602, 0.00305261267732, 0.0038211692778, 0.00334759159383,
0.00479287915098, 0.0045891294995, 0.00525831471014, 0.00800376647834,
0.0076613299283, 0.00638604065479, 0.00587868531219, 0.00633955709944,
0.00453494575849, 0.00617283950617, 0.00314804075884, 0.00425604358189,
0.00536642629549, 0.00422936152908, 0.00234329232572, 0.00454545454545,
0.00305280528053, 0.00389501993879, 0.0040267034015, 0.00275554389188,
0.00409706901986, 0.00506904387345, 0.0065987933635, 0.00594701748063,
0.00343473994112, 0.00579983814405, 0.00750664048966, 0.00365965233303,
0.00467423447486, 0.00348250043531, 0.00464471968709, 0.00603621730382,
0.00358154256205, 0.00445752733389, 0.00501562243052, 0.0035344609947,
0.00410480349345, 0.00467578297309, 0.00265729470255, 0.00210758731433,
0.00223771408899, 0.00218998083767, 0.00309374033206, 0.00291738496221,
0.00184956843403, 0.00297202797203, 0.00329329717164, 0.00318889514162,
0.00397442543632, 0.00481400437637, 0.002580169554, 0.00440303092361,
0.00335956997504, 0.00318415000884, 0.00269284225156, 0.00242217637032,
0.00381436745073, 0.00238326418925, 0.0037407568508, 0.00290474156343,
0.00335156112189, 0.00227624510607, 0.00376647834275, 0.00223313979455,
0.00197441840501, 0.00214676034348, 0.00225250591283, 0.00140002545501,
0.0034896070399, 0.00220115137149, 0.002828854314, 0.00418702023726,
0.00176056338028, 0.00393487109905, 0.00217939894471, 0.00331724969843,
0.00234508884279, 0.00282099504189, 0.00239295786685, 0.00269893783737,
0.00263828238719, 0.00250671441361, 0.00231640356898, 0.00231481481481,
0.00127947358801, 0.0017254601227, 0.00207530388378, 0.00185655657612,
0.00131525698098, 0.00227864583333, 0.0018737557091, 0.00220458553792,
0.00184409052808, 0.00109629088251, 0.00253263198909, 0.00228267072475,
0.00170293282876, 0.00134198165958, 0.000833333333333, 0.00269179004038,
0.00198744769874, 0.00209205020921, 0.00146132066855, 0.00113981762918,
0.00185131053298, 0.00194612311789, 0.00203956761167, 0.00111460127673,
0.00170631335943, 0.00186142709411, 0.00183094293561, 0.00194452973084,
0.0014944704593, 0.00153720024595, 0.00184561936815, 0.00151190626181,
0.000897397547113, 0.00222869878279, 0.00201428309833, 0.00202391904324,
0.00244157656087, 0.00256, 0.00184501845018, 0.00160256410256,
0.00115813855549, 0.0016858389528, 0.001741042793, 0.0026610387227,
0.00167193015047, 0.00201060135259, 0.00219058050383, 0.00233330341919,
0.000963457435827), .Tsp = c(1, 15.9583333333333, 24), class = "ts") I know that T1 and T2 are correlated and consider them as ground truth so any distance metric should tell me that (T1, T2) are closer than (T2, T3) and (T1, T3). However, when using dtw in R, I am getting the following: > dtw(T1, T2, k = TRUE)$distance; dtw(T1, T3, k = TRUE)$distance; dtw(T3, T2, k = TRUE)$distance
[1] 1.107791
[1] 1.568011
[1] 0.4102962 Can someone please explain how to use Dynamic Time Warping for nearest-neighbor queries? | Dynamic time warping makes a particular assumption on your data set: one vector is a non-linear time-stretched series of the other. But it also assumes that the actual values are on the same scale. Let's say you have $x=1..10000$, $a(x)=1\cdot\sin(0.01x)$, $b(x)=1\cdot\sin(0.01234x)$, $c(x)=1000\cdot\sin(0.01x)$. Then for DTW, $a$ and $b$ will be extremely similar, while $a$ and $c$ differ almost as much as with Manhattan distance. If, however, you do a frequency analysis, $a$ and $c$ will be identical with respect to their frequencies, and only differ in magnitude, while $a$ and $b$ have a clearly different frequency. DTW is not a magic weapon that will solve all your time series matching needs. It makes particular assumptions on the kind of similarity you are interested in. If that doesn't match your data, it will not work well. Judging from the data series you shared, you do not need temporal alignment (which is what DTW provides), but actually some appropriate normalization and maybe Fourier transformations instead. Threshold-crossing distances might also work well for you; see for example: Similarity Search on Time Series Based on Threshold Queries, by Johannes Aßfalg, Hans-Peter Kriegel, Peer Kröger, Peter Kunath, Alexey Pryakhin and Matthias Renz, EDBT 2006. | {
"source": [
"https://stats.stackexchange.com/questions/22209",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2164/"
]
} |
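To make the sine example in the answer above concrete, here is a hedged R sketch using the same dtw package the question already loads; a shorter grid than x = 1..10000 is used purely to keep the computation small, and the exact distances will depend on the package's default step pattern:
library(dtw)
x  <- 1:1000                       # shorter grid than in the answer, just for speed
a  <- 1    * sin(0.01    * x)
b  <- 1    * sin(0.01234 * x)      # different frequency, same amplitude as a
cc <- 1000 * sin(0.01    * x)      # same frequency as a, much larger amplitude
dtw(a, b)$distance                 # small: DTW can warp away the frequency difference
dtw(a, cc)$distance                # huge: DTW compares raw values, so the scale dominates
dtw(as.numeric(scale(a)), as.numeric(scale(cc)))$distance  # near zero once both are z-scaled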
22,347 | A published article (pdf) contains these 2 sentences: Moreover, misreporting may be caused by the application of incorrect rules or by a lack of knowledge of the statistical test. For example, the total df in an ANOVA may be taken to be the error df in the reporting of an $F$ test, or the researcher may divide the reported p value of a $\chi^2$ or $F$ test by two, in order to obtain a one-sided $p$ value, whereas the $p$ value of a $\chi^2$ or $F$ test is already a one-sided test. Why might they have said that? The chi-squared test is a two-sided test. (I have asked one of the authors, but gotten no response.) Am I overlooking something? | The chi-squared test is essentially always a one-sided test. Here is a loose way to think about it: the chi-squared test is basically a 'goodness of fit' test. Sometimes it is explicitly referred to as such, but even when it's not, it is still often in essence a goodness of fit. For example, the chi-squared test of independence on a 2 x 2 frequency table is (sort of) a test of goodness of fit of the first row (column) to the distribution specified by the second row (column), and vice versa, simultaneously. Thus, when the realized chi-squared value is way out on the right tail of its distribution, it indicates a poor fit, and if it is far enough, relative to some pre-specified threshold, we might conclude that it is so poor that we don't believe the data are from that reference distribution. If we were to use the chi-squared test as a two-sided test, we would also be worried if the statistic were too far into the left side of the chi-squared distribution. This would mean that we are worried the fit might be too good. This is simply not something we are typically worried about. (As a historical side-note, this is related to the controversy of whether Mendel fudged his data. The idea was that his data were too good to be true. See here for more info if you're curious.) | {
"source": [
"https://stats.stackexchange.com/questions/22347",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8137/"
]
} |
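A small R check of the point made in the answer above: the p-value reported by a chi-squared test is already the upper-tail (right-tail) probability, so it should not be halved. The table values are made up for illustration:
tab <- matrix(c(30, 20, 15, 35), nrow = 2)   # hypothetical 2 x 2 frequency table
out <- chisq.test(tab, correct = FALSE)
out$p.value                                  # the reported p-value
pchisq(out$statistic, df = out$parameter, lower.tail = FALSE)  # same value: right tail only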
22,381 | Some material I've seen on machine learning said that it's a bad idea to approach a classification problem through regression. But I think it's always possible to do a continuous regression to fit the data and truncate the continuous prediction to yield discrete classifications. So why is it a bad idea? | "..approach classification problem through regression.." by "regression" I will assume you mean linear regression, and I will compare this approach to the "classification" approach of fitting a logistic regression model. Before we do this, it is important to clarify the distinction between regression and classification models. Regression models predict a continuous variable, such as rainfall amount or sunlight intensity. They can also predict probabilities, such as the probability that an image contains a cat. A probability-predicting regression model can be used as part of a classifier by imposing a decision rule - for example, if the probability is 50% or more, decide it's a cat. Logistic regression predicts probabilities, and is therefore a regression algorithm. However, it is commonly described as a classification method in the machine learning literature, because it can be (and is often) used to make classifiers. There are also "true" classification algorithms, such as SVM, which only predict an outcome and do not provide a probability. We won't discuss this kind of algorithm here. Linear vs. Logistic Regression on Classification Problems As Andrew Ng explains it, with linear regression you fit a polynomial through the data - say, as in the example below, where we're fitting a straight line through a {tumor size, tumor type} sample set: Above, malignant tumors get $1$ and non-malignant ones get $0$, and the green line is our hypothesis $h(x)$. To make predictions we may say that for any given tumor size $x$, if $h(x)$ gets bigger than $0.5$ we predict a malignant tumor, otherwise we predict benign. It looks like this way we could correctly predict every single training set sample, but now let's change the task a bit. Intuitively it's clear that all tumors larger than a certain threshold are malignant. So let's add another sample with a huge tumor size, and run linear regression again: Now our $h(x) > 0.5 \rightarrow malignant$ doesn't work anymore. To keep making correct predictions we would need to change it to $h(x) > 0.2$ or something - but that is not how the algorithm should work. We cannot change the hypothesis each time a new sample arrives. Instead, we should learn it off the training set data, and then (using the hypothesis we've learned) make correct predictions for the data we haven't seen before. I hope this explains why linear regression is not the best fit for classification problems! Also, you might want to watch the VI. Logistic Regression. Classification video on ml-class.org, which explains the idea in more detail. EDIT probabilityislogic asked what a good classifier would do. In this particular example you would probably use logistic regression, which might learn a hypothesis like this (I'm just making this up): Note that both linear regression and logistic regression give you a straight line (or a higher order polynomial) but those lines have different meaning: $h(x)$ for linear regression interpolates, or extrapolates, the output and predicts the value for $x$ we haven't seen. It's simply like plugging in a new $x$ and getting a raw number, and is more suitable for tasks like predicting, say, car price based on {car size, car age} etc.
$h(x)$ for logistic regression tells you the probability that $x$ belongs to the "positive" class. This is why it is called a regression algorithm - it estimates a continuous quantity, the probability. However, if you set a threshold on the probability, such as $h(x) > 0.5$ , you obtain a classifier, and in many cases this is what is done with the output from a logistic regression model. This is equivalent to putting a line on the plot: all points sitting above the classifier line belong to one class while the points below belong to the other class. So, the bottom line is that in classification scenario we use a completely different reasoning and a completely different algorithm than in regression scenario. | {
"source": [
"https://stats.stackexchange.com/questions/22381",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9008/"
]
} |
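A hedged R sketch of the tumour-size story in the answer above (the numbers are invented): one very large tumour drags the least-squares line down, so a fixed 0.5 cut-off starts misclassifying a case it previously got right.
size  <- 1:8
malig <- c(0, 0, 0, 0, 1, 1, 1, 1)       # made-up labels: larger tumours are malignant
fit1  <- lm(malig ~ size)
predict(fit1, data.frame(size = 5))      # about 0.6, so the 0.5 rule classifies case 5 correctly
size2 <- c(size, 50); malig2 <- c(malig, 1)   # add one very large, clearly malignant tumour
fit2  <- lm(malig2 ~ size2)
predict(fit2, data.frame(size2 = 5))     # now below 0.5: the fixed cut-off fails for case 5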
22,406 | I am looking for a program (in R or SAS or standalone, if free or low cost) that will do power analysis for ordinal logistic regression. | I prefer to do power analyses beyond the basics by simulation. With precanned packages, I am never quite sure what assumptions are being made. Simulating for power is quite straight forward (and affordable) using R. decide what you think your data should look like and how you will analyze it write a function or set of expressions that will simulate the data for a given relationship and sample size and do the analysis (a function is preferable in that you can make the sample size and parameters into arguments to make it easier to try different values). The function or code should return the p-value or other test statistic. use the replicate function to run the code from above a bunch of times (I usually start at about 100 times to get a feel for how long it takes and to get the right general area, then up it to 1,000 and sometimes 10,000 or 100,000 for the final values that I will use). The proportion of times that you rejected the null hypothesis is the power. redo the above for another set of conditions. Here is a simple example with ordinal regression: library(rms)
tmpfun <- function(n, beta0, beta1, beta2) {
x <- runif(n, 0, 10)
eta1 <- beta0 + beta1*x
eta2 <- eta1 + beta2
p1 <- exp(eta1)/(1+exp(eta1))
p2 <- exp(eta2)/(1+exp(eta2))
tmp <- runif(n)
y <- (tmp < p1) + (tmp < p2)
fit <- lrm(y~x)
fit$stats[5]
}
out <- replicate(1000, tmpfun(100, -1/2, 1/4, 1/4))
mean( out < 0.05 ) | {
"source": [
"https://stats.stackexchange.com/questions/22406",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/686/"
]
} |
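Building on the simulation in the answer above, one might wrap the same function over several candidate sample sizes to trace out a rough power curve; the replicate count is kept small here only for speed, and the sketch assumes tmpfun and the rms package are already loaded as shown:
ns <- c(100, 200, 400)
power <- sapply(ns, function(n) mean(replicate(200, tmpfun(n, -1/2, 1/4, 1/4)) < 0.05))
cbind(n = ns, power = power)   # estimated power at each candidate sample size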
22,414 | Have a quick question about parameter selection for an SVM. I'm using a rbf kernel, so trying to optimize C and gamma. I have an example set of around 4500, about 700 features, and using 700 examples from the set for testing. My dataset does consist of time series. I've been using a 5 fold cross validation with a grid search to find the optimal parameters for the test set and have continued to noticed fairly large differences between the accuracy of my training set vs. the accuracy of my test set. Note, however, when I say accuracy, I have imposed a cost matrix when evaluating the fit of the model such that certain classes have much higher costs when misclassified (note i also ran the svm with unequal class weights). Because my data is a time series, I'm wondering if I should use a different approach from cross validation e.g. a moving window evaluation or something similar. Is cross validation the best approach? Are there other ways to search for the optimal parameters? And also, are there ways to speed up the parameter search (I've heard of using minimum finding algorithms as an alternative to a grid search, which I'm considering implementing)? Any thoughts would be most welcome. Thanks. | I prefer to do power analyses beyond the basics by simulation. With precanned packages, I am never quite sure what assumptions are being made. Simulating for power is quite straight forward (and affordable) using R. decide what you think your data should look like and how you will analyze it write a function or set of expressions that will simulate the data for a given relationship and sample size and do the analysis (a function is preferable in that you can make the sample size and parameters into arguments to make it easier to try different values). The function or code should return the p-value or other test statistic. use the replicate function to run the code from above a bunch of times (I usually start at about 100 times to get a feel for how long it takes and to get the right general area, then up it to 1,000 and sometimes 10,000 or 100,000 for the final values that I will use). The proportion of times that you rejected the null hypothesis is the power. redo the above for another set of conditions. Here is a simple example with ordinal regression: library(rms)
tmpfun <- function(n, beta0, beta1, beta2) {
x <- runif(n, 0, 10)
eta1 <- beta0 + beta1*x
eta2 <- eta1 + beta2
p1 <- exp(eta1)/(1+exp(eta1))
p2 <- exp(eta2)/(1+exp(eta2))
tmp <- runif(n)
y <- (tmp < p1) + (tmp < p2)
fit <- lrm(y~x)
fit$stats[5]
}
out <- replicate(1000, tmpfun(100, -1/2, 1/4, 1/4))
mean( out < 0.05 ) | {
"source": [
"https://stats.stackexchange.com/questions/22414",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6434/"
]
} |
22,501 | For a given data matrix $A$ (with variables in columns and data points in rows), it seems like $A^TA$ plays an important role in statistics. For example, it is an important part of the analytical solution of ordinary least squares. Or, for PCA, its eigenvectors are the principal components of the data. I understand how to calculate $A^TA$, but I was wondering if there's an intuitive interpretation of what this matrix represents, which leads to its important role? | Geometrically, matrix $\bf A'A$ is called matrix of scalar products (= dot products, = inner products). Algebraically, it is called sum-of-squares-and-cross-products matrix ( SSCP ). Its $i$-th diagonal element is equal to $\sum a_{(i)}^2$, where $a_{(i)}$ denotes values in the $i$-th column of $\bf A$ and $\sum$ is the sum across rows. The $ij$-th off-diagonal element therein is $\sum a_{(i)}a_{(j)}$. There is a number of important association coefficients and their square matrices are called angular similarities or SSCP-type similarities: Dividing SSCP matrix by $n$, the sample size or number of rows of $\bf A$, you get MSCP (mean-square-and-cross-product) matrix. The pairwise formula of this association measure is hence $\frac{\sum xy}{n}$ (with vectors $x$ and $y$ being a pair of columns from $\bf A$). If you center columns (variables) of $\bf A$, then $\bf A'A$ is the scatter (or co-scatter, if to be rigorous) matrix and $\mathbf {A'A}/(n-1)$ is the covariance matrix. Pairwise formula of covariance is $\frac{\sum c_xc_y}{n-1}$ with $c_x$ and $c_y$ denoting centerted columns. If you z- standardize columns of $\bf A$ (subtract the column mean and divide by the standard deviation), then $\mathbf {A'A}/(n-1)$ is the Pearson correlation matrix: correlation is covariance for standardized variables. Pairwise formula of correlation is $\frac{\sum z_xz_y}{n-1}$ with $z_x$ and $z_y$ denoting standardized columns. The correlation is also called coefficient of linearity. If you unit- scale columns of $\bf A$ (bring their SS, sum-of-squares, to 1), then $\bf A'A$ is the cosine similarity matrix. The equivalent pairwise formula thus appears to be $\sum u_xu_y = \frac{\sum{xy}}{\sqrt{\sum x^2}\sqrt{\sum y^2}}$ with $u_x$ and $u_y$ denoting L2-normalized columns. Cosine similarity is also called coefficient of proportionality. If you center and then unit- scale columns of $\bf A$, then $\bf A'A$ is again the Pearson correlation matrix, because correlation is cosine for centered variables$^{1,2}$: $\sum cu_xcu_y = \frac{\sum{c_xc_y}}{\sqrt{\sum c_x^2}\sqrt{\sum c_y^2}}$ Alongside these four principal association measures let us also mention some other, also based on of $\bf A'A$, to top it off. They can be seen as measures alternative to cosine similarity because they adopt different from it normalization, the denominator in the formula: Coefficient of identity [Zegers & ten Berge, 1985] has its denominator in the form of arithmetic mean rather than geometric mean: $\frac{\sum{xy}}{(\sum x^2+\sum y^2)/2}$. It can be 1 if and only if the being compared columns of $\bf A$ are identical. Another usable coefficient like it is called similarity ratio : $\frac{\sum{xy}}{\sum x^2 + \sum y^2 -\sum {xy}} = \frac{\sum{xy}}{\sum {xy} + \sum {(x-y)^2}}$. Finally, if values in $\bf A$ are nonnegative and their sum within the columns is 1 (e.g. they are proportions), then $\bf \sqrt {A}'\sqrt A$ is the matrix of fidelity or Bhattacharyya coefficient. 
$^1$ One way to compute the correlation or covariance matrix, used by many statistical packages, bypasses centering the data and departs straight from the SSCP matrix $\bf A'A$ this way. Let $\bf s$ be the row vector of column sums of data $\bf A$ while $n$ is the number of rows in the data. Then (1) compute the scatter matrix as $\bf C = A'A-s's/ \it n$ [thence, $\mathbf C/(n-1)$ will be the covariance matrix]; (2) the diagonal of $\bf C$ is the sums of squared deviations, row vector $\bf d$; (3) compute the correlation matrix $\bf R=C/\sqrt{d'd}$. $^2$ An acute but statistically novice reader might find it difficult to reconcile the two definitions of correlation - as "covariance" (which includes averaging by sample size, the division by df = "n-1") and as "cosine" (which implies no such averaging). But in fact no real averaging in the first formula of correlation takes place. The thing is that the st. deviation, by which z-standardization was achieved, had been in turn computed with the division by that same df; and so the denominator "n-1" in the formula of correlation-as-covariance entirely cancels if you unwrap the formula: the formula turns into the formula of cosine. To compute an empirical correlation value you really need not know $n$ (except when computing the mean, to center). | {
"source": [
"https://stats.stackexchange.com/questions/22501",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2403/"
]
} |
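A quick R check of the correspondences listed in the answer above, on simulated data; crossprod(A) computes $\bf A'A$:
set.seed(1)
A  <- matrix(rnorm(100 * 3), ncol = 3)
n  <- nrow(A)
crossprod(A)                                    # SSCP matrix A'A
Ac <- scale(A, center = TRUE, scale = FALSE)    # centered columns
all.equal(crossprod(Ac) / (n - 1), cov(A))      # covariance matrix
Az <- scale(A)                                  # z-standardized columns
all.equal(crossprod(Az) / (n - 1), cor(A))      # Pearson correlation matrix
Au <- apply(A, 2, function(v) v / sqrt(sum(v^2)))  # unit-scaled columns (SS = 1)
crossprod(Au)                                   # cosine similarity matrix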
22,502 | "Big data" is everywhere in the media. Everybody says that "big data" is the big thing for 2012, e.g. KDNuggets poll on hot topics for 2012 . However, I have deep concerns here. With big data, everybody seems to be happy just to get anything out. But aren't we violating all classic statistical principles such as hypothesis testing and representative sampling? As long as we make only predictions about the same data set, this should be fine. So if I use Twitter data to predict Twitter user behavior, that is probably okay. However, using Twitter data to predict e.g. Elections completely neglects the fact that the Twitter users are not a representative sample for the whole population. Plus, most methods will actually not be able to differentiate between a true "grassroots" mood and a campaign. And twitter is full of campaigns. So when analyzing Twitter, you quickly end up just measuring campaigning and bots. (See for example "Yahoo Predicts America's Political Winners" which is full of poll bashing and "sentiment analysis is much better". They predicted "Romney has over a 90 percent likelihood of winning the nomination, and of winning the South Carolina primary" (he had 28%, while Gingrich had 40% in this primary). Do you know other such big data fails ? I remember roughly that one scientist predicted you could not maintain more than 150 friendships. He actually had only discovered a cap limit in friendster ... As for twitter data, or actually any "big data" collected from the web, I believe that often people even introduce additional bias by the way they collect their data. Few will have all of Twitter. They will have a certain subset they spidered, and this is just yet another bias in their data set. Splitting the data into a test set or for doing cross validation likely doesn't help much. The other set will have the same bias. And for big data, I need to "compress" my information so heavily that I'm rather unlikely to overfit. I recently heard this joke, with the big data scientist that discovered there are approximately 6 sexes in the world... and I can this just so imagine to happen... "Male, Female, Orc, Furry, Yes and No". So what methods do we have to get some statistical validity back into the analysis, in particular when trying to predict something outside of the "big data" dataset? | Your fears are well founded and perceptive. Yahoo and probably several other companies are doing randomized experiments on users and doing it well. But observational data are frought with difficulties. It is a common misperception that problems diminish as the sample size increases. This is true for variance, but bias stays constant as n increases. When the bias is large, a very small truly random sample or randomized study can be more valuable than 100,000,000 observations. | {
"source": [
"https://stats.stackexchange.com/questions/22502",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7828/"
]
} |
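A toy R simulation of the answer's closing point, with invented numbers: a huge sample collected through a value-dependent (biased) selection mechanism stays biased no matter how large it gets, while a tiny simple random sample is noisy but centred on the truth.
set.seed(42)
pop  <- rnorm(1e6)                          # population of interest, true mean 0
keep <- runif(1e6) < plogis(2 * pop)        # inclusion probability depends on the value itself
biased_big   <- pop[keep]                   # roughly 500,000 'big data' observations
small_random <- sample(pop, 100)            # 100 observations, simple random sample
mean(biased_big)                            # large n, but clearly above 0: the bias persists
mean(small_random)                          # noisy, yet unbiased for the true mean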
22,569 | In general, what is meant by saying that the fraction $x$ of the variance in an analysis like PCA is explained by the first principal component? Can someone explain this intuitively but also give a precise mathematical definition of what "variance explained" means in terms of principal component analysis (PCA)? For simple linear regression, the r-squared of the best-fit line is always described as the proportion of the variance explained, but I am not sure what to make of that either. Is the proportion of variance here just the extent of deviation of points from the best-fit line? | In the case of PCA, "variance" means summative variance or multivariate variability or overall variability or total variability. Below is the covariance matrix of some 3 variables. Their variances are on the diagonal, and the sum of the 3 values (3.448) is the overall variability. 1.343730519 -.160152268 .186470243
-.160152268 .619205620 -.126684273
.186470243 -.126684273 1.485549631 Now, PCA replaces original variables with new variables, called principal components, which are orthogonal (i.e. they have zero covariations) and have variances (called eigenvalues) in decreasing order. So, the covariance matrix between the principal components extracted from the above data is this: 1.651354285 .000000000 .000000000
.000000000 1.220288343 .000000000
.000000000 .000000000 .576843142 Note that the diagonal sum is still 3.448, which says that all 3 components account for all the multivariate variability. The 1st principal component accounts for or "explains" 1.651/3.448 = 47.9% of the overall variability; the 2nd one explains 1.220/3.448 = 35.4% of it; the 3rd one explains .577/3.448 = 16.7% of it. So, what do they mean when they say that "PCA maximizes variance" or "PCA explains maximal variance"? That is not, of course, that it finds the largest variance among the three values 1.343730519, .619205620, 1.485549631, no. PCA finds, in the data space, the dimension (direction) with the largest variance out of the overall variance 1.343730519+.619205620+1.485549631 = 3.448. That largest variance would be 1.651354285. Then it finds the dimension of the second largest variance, orthogonal to the first one, out of the remaining 3.448-1.651354285 overall variance. That 2nd dimension would be 1.220288343 variance. And so on. The last remaining dimension is .576843142 variance. See also "Pt3" here and the great answer here explaining how it is done in more detail. Mathematically, PCA is performed via linear algebra functions called eigen-decomposition or svd-decomposition. These functions will return you all the eigenvalues 1.651354285 1.220288343 .576843142 (and corresponding eigenvectors) at once (see, see). | {
"source": [
"https://stats.stackexchange.com/questions/22569",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9097/"
]
} |
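The arithmetic in the answer above can be verified directly in R by eigen-decomposing the same 3 x 3 covariance matrix; the eigenvalues are the component variances:
S <- matrix(c( 1.343730519, -0.160152268,  0.186470243,
              -0.160152268,  0.619205620, -0.126684273,
               0.186470243, -0.126684273,  1.485549631), nrow = 3, byrow = TRUE)
ev <- eigen(S)$values
ev                        # approximately 1.651, 1.220, 0.577: the component variances
sum(diag(S)); sum(ev)     # both about 3.448: the total variability is preserved
round(ev / sum(ev), 3)    # about 0.479, 0.354, 0.167: proportion of variance "explained"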
22,572 | Will software eventually make statisticians obsolete? What is done that can't be programmed into a computer? | @Adam, if you think of statistical researchers analogously to those in other fields - people who build upon the existing methodology and knowledge - then it might make it more clear that the answer to your first question is 'No'. Statisticians that make a living from simply applying canned software packages could quite possibly be replaced by computers for every step except writing the discussion section of a paper where the results must be interpreted. So, in that sense, yes - it could be automated (although it would have to be a complicated piece of software that has one hell of a natural language processor). However, as most researchers eventually figure out, the "canned" routines that people often use are pretty limited and must be modified (or new methods entirely must be developed) to answer specialized research questions - this is where the human aspect of statistics is indispensable. Or, a researcher must simply settle for a somewhat different, but related, research question that can be answered using classical methods. Most statisticians I know work in research jobs (e.g. professors, research scientists) where their primary role is to develop new methodology. If this process could be automated, meaning that a computer can formulate and crank out useful new methodology, then I'm afraid researchers in every field would be obsolete. | {
"source": [
"https://stats.stackexchange.com/questions/22572",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6369/"
]
} |
22,718 | The Pearson correlation coefficient of x and y is the same, whether you compute pearson(x, y) or pearson(y, x). This suggests that doing a linear regression of y given x or x given y should be the same, but I don't think that's the case. Can someone shed light on when the relationship is not symmetric, and how that relates to the Pearson correlation coefficient (which I always think of as summarizing the best fit line)? | The best way to think about this is to imagine a scatterplot of points with $y$ on the vertical axis and $x$ represented by the horizontal axis. Given this framework, you see a cloud of points, which may be vaguely circular, or may be elongated into an ellipse. What you are trying to do in regression is find what might be called the 'line of best fit'. However, while this seems straightforward, we need to figure out what we mean by 'best', and that means we must define what it would be for a line to be good, or for one line to be better than another, etc. Specifically, we must stipulate a loss function . A loss function gives us a way to say how 'bad' something is, and thus, when we minimize that, we make our line as 'good' as possible, or find the 'best' line. Traditionally, when we conduct a regression analysis, we find estimates of the slope and intercept so as to minimize the sum of squared errors . These are defined as follows: $$
SSE=\sum_{i=1}^N(y_i-(\hat\beta_0+\hat\beta_1x_i))^2
$$ In terms of our scatterplot, this means we are minimizing the (sum of the squared) vertical distances between the observed data points and the line. On the other hand, it is perfectly reasonable to regress $x$ onto $y$, but in that case, we would put $x$ on the vertical axis, and so on. If we kept our plot as is (with $x$ on the horizontal axis), regressing $x$ onto $y$ (again, using a slightly adapted version of the above equation with $x$ and $y$ switched) means that we would be minimizing the sum of the horizontal distances between the observed data points and the line. This sounds very similar, but is not quite the same thing. (The way to recognize this is to do it both ways, and then algebraically convert one set of parameter estimates into the terms of the other. Comparing the first model with the rearranged version of the second model, it becomes easy to see that they are not the same.) Note that neither way would produce the same line we would intuitively draw if someone handed us a piece of graph paper with points plotted on it. In that case, we would draw a line straight through the center, but minimizing the vertical distance yields a line that is slightly flatter (i.e., with a shallower slope), whereas minimizing the horizontal distance yields a line that is slightly steeper . A correlation is symmetrical; $x$ is as correlated with $y$ as $y$ is with $x$. The Pearson product-moment correlation can be understood within a regression context, however. The correlation coefficient, $r$, is the slope of the regression line when both variables have been standardized first. That is, you first subtracted off the mean from each observation, and then divided the differences by the standard deviation. The cloud of data points will now be centered on the origin, and the slope would be the same whether you regressed $y$ onto $x$, or $x$ onto $y$ (but note the comment by @DilipSarwate below). Now, why does this matter? Using our traditional loss function, we are saying that all of the error is in only one of the variables (viz., $y$). That is, we are saying that $x$ is measured without error and constitutes the set of values we care about, but that $y$ has sampling error . This is very different from saying the converse. This was important in an interesting historical episode: In the late 70's and early 80's in the US, the case was made that there was discrimination against women in the workplace, and this was backed up with regression analyses showing that women with equal backgrounds (e.g., qualifications, experience, etc.) were paid, on average, less than men. Critics (or just people who were extra thorough) reasoned that if this was true, women who were paid equally with men would have to be more highly qualified, but when this was checked, it was found that although the results were 'significant' when assessed the one way, they were not 'significant' when checked the other way, which threw everyone involved into a tizzy. See here for a famous paper that tried to clear the issue up. (Updated much later) Here's another way to think about this that approaches the topic through the formulas instead of visually: The formula for the slope of a simple regression line is a consequence of the loss function that has been adopted. If you are using the standard Ordinary Least Squares loss function (noted above), you can derive the formula for the slope that you see in every intro textbook. This formula can be presented in various forms; one of which I call the 'intuitive' formula for the slope. 
Consider this form for both the situation where you are regressing $y$ on $x$, and where you are regressing $x$ on $y$:
$$
\overbrace{\hat\beta_1=\frac{\text{Cov}(x,y)}{\text{Var}(x)}}^{y\text{ on } x}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\overbrace{\hat\beta_1=\frac{\text{Cov}(y,x)}{\text{Var}(y)}}^{x\text{ on }y}
$$
Now, I hope it's obvious that these would not be the same unless $\text{Var}(x)$ equals $\text{Var}(y)$. If the variances are equal (e.g., because you standardized the variables first), then so are the standard deviations, and thus the variances would both also equal $\text{SD}(x)\text{SD}(y)$. In this case, $\hat\beta_1$ would equal Pearson's $r$, which is the same either way by virtue of the principle of commutativity :
$$
\overbrace{r=\frac{\text{Cov}(x,y)}{\text{SD}(x)\text{SD}(y)}}^{\text{correlating }x\text{ with }y}~~~~~~~~~~~~~~~~~~~~~~~~~~~\overbrace{r=\frac{\text{Cov}(y,x)}{\text{SD}(y)\text{SD}(x)}}^{\text{correlating }y\text{ with }x}
$$ | {
"source": [
"https://stats.stackexchange.com/questions/22718",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9097/"
]
} |
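A short R illustration of the formulas in the answer above, on simulated data: the two regression slopes differ because their denominators differ, while the slope computed on standardized variables equals Pearson's $r$ either way.
set.seed(123)
x <- rnorm(100)
y <- 0.5 * x + rnorm(100)
coef(lm(y ~ x))[2]                      # slope of y on x: Cov(x, y) / Var(x)
coef(lm(x ~ y))[2]                      # slope of x on y: Cov(x, y) / Var(y), not the reciprocal
cov(x, y) / var(x); cov(x, y) / var(y)  # the same two numbers via the 'intuitive' formulas
coef(lm(scale(y) ~ scale(x)))[2]        # slope after standardizing both variables ...
cor(x, y)                               # ... equals Pearson's r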
22,800 | As an example, consider the ChickWeight data set in R. The variance obviously grows over time, so if I use a simple linear regression like: m <- lm(weight ~ Time*Diet, data=ChickWeight) My questions: Which aspects of the model will be questionable? Are the problems limited to extrapolating outside the Time range? How tolerant is linear regression to violation of this assumption (i.e., how heteroscedastic does it have to be to cause problems)? | The linear model (or "ordinary least squares") still has its unbiasedness property in this case. In the face of heteroskedasticity in error terms, you still have unbiased parameter estimates but you get a biased estimate of the covariance matrix: your inference (i.e. parameter tests and confidence intervals) could be incorrect. The common fix is to use a robust method for computing the covariance matrix aka standard errors. Which one you use is somewhat domain-dependent but White's method is a start. And for completeness, serial correlation of error terms is worse as it will lead to biased parameter estimates. | {
"source": [
"https://stats.stackexchange.com/questions/22800",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9200/"
]
} |
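A hedged sketch of the common fix mentioned in the answer above, assuming the sandwich and lmtest packages are installed: White-type robust standard errors leave the coefficient estimates untouched and only adjust the inference.
library(sandwich)   # heteroskedasticity-consistent covariance estimators
library(lmtest)     # coeftest() for Wald tests with a user-supplied vcov
m <- lm(weight ~ Time * Diet, data = ChickWeight)
coeftest(m)                                    # usual standard errors (assume constant variance)
coeftest(m, vcov. = vcovHC(m, type = "HC0"))   # White's estimator: same coefficients, new SEs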
22,804 | I'm thinking of this from a very basic, minimal requirements perspective. What are the key theories an industry (not academic) statistician should know, understand and utilize on a regular basis? A big one that comes to mind is Law of large numbers . What are the most essential for applying statistical theory to data analysis? | Frankly, I don't think the law of large numbers has a huge role in industry. It is helpful to understand the asymptotic justifications of the common procedures, such as maximum likelihood estimates and tests (including the omniimportant GLMs and logistic regression, in particular), the bootstrap, but these are distributional issues rather than probability of hitting a bad sample issues. Beyond the topics already mentioned (GLM, inference, bootstrap), the most common statistical model is linear regression, so a thorough understanding of the linear model is a must. You may never run ANOVA in your industry life, but if you don't understand it, you should not be called a statistician. There are different kinds of industries. In pharma, you cannot make a living without randomized trials and logistic regression. In survey statistics, you cannot make a living without Horvitz-Thompson estimator and non-response adjustments. In computer science related statistics, you cannot make a living without statistical learning and data mining. In public policy think tanks (and, increasingly, education statistics), you cannot make a living without causality and treatment effect estimators (which, increasingly, involve randomized trials). In marketing research, you need to have a mix of economics background with psychometric measurement theory (and you can learn neither of them in a typical statistics department offerings). Industrial statistics operates with its own peculiar six sigma paradigms which are but remotely connected to mainstream statistics; a stronger bond can be found in design of experiments material. Wall Street material would be financial econometrics, all the way up to stochastic calculus. These are VERY disparate skills, and the term "industry" is even more poorly defined than "academia". I don't think anybody can claim to know more than two or three of the above at the same time. The top skills, however, that would be universally required in "industry" (whatever that may mean for you) would be time management, project management, and communication with less statistically-savvy clients. So if you want to prepare yourself for industry placement, take classes in business school on these topics. UPDATE: The original post was written in February 2012; these days (March 2014), you probably should call yourself "a data scientist" rather than "a statistician" to find a hot job in industry... and better learn some Hadoop to follow with that self-proclamation. | {
"source": [
"https://stats.stackexchange.com/questions/22804",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7471/"
]
} |
22,805 | How do I add a neat polygon around a group of points on a scatterplot? I am using ggplot2 but am disappointed with the results of geom_polygon . The dataset is over there , as a tab-delimited text file. The graph below shows two measures of attitudes towards health and unemployment in a bunch of countries: I would like to switch from geom_density2d to the less fancy but empirically more correct geom_polygon . The result on unsorted data is unhelpful: How do I draw 'neat' polygons that behave as contour paths around the min-max y-x values? I tried sorting the data to no avail. Code: print(fig2 <- ggplot(d, aes(man, eff, colour=issue, fill=issue)) +
geom_point() + geom_density2d(alpha=.5) + labs(x = "Efficiency", y = "Mandate")) The d object is obtained with this CSV file . Solution: Thanks to Wayne , Andy W and others for their pointers! The data, code and graphs have been posted to GitHub . The result looks like this: | With some googling I came across the website of Gota Morota who has an example of doing this already on her website . Below is that example extended to your data. library(ggplot2)
library(plyr)
work <- "E:\\Forum_Post_Stuff\\convex_hull_ggplot2"
setwd(work)
#note you have some missing data
mydata <- read.table(file = "emD71JT5.txt",header = TRUE, fill = TRUE)
nomissing <- na.omit(mydata) #chull function does not work with missing data
#getting the convex hull of each unique point set
df <- nomissing
find_hull <- function(df) df[chull(df$eff, df$man), ]
hulls <- ddply(df, "issue", find_hull)
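#plot the raw points and overlay each issue's convex hull as a translucent polygon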
plot <- ggplot(data = nomissing, aes(x = eff, y = man, colour=issue, fill = issue)) +
geom_point() +
geom_polygon(data = hulls, alpha = 0.5) +
labs(x = "Efficiency", y = "Mandate")
plot | {
"source": [
"https://stats.stackexchange.com/questions/22805",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3582/"
]
} |
22,988 | I use lme4 in R to fit the mixed model lmer(value~status+(1|experiment))) where value is continuous, status and experiment are factors, and I get Linear mixed model fit by REML
Formula: value ~ status + (1 | experiment)
AIC BIC logLik deviance REMLdev
29.1 46.98 -9.548 5.911 19.1
Random effects:
Groups Name Variance Std.Dev.
experiment (Intercept) 0.065526 0.25598
Residual 0.053029 0.23028
Number of obs: 264, groups: experiment, 10
Fixed effects:
Estimate Std. Error t value
(Intercept) 2.78004 0.08448 32.91
statusD 0.20493 0.03389 6.05
statusR 0.88690 0.03583 24.76
Correlation of Fixed Effects:
(Intr) statsD
statusD -0.204
statusR -0.193 0.476 How can I know that the effect of status is significant? R reports only $t$-values and not $p$-values. | There is a lot of information on this topic at the GLMM FAQ . However, in your particular case, I would suggest using library(nlme)
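# same model as the lmer fit above; anova() will then report denominator df and p-values for the fixed effects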
m1 <- lme(value~status,random=~1|experiment,data=mydata)
anova(m1) because you don't need any of the stuff that lmer offers (higher speed, handling of crossed random effects, GLMMs ...). lme should give you exactly the same coefficient and variance estimates but will also compute df and p-values for you (which do make sense in a "classical" design such as you appear to have). You may also want to consider the random term ~status|experiment (allowing for variation of status effects across blocks, or equivalently including a status-by-experiment interaction). Posters above are also correct that your t statistics are so large that your p-value will definitely be <0.05, but I can imagine you would like "real" p-values. | {
"source": [
"https://stats.stackexchange.com/questions/22988",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6015/"
]
} |
23,090 | I've had these two explained multiple times. They continue to cook my brain. Missing Not at Random makes sense to me, and Missing Completely at Random makes sense...it's the Missing at Random that doesn't as much. What gives rise to data that would be MAR but not MCAR? | Missing at random (MAR) means that the missingness can be explained by variables on which you have full information. It's not a testable assumption, but there are cases where it is reasonable vs. not. For example, take political opinion polls. Many people refuse to answer. If you assume that the reasons people refuse to answer are entirely based on demographics, and if you have those demographics on each person, then the data is MAR. It is known that some of the reasons why people refuse to answer can be based on demographics (for instance, people at both low and high incomes are less likely to answer than those in the middle), but there's really no way to know if that is the full explanation. So, the question becomes "is it full enough?". Often, methods like multiple imputation work better than other methods as long as the data isn't very missing not at random. | {
"source": [
"https://stats.stackexchange.com/questions/23090",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5836/"
]
} |
23,117 | So we have arithmetic mean (AM), geometric mean (GM) and harmonic mean (HM). Their mathematical formulation is also well known along with their associated stereotypical examples (e.g., Harmonic mean and it's application to 'speed' related problems). However, a question that has always intrigued me is "how do I decide which mean is the most appropriate to use in a given context?" There must be at least some rule of thumb to help understand the applicability and yet the most common answer I've come across is: "It depends" (but on what?). This may seem to be a rather trivial question but even high-school texts failed to explain this -- they only provide mathematical definitions! I prefer an English explanation over a mathematical one -- simple test would be "would your mom/child understand it?" | This answer may have a slightly more mathematical bent than you were looking for. The important thing to recognize is that all of these means are simply the arithmetic mean in disguise . The important characteristic in identifying which (if any!) of the three common means (arithmetic, geometric or harmonic) is the "right" mean is to find the "additive structure" in the question at hand. In other words suppose we're given some abstract quantities $x_1, x_2,\ldots,x_n$, which I will call "measurements", somewhat abusing this term below for the sake of consistency. Each of these three means can be obtained by (1) transforming each $x_i$ into some $y_i$, (2) taking the arithmetic mean and then (3) transforming back to the original scale of measurement. Arithmetic mean : Obviously, we use the "identity" transformation: $y_i = x_i$. So, steps (1) and (3) are trivial (nothing is done) and $\bar x_{\mathrm{AM}} = \bar y$. Geometric mean : Here the additive structure is on the logarithms of the original observations. So, we take $y_i = \log x_i$ and then to get the GM in step (3) we convert back via the inverse function of the $\log$, i.e., $\bar x_{\mathrm{GM}} = \exp(\bar{y})$. Harmonic mean : Here the additive structure is on the reciprocals of our observations. So, $y_i = 1/x_i$, whence $\bar x_{\mathrm{HM}} = 1/\bar{y}$. In physical problems, these often arise through the following process: We have some quantity $w$ that remains fixed in relation to our measurements $x_1,\ldots,x_n$ and some other quantities, say $z_1,\ldots,z_n$. Now, we play the following game: Keep $w$ and $z_1+\cdots+z_n$ constant and try to find some $\bar x$ such that if we replace each of our individual observations $x_i$ by $\bar x$, then the "total" relationship is still conserved . The distance–velocity–time example appears to be popular, so let's use it. Constant distance, varying times Consider a fixed distance traveled $d$. Now suppose we travel this distance $n$ different times at speeds $v_1,\ldots,v_n$, taking times $t_1,\ldots,t_n$. We now play our game. Suppose we wanted to replace our individual velocities with some fixed velocity $\bar v$ such that the total time remains constant. Note that we have
$$
d - v_i t_i = 0 \>,
$$
so that $\sum_i (d - v_i t_i) = 0$. We want this total relationship (total time and total distance traveled) conserved when we replace each of the $v_i$ by $\bar v$ in our game. Hence,
$$
n d - \bar v \sum_i t_i = 0 \>,
$$
and since each $t_i = d / v_i$, we get that
$$
\bar v = \frac{n}{\frac{1}{v_1}+\cdots+\frac{1}{v_n}} = \bar v_{\mathrm{HM}} \>.
$$ Note that the "additive structure" here is with respect to the individual times, and our measurements are inversely related to them, hence the harmonic mean applies. Varying distances, constant time Now, let's change the situation. Suppose that for $n$ instances we travel a fixed time $t$ at velocities $v_1,\ldots,v_n$ over distances $d_1,\ldots,d_n$. Now, we want the total distance conserved. We have
$$
d_i - v_i t = 0 \>,
$$
and the total system is conserved if $\sum_i (d_i - v_i t) = 0$. Playing our game again, we seek a $\bar v$ such that
$$
\sum_i (d_i - \bar v t) = 0 \>,
$$
but, since $d_i = v_i t$, we get that
$$
\bar v = \frac{1}{n} \sum_i v_i = \bar v_{\mathrm{AM}} \>.
$$ Here the additive structure we are trying to maintain is proportional to the measurements we have, so the arithmetic mean applies. Equal volume cube Suppose we have constructed an $n$-dimensional box with a given volume $V$ and our measurements are the side-lengths of the box. Then
$$
V = x_1 \cdot x_2 \cdots x_n \>,
$$
and suppose we wanted to construct an $n$-dimensional (hyper)cube with the same volume. That is, we want to replace our individual side-lengths $x_i$ by a common side-length $\bar x$. Then
$$
V = \bar x \cdot \bar x \cdots \bar x = \bar x^n \>.
$$ This easily indicates that we should take $\bar x = (x_i \cdots x_n)^{1/n} = \bar x_{\mathrm{GM}}$. Note that the additive structure is in the logarithms, that is, $\log V = \sum_i \log x_i$ and we are trying to conserve the left-hand quantity. New means from old As an exercise, think about what the "natural" mean is in the situation where you let both the distances and times vary in the first example. That is, we have distances $d_i$, velocities $v_i$ and times $t_i$. We want to conserve the total distance and time traveled and find a constant $\bar v$ to achieve this. Exercise : What is the "natural" mean in this situation? | {
"source": [
"https://stats.stackexchange.com/questions/23117",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4426/"
]
} |
23,128 | In Andrew Ng's machine learning course , he introduces linear regression and logistic regression, and shows how to fit the model parameters using gradient descent and Newton's method. I know gradient descent can be useful in some applications of machine learning (e.g., backpropogation), but in the more general case is there any reason why you wouldn't solve for the parameters in closed form-- i.e., by taking the derivative of the cost function and solving via Calculus? What is the advantage of using an iterative algorithm like gradient descent over a closed-form solution in general, when one is available? | Unless the closed form solution is extremely expensive to compute, it generally is the way to go when it is available. However, For most nonlinear regression problems there is no closed form solution. Even in linear regression (one of the few cases where a closed form solution is available), it may be impractical to use the formula. The following example shows one way in which this can happen. For linear regression on a model of the form $y=X\beta$, where $X$ is a matrix with full column rank, the least squares solution, $\hat{\beta} = \arg \min \| X \beta -y \|_{2}$ is given by $\hat{\beta}=(X^{T}X)^{-1}X^{T}y$ Now, imagine that $X$ is a very large but sparse matrix. e.g. $X$ might have 100,000 columns and 1,000,000 rows, but only 0.001% of the entries in $X$ are nonzero. There are specialized data structures for storing only the nonzero entries of such sparse matrices. Also imagine that we're unlucky, and $X^{T}X$ is a fairly dense matrix with a much higher percentage of nonzero entries. Storing a dense 100,000 by 100,000 element $X^{T}X$ matrix would then require $1 \times 10^{10}$ floating point numbers (at 8 bytes per number, this comes to 80 gigabytes.) This would be impractical to store on anything but a supercomputer. Furthermore, the inverse of this matrix (or more commonly a Cholesky factor) would also tend to have mostly nonzero entries. However, there are iterative methods for solving the least squares problem that require no more storage than $X$, $y$, and $\hat{\beta}$ and never explicitly form the matrix product $X^{T}X$. In this situation, using an iterative method is much more computationally efficient than using the closed form solution to the least squares problem. This example might seem absurdly large. However, large sparse least squares problems of this size are routinely solved by iterative methods on desktop computers in seismic tomography research. | {
"source": [
"https://stats.stackexchange.com/questions/23128",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1977/"
]
} |
23,142 | I've been reading a lot lately about the differences between Fisher's method of hypothesis testing and the Neyman-Pearson school of thought. My question is, ignoring philosophical objections for a moment; when should we use the Fisher's approach of statistical modelling and when should be use the Neyman-Pearson method of significance levels et cetera? Is there a practical way of deciding which viewpoint to endorse in any given practical problem? | Let me start by defining the terms of the discussion as I see them. A p-value is the probability of getting a sample statistic (say, a sample mean) as far as , or further from some reference value than your sample statistic, if the reference value were the true population parameter. For example, a p-value answers the question: what is the probability of getting a sample mean IQ more than $|\bar x-100|$ points away from 100, if 100 is really the mean of the population from which your sample was drawn. Now the issue is, how should that number be employed in making a statistical inference? Fisher thought that the p-value could be interpreted as a continuous measure of evidence against the null hypothesis . There is no particular fixed value at which the results become 'significant'. The way I usually try to get this across to people is to point out that, for all intents and purposes, p=.049 and p=.051 constitute an identical amount of evidence against the null hypothesis (cf. @Henrik's answer here ). On the other hand, Neyman & Pearson thought you could use the p-value as part of a formalized decision making process . At the end of your investigation, you have to either reject the null hypothesis, or fail to reject the null hypothesis. In addition, the null hypothesis could be either true or not true. Thus, there are four theoretical possibilities (although in any given situation, there are just two): you could make a correct decision (fail to reject a true--or reject a false--null hypothesis), or you could make a type I or type II error (by rejecting a true null, or failing to reject a false null hypothesis, respectively). (Note that the p-value is not the same thing as the type I error rate, which I discuss here .) The p-value allows the process of deciding whether or not to reject the null hypothesis to be formalized. Within the Neyman-Pearson framework, the process would work like this: there is a null hypothesis that people will believe by default in the absence of sufficient evidence to the contrary, and an alternative hypothesis that you believe may be true instead. There are some long-run error rates that you will be willing to live with (note that there is no reason these have to be 5% and 20%). Given these things, you design your study to differentiate between those two hypotheses while maintaining, at most, those error rates, by conducting a power analysis and conducting your study accordingly. (Typically, this means having sufficient data.) After your study is completed, you compare your p-value to $\alpha$ and reject the null hypothesis if $p<\alpha$; if it's not, you fail to reject the null hypothesis. Either way, your study is complete and you have made your decision. The Fisherian and Neyman-Pearson approaches are not the same . The central contention of the Neyman-Pearson framework is that at the end of your study, you have to make a decision and walk away. Allegedly, a researcher once approached Fisher with 'non-significant' results, asking him what he should do, and Fisher said, 'go get more data'. 
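To make the decision procedure concrete, here is a small illustrative R sketch of the Neyman-Pearson workflow just described (the smallest effect size of interest, the error rates, and the made-up data are all invented for the example):
alpha <- 0.05                          # acceptable long-run type I error rate
power.t.test(delta = 0.5, sd = 1,      # smallest effect size we care about detecting
             sig.level = alpha, power = 0.80,
             type = "one.sample")      # tells us the n to collect (about 34 here)
set.seed(1)
x <- rnorm(34, mean = 0.4, sd = 1)     # pretend these are the data we then collect
t.test(x, mu = 0)$p.value < alpha      # TRUE -> reject H0, FALSE -> fail to reject;
                                       # either way the study is complete and a decision is made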
Personally, I find the elegant logic of the Neyman-Pearson approach very appealing. But I don't think it's always appropriate. To my mind, at least two conditions must be met before the Neyman-Pearson framework should be considered: There should be some specific alternative hypothesis ( effect magnitude ) that you care about for some reason. (I don't care what the effect size is, what your reason is, whether it's well-founded or coherent, etc., only that you have one.) There should be some reason to suspect that the effect will be 'significant', if the alternative hypothesis is true. (In practice, this will typically mean that you conducted a power analysis, and have enough data.) When these conditions aren't met, the p-value can still be interpreted in keeping with Fisher's ideas. Moreover, it seems likely to me that most of the time these conditions are not met. Here are some easy examples that come to mind, where tests are run, but the above conditions are not met: the omnibus ANOVA for a multiple regression model (it is possible to figure out how all the hypothesized non-zero slope parameters come together to create a non-centrality parameter for the F distribution , but it isn't remotely intuitive, and I doubt anyone does it) the value of a Shapiro-Wilk test of the normality of your residuals in a regression analysis (what magnitude of $W$ do you care about and why? how much power to you have to reject the null when that magnitude is correct?) the value of a test of homogeneity of variance (e.g., Levene's test ; same comments as above) any other tests to check assumptions, etc. t-tests of covariates other than the explanatory variable of primary interest in the study initial / exploratory research (e.g., pilot studies) | {
"source": [
"https://stats.stackexchange.com/questions/23142",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5795/"
]
} |
23,391 | How does a Support Vector Machine (SVM) work, and what differentiates it from other linear classifiers, such as the Linear Perceptron , Linear Discriminant Analysis , or Logistic Regression ? * (* I'm thinking in terms of the underlying motivations for the algorithm, optimisation strategies, generalisation capabilities, and run-time complexity ) | Support vector machines focus only on the points that are the most difficult to tell apart, whereas other classifiers pay attention to all of the points. The intuition behind the support vector machine approach is that if a classifier is good at the most challenging comparisons (the points in B and A that are closest to each other in Figure 2), then the classifier will be even better at the easy comparisons (comparing points in B and A that are far away from each other). Perceptrons and other classifiers: Perceptrons are built by taking one point at a time and adjusting the dividing line accordingly. As soon as all of the points are separated, the perceptron algorithm stops. But it could stop anywhere. Figure 1 shows that there are a bunch of different dividing lines that separate the data. The perceptron's stopping criteria is simple: "separate the points and stop improving the line when you get 100% separation". The perceptron is not explicitly told to find the best separating line. Logistic regression and linear discriminant models are built similarly to perceptrons. The best dividing line maximizes the distance between the B points closest to A and the A points closest to B. It's not necessary to look at all of the points to do this. In fact, incorporating feedback from points that are far away can bump the line a little too far, as seen below. Support Vector Machines: Unlike other classifiers, the support vector machine is explicitly told to find the best separating line. How? The support vector machine searches for the closest points (Figure 2), which it calls the "support vectors" (the name "support vector machine" is due to the fact that points are like vectors and that the best line "depends on" or is "supported by" the closest points). Once it has found the closest points, the SVM draws a line connecting them (see the line labeled 'w' in Figure 2). It draws this connecting line by doing vector subtraction (point A - point B). The support vector machine then declares the best separating line to be the line that bisects -- and is perpendicular to -- the connecting line. The support vector machine is better because when you get a new sample (new points), you will have already made a line that keeps B and A as far away from each other as possible, and so it is less likely that one will spillover across the line into the other's territory. I consider myself a visual learner, and I struggled with the intuition behind support vector machines for a long time. The paper called Duality and Geometry in SVM Classifiers finally helped me see the light; that's where I got the images from. | {
"source": [
"https://stats.stackexchange.com/questions/23391",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7365/"
]
} |
23,445 | I encounter it in many books as well as web. Natural Language Processing and Machine Learning are said to be different subsets of Artificial Intelligence. Why is it? We can achieve results of Natural Language Processing by feeding sound patterns to Machine Learning algorithms. Then, what's the difference? | Because they are different: One does not include the other. Yes modern NLP (Natural Language Processing) does make use of a lot of ML (Machine Learning), but that is just one group of techniques in the arsenal. For example, graph theory and search algorithms are also used a lot. As is simple text processing (Regular Expressions). Note I also said "modern NLP" - the statistical approach to NLP is a relatively recent development over the past few decades. I understand a more formal approach (e.g. based on parsing formal grammars) was the norm back in the 1960s/1970s. Similarly ML does not have to use NLP, and usually it doesn't, although some applications might use NLP techniques (eg. to process text input). | {
"source": [
"https://stats.stackexchange.com/questions/23445",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/35234/"
]
} |
23,463 | Recently there was a ML-like question over on cstheory stackexchange, and I posted an answer recommending Powell's method, gradient descent, genetic algorithms, or other "approximation algorithms". In a comment someone told me these methods were "heuristics" and not "approximation algorithms" and frequently did not come close to the theoretical optimum (because they "frequently get stuck in local minima"). Do others agree with that? Also, it seems to me there is a sense of which heuristic algorithms can be guaranteed to come close to theoretical optimums if they are set up to explore a large part of the search space (eg setting parameters/step sizes small), although I haven't seen that in a paper. Does anyone know if this has been shown or proven in a paper? (if not for a large class of algorithms maybe for a small class say NNs etc.) | I think you're mixing multiple important concepts. Let me try to clarify a couple of things: There are metaheuristic methods, which are methods that iteratively try to improve a candidate solution. Examples of this are tabu search, simulated annealing, genetic algorithms, etc. Observe that while there can be many cases where these methods work nicely, there isn't any deep understanding of when these methods work and when they don't. And more importantly when they don't get to the solution, we can be arbitrarily far from it. Problems solved by metaheuristic methods tend to be discrete in nature, because there are far better tools to handle continuous problems. But every now and then you see metaheuristics for continuous problems, too. There are numerical optimization methods, people in this community carefully examine the nature of the function that is to be optimized and the restrictions of the solution (into groups like convex optimization, quadratic programming, linear programming, etc) and apply algorithms that have been shown to work for that type of function, and those type of restrictions. When people in this area say "shown to work" they mean a proof. The situation is that these types of methods work in continuous problems. But when your problem falls in this category, this is definitely the tool to use. There are discrete optimization methods, which tend to be things that in nature are connected to algorithms to well studied discrete problems: such as shortest paths, max flow, etc. People in this area also care that their algorithms really work (proofs). There are a subset of people in this group that study really hard problems for which no fast algorithm is expected to exist. They then study approximation algorithms, which are fast algorithms for which they are able to show that their solution is within a constant factor of the true optimum. This is called "approximation algorithms". These people also show their results as proofs. So... to answer your question, I do not think that metaheuristics are approximation algorithms. It doesn't seem to me as something connected to opinion, it is just fact. | {
"source": [
"https://stats.stackexchange.com/questions/23463",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/17493/"
]
} |
23,472 | We find the cluster centers and assign points to k different cluster bins in k-means clustering which is a very well known algorithm and is found almost in every machine learning package on the net. But the missing and most important part in my opinion is the choice of a correct k. What is the best value for it? And, what is meant by best ? I use MATLAB for scientific computing where looking at silhouette plots is given as a way to decide on k discussed here . However, I would be more interested in Bayesian approaches. Any suggestions are appreciated. | This has been asked a couple of times on stackoverflow: here , here and here . You can take a look at what the crowd over there thinks about this question (or a small variant thereof). Let me also copy my own answer to this question, on stackoverflow.com: Unfortunately there is no way to automatically set the "right" K nor is there a definition of what "right" is. There isn't a principled statistical method, simple or complex that can set the "right K". There are heuristics, rules of thumb that sometimes work, sometimes don't. The situation is more general as many clustering methods have these type of parameters, and I think this is a big open problem in the clustering/unsupervised learning research community. | {
"source": [
"https://stats.stackexchange.com/questions/23472",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5025/"
]
} |
23,490 | Naive Bayes classifiers are a popular choice for classification problems. There are many reasons for this, including: "Zeitgeist" - widespread awareness after the success of spam filters about ten years ago Easy to write The classifier model is fast to build The model can be modified with new training data without having to rebuild the model However, they are 'naive' - i.e. they assume the features are independent - this contrasts with other classifiers such as Maximum Entropy classifiers (which are slow to compute). The independence assumption cannot usually be assumed, and in many (most?) cases, including the spam filter example, it is simply wrong. So why does the Naive Bayes Classifier still perform very well in such applications, even when the features are not independent of each other? | This paper seems to prove (I can't follow the math) that bayes is good not only when features are independent, but also when dependencies of features from each other are similar between features: In this paper, we propose a novel explanation on the
superb classification performance of naive Bayes. We
show that, essentially, the dependence distribution; i.e.,
how the local dependence of a node distributes in each
class, evenly or unevenly, and how the local dependencies of all nodes work together, consistently (supporting a certain classification) or inconsistently (canceling each other out), plays a crucial role. Therefore,
no matter how strong the dependences among attributes
are, naive Bayes can still be optimal if the dependences
distribute evenly in classes, or if the dependences cancel each other out | {
"source": [
"https://stats.stackexchange.com/questions/23490",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6484/"
]
} |
23,617 | Today, at the Cross Validated Journal Club (why weren't you there?), @mbq asked: Do you think we (modern data scientists) know what significance means? And how it relates to our confidence in our results? @Michelle replied as some (including me) usually do: I'm finding the concept of significance (based on p-values) less and less helpful as I continue in my career. For example, I can be using extremely large datasets so everything is statistically significant ($p<.01$) This is probably a stupid question, but isn't the problem the hypothesis being tested? If you test the null hypothesis "A is equal to B" then you know the answer is "No". Bigger data sets will only bring you closer to this inevitably true conclusion. I believe it was Deming who once gave an example with the hypothesis "the number of hairs on the right side of a lamb is equal to the number of hairs on its left side." Well, of course it isn't. A better hypothesis would be "A does not differ from B by more than so much." Or, in the lamb example, "the number of hairs on the sides of a lamb does not differ by more than X%". Does this make sense? | As far as significance testing goes (or anything else that does essentially the same thing as significance testing), I have long thought that the best approach in most situations is likely to be estimating a standardized effect size, with a 95% confidence interval about that effect size. There's nothing really new there--mathematically you can shuffle back and forth between them--if the p-value for a 'nil' null is <.05, then 0 will lie outside of a 95% CI, and vise versa. The advantage of this, in my opinion, is psychological ; that is, it makes salient information that exists but that people can't see when only p-values are reported. For example, it is easy to see that an effect is wildly 'significant', but ridiculously small; or 'non-significant', but only because the error bars are huge whereas the estimated effect is more or less what you expected. These can be paired with raw values and their CI's. Now, in many fields the raw values are intrinsically meaningful, and I recognize that raises the question of whether it's still worthwhile to compute effect size measures given that we already have values like means and slopes. An example might be looking at stunted growth; we know what it means for a 20 year old, white male to be 6 +/- 2 inches shorter (i.e. 15 +/- 5 cm), than they would otherwise, so why mention $d=-1.6\pm.5$? I tend to think that there can still be value in reporting both, and functions can be written to compute these so that it's very little extra work, but I recognize that opinions will vary. At any rate, I argue that point estimates with confidence intervals replace p-values as the first part of my response. On the other hand, I think a bigger question is 'is the thing that significance testing does what we really want?' I think the real problem is that for most people analyzing data (i.e., practitioners not statisticians), significance testing can become the entirety of data analysis. It seems to me that the most important thing is to have a principled way to think about what is going on with our data, and null hypothesis significance testing is, at best, a very small part of that. Let me give an imaginary example (I acknowledge that this is a caricature, but unfortunately, I fear it is somewhat plausible): Bob conducts a study, gathering data on something-or-other. He
expects the data will be normally distributed, clustering tightly
around some value, and intends to conduct a one-sample t-test to see
if his data are 'significantly different' from some pre-specified
value. After collecting his sample, he checks to see if his data are
normally distributed, and finds that they are not. Instead, they do
not have a pronounced lump in the center but are relatively high over a given
interval and then trail off with a long left tail. Bob worries about
what he should do to ensure that his test is valid. He ends up doing
something (e.g., a transformation, a non-parametric test, etc.), and
then reports a test statistic and a p-value. I hope this doesn't come off as nasty. I don't mean to mock anyone, but I think something like this does happen occasionally. Should this scenario occur, we can all agree that it is poor data analysis. However, the problem isn't that the test statistic or the p-value is wrong; we can posit that the data were handled properly in that respect . I would argue that the problem is Bob is engaged in what Cleveland called "rote data analysis". He appears to believe that the only point is to get the right p-value, and thinks very little about his data outside of pursuing that goal. He even could have switched over to my suggestion above and reported a standardized effect size with a 95% confidence interval, and it wouldn't have changed what I see as the larger problem (this is what I meant by doing "essentially the same thing" by a different means). In this specific case, the fact that the data didn't look the way he expected (i.e., weren't normal) is real information, it's interesting , and very possibly important, but that information is essentially just thrown away. Bob doesn't recognize this, because of the focus on significance testing. To my mind, that is the real problem with significance testing. Let me address a few other perspectives that have been mentioned, and I want to be very clear that I am not criticizing anyone. It is often mentioned that many people don't really understand
p-values (e.g., thinking they're the probability the null is
true), etc. It is sometimes argued that, if only people would use
the Bayesian approach, these problems would go away. I believe that people
can approach Bayesian data analysis in a manner that is just as
incurious and mechanical. However, I think that misunderstanding the meaning of p-values would be less harmful if no one thought getting a p-value was the goal. The existence of 'big data' is generally unrelated to this issue. Big data only make it obvious that organizing data analysis around 'significance' is not a helpful approach. I do not believe the problem is with the hypothesis being tested. If people only wanted to see if the estimated value is outside of an interval, rather than if it's equal to a point value, many of the same issues could arise. (Again, I want to be clear I know you are not 'Bob' .) For the record, I want to mention that my own suggestion from the first paragraph, does not address the issue, as I tried to point out. For me, this is the core issue: What we really want is a principled way to think about what happened . What that means in any given situation is not cut and dried. How to impart that to students in a methods class is neither clear nor easy. Significance testing has a lot of inertia and tradition behind it. In a stats class, it's clear what needs to be taught and how. For students and practitioners it becomes possible to develop a conceptual schema for understanding the material, and a checklist / flowchart (I've seen some!) for conducting analysis. Significance testing can naturally evolve into rote data analysis without anyone being dumb or lazy or bad. That is the problem. | {
"source": [
"https://stats.stackexchange.com/questions/23617",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/666/"
]
} |
23,779 | Because I find them fascinating, I'd like to hear what folks in this community find as the most interesting statistical paradox and why. | It's not a paradox per se , but it is a puzzling comment, at least at first. During World War II, Abraham Wald was a statistician for the U.S. government. He looked at the bombers that returned from missions and analyzed the pattern of the bullet "wounds" on the planes. He recommended that the Navy reinforce areas where the planes had no damage. Why? We have selection effects at work. This sample suggests that damage inflicted in the observed areas could be withstood. Either planes were never hit in the untouched areas, an unlikely proposition, or strikes to those parts were lethal. We care about the planes that went down, not just those that returned. Those that fell likely suffered an attack in a place that was untouched on those that survived. For copies of his original memoranda, see here . For a more modern application, see this Scientific American blog post . Expanding upon a theme, according to this blog post , during World War I, the introduction of a tin helmet led to more head wounds than a standard cloth hat. Was the new helmet worse for soldiers? No; though injuries were higher, fatalities were lower. | {
"source": [
"https://stats.stackexchange.com/questions/23779",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1913/"
]
} |
23,789 | Is an AR(1) process such as
$y_t=\rho y_{t-1}+\varepsilon_t$ a Markov process? If it is, then VAR(1) is the vector version of Markov process? | The following result holds: If $\epsilon_1, \epsilon_2, \ldots$ are independent taking values in $E$ and $f_1, f_2, \ldots $ are functions $f_n: F \times E \to F$ then with $X_n$ defined recursively as $$X_n = f_n(X_{n-1}, \epsilon_n), \quad X_0 = x_0 \in F$$ the process $(X_n)_{n \geq 0}$ in $F$ is a Markov process starting at $x_0$. The process is time-homogeneous if the $\epsilon$'s are identically distributed and all the $f$-functions are identical. The AR(1) and VAR(1) are both processes given in this form with $$f_n(x, \epsilon) = \rho x + \epsilon.$$ Thus they are homogeneous Markov processes if the $\epsilon$'s are i.i.d. Technically, the spaces $E$ and $F$ need a measurable structure and the $f$-functions must be measurable. It is quite interesting that a converse result holds if the space $F$ is a Borel space . For any Markov process $(X_n)_{n \geq 0}$ on a Borel space $F$ there are i.i.d. uniform random variables $\epsilon_1, \epsilon_2, \ldots$ in $[0,1]$ and functions $f_n : F \times [0, 1] \to F$ such that with probability one
$$X_n = f_n(X_{n-1}, \epsilon_n).$$
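To tie this back to the AR(1) in the question, here is a small illustrative R sketch (all numbers invented) that writes the process in exactly this recursive form:
set.seed(42)
rho <- 0.7                             # illustrative value of the AR coefficient
n   <- 200
f   <- function(x, e) rho * x + e      # the same f at every step, hence time-homogeneous
eps <- rnorm(n)                        # i.i.d. innovations
x   <- numeric(n)
x[1] <- f(0, eps[1])                   # start at x0 = 0
for (t in 2:n) x[t] <- f(x[t - 1], eps[t])
# x is an AR(1) path; by the result above it is a (homogeneous) Markov process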
For the converse result, see Proposition 8.6 in Kallenberg, Foundations of Modern Probability. | {
"source": [
"https://stats.stackexchange.com/questions/23789",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3525/"
]
} |
24,054 | I have a bunch (around 1000) of estimates and they are all supposed to be estimates of long-run elasticity. A little more than half of these is estimated using method A and the rest using a method B. Somewhere I read something like "I think method B estimates something very different than method A, because the estimates are much (50-60%) higher". My knowledge of robust statistics is next to nothing, so I only calculated the sample means and medians of both samples... and I immediately saw the difference. Method A is very concentrated, the difference between median and mean is very little, but method B sample varied wildly. I concluded that the outliers and measurement errors skew the method B sample, so I threw away about 50 values (about 15%) that were very inconsistent with theory... and suddenly the means of both samples (including their CI) were very similar. The density plots as well. (In the quest of eliminating outliers, I looked at the range of sample A and removed all sample points in B that fell outside it.) I would like you to tell me where I could find out some basics of robust estimation of means that would allow me to judge this situation more rigorously. And to have some references. I do not need very deep understanding of various techniques, rather read through a comprehensive survey of the methodology of robust estimation. I t-tested for significance of mean difference after removing the outliers and the p-value is 0.0559 (t around 1.9), for the full samples the t stat was around 4.5. But that is not really the point, the means can be a bit different, but they should not differ by 50-60% as stated above. And I don't think they do. | Are you looking for the theory, or something practical? If you are looking for books, here are some that I found helpful: F.R. Hampel, E.M. Ronchetti, P.J.Rousseeuw, W.A. Stahel, Robust Statistics: The Approach Based on In
fluence Functions , John Wiley & Sons, 1986. P.J. Huber, Robust Statistics , John Wiley & Sons, 1981. P.J. Rousseeuw, A.M. Leroy, Robust Regression and Outlier Detection , John Wiley & Sons, 1987. R.G. Staudte, S.J. Sheather, Robust Estimation and Testing , John Wiley & Sons, 1990. If you are looking for practical methods, here are few robust methods of estimating the mean ("estimators of location" is I guess the more principled term): The median is simple, well-known, and pretty powerful. It has excellent robustness to outliers. The "price" of robustness is about 25%. The 5%-trimmed average is another possible method. Here you throw away the 5% highest and 5% lowest values, and then take the mean (average) of the result. This is less robust to outliers: as long as no more than 5% of your data points are corrupted, it is good, but if more than 5% are corrupted, it suddenly turns awful (it doesn't degrade gracefully). The "price" of robustness is less than the median, though I don't know what it is exactly. The Hodges-Lehmann estimator computes the median of the set $\{(x_i+x_j)/2 : 1 \le i \le j \le n\}$ (a set containing $n(n+1)/2$ values), where $x_1,\dots,x_n$ are the observations. This has very good robustness: it can handle corruption of up to about 29% of the data points without totally falling apart. And the "price" of robustness is low: about 5%. It is a plausible alternative to the median. The interquartile mean is another estimator that is sometimes used. It computes the average of the first and third quartiles, and thus is simple to compute. It has very good robustness: it can tolerate corruption of up to 25% of the data points. However, the "price" of robustness is non-trivial: about 25%. As a result, this seems inferior to the median. There are many other measures that have been proposed, but the ones above seem reasonable. In short, I would suggest the median or possibly the Hodges-Lehmann estimator. P.S. Oh, I should explain what I mean by the "price" of robustness. A robust estimator is designed to still work decently well even if some of your data points have been corrupted or are otherwise outliers. But what if you use a robust estimator on a data set that has no outliers and no corruption? Ideally, we'd like the robust estimator to be as efficient at making use of the data as possible. Here we can measure the efficiency by the standard error (intuitively, the typical amount of error in the estimate produced by the estimator). It is known that if your observations come from a Gaussian distribution (iid), and if you know you won't need robustness, then the mean is optimal: it has the smallest possible estimation error. The "price" of robustness, above, is how much the standard error increases if we apply a particular robust estimator to this situation. A price of robustness of 25% for the median means that the size of the typical estimation error with the median will be about 25% larger than the size of the typical estimation error with the mean. Obviously, the lower the "price" is, the better. | {
"source": [
"https://stats.stackexchange.com/questions/24054",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8560/"
]
} |
24,072 | I am running the following unit root test (Dickey-Fuller) on a time series using the ur.df() function in the urca package. The command is: summary(ur.df(d.Aus, type = "drift", 6)) The output is: ###############################################
# Augmented Dickey-Fuller Test Unit Root Test #
###############################################
Test regression drift
Call:
lm(formula = z.diff ~ z.lag.1 + 1 + z.diff.lag)
Residuals:
Min 1Q Median 3Q Max
-0.266372 -0.036882 -0.002716 0.036644 0.230738
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.001114 0.003238 0.344 0.73089
z.lag.1 -0.010656 0.006080 -1.753 0.08031 .
z.diff.lag1 0.071471 0.044908 1.592 0.11214
z.diff.lag2 0.086806 0.044714 1.941 0.05279 .
z.diff.lag3 0.029537 0.044781 0.660 0.50983
z.diff.lag4 0.056348 0.044792 1.258 0.20899
z.diff.lag5 0.119487 0.044949 2.658 0.00811 **
z.diff.lag6 -0.082519 0.045237 -1.824 0.06874 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.06636 on 491 degrees of freedom
Multiple R-squared: 0.04211, Adjusted R-squared: 0.02845
F-statistic: 3.083 on 7 and 491 DF, p-value: 0.003445
Value of test-statistic is: -1.7525 1.6091
Critical values for test statistics:
1pct 5pct 10pct
tau2 -3.43 -2.86 -2.57
phi1 6.43 4.59 3.78 What do the significance codes (Signif. codes) mean? I noticed that some of them where written against: z.lag.1, z.diff.lag.2, z.diff.lag.3 (the "." significance code) and z.diff.lag.5 (the "**" significance code). The output gives me two (2) values of test statistic: -1.7525 and 1.6091. I know that the ADF test statistic is the first one (i.e. -1.7525). What is the second one then? Finally, in order to test the hypothesis for unit root at the 95% significance level, I need to compare my ADF test statistic (i.e. -1.7525) to a critical value, which I normally get from a table. The output here seems to give me the critical values through. However, the question is: which critical value between "tau2" and "phi1" should I use. Thank you for your response. | It seems the creators of this particular R command presume one is familiar with the original Dickey-Fuller formulae, so did not provide the relevant documentation for how to interpret the values. I found that Enders was an incredibly helpful resource (Applied Econometric Time Series 3e, 2010, p. 206-209--I imagine other editions would also be fine). Below I'll use data from the URCA package, real income in Denmark as an example. > income <- ts(denmark$LRY) It might be useful to first describe the 3 different formulae Dickey-Fuller used to get different hypotheses, since these match the ur.df "type" options. Enders specifies that in all of these 3 cases, the consistent term used is gamma, the coefficient for the previous value of y, the lag term. If gamma=0, then there is a unit root (random walk, nonstationary). Where the null hypothesis is gamma=0, if p<0.05, then we reject the null (at the 95% level), and presume there is no unit root. If we fail to reject the null (p>0.05) then we presume a unit root exists. From here, we can proceed to interpreting the tau's and phi's. type="none": $\Delta y_t = \gamma \, y_{t-1} + e_t$ (formula from Enders p. 208) (where $e_t$ is the error term, presumed to be white noise; $\gamma = a-1$ from $y_t = a \,y_{t-1} + e_t$ ; $y_{t-1}$ refers to the previous value of $y$ , so is the lag term) For type= "none," tau (or tau1 in R output) is the null hypothesis for gamma = 0. Using the Denmark income example, I get "Value of test-statistic is 0.7944" and the "Critical values for test statistics are: tau1 -2.6 -1.95 -1.61. Given that the test statistic is within the all 3 regions (1%, 5%, 10%) where we fail to reject the null, we should presume the data is a random walk, ie that a unit root is present. In this case, the tau1 refers to the gamma = 0 hypothesis. The "z.lag1" is the gamma term, the coefficient for the lag term (y(t-1)), which is p=0.431, which we fail to reject as significant, simply implying that gamma isn't statistically significant to this model. Here is the output from R > summary(ur.df(y=income, type = "none",lags=1))
>
> ###############################################
> # Augmented Dickey-Fuller Test Unit Root Test #
> ###############################################
>
> Test regression none
>
>
> Call:
> lm(formula = z.diff ~ z.lag.1 - 1 + z.diff.lag)
>
> Residuals:
> Min 1Q Median 3Q Max
> -0.044067 -0.016747 -0.006596 0.010305 0.085688
>
> Coefficients:
> Estimate Std. Error t value Pr(>|t|)
> z.lag.1 0.0004636 0.0005836 0.794 0.431
> z.diff.lag 0.1724315 0.1362615 1.265 0.211
>
> Residual standard error: 0.0251 on 51 degrees of freedom
> Multiple R-squared: 0.04696, Adjusted R-squared: 0.009589
> F-statistic: 1.257 on 2 and 51 DF, p-value: 0.2933
>
>
> Value of test-statistic is: 0.7944
>
> Critical values for test statistics:
> 1pct 5pct 10pct
> tau1 -2.6 -1.95 -1.61 type = "drift" (your specific question above): : $\Delta y_t = a_0 + \gamma \, y_{t-1} + e_t$ (formula from Enders p. 208) (where $a_0$ is "a sub-zero" and refers to the constant, or drift term)
Here is where the output interpretation gets trickier. "tau2" is still the $\gamma=0$ null hypothesis. In this case, where the first test statistic = -1.4891 is within the region of failing to reject the null, we should again presume a unit root, that $\gamma=0$. The phi1 term refers to the second hypothesis, which is a combined null hypothesis of $a_0 = \gamma = 0$. This means that BOTH of the values are tested to be 0 at the same time. If p<0.05, we reject the null, and presume that AT LEAST one of these is false--i.e. one or both of the terms $a_0$ or $\gamma$ are not 0. Failing to reject this null implies that BOTH $a_0$ AND $\gamma = 0$, implying 1) that $\gamma=0$, so a unit root is present, AND 2) $a_0=0$, so there is no drift term. Here is the R output > summary(ur.df(y=income, type = "drift",lags=1))
>
> ###############################################
> # Augmented Dickey-Fuller Test Unit Root Test #
> ###############################################
>
> Test regression drift
>
>
> Call:
> lm(formula = z.diff ~ z.lag.1 + 1 + z.diff.lag)
>
> Residuals:
> Min 1Q Median 3Q Max
> -0.041910 -0.016484 -0.006994 0.013651 0.074920
>
> Coefficients:
> Estimate Std. Error t value Pr(>|t|)
> (Intercept) 0.43453 0.28995 1.499 0.140
> z.lag.1 -0.07256 0.04873 -1.489 0.143
> z.diff.lag 0.22028 0.13836 1.592 0.118
>
> Residual standard error: 0.0248 on 50 degrees of freedom
> Multiple R-squared: 0.07166, Adjusted R-squared: 0.03452
> F-statistic: 1.93 on 2 and 50 DF, p-value: 0.1559
>
>
> Value of test-statistic is: -1.4891 1.4462
>
> Critical values for test statistics:
> 1pct 5pct 10pct
> tau2 -3.51 -2.89 -2.58
> phi1 6.70 4.71 3.86 Finally, for the type="trend": $\Delta y_t = a_0 + \gamma * y_{t-1} + a_{2}t + e_t$ (formula from Enders p. 208) (where $a_{2}t$ is a time trend term)
The hypotheses (from Enders p. 208) are as follows: tau3: $\gamma=0$; phi3: $\gamma = a_2 = 0$; phi2: $a_0 = \gamma = a_2 = 0$. These match the labels in the R output. In this case, the test statistics are -2.4216 2.1927 2.9343
In all of these cases, these fall within the "fail to reject the null" zones (see critical values below). What tau3 implies, as above, is that we fail to reject the null of unit root, implying a unit root is present.
Failing to reject phi3 implies two things: 1) $\gamma = 0$ (unit root) AND 2) there is no time trend term, i.e., $a_2=0$ . If we rejected this null, it would imply that one or both of these terms was not 0.
Failing to reject phi2 implies 3 things: 1) $\gamma = 0$ AND 2) no time trend term AND 3) no drift term, i.e. that $\gamma =0$ , that $a_0 = 0$ , and that $a_2 = 0$ . Rejecting this null implies that one, two, OR all three of these terms was NOT zero. Here is the R output > summary(ur.df(y=income, type = "trend",lags=1))
>
> ###############################################
> # Augmented Dickey-Fuller Test Unit Root Test #
> ###############################################
>
> Test regression trend
>
>
> Call:
> lm(formula = z.diff ~ z.lag.1 + 1 + tt + z.diff.lag)
>
> Residuals:
> Min 1Q Median 3Q Max
> -0.036693 -0.016457 -0.000435 0.014344 0.074299
>
> Coefficients:
> Estimate Std. Error t value Pr(>|t|)
> (Intercept) 1.0369478 0.4272693 2.427 0.0190 *
> z.lag.1 -0.1767666 0.0729961 -2.422 0.0192 *
> tt 0.0006299 0.0003348 1.881 0.0659 .
> z.diff.lag 0.2557788 0.1362896 1.877 0.0665 .
> ---
> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
>
> Residual standard error: 0.02419 on 49 degrees of freedom
> Multiple R-squared: 0.1342, Adjusted R-squared: 0.08117
> F-statistic: 2.531 on 3 and 49 DF, p-value: 0.06785
>
>
> Value of test-statistic is: -2.4216 2.1927 2.9343
>
> Critical values for test statistics:
> 1pct 5pct 10pct
> tau3 -4.04 -3.45 -3.15
> phi2 6.50 4.88 4.16
> phi3 8.73 6.49 5.47 In your specific example above, for the d.Aus data, since both of the test statistics are inside of the "fail to reject" zone, it implies that $\gamma=0$ AND $a_0 = 0$ , meaning that there is a unit root, but no drift term. | {
"source": [
"https://stats.stackexchange.com/questions/24072",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/43908/"
]
} |
24,330 | I'm playing with a randomForest and have found that generally increasing the sampSize leads to better performance. Is there a rule / formula / etc that suggests what the optimal sampSize should be or is it a trial and error thing? I guess another way of phrasing it; what are my risks of too small of a sampSize or too large (overfitting?)? This question is referring to the R implementation of random forest in the randomForest package. The function randomForest has a parameter sampSize which is described in the documentation as Size(s) of sample to draw. For classification, if sampsize is a vector of the length the number of strata, then sampling is stratified by strata, and the elements of sampsize indicate the numbers to be drawn from the strata. | In general, the sample size for a random forest acts as a control on the "degree of randomness" involved, and thus as a way of adjusting the bias-variance tradeoff. Increasing the sample size results in a "less random" forest, and so has a tendency to overfit. Decreasing the sample size increases the variation in the individual trees within the forest, preventing overfitting, but usually at the expense of model performance. A useful side-effect is that lower sample sizes reduce the time needed to train the model. The usual rule of thumb for the best sample size is a "bootstrap sample", a sample equal in size to the original dataset, but selected with replacement, so some rows are not selected, and others are selected more than once. This typically provides near-optimal performance, and is the default in the standard R implementation. However, you may find in real-world applications that adjusting the sample size can lead to improved performance. When in doubt, select the appropriate sample size (and other model parameters) using cross-validation. | {
"source": [
"https://stats.stackexchange.com/questions/24330",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7411/"
]
} |
24,441 | It seems to me that only two R packages are able to perform Latent Dirichlet Allocation : One is lda , authored by Jonathan Chang; and the other is topicmodels authored by Bettina Grün and Kurt Hornik. What are the differences between these two packages, in terms of performance, implementation details and extensibility? | Implementation:
The topicmodels package provides an interface to the GSL C and C++ code for topic models by Blei et al. and Phan et al. For the former it uses variational EM, for the latter Gibbs sampling. See http://www.jstatsoft.org/v40/i13/paper . The package works well with the utilities from the tm package. The lda package uses a collapsed Gibbs sampler for a number of models similar to those from the GSL library. However, it has been implemented by the package authors themselves, not by Blei et al. This implementation therefore differs in general from the estimation technique proposed in the original papers introducing these model variants, where the VEM algorithm is usually applied. On the other hand, the package offers more functionality than the other package.
The package provides text mining functionality too. Extensibility:
Regarding extensibility, the topicmodels code by its very nature can be extended to interface other topic model code written in C and C++. The lda package seems to rely more on the specific implementation provided by the authors, but its Gibbs sampler might allow specifying your own topic model. Nota bene for extensibility issues: the former is licensed under GPL-2 and the latter under LGPL, so it might depend on what you need to extend it for (GPL-2 is stricter regarding the open-source aspect, i.e. you can't use it in proprietary software). Performance:
I can't help you here, as I have only used topicmodels so far. Conclusion: Personally I use topicmodels, as it is well documented (see the JSS paper above) and I trust the authors (Grün also implemented flexmix and Hornik is an R core member). | {
"source": [
"https://stats.stackexchange.com/questions/24441",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3026/"
]
} |
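As a hedged illustration of the topicmodels interface discussed above (dtm is assumed to be a DocumentTermMatrix built with the tm package, and k = 10 is an arbitrary choice):
library(topicmodels)
fit_vem   <- LDA(dtm, k = 10, method = "VEM")    # variational EM (Blei et al.'s code)
fit_gibbs <- LDA(dtm, k = 10, method = "Gibbs")  # collapsed Gibbs sampling (Phan et al.'s code)
terms(fit_vem, 5)                                # top 5 terms per topic
topics(fit_vem)[1:5]                             # most likely topic for the first 5 documents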
24,442 | I'm running into trouble fitting a polytomous logistic regression model using grouped data. The data are of the form (dput at bottom): > head(alligator)
lake sex size food count
1 Hancock male small fish 7
2 Hancock male small invert 1
3 Hancock male small reptile 0
4 Hancock male small bird 0
5 Hancock male small other 5
6 Hancock male large fish 4 And I've tried to fit the model with vglm() from package VGAM: > result <- vglm(food~lake+size+sex, data=alligator, fam=multinomial, weights=count)
Error in if (max(abs(ycounts - round(ycounts))) > smallno) warning("converting 'ycounts' to integer in @loglikelihood") :
missing value where TRUE/FALSE needed
In addition: Warning messages:
1: In checkwz(wz, M = M, trace = trace, wzepsilon = control$wzepsilon) :
96 elements replaced by 1.819e-12 It was also suggested to look at mlogit() from package globaltest (on Bioconductor), but it does not appear to support grouped data. It obviously doesn't support the weights parameter, but I can't find where the equivalent parameter is documented: source("http://bioconductor.org/biocLite.R")
biocLite("globaltest")
result <- mlogit(food~lake+size+sex, weights=count, data=alligator)
Error in mlogit(food ~ lake + size + sex, weights = count, data = alligator) :
unused argument(s) (weights = count) If anyone could put me down the right path, I'd appreciate it! > dput(alligator)
structure(list(lake = structure(c(2L, 2L, 2L, 2L, 2L, 2L, 2L,
2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L,
3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L,
3L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L,
4L, 4L, 4L, 4L, 4L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("George", "Hancock",
"Oklawaha", "Trafford"), class = "factor"), sex = structure(c(2L,
2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L,
2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L,
2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("female",
"male"), class = "factor"), size = structure(c(2L, 2L, 2L, 2L,
2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L,
2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L,
1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L,
2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L,
1L, 1L, 2L, 2L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L), .Label = c("large",
"small"), class = "factor"), food = structure(c(2L, 3L, 5L, 1L,
4L, 2L, 3L, 5L, 1L, 4L, 2L, 3L, 5L, 1L, 4L, 2L, 3L, 5L, 1L, 4L,
2L, 3L, 5L, 1L, 4L, 2L, 3L, 5L, 1L, 4L, 2L, 3L, 5L, 1L, 4L, 2L,
3L, 5L, 1L, 4L, 2L, 3L, 5L, 1L, 4L, 2L, 3L, 5L, 1L, 4L, 2L, 3L,
5L, 1L, 4L, 2L, 3L, 5L, 1L, 4L, 2L, 3L, 5L, 1L, 4L, 2L, 3L, 5L,
1L, 4L, 2L, 3L, 5L, 1L, 4L, 2L, 3L, 5L, 1L, 4L), .Label = c("bird",
"fish", "invert", "other", "reptile"), class = "factor"), count = c(7L,
1L, 0L, 0L, 5L, 4L, 0L, 0L, 1L, 2L, 16L, 3L, 2L, 2L, 3L, 3L,
0L, 1L, 2L, 3L, 2L, 2L, 0L, 0L, 1L, 13L, 7L, 6L, 0L, 0L, 3L,
9L, 1L, 0L, 2L, 0L, 1L, 0L, 1L, 0L, 3L, 7L, 1L, 0L, 1L, 8L, 6L,
6L, 3L, 5L, 2L, 4L, 1L, 1L, 4L, 0L, 1L, 0L, 0L, 0L, 13L, 10L,
0L, 2L, 2L, 9L, 0L, 0L, 1L, 2L, 3L, 9L, 1L, 0L, 1L, 8L, 1L, 0L,
0L, 1L)), .Names = c("lake", "sex", "size", "food", "count"), class = "data.frame", row.names = c(NA,
-80L)) | Implementation:
The topicmodels package provides an interface to the GSL C and C++ code for topic models by Blei et al. and Phan et al. For the former it uses variational EM, for the latter Gibbs sampling. See http://www.jstatsoft.org/v40/i13/paper . The package works well with the utilities from the tm package. The lda package uses a collapsed Gibbs sampler for a number of models similar to those from the GSL library. However, it has been implemented by the package authors themselves, not by Blei et al. This implementation therefore differs in general from the estimation technique proposed in the original papers introducing these model variants, where the VEM algorithm is usually applied. On the other hand, the package offers more functionality than the other package.
The package provides text mining functionality too. Extensibility:
Regarding extensibility, the topicmodels code can by its very nature be extended to interface other topic-model code written in C and C++. The lda package seems to rely more on the specific implementation provided by its authors, but its Gibbs sampler might allow specifying your own topic model. Nota bene for extensibility: the former is licensed under GPL-2 and the latter under LGPL, so it might depend on what you need to extend it for (GPL-2 is stricter regarding the open-source aspect, i.e., you can't use it in proprietary software). Performance:
I can't help you here; I have only used topicmodels so far. Conclusion: Personally I use topicmodels, as it is well documented (see the JSS paper above) and I trust the authors (Grün also implemented flexmix and Hornik is an R core member). | {
"source": [
"https://stats.stackexchange.com/questions/24442",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/696/"
]
} |
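As a hedged aside on the vglm question above (this is not taken from the original thread): one common way to fit a multinomial model to grouped counts in VGAM is to reshape the data to one row per covariate pattern and use a matrix of counts as the response; dcast() from reshape2 is one way to do that reshaping:
library(VGAM)
library(reshape2)
wide <- dcast(alligator, lake + sex + size ~ food, value.var = "count")
fit  <- vglm(cbind(bird, fish, invert, other, reptile) ~ lake + size + sex,
             family = multinomial, data = wide)
summary(fit)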
24,562 | Here is the article in NY times called "Apple confronts the law of large numbers" . It tries to explain Apple share price rise using law of large numbers. What statistical (or mathematical) errors does this article make? | Here is the rub: Apple is so big, it’s running up against the law of
large numbers. Also known as the golden theorem, with a proof attributed to the
17th-century Swiss mathematician Jacob Bernoulli, the law states that
a variable will revert to a mean over a large sample of results. In
the case of the largest companies, it suggests that high earnings
growth and a rapid rise in share price will slow as those companies
grow ever larger. This muddled jumble actually refers to three different phenomena! The (various) Laws of Large Numbers are fundamental in probability theory for characterizing situations where it is reasonable to expect large samples to give increasingly better information about a process or population being sampled. Indeed, Jacob Bernoulli was the first to recognize the need to state and prove such a theorem, which appeared in his posthumous Ars Conjectandi in 1713 (edited by nephew Nicholas Bernoulli). There is no apparent valid application of such a law to Apple's growth. Regression toward the mean was first recognized by Francis Galton in the 1880's. It has often been underappreciated among business analysts, however. For example, at the beginning of 1933 (during the depths of a Great Depression), Horace Secrist published his magnum opus, the Triumph of Mediocrity in Business. In it, he copiously examined business time series and found, in every case, evidence of regression toward the mean. But, failing to recognize this as an ineluctable mathematical phenomenon, he maintained that he had uncovered a basic truth of business development! This fallacy of mistaking a purely mathematical pattern for the result of some underlying force or tendency (now often called the "regression fallacy") is reminiscent of the quoted passage. (It is noteworthy that Secrist was a prominent statistician, author of one of the most popular statistics textbooks published at the time. On JSTOR, you can find a lacerating review of Triumph... by Harold Hotelling published in JASA in late 1933. In a subsequent exchange of letters with Secrist, Hotelling wrote My review ... was chiefly devoted to warning readers not to conclude that business firms have a tendency to become mediocre ... To "prove" such a mathematical result by a costly and prolonged numerical study ... is analogous to proving the multiplication table by arranging elephants in rows and columns, and then doing the same for numerous other kinds of animals. The performance, though perhaps entertaining, and having a certain pedagogical value, is not an important contribution either to zoology or to mathematics. [JASA Vol. 29, No. 186 (June 1934), pp 198 and 199].) The NY Times passage seems to make the same mistake with Apple's business data. If we read on in the article, however, we soon uncover the author's intended meaning: If Apple’s share price grew even 20 percent a year for the next decade, which is far below its current blistering pace, its \$500 billion market capitalization would be more than \$3 trillion by 2022. This, of course, is a statement about extrapolation of exponential growth. As such it contains echoes of Malthusian population predictions . The hazards of extrapolation are not confined to exponential growth, however. Mark Twain (Samuel Clements) pilloried wanton extrapolators in Life on the Mississippi (1883, chapter 17): Now, if I wanted to be one of those ponderous scientific people, and 'let on' to prove ... what will occur in the far future by what has occurred in late years, what an opportunity is here! ... Please observe:-- In the space of one hundred and seventy-six years the Lower Mississippi has shortened itself two hundred and forty-two miles. That is an average of a trifle over one mile and a third per year. 
Therefore, any calm person, who is not blind or idiotic, can see that in the “Old Oolitic Silurian Period,” just a million years ago next November, the Lower Mississippi River was upwards of one million three hundred thousand miles long, and stuck out over the Gulf of Mexico like a fishing-rod. And by the same token any person can see that seven hundred and forty-two years from now the Lower Mississippi will be only a mile and threequarters long, and Cairo and New Orleans will have joined their streets together, and be plodding comfortably along under a single mayor and a mutual board of aldermen. There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact. ” (Emphasis added.) Twain's satire compares favorably to the article's quotation of business analyst Robert Cihra: If you extrapolate far enough out into the future, to sustain that growth Apple would have to sell an iPhone to every man, woman, child, animal and rock on the planet. (Unfortunately, it appears Cihra does not heed his own advice: he rates this stock a "buy." He might be right, not on the merits, but by virtue of the greater fool theory .) If we take the article to mean "beware of extrapolating previous growth into the future," we will get much out of it. Investors who think this company is a good buy because its PE ratio is low (which includes several of the notable money managers quoted in the article) are no better than the "ponderous scientific people" Twain skewered over a century ago. A better acquaintance with Bernoulli, Hotelling, and Twain would have improved the accuracy and readability of this article, but in the end it seems to have gotten the message right. | {
"source": [
"https://stats.stackexchange.com/questions/24562",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2116/"
]
} |
24,781 | I am a graduate student in computer science. I have been doing some exploratory factor analysis for a research project. My colleagues (who are leading the project) use SPSS, while I prefer to use R. This didn't matter until we discovered a major discrepancy between the two statistical packages. We are using principal axis factoring as the extraction method (please note that I am well aware of the difference between PCA and factor analysis, and that we are not using PCA , at least not intentionally). From what I've read, this should correspond to "principal axis" method in R, and either "principal axis factoring" or "unweighted least squares" in SPSS, according to R documentation . We are using an oblique rotation method (specifically, promax ) because we expect correlated factors, and are interpreting the pattern matrix . Running the two procedures in R and SPSS, there are major differences. The pattern matrix gives different loadings. Although this gives more or less the same factor to variable relationships, there is up to a 0.15 difference between corresponding loadings, which seems more than would be expected by just a different implementation of the extraction method and promax rotations. However, that is not the most startling difference. The cumulative variance explained by the factors is around 40% in the SPSS results, and 31% in the R results. This is a huge difference, and has led to my colleagues wanting to use SPSS instead of R. I have no problem with this, but a difference that big makes me think that we might be interpreting something incorrectly, which is a problem. Muddying the waters even more, SPSS reports different types of explained variance when we run unweighted least squares factoring. The proportion of explained variance by Initial Eigenvalues is 40%, while the proportion of explained variance from Extraction Sums of Squared Loadings (SSL) is 33%. This leads me to think that the Initial Eigenvalues is not the appropriate number to look at (I suspect this is the variance explained before rotation, though which it's so big is beyond me). Even more confusing, SPSS also shows Rotation SSL, but does not calculate the percentage of explained variance (SPSS tells me that having correlated factors means I cannot add SSLs to find the total variance, which makes sense with the math I've seen). The reported SSLs from R do not match any of these, and R tells me that it describes 31% of the total variance. R's SSLs match the Rotation SSLs the most closely. R's eigenvalues from the original correlation matrix do match the Initial Eigenvalues from SPSS. Also, please note that I have played around with using different methods, and that SPSS's ULS and PAF seem to match R's PA method the closest. My specific questions: How much of a difference should I expect between R and SPSS with factor analysis implementations? Which of the Sums of Squared Loadings from SPSS should I be interpreting, Initial Eigenvalues, Extraction, or Rotation? Are there any other issues that I might have overlooked? My calls to SPSS and R are as follows: SPSS: FACTOR
/VARIABLES <variables>
/MISSING PAIRWISE
/ANALYSIS <variables>
/PRINT INITIAL KMO AIC EXTRACTION ROTATION
/FORMAT BLANK(.35)
/CRITERIA FACTORS(6) ITERATE(25)
/EXTRACTION ULS
/CRITERIA ITERATE(25)
/ROTATION PROMAX(4). R: library(psych)
fa.results <- fa(data, nfactors=6, rotate="promax",
scores=TRUE, fm="pa", oblique.scores=FALSE, max.iter=25) | First of all, I second ttnphns recommendation to look at the solution before rotation. Factor analysis as it is implemented in SPSS is a complex procedure with several steps, comparing the result of each of these steps should help you to pinpoint the problem. Specifically you can run FACTOR
/VARIABLES <variables>
/MISSING PAIRWISE
/ANALYSIS <variables>
/PRINT CORRELATION
/CRITERIA FACTORS(6) ITERATE(25)
/EXTRACTION ULS
/CRITERIA ITERATE(25)
/ROTATION NOROTATE. to see the correlation matrix SPSS is using to carry out the factor analysis. Then, in R, prepare the correlation matrix yourself by running r <- cor(data) Any discrepancy in the way missing values are handled should be apparent at this stage. Once you have checked that the correlation matrix is the same, you can feed it to the fa function and run your analysis again: fa.results <- fa(r, nfactors=6, rotate="promax",
scores=TRUE, fm="pa", oblique.scores=FALSE, max.iter=25) If you still get different results in SPSS and R, the problem is not missing values-related. Next, you can compare the results of the factor analysis/extraction method itself. FACTOR
/VARIABLES <variables>
/MISSING PAIRWISE
/ANALYSIS <variables>
/PRINT EXTRACTION
/FORMAT BLANK(.35)
/CRITERIA FACTORS(6) ITERATE(25)
/EXTRACTION ULS
/CRITERIA ITERATE(25)
/ROTATION NOROTATE. and fa.results <- fa(r, nfactors=6, rotate="none",
scores=TRUE, fm="pa", oblique.scores=FALSE, max.iter=25) Again, compare the factor matrices/communalities/sum of squared loadings. Here you can expect some tiny differences but certainly not of the magnitude you describe. All this would give you a clearer idea of what's going on. Now, to answer your three questions directly: In my experience, it's possible to obtain very similar results, sometimes after spending some time figuring out the different terminologies and fiddling with the parameters. I have had several occasions to run factor analyses in both SPSS and R (typically working in R and then reproducing the analysis in SPSS to share it with colleagues) and always obtained essentially the same results. I would therefore generally not expect large differences, which leads me to suspect the problem might be specific to your data set. I did however quickly try the commands you provided on a data set I had lying around (it's a Likert scale) and the differences were in fact bigger than I am used to but not as big as those you describe. (I might update my answer if I get more time to play with this.) Most of the time, people interpret the sum of squared loadings after rotation as the “proportion of variance explained” by each factor but this is not meaningful following an oblique rotation (which is why it is not reported at all in psych and SPSS only reports the eigenvalues in this case – there is even a little footnote about it in the output). The initial eigenvalues are computed before any factor extraction. Obviously, they don't tell you anything about the proportion of variance explained by your factors and are not really “sum of squared loadings” either (they are often used to decide on the number of factors to retain). SPSS “Extraction Sums of Squared Loadings” should however match the “SS loadings” provided by psych . This is a wild guess at this stage but have you checked if the factor extraction procedure converged in 25 iterations? If the rotation fails to converge, SPSS does not output any pattern/structure matrix and you can't miss it but if the extraction fails to converge, the last factor matrix is displayed nonetheless and SPSS blissfully continues with the rotation. You would however see a note “a. Attempted to extract 6 factors. More than 25 iterations required. (Convergence=XXX). Extraction was terminated.” If the convergence value is small (something like .005, the default stopping condition being “less than .0001”), it would still not account for the discrepancies you report but if it is really large there is something pathological about your data. | {
"source": [
"https://stats.stackexchange.com/questions/24781",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9918/"
]
} |
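One small add-on to the recipe above (an assumption, not part of the original answer): if the data contain missing values, the SPSS call's /MISSING PAIRWISE corresponds to pairwise deletion in R, so the hand-built correlation matrix should be computed that way before being fed to fa():
r <- cor(data, use = "pairwise.complete.obs")   # pairwise deletion, as in the SPSS syntax
fa.results <- fa(r, nfactors = 6, rotate = "promax", fm = "pa",
                 n.obs = nrow(data), max.iter = 25)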
24,782 | I have about 400 pieces of silver of different geometric dimensions. They were assigned to six groups and each group went through a series of stress tests, such as bending, pulling, putting in fire for a period of time, etc. The treatments that were given to the six groups were not the same, but fairly similar. The sizes of the six groups were not the same. The pieces either broke at some stage and that was recorded as a success or didn't, which was recorded as a failure. The time of each success was also recorded. The number of successes was about 80. My goal is build a predictive model to determine if a piece of silver breaks based on its physical dimensions and the treatment it goes through. I have been somewhat successful in building a model using the physical dimensions, but adding various aspects of the treatment (eg. total time spent in fire) didn't improve the performance at all. I have even tried to build features (eg.total stress on the metal in various directions, total strain on the metal, etc.) based on the physical dimensions and the treatment, for each individual piece, but even these didn't add any predictive performance. How can I incorporate the treatment information in a way that adds to my predictive power? It is clear that the treatment is a factor in whether a piece breaks or not, and it should somehow show up somewhere. N.B. I didn't have any control over the design of the treatment, and testing more samples with other treatments is not an option for me. I'd very much appreciate any suggestions or comments. Many thanks! | First of all, I second ttnphns recommendation to look at the solution before rotation. Factor analysis as it is implemented in SPSS is a complex procedure with several steps, comparing the result of each of these steps should help you to pinpoint the problem. Specifically you can run FACTOR
/VARIABLES <variables>
/MISSING PAIRWISE
/ANALYSIS <variables>
/PRINT CORRELATION
/CRITERIA FACTORS(6) ITERATE(25)
/EXTRACTION ULS
/CRITERIA ITERATE(25)
/ROTATION NOROTATE. to see the correlation matrix SPSS is using to carry out the factor analysis. Then, in R, prepare the correlation matrix yourself by running r <- cor(data) Any discrepancy in the way missing values are handled should be apparent at this stage. Once you have checked that the correlation matrix is the same, you can feed it to the fa function and run your analysis again: fa.results <- fa(r, nfactors=6, rotate="promax",
scores=TRUE, fm="pa", oblique.scores=FALSE, max.iter=25) If you still get different results in SPSS and R, the problem is not missing values-related. Next, you can compare the results of the factor analysis/extraction method itself. FACTOR
/VARIABLES <variables>
/MISSING PAIRWISE
/ANALYSIS <variables>
/PRINT EXTRACTION
/FORMAT BLANK(.35)
/CRITERIA FACTORS(6) ITERATE(25)
/EXTRACTION ULS
/CRITERIA ITERATE(25)
/ROTATION NOROTATE. and fa.results <- fa(r, nfactors=6, rotate="none",
scores=TRUE, fm="pa", oblique.scores=FALSE, max.iter=25) Again, compare the factor matrices/communalities/sum of squared loadings. Here you can expect some tiny differences but certainly not of the magnitude you describe. All this would give you a clearer idea of what's going on. Now, to answer your three questions directly: In my experience, it's possible to obtain very similar results, sometimes after spending some time figuring out the different terminologies and fiddling with the parameters. I have had several occasions to run factor analyses in both SPSS and R (typically working in R and then reproducing the analysis in SPSS to share it with colleagues) and always obtained essentially the same results. I would therefore generally not expect large differences, which leads me to suspect the problem might be specific to your data set. I did however quickly try the commands you provided on a data set I had lying around (it's a Likert scale) and the differences were in fact bigger than I am used to but not as big as those you describe. (I might update my answer if I get more time to play with this.) Most of the time, people interpret the sum of squared loadings after rotation as the “proportion of variance explained” by each factor but this is not meaningful following an oblique rotation (which is why it is not reported at all in psych and SPSS only reports the eigenvalues in this case – there is even a little footnote about it in the output). The initial eigenvalues are computed before any factor extraction. Obviously, they don't tell you anything about the proportion of variance explained by your factors and are not really “sum of squared loadings” either (they are often used to decide on the number of factors to retain). SPSS “Extraction Sums of Squared Loadings” should however match the “SS loadings” provided by psych . This is a wild guess at this stage but have you checked if the factor extraction procedure converged in 25 iterations? If the rotation fails to converge, SPSS does not output any pattern/structure matrix and you can't miss it but if the extraction fails to converge, the last factor matrix is displayed nonetheless and SPSS blissfully continues with the rotation. You would however see a note “a. Attempted to extract 6 factors. More than 25 iterations required. (Convergence=XXX). Extraction was terminated.” If the convergence value is small (something like .005, the default stopping condition being “less than .0001”), it would still not account for the discrepancies you report but if it is really large there is something pathological about your data. | {
"source": [
"https://stats.stackexchange.com/questions/24782",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5264/"
]
} |
24,853 | In May 2010 Wikipedia user Mcorazao added a sentence to the skewness article that "A zero value indicates that the values are relatively evenly distributed on both sides of the mean, typically but not necessarily implying a symmetric distribution." However, the wiki page has no actual examples of distributions which break this rule. Googling "example asymmetrical distributions with zero skewness" also gives no real examples, at least in the first 20 results. Using the definition that the skew is calculated by $ \operatorname{E}\Big[\big(\tfrac{X-\mu}{\sigma}\big)^{\!3}\, \Big]$, and the R formula sum((x-mean(x))^3)/(length(x) * sd(x)^3) I can construct a small, arbitrary distribution to make the skewness low. For example, the distribution x = c(1, 3.122, 5, 4, 1.1) yields a skew of $-5.64947\cdot10^{-5}$. But this is a small sample and moreover the deviation from symmetry is not large. So, is it possible to construct a larger distribution with one peak that is highly asymmetrical but still has a skewness of nearly zero? | Consider discrete distributions. One that is supported on $k$ values $x_1, x_2,\ldots, x_k$ is determined by non-negative probabilities $p_1, p_2,\ldots, p_k$ subject to the conditions that (a) they sum to 1 and (b) the skewness coefficient equals 0 (which is equivalent to the third central moment being zero). That leaves $k-2$ degrees of freedom (in the equation-solving sense, not the statistical one!). We can hope to find solutions that are unimodal. To make the search for examples easier, I sought solutions supported on a small symmetrical vector $\mathbf{x}=(-3,-2,-1,0,1,2,3)$ with a unique mode at $0$ , zero mean, and zero skewness. One such solution is $(p_1, \ldots, p_7) = (1396, 3286, 9586, 47386, 8781, 3930, 1235)/75600$ . You can see it is asymmetric. Here's a more obviously asymmetric solution with $\mathbf{x} = (-3,-1,0,1,2)$ (which is asymmetric) and $p = (1,18, 72, 13, 4)/108$ : Now it's obvious what's going on: because the mean equals $0$ , the negative values contribute $(-3)^3=-27$ and $18 \times (-1)^3=-18$ to the third moment while the positive values contribute $4\times 2^3 = 32$ and $13 \times 1^3 = 13$ , exactly balancing the negative contributions. We can take a symmetric distribution about $0$ , such as $\mathbf{x}=(-1,0,1)$ with $\mathbf{p}=(1,4,1)/6$ , and shift a little mass from $+1$ to $+2$ , a little mass from $+1$ down to $-1$ , and a slight amount of mass down to $-3$ , keeping the mean at $0$ and the skewness at $0$ as well, while creating an asymmetry. The same approach will work to maintain zero mean and zero skewness of a continuous distribution while making it asymmetric; if we're not too aggressive with the mass shifting, it will remain unimodal. Edit: Continuous Distributions Because the issue keeps coming up, let's give an explicit example with continuous distributions. Peter Flom had a good idea: look at mixtures of normals. A mixture of two normals won't do: when its skewness vanishes, it will be symmetric. The next simplest case is a mixture of three normals. Mixtures of three normals, after an appropriate choice of location and scale, depend on six real parameters and therefore should have more than enough flexibility to produce an asymmetric, zero-skewness solution. To find some, we need to know how to compute skewnesses of mixtures of normals. Among these, we will search for any that are unimodal (it is possible there are none). 
Now, in general, the $r^\text{th}$ (non-central) moment of a standard normal distribution is zero when $r$ is odd and otherwise equals $2^{r/2}\Gamma\left(\frac{1+r}{2}\right)/\sqrt{\pi}$ . When we rescale that standard normal distribution to have a standard deviation of $\sigma$ , the $r^\text{th}$ moment is multiplied by $\sigma^r$ . When we shift any distribution by $\mu$ , the new $r^\text{th}$ moment can be expressed in terms of moments up to and including $r$ . The moment of a mixture of distributions (that is, a weighted average of them) is the same weighted average of the individual moments. Finally, the skewness is zero exactly when the third central moment is zero, and this is readily computed in terms of the first three moments. This gives us an algebraic attack on the problem. One solution I found is an equal mixture of three normals with parameters $(\mu, \sigma)$ equal to $(0,1)$ , $(1/2,1)$ , and $(0, \sqrt{127/18}) \approx (0, 2.65623)$ . Its mean equals $(0 + 1/2 + 0)/3 = 1/6$ . This image shows the pdf in blue and the pdf of the distribution flipped about its mean in red. That they differ shows they are both asymmetric. (The mode is approximately $0.0519216$ , unequal to the mean of $1/6$ .) They both have zero skewness by construction . The plots indicate these are unimodal. (You can check using Calculus to find local maxima.) | {
"source": [
"https://stats.stackexchange.com/questions/24853",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2073/"
]
} |
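A quick R check of the second discrete example above, confirming that its mean and third central moment (and hence its skewness) are exactly zero despite the asymmetry:
x <- c(-3, -1, 0, 1, 2)
p <- c(1, 18, 72, 13, 4) / 108
m  <- sum(p * x)           # 0
m3 <- sum(p * (x - m)^3)   # 0, so the skewness coefficient is 0
c(mean = m, third.central.moment = m3)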
24,934 | Is there any command-line tool that accepts a stream of numbers (in ASCII format) from standard input and gives basic descriptive statistics for that stream, such as min, max, average, median, RMS, quantiles, etc.? Ideally the output should be parseable by the next command in a command-line chain. The working environment is Linux, but other options are welcome. | You can do this with R , which may be a bit of overkill... EDIT 2: [OOPS, looks like someone else hit on Rscript while I was retyping this.] I found an easier way. Installed with R should be Rscript , which is meant to do what you're trying to do. For example, if I have a file bar which has a list of numbers, one per line: Rscript -e 'summary (as.numeric (readLines ("stdin")))' < bar will send the numbers in the file into R and run R's summary command on the lines, returning something like: Min. 1st Qu. Median Mean 3rd Qu. Max.
1.00 2.25 3.50 3.50 4.75 6.00 You could also do something like: Rscript -e 'quantile (as.numeric (readLines ("stdin")), probs=c(0.025, 0.5, 0.975))' to get quantiles. And you could obviously chop off the first line of output (which contains labels) with something like: Rscript -e 'summary (as.numeric (readLines ("stdin")))' < bar | tail -n +2 I'd highly recommend doing what you want in interactive R first, to make sure you have the command correct. In trying this, I left out the closing parenthesis and Rscript returns nothing -- no error message, no result, just nothing. (For the record, file bar contains: 1
2
3
4
5
6 | {
"source": [
"https://stats.stackexchange.com/questions/24934",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2820/"
]
} |
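Since the question also asks for RMS, here is a hedged variant of the same Rscript pattern (bar is the same sample file as above):
Rscript -e 'x <- as.numeric(readLines("stdin")); cat("sd:", sd(x), "RMS:", sqrt(mean(x^2)), "\n")' < bar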
24,938 | Is it possible that two random variables have the same distribution and yet they are almost surely different? | Let $X\sim N(0,1)$ and define $Y=-X$. It is easy to prove that $Y\sim N(0,1)$. But
$$
P\{\omega : X(\omega)=Y(\omega)\} = P\{\omega : X(\omega)=0,Y(\omega)=0\} \leq P\{\omega : X(\omega)=0\} = 0 \, .
$$ Hence, $X$ and $Y$ are different with probability one. | {
"source": [
"https://stats.stackexchange.com/questions/24938",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9993/"
]
} |
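A small simulation illustrating the answer above: X and Y = -X look identical in distribution, yet they never coincide, because for a continuous variable the event X = 0 has probability zero:
set.seed(1)
x <- rnorm(1e5)
y <- -x
ks.test(x, y)$p.value   # no evidence that the two distributions differ
mean(x == y)            # 0: the two variables are never equal in the sample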
25,672 | I recently read a post from R-Bloggers, that linked to this blog post from John Myles White about a new language called Julia . Julia takes advantage of a just-in-time compiler that gives it wicked fast run times and puts it on the same order of magnitude of speed as C/C++ (the same order , not equally fast). Furthermore, it uses the orthodox looping mechanisms that those of us who started programming on traditional languages are familiar with, instead of R's apply statements and vector operations. R is not going away by any means, even with such awesome timings from Julia. It has extensive support in industry, and numerous wonderful packages to do just about anything. My interests are Bayesian in nature, where vectorizing is often not possible. Certainly serial tasks must be done using loops and involve heavy computation at each iteration. R can be very slow at these serial looping tasks, and C/++ is not a walk in the park to write. Julia seems like a great alternative to writing in C/++, but it's in its infancy, and lacks a lot of the functionality I love about R. It would only make sense to learn Julia as a computational statistics workbench if it garners enough support from the statistics community and people start writing useful packages for it. My questions follow: What features does Julia need to have in order to have the allure that made R the de facto language of statistics? What are the advantages and disadvantages of learning Julia to do computationally-heavy tasks, versus learning a low-level language like C/++? | I think the key will be whether or not libraries start being developed for Julia. It's all well and good to see toy examples (even if they are complicated toys) showing that Julia blows R out of the water at tasks R is bad at. But poorly done loops and hand coded algorithms are not why many of the people I know who use R use R. They use it because for nearly any statistical task under the sun, someone has written R code for it. R is both a programming language and a statistics package - at present Julia is only the former. I think its possible to get there, but there are much more established languages (Python) that still struggle with being usable statistical toolkits. | {
"source": [
"https://stats.stackexchange.com/questions/25672",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1118/"
]
} |
25,690 | I am familiar with using multiple linear regressions to create models of various variables. However, I was curious if regression tests are ever used to do any sort of basic hypothesis testing. If so, what would those scenarios/hypotheses look like? | Here is a simple example. I don't know if you are familiar with R, but hopefully the code is sufficiently self-explanatory. set.seed(9) # this makes the example reproducible
N = 36
# the following generates 3 variables:
x1 = rep(seq(from=11, to=13), each=12)
x2 = rep(rep(seq(from=90, to=150, by=20), each=3 ), times=3)
x3 = rep(seq(from=6, to=18, by=6 ), times=12)
cbind(x1, x2, x3)[1:7,] # 1st 7 cases, just to see the pattern
x1 x2 x3
[1,] 11 90 6
[2,] 11 90 12
[3,] 11 90 18
[4,] 11 110 6
[5,] 11 110 12
[6,] 11 110 18
[7,] 11 130 6
# the following is the true data generating process, note that y is a function of
# x1 & x2, but not x3, note also that x1 is designed above w/ a restricted range,
# & that x2 tends to have less influence on the response variable than x1:
y = 15 + 2*x1 + .2*x2 + rnorm(N, mean=0, sd=10)
reg.Model = lm(y~x1+x2+x3) # fits a regression model to these data Now, let's see what this looks like: . . .
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.76232 27.18170 -0.065 0.94871
x1 3.11683 2.09795 1.486 0.14716
x2 0.21214 0.07661 2.769 0.00927 **
x3 0.17748 0.34966 0.508 0.61524
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
. . .
F-statistic: 3.378 on 3 and 32 DF, p-value: 0.03016 We can focus on the "Coefficients" section of the output. Each parameter estimated by the model gets its own row. The actual estimate itself is listed in the first column. The second column lists the Standard Errors of the estimates, that is, an estimate of how much estimates would 'bounce around' from sample to sample, if we were to repeat this process over and over and over again. More specifically, it is an estimate of the standard deviation of the sampling distribution of the estimate. If we divide each parameter estimate by its SE, we get a t-score , which is listed in the third column; this is used for hypothesis testing, specifically to test whether the parameter estimate is 'significantly' different from 0. The last column is the p-value associated with that t-score. It is the probability of finding an estimated value that far or further from 0, if the null hypothesis were true. Note that if the null hypothesis is not true, it is not clear that this value is telling us anything meaningful at all. If we look back and forth between the Coefficients table and the true data generating process above, we can see a few interesting things. The intercept is estimated to be -1.8 and its SE is 27, whereas the true value is 15. Because the associated p-value is .95, it would not be considered 'significantly different' from 0 (a type II error ), but it is nonetheless within one SE of the true value. There is thus nothing terribly extreme about this estimate from the perspective of the true value and the amount it ought to fluctuate; we simply have insufficient power to differentiate it from 0. The same story holds, more or less, for x1 . Data analysts would typically say that it is not even 'marginally significant' because its p-value is >.10; however, this is another type II error. The estimate for x2 is quite accurate $.21214\approx.2$, and the p-value is 'highly significant', a correct decision. x3 also could not be differentiated from 0, p=.62, another correct decision (x3 does not show up in the true data generating process above). Interestingly, the p-value is greater than that for x1 , but less than that for the intercept, both of which are type II errors. Finally, if we look below the Coefficients table we see the F-value for the model, which is a simultaneous test. This test checks to see if the model as a whole predicts the response variable better than chance alone. Another way to say this is to ask whether or not all the estimates should be considered unable to be differentiated from 0. The results of this test suggest that at least some of the parameter estimates are not equal to 0, another correct decision. Since there are 4 tests above, we would have no protection from the problem of multiple comparisons without this. (Bear in mind that because p-values are random variables--whether something is significant would vary from experiment to experiment, if the experiment were re-run--it is possible for these to be inconsistent with each other. This is discussed on CV here: Significance of coefficients in multiple regression: significant t-test vs. non-significant F-statistic , and the opposite situation here: How can a regression be significant yet all predictors be non-significant , & here: F and t statistics in a regression .) Perhaps curiously, there are no type I errors in this example. At any rate, all 5 of the tests discussed in this paragraph are hypothesis tests.
From your comment, I gather you may also wonder about how to determine if one explanatory variable is more important than another. This is a very common question, but is quite tricky. Imagine wanting to predict the potential for success in a sport based on an athlete's height and weight, and wondering which is more important. A common strategy is to look to see which estimated coefficient is larger. However, these estimates are specific to the units that were used: for example, the coefficient for weight will change depending on whether pounds or kilograms are used. In addition, it is not remotely clear how to equate / compare pounds and inches, or kilograms and centimeters. One strategy people employ is to standardize (i.e., turn into z-scores) their data first. Then these dimensions are in common units (viz., standard deviations), and the coefficients are similar to r-scores . Moreover, it is possible to test if one r-score is larger than another . Unfortunately, this does not get you out of the woods; unless the true r is exactly 0, the estimated r is driven in large part by the range of covariate values that are used. (I don't know how easy it will be to recognize, but @whuber's excellent answer here: Is $R^2$ useful or dangerous , illustrates this point; to see it, just think about how $r=\sqrt{r^2}$.) Thus, the best that can ever be said is that variability in one explanatory variable within a specified range is more important to determining the level of the response than variability in another explanatory variable within another specified range. | {
"source": [
"https://stats.stackexchange.com/questions/25690",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8590/"
]
} |
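A brief sketch of the standardization idea from the closing paragraph, continuing with the simulated x1, x2, x3, and y from above; scale() converts each variable to z-scores so the slopes are expressed in standard-deviation units:
std.Model <- lm(scale(y) ~ scale(x1) + scale(x2) + scale(x3))
round(coef(summary(std.Model)), 3)   # standardized coefficients, comparable across predictors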
25,713 | It is admission season for graduate schools. I (and many students like me) am now trying to decide which statistics program to pick. What are some things those of you who work with statistics suggest we consider about masters programs in statistics? Are there common pitfalls or mistakes students make (perhaps with regard to school reputation)? For employment, should we look to focus on applied statistics or a mix of applied and theoretical statistics? Edit: Here is some additional information about my personal situation: All of the programs I am now considering are in the United States. Some focus on the more applied side and give masters degrees in "applied statistics" while others have more theoretical coursework and grant degrees in "statistics". I'm personally not that intent on working in one industry over another. I have some programming background and know the tech industry a little better than, say, the genomics or bioinformatics industry. However, I'm primarily looking for a career with interesting problems. Edit : Tried to make the question more generally applicable. | Here is a somewhat blunt set of general thoughts and recommendations
on masters programs in statistics. I don't intend for them to be
polemic, though some of them may sound like that. I am going to assume that you are interested in a terminal masters
degree to later go into industry and are not interested in
potentially pursuing a doctorate. Please do not take this reply as
authoritative, though. Below are several points of advice from my own experiences. I've
ordered them very roughly from what I think is most important to
least. As you choose a program, you might weigh each of them against
one another taking some of the points below into account. Try to make the best choice for you personally . There are very
many factors involved in such a decision: geography, personal
relationships, job and networking opportunities, coursework,
costs of education and living, etc. The most important thing is
to weigh each of these yourself and try to use your own best
judgment. You are the one that ultimately lives with the
consequences of your choice, both positive and negative, and you are the only one in a position to appraise your whole
situation. Act accordingly. Learn to collaborate and manage your time . You may not believe
me, but an employer will very likely care more about your
personality, ability to collaborate with others and ability to
work efficiently than they will care about your raw technical
skills. Effective communication is crucial in statistics,
especially when communicating with nonstatisticians. Knowing how
to manage a complex project and make steady progress is very
important. Take advantage of structured statistical-consulting opportunities, if they exist, at your chosen institution. Learn a cognate area . The greatest weakness I see in many
masters and PhD graduates in statistics, both in industry and
in academia, is that they often have very little subject-matter
knowledge. The upshot is that sometimes "standard" statistical
analyses get used due to a lack of understanding of the underlying
mechanisms of the problem they are trying to analyze. Developing
some expertise in a cognate area can, therefore, be very
enriching both statistically and professionally. But, the most
important aspect of this is the learning itself: Realizing that
incorporating subject matter knowledge can be vital to
correctly analyzing a problem. Being competent in the vocabulary
and basic knowledge can also aid greatly in communication and will
improve the perception that your nonstatistician colleagues have
of you. Learn to work with (big) data . Data sets in virtually every
field that uses statistics have been growing tremendously in size
over the last 20 years. In an industrial setting, you will likely
spend more time manipulating data than you will analyzing them. Learning good data-management procedures, sanity checking,
etc. is crucial to valid analysis. The more efficient you become
at it, the more time you'll spend doing the "fun" stuff. This
is something that is very heavily underemphasized and
underappreciated in academic programs. Luckily, there are now
some bigger data sets available to the academic community that
one can play with. If you can't do this within the program
itself, spend some time doing so outside of it. Learn linear regression and the associated applied linear algebra
very, very well . It is surprising how many masters and PhD
graduates obtain their degrees (from "top" programs!), but
can't answer basic questions on linear regression or how it
works. Having this material down cold will serve you incredibly
well. It is important in its own right and is the gateway to
many, many more advanced statistical and machine-learning
techniques. If possible, do a masters report or thesis . The masters
programs associated with some of the top U.S. statistics departments
(usually gauged more on their doctorate programs) seem to have
moved away from incorporating a report or a thesis. The fact of
the matter is that a purely course-based program usually deprives
the student of developing any real depth of knowledge in a
particular area. The area itself is not so important, in my view,
but the experience is. The persistence, time-management,
collaboration with faculty, etc. required to produce a masters
report or thesis can pay off greatly when transitioning to
industry. Even if a program doesn't advertise one, if you're
otherwise interested in it, send an email to the admissions chair
and ask about the possibility of a customized program that allows
for it. Take the most challenging coursework you can manage . While the
most important thing is to understand the core material very,
very well, you should also use your time and money wisely by
challenging yourself as much as possible. The particular topic
matter you choose to learn may appear to be fairly "useless",
but getting some contact with the literature and challenging
yourself to learn something new and difficult will make it easier
when you have to do so later in industry. For example, learning
some of the theory behind classical statistics turns out to be
fairly useless in and of itself for the daily work of many
industrial statisticians, but the concepts conveyed are extremely useful and provide continual guidance. It also will
make all the other statistical methods you come into contact with
seem less mysterious. A program's reputation only matters for your first job . Way too
much emphasis is put on a school's or program's reputation.
Unfortunately, this is a time- and energy-saving heuristic for
human-resource managers. Be aware that programs are judged much
more by their research and doctoral programs than their masters
ones. In many such top departments, the M.S. students often end up
feeling a bit like second-class citizens since most of the
resources are expended on the doctoral programs. One of the brightest young statistical
collaborators I've worked with has a doctorate from a small
foreign university you've probably never heard of. People can get
a wonderful education (sometimes a much better one, especially at
the undergraduate and masters level!) at "no-name"
institutions than at "top" programs. They're almost guaranteed
to get more interaction with core faculty at the former. The name of the school at the top of your resume is likely to
have a role in getting you in the door for your first job and
people will care more about where your most advanced degree came
from than where any others did. After that first job, people will care substantially more about what
experience you bring to the table. Finding a school where lots of
interesting job opportunities come to you through career fairs,
circulated emails, etc., can have a big payoff and this happens
more at top programs. A personal remark : I personally have a preference for somewhat
more theoretical programs that still allow some contact with data
and a smattering of applied courses. The fact of the matter is that
you're simply not going to become a good applied statistician by
obtaining a masters degree. That comes only with (much more) time
and experience in struggling with challenging problems and analyses
on a daily basis. | {
"source": [
"https://stats.stackexchange.com/questions/25713",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10312/"
]
} |
25,804 | I am fitting an lm() model to a data set that includes indicators for the financial quarter (Q1, Q2, Q3, making Q4 a default). Using lm(Y~., data = data ) I get an NA as the coefficient for Q3, and a warning that one variable was excluded because of singularities. Do I need to add a Q4 column? | NA as a coefficient in a regression indicates that the variable in question is linearly related to the other variables. In your case, this means that $Q3 = a \times Q1 + b \times Q2 + c$ for some $a, b, c$. If this is the case, then there's no unique solution to the regression without dropping one of the variables. Adding $Q4$ is only going to make matters worse. | {
"source": [
"https://stats.stackexchange.com/questions/25804",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10346/"
]
} |
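A tiny made-up example of the situation described above: four quarterly dummies always sum to one and are therefore collinear with the intercept, so lm() returns NA for one of them; supplying the quarter as a single factor lets R drop a reference level automatically:
set.seed(1)
quarter <- factor(rep(c("Q1", "Q2", "Q3", "Q4"), 25))
y <- rnorm(100)
dummies <- model.matrix(~ quarter - 1)   # Q1..Q4 indicator columns, rows sum to 1
coef(lm(y ~ dummies))                    # one coefficient comes back NA
coef(lm(y ~ quarter))                    # reference level absorbed into the intercept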
25,811 | I am a graduate student in economics who recently converted to R from other very well-known statistical packages (I was using SPSS mainly). My little problem at the moment is that I am the only R user in my class. My classmates use Stata and Gauss and one of my professors even said that R is perfect for engineering, but not for economics. He said that many packages are built by people who know a lot about programming, but not much about economics and therefore are not reliable. He also mentioned the fact that since no money is actually involved in building an R package, there is therefore no incentive to do it correctly (unlike in Stata for example) and that he used R for a time and got some "ridiculous" results in his attempts to estimate some stuff. Moreover, he complained about the random number generator in R, which he said was "messy". I've been using R for just a little more than a month and I must say I have fallen in love with it. All this stuff I am hearing from my professor is just discouraging me. So my question is: "Is R reliable for the field of economics?". | Let me share a contrasting viewpoint. I'm an economist. I was trained in econometrics using SAS. I work in financial services and just tonight I updated R-based models which we will use tomorrow to put millions of dollars at risk. Your professor is just plain wrong. But the mistake he's making is VERY common and is worth discussing. What your professor seems to be doing is commingling the idea of the R software (the GNU implementation of the S language) vs. packages (or other code) implemented in R. I can write crap implementations of a linear regression using SAS IML. As a matter of fact, I've done that very thing. Does that mean SAS is crap? Of course not. SAS is crap because their pricing is non-transparent, ridiculously expensive, and their in-house consultants overpromise, underdeliver, and charge a premium for the pleasure. But I digress... The openness of R is a double-edged sword: Openness allows any Tom, Dick, or Harry to write a crap implementation of any algorithm they think up while smoking pot in the basement of the economics building. The same openness allows practicing economists to share code openly and improve on each other's code. The licensing rules with R mean that I can write parallelization code for running R in parallel on Amazon's cloud and not have to worry about licensing fees for a 30-node cluster. This is a HUGE win for simulation-based analysis which is a big part of what I do. Your professor's comment that "many packages are built by people who know a lot about programming, but not much about economics" is, no doubt, correct. But there are 3716 packages on CRAN. You can be damn sure many of them were not written by economists. In the same way that you can be sure many of the 105,089 modules in CPAN were not written by economists. Choose your software carefully. Make sure you understand and have tested the tools you're using. Also make sure you understand the true economics behind whichever implementation you choose. Getting locked into a closed software solution is more costly than just the licensing fees. | {
"source": [
"https://stats.stackexchange.com/questions/25811",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/43908/"
]
} |
25,827 | I'm trying to come up with a metric for measuring non-uniformity of a distribution for an experiment I'm running. I have a random variable that should be uniformly distributed in most cases, and I'd like to be able to identify (and possibly measure the degree of) examples of data sets where the variable is not uniformly distributed within some margin. An example of three data series each with 10 measurements representing frequency of the occurrence of something I'm measuring might be something like this: a: [10% 11% 10% 9% 9% 11% 10% 10% 12% 8%]
b: [10% 10% 10% 8% 10% 10% 9% 9% 12% 8%]
c: [ 3% 2% 60% 2% 3% 7% 6% 5% 5% 7%] <-- non-uniform
d: [98% 97% 99% 98% 98% 96% 99% 96% 99% 98%] I'd like to be able to distinguish distributions like c from those like a and b, and measure c's deviation from a uniform distribution. Equivalently, if there's a metric for how uniform a distribution is (std. deviation close to zero?), I can perhaps use that to distinguish ones with high variance. However, my data may just have one or two outliers, like the c example above, and I am not sure if that will be easily detectable that way. I can hack something to do this in software, but am looking for statistical methods/approaches to justify this formally. I took a class years ago, but stats is not my area. This seems like something that should have a well-known approach. Sorry if any of this is completely bone-headed. Thanks in advance! | If you have not only the frequencies but the actual counts, you can use a $\chi^2$ goodness-of-fit test for each data series. In particular, you wish to use the test for a discrete uniform distribution . This gives you a good test , which allows you to find out which data series are likely not to have been generated by a uniform distribution, but does not provide a measure of uniformity. There are other possible approaches, such as computing the entropy of each series - the uniform distribution maximizes the entropy, so if the entropy is suspiciously low you would conclude that you probably don't have a uniform distribution. That works as a measure of uniformity in some sense. Another suggestion would be to use a measure like the Kullback-Leibler divergence , which measures how much one distribution diverges from another. | {
"source": [
"https://stats.stackexchange.com/questions/25827",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10358/"
]
} |
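A hedged sketch of the suggestions above, treating the percentages of series c as if they were raw counts (an assumption made only for illustration): chisq.test() with its defaults tests against a discrete uniform distribution, and the entropy gives a rough uniformity score:
counts <- c(3, 2, 60, 2, 3, 7, 6, 5, 5, 7)
chisq.test(counts)          # H0: all 10 cells are equally likely
p <- counts / sum(counts)
-sum(p * log(p))            # entropy; log(10) is the maximum, reached only by a uniform series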
25,848 | I have a monthly average for a value and a standard deviation corresponding to that average. I am now computing the annual average as the sum of the monthly averages; how can I represent the standard deviation for the summed average? For example, considering output from a wind farm: Month MWh StdDev
January 927 333
February 1234 250
March 1032 301
April 876 204
May 865 165
June 750 263
July 780 280
August 690 98
September 730 76
October 821 240
November 803 178
December 850 250 We can say that in the average year the wind farm produces 10,358 MWh, but what is the standard deviation corresponding to this figure ? | Short answer: You average the variances ; then you can take square root to get the average standard deviation . Example Month MWh StdDev Variance
========== ===== ====== ========
January 927 333 110889
February 1234 250 62500
March 1032 301 90601
April 876 204 41616
May 865 165 27225
June 750 263 69169
July 780 280 78400
August 690 98 9604
September 730 76 5776
October 821 240 57600
November 803 178 31684
December 850 250 62500
=========== ===== ======= =======
Total 10358 647564
÷12 863 232 53964 And then the average standard deviation is sqrt(53,964) = 232 From Sum of normally distributed random variables : If $X$ and $Y$ are independent random variables that are normally distributed (and therefore also jointly so), then their sum is also normally distributed ...the sum of two independent normally distributed random variables is normal, with its mean being the sum of the two means, and its variance being the sum of the two variances And from Wolfram Alpha's Normal Sum Distribution : Amazingly, the distribution of a sum of two normally distributed independent variates $X$ and $Y$ with means and variances $(\mu_X,\sigma_X^2)$ and $(\mu_Y,\sigma_Y^2)$, respectively is another normal distribution $$
P_{X+Y}(u) = \frac{1}{\sqrt{2\pi (\sigma_X^2 + \sigma_Y^2)}}
e^{-[u-(\mu_X+\mu_Y)]^2/[2(\sigma_X^2 + \sigma_Y^2)]}
$$ which has mean $$\mu_{X+Y} = \mu_X+\mu_Y$$ and variance $$ \sigma_{X+Y}^2 = \sigma_X^2 + \sigma_Y^2$$ For your data: sum: 10,358 MWh variance: 647,564 standard deviation: 804.71 ( sqrt(647564) ) So to answer your question: How to 'sum' a standard deviation ? You sum them quadratically: s = sqrt(s1^2 + s2^2 + ... + s12^2) Conceptually you sum the variances, then take the square root to get the standard deviation. Because i was curious, i wanted to know the average monthly mean power, and its standard deviation . Through induction, we need 12 normal distributions which: sum to a mean of 10,358 sum to a variance of 647,564 That would be 12 average monthly distributions of: mean of 10,358/12 = 863.16 variance of 647,564/12 = 53,963.6 standard deviation of sqrt(53963.6) = 232.3 We can check our monthly average distributions by adding them up 12 times, to see that they equal the yearly distribution: Mean: 863.16*12 = 10358 = 10,358 ( correct ) Variance: 53963.6*12 = 647564 = 647,564 ( correct ) Note : i'll leave it to someone with a knowledge of the esoteric Latex math to convert my formula images, and formula code into stackexchange formatted formulas. Edit : I moved the short, to the point, answer up top. Because i needed to do this again today, but wanted to double-check that i average the variances . | {
"source": [
"https://stats.stackexchange.com/questions/25848",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3579/"
]
} |
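The arithmetic in the answer above (question 25,848) can be reproduced in a few lines of R; the monthly figures are copied from the question, and independence between months is assumed, as in the answer.
mwh <- c(927, 1234, 1032, 876, 865, 750, 780, 690, 730, 821, 803, 850)
s   <- c(333, 250, 301, 204, 165, 263, 280, 98, 76, 240, 178, 250)
sum(mwh)               # 10358 MWh, the annual total
sqrt(sum(s^2))         # ~804.7, standard deviation of the annual total
sqrt(mean(s^2))        # ~232.3, the 'average' monthly standard deviation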
25,889 | I have a few tens of thousands of observations that are in a time series but grouped by locations. For example: location date observationA observationB
---------------------------------------
A 1-2010 22 12
A 2-2010 26 15
A 3-2010 45 16
A 4-2010 46 27
B 1-2010 167 48
B 2-2010 134 56
B 3-2010 201 53
B 4-2010 207 42 I want to see if month x 's observationA has any linear relationship with month x +1's observationB . I did some research and found a zoo function, but it doesn't appear to have a way to limit the lag by group. So if I used zoo and lagged observationB by 1 row, I'd end up with location A's last observationB as location B's first observationB . I'd rather have the first observationB of any location be NA or some other obvious value to indicate "don't touch this row". I guess what I'm getting at is whether there's a built-in way of doing this in R? If not, I imagine I can get this done with a standard loop construct. Or do I even need to manipulate the data? | There are several ways to get a lagged variable within a group. First of all, sort the data so that within each group time is in increasing order. Let us create a sample data.frame: > set.seed(13)
> dt <- data.frame(location = rep(letters[1:2], each = 4), time = rep(1:4, 2), var = rnorm(8))
> dt
location time var
1 a 1 0.5543269
2 a 2 -0.2802719
3 a 3 1.7751634
4 a 4 0.1873201
5 b 1 1.1425261
6 b 2 0.4155261
7 b 3 1.2295066
8 b 4 0.2366797 Define our lag function: lg <- function(x)c(NA, x[1:(length(x)-1)]) Then the lag of variable within group can be calculated using tapply : > unlist(tapply(dt$var, dt$location, lg))
a1 a2 a3 a4 b1 b2 b3 b4
NA 0.5543269 -0.2802719 1.7751634 NA 1.1425261 0.4155261 1.2295066 Using ddply from package plyr : > ddply(dt, ~location, transform, lvar = lg(var))
location time var lvar
1 a 1 -0.1307015 NA
2 a 2 -0.6365957 -0.1307015
3 a 3 -0.6417577 -0.6365957
4 a 4 -1.5191950 -0.6417577
5 b 1 -1.6281638 NA
6 b 2 0.8748671 -1.6281638
7 b 3 -1.3343222 0.8748671
8 b 4 1.5431753 -1.3343222 Speedier version using data.table from package data.table > ddt <- data.table(dt)
> ddt[,lvar := lg(var), by = c("location")]
location time var lvar
[1,] a 1 -0.1307015 NA
[2,] a 2 -0.6365957 -0.1307015
[3,] a 3 -0.6417577 -0.6365957
[4,] a 4 -1.5191950 -0.6417577
[5,] b 1 -1.6281638 NA
[6,] b 2 0.8748671 -1.6281638
[7,] b 3 -1.3343222 0.8748671
[8,] b 4 1.5431753 -1.3343222 Using lag function from package plm > pdt <- pdata.frame(dt)
> lag(pdt$var)
a-1 a-2 a-3 a-4 b-1 b-2 b-3 b-4
NA 0.5543269 -0.2802719 1.7751634 NA 1.1425261 0.4155261 1.2295066 Using lag function from package dplyr > dt %>% group_by(location) %>% mutate(lvar = lag(var))
Source: local data frame [8 x 4]
Groups: location
location time var lvar
1 a 1 0.5543269 NA
2 a 2 -0.2802719 0.5543269
3 a 3 1.7751634 -0.2802719
4 a 4 0.1873201 1.7751634
5 b 1 1.1425261 NA
6 b 2 0.4155261 1.1425261
7 b 3 1.2295066 0.4155261
8 b 4 0.2366797 1.2295066 Last two approaches require conversion from data.frame to another object, although then you do not need to worry about sorting. My personal preference is the last one, which was not available when writing the answer initially. Update: Changed the data.table code to reflect the developments of the data.table package, pointed out by @Hibernating. Update 2: Added dplyr example. | {
"source": [
"https://stats.stackexchange.com/questions/25889",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9397/"
]
} |
25,894 | I have constructed a social capital index using the PCA technique. This index comprises both positive and negative values. I want to transform/convert this index to a 0-100 scale to make it easier to interpret. Please suggest the easiest way to do so. | Any variable (univariate distribution) $v$ with observed $min_{old}$ and $max_{old}$ values (or these could be preset potential bounds for the values) can be rescaled to the range $min_{new}$ to $max_{new}$ by the following formula: $\frac{max_{new}-min_{new}}{max_{old}-min_{old}}\cdot (v-max_{old})+max_{new}$ or, equivalently, $\frac{max_{new}-min_{new}}{max_{old}-min_{old}}\cdot (v-min_{old})+min_{new}$. | {
"source": [
"https://stats.stackexchange.com/questions/25894",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10398/"
]
} |
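The formula in the answer above (question 25,894) translates directly into a small R helper; the function name and the 0-100 defaults are illustrative choices only.
rescale <- function(v, new_min = 0, new_max = 100) {
  old_min <- min(v, na.rm = TRUE)
  old_max <- max(v, na.rm = TRUE)
  (new_max - new_min) / (old_max - old_min) * (v - old_min) + new_min
}
x <- c(-1.7, -0.3, 0.4, 2.1)   # e.g. PCA-based index scores
rescale(x)                     # maps onto 0 ... 100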
25,956 | What formula is used in the standard deviation function sd in R? | As pointed out by @Gschneider, it computes the sample standard deviation $$\sqrt{\frac{\sum\limits_{i=1}^{n} (x_i - \bar{x})^2}{n-1}}$$ which you can easily check as follows: > #generate a random vector
> x <- rnorm(n=5, mean=3, sd=1.5)
> n <- length(x)
>
> #sd in R
> sd1 <- sd(x)
>
> #self-written sd
> sd2 <- sqrt(sum((x - mean(x))^2) / (n - 1))
>
> #comparison
> c(sd1, sd2) #:-)
[1] 0.6054196 0.6054196 | {
"source": [
"https://stats.stackexchange.com/questions/25956",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8627/"
]
} |
26,024 | This is a basic question on Box-Jenkins MA models. As I understand, an MA model is basically a linear regression of time-series values $Y$ against previous error terms $e_t,..., e_{t-n}$. That is, the observation $Y$ is first regressed against its previous values $Y_{t-1}, ..., Y_{t-n}$ and then one or more $Y - \hat{Y}$ values are used as the error terms for the MA model. But how are the error terms calculated in an ARIMA(0, 0, 2) model? If the MA model is used without an autoregressive part and thus no estimated value, how can I possibly have an error term? | MA Model Estimation: Let us assume a series with 100 time points, and say this is characterized by MA(1) model with no intercept. Then the model is given by $$y_t=\varepsilon_t-\theta\varepsilon_{t-1},\quad t=1,2,\cdots,100\quad (1)$$ The error term here is not observed. So to obtain this, Box et al. Time Series Analysis: Forecasting and Control (3rd Edition) , page 228 , suggest that the error term is computed recursively by, $$\varepsilon_t=y_t+\theta\varepsilon_{t-1}$$ So the error term for $t=1$ is,
$$\varepsilon_{1}=y_{1}+\theta\varepsilon_{0}$$
Now we cannot compute this without knowing the value of $\theta$. So to obtain it, we first need an initial (preliminary) estimate of the model; Box et al., Section 6.3.2, page 202 of the same book, state that: It has been shown that the first $q$ autocorrelations of an MA($q$) process
are nonzero and can be written in terms of the parameters of the model
as
$$\rho_k=\displaystyle\frac{-\theta_{k}+\theta_1\theta_{k+1}+\theta_2\theta_{k+2}+\cdots+\theta_{q-k}\theta_q}{1+\theta_1^2+\theta_2^2+\cdots+\theta_q^2}\quad k=1,2,\cdots, q$$ The expression above for $\rho_1,\rho_2,\cdots,\rho_q$
in terms of $\theta_1,\theta_2,\cdots,\theta_q$ supplies $q$ equations
in $q$ unknowns. Preliminary estimates of the $\theta$s can be
obtained by substituting the estimates $r_k$ for $\rho_k$ in the above
equation. Note that $r_k$ is the estimated autocorrelation. There is more discussion in Section 6.3 - Initial Estimates for the Parameters ; please read it. Now, assume we obtain the initial estimate $\theta=0.5$. Then,
$$\varepsilon_{1}=y_{1}+0.5\varepsilon_{0}$$
Now, another problem is that we don't have a value for $\varepsilon_0$, because $t$ starts at 1, and so we cannot compute $\varepsilon_1$. Luckily, there are two methods to obtain it: (1) Conditional Likelihood and (2) Unconditional Likelihood. According to Box et al. Section 7.1.3 page 227 , the value of $\varepsilon_0$ can be set to zero as an approximation if $n$ is moderate or large; this method is Conditional Likelihood. Otherwise, Unconditional Likelihood is used, wherein the value of $\varepsilon_0$ is obtained by back-forecasting; Box et al. recommend this method. Read more about back-forecasting in Section 7.1.4 page 231 . After obtaining the initial estimates and the value of $\varepsilon_0$, we can finally proceed with the recursive calculation of the error terms. The final stage is to estimate the parameter of model $(1)$; remember, this is no longer the preliminary estimate. In estimating the parameter $\theta$, I use a nonlinear estimation procedure, particularly the Levenberg-Marquardt algorithm, since MA models are nonlinear in their parameters. Overall, I would highly recommend reading Box et al. Time Series Analysis: Forecasting and Control (3rd Edition) . | {
"source": [
"https://stats.stackexchange.com/questions/26024",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7795/"
]
} |
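In practice, the whole procedure described in the answer above (question 26,024), preliminary estimates, conditioning or back-forecasting, and nonlinear optimization, is automated by R's arima(). A small sketch on a simulated MA(2) series; note that R parameterizes the MA part with plus signs, the opposite of the Box-Jenkins sign convention used in the answer, and the simulated coefficients are arbitrary.
set.seed(1)
y <- arima.sim(model = list(ma = c(0.6, -0.3)), n = 200)
# ARIMA(0,0,2): conditional sum of squares for start values, then full ML
fit <- arima(y, order = c(0, 0, 2), method = "CSS-ML")
fit$coef               # estimated MA coefficients and the mean
head(residuals(fit))   # the recursively computed error terms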
26,088 | I recently used bootstrapping to estimate confidence intervals for a project. Someone who doesn't know much about statistics recently asked me to explain why bootstrapping works, i.e., why is it that resampling the same sample over and over gives good results. I realized that although I'd spent a lot of time understanding how to use it, I don't really understand why bootstrapping works. Specifically: if we are resampling from our sample, how is it that we are learning something about the population rather than only about the sample? There seems to be a leap there which is somewhat counter-intuitive. I have found a few answers to this question here which I half-understand. Particularly this one . I am a "consumer" of statistics, not a statistician, and I work with people who know much less about statistics than I do. So, can someone explain, with a minimum of references to theorems, etc., the basic reasoning behind the bootstrap? That is, if you had to explain it to your neighbor, what would you say? | fwiw the medium length version I usually give goes like this: You want to ask a question of a population but you can't. So you take a sample and ask the question of it instead. Now, how confident you should be that the sample answer is close to the population answer obviously depends on the structure of population. One way you might learn about this is to take samples from the population again and again, ask them the question, and see how variable the sample answers tended to be. Since this isn't possible you can either make some assumptions about the shape of the population, or you can use the information in the sample you actually have to learn about it. Imagine you decide to make assumptions, e.g. that it is Normal, or Bernoulli or some other convenient fiction. Following the previous strategy you could again learn about how much the answer to your question when asked of a sample might vary depending on which particular sample you happened to get by repeatedly generating samples of the same size as the one you have and asking them the same question. That would be straightforward to the extent that you chose computationally convenient assumptions. (Indeed particularly convenient assumptions plus non-trivial math may allow you to bypass the sampling part altogether, but we will deliberately ignore that here.) This seems like a good idea provided you are happy to make the assumptions. Imagine you are not. An alternative is to take the sample you have and sample from it instead. You can do this because the sample you have is also a population, just a very small discrete one; it looks like the histogram of your data. Sampling 'with replacement' is just a convenient way to treat the sample like it's a population and to sample from it in a way that reflects its shape. This is a reasonable thing to do because not only is the sample you have the best, indeed the only information you have about what the population actually looks like, but also because most samples will, if they're randomly chosen, look quite like the population they came from. Consequently it is likely that yours does too. For intuition it is important to think about how you could learn about variability by aggregating sampled information that is generated in various ways and on various assumptions. Completely ignoring the possibility of closed form mathematical solutions is important to get clear about this. | {
"source": [
"https://stats.stackexchange.com/questions/26088",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/52/"
]
} |
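A minimal R illustration of the resampling idea in the answer above (question 26,088): treat the observed sample as the population and resample it with replacement. The simulated data and the percentile-type interval for the mean are just one illustrative choice.
set.seed(42)
x <- rexp(50, rate = 1/10)             # the one sample we actually observed
boot_means <- replicate(10000, mean(sample(x, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))  # percentile bootstrap interval for the mean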
26,176 | In a simple linear model with a single explanatory variable, $\alpha_i = \beta_0 + \beta_1 \delta_i + \epsilon_i$ I find that removing the intercept term improves the fit greatly (value of $R^2$ goes from 0.3 to 0.9). However, the intercept term appears to be statistically significant. With intercept: Call:
lm(formula = alpha ~ delta, data = cf)
Residuals:
Min 1Q Median 3Q Max
-0.72138 -0.15619 -0.03744 0.14189 0.70305
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.48408 0.05397 8.97 <2e-16 ***
delta 0.46112 0.04595 10.04 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.2435 on 218 degrees of freedom
Multiple R-squared: 0.316, Adjusted R-squared: 0.3129
F-statistic: 100.7 on 1 and 218 DF, p-value: < 2.2e-16 Without intercept: Call:
lm(formula = alpha ~ 0 + delta, data = cf)
Residuals:
Min 1Q Median 3Q Max
-0.92474 -0.15021 0.05114 0.21078 0.85480
Coefficients:
Estimate Std. Error t value Pr(>|t|)
delta 0.85374 0.01632 52.33 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.2842 on 219 degrees of freedom
Multiple R-squared: 0.9259, Adjusted R-squared: 0.9256
F-statistic: 2738 on 1 and 219 DF, p-value: < 2.2e-16 How would you interpret these results? Should an intercept term be included in the model or not? Edit Here's the residual sums of squares: RSS(with intercept) = 12.92305
RSS(without intercept) = 17.69277 | First of all, we should understand what the R software is doing when no intercept
is included in the model. Recall that the usual computation of $R^2$
when an intercept is present is
$$
R^2 = \frac{\sum_i (\hat y_i - \bar y)^2}{\sum_i (y_i - \bar
y)^2} = 1 - \frac{\sum_i (y_i - \hat y_i)^2}{\sum_i (y_i - \bar
y)^2} \>.
$$
The first equality only occurs because of the inclusion of the
intercept in the model even though this is probably the more popular
of the two ways of writing it. The second equality actually provides
the more general interpretation! This point is also address in this
related question . But, what happens if there is no intercept in the model? Well, in that
case, R ( silently! ) uses the modified form
$$
R_0^2 = \frac{\sum_i \hat y_i^2}{\sum_i y_i^2} = 1 - \frac{\sum_i (y_i - \hat y_i)^2}{\sum_i y_i^2} \>.
$$ It helps to recall what $R^2$ is trying to measure. In the former
case, it is comparing your current model to the reference model that only includes an intercept (i.e., constant term). In the
second case, there is no intercept, so it makes little sense to
compare it to such a model. So, instead, $R_0^2$ is computed, which
implicitly uses a reference model corresponding to noise only . In what follows below, I focus on the second expression for both $R^2$ and $R_0^2$ since that expression generalizes to other contexts and it's generally more natural to think about things in terms of residuals. But, how are they different, and when? Let's take a brief digression into some linear algebra and see if we
can figure out what is going on. First of all, let's call the fitted
values from the model with intercept $\newcommand{\yhat}{\hat
{\mathbf y}}\newcommand{\ytilde}{\tilde {\mathbf y}}\yhat$ and the
fitted values from the model without intercept $\ytilde$. We can rewrite
the expressions for $R^2$ and $R_0^2$ as
$$\newcommand{\y}{\mathbf y}\newcommand{\one}{\mathbf 1}
R^2 = 1 - \frac{\|\y - \yhat\|_2^2}{\|\y - \bar y \one\|_2^2} \>,
$$
and
$$
R_0^2 = 1 - \frac{\|\y - \ytilde\|_2^2}{\|\y\|_2^2} \>,
$$
respectively. Now, since $\|\y\|_2^2 = \|\y - \bar y \one\|_2^2 + n \bar y^2$, then $R_0^2 > R^2$ if and only if
$$
\frac{\|\y - \ytilde\|_2^2}{\|\y - \yhat\|_2^2} < 1 + \frac{\bar
y^2}{\frac{1}{n}\|\y - \bar y \one\|_2^2} \> .
$$ The left-hand side is greater than one since the model corresponding
to $\ytilde$ is nested within that of $\yhat$. The second term on the
right-hand side is the squared-mean of the responses divided by the
mean square error of an intercept-only model. So, the larger the mean of the response relative to the other variation, the more "slack" we have and a greater chance of $R_0^2$ dominating $R^2$. Notice that all the
model-dependent stuff is on the left side and non-model dependent
stuff is on the right. Ok, so how do we make the ratio on the left-hand side small? Recall that
$\newcommand{\P}{\mathbf P}\ytilde = \P_0 \y$ and $\yhat = \P_1 \y$ where $\P_0$ and $\P_1$ are
projection matrices corresponding to subspaces $S_0$ and $S_1$ such
that $S_0 \subset S_1$. So, in order for the ratio to be close to one, we need the subspaces
$S_0$ and $S_1$ to be very similar. Now $S_0$ and $S_1$ differ only by
whether $\one$ is a basis vector or not, so that means that $S_0$
had better be a subspace that already lies very close to $\one$. In essence, that means our predictor had better have a strong mean
offset itself and that this mean offset should dominate the variation
of the predictor. An example Here we try to generate an example with an intercept explicitly in the model and which behaves close to the case in the question. Below is some simple R code to demonstrate. set.seed(.Random.seed[1])
n <- 220
a <- 0.5
b <- 0.5
se <- 0.25
# Make sure x has a strong mean offset
x <- rnorm(n)/3 + a
y <- a + b*x + se*rnorm(x)
int.lm <- lm(y~x)
noint.lm <- lm(y~x+0) # Intercept be gone!
# For comparison to summary(.) output
rsq.int <- cor(y,x)^2
rsq.noint <- 1-mean((y-noint.lm$fit)^2) / mean(y^2) This gives the following output. We begin with the model with intercept. # Include an intercept!
> summary(int.lm)
Call:
lm(formula = y ~ x)
Residuals:
Min 1Q Median 3Q Max
-0.656010 -0.161556 -0.005112 0.178008 0.621790
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.48521 0.02990 16.23 <2e-16 ***
x 0.54239 0.04929 11.00 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.2467 on 218 degrees of freedom
Multiple R-squared: 0.3571, Adjusted R-squared: 0.3541
F-statistic: 121.1 on 1 and 218 DF, p-value: < 2.2e-16 Then, see what happens when we exclude the intercept. # No intercept!
> summary(noint.lm)
Call:
lm(formula = y ~ x + 0)
Residuals:
Min 1Q Median 3Q Max
-0.62108 -0.08006 0.16295 0.38258 1.02485
Coefficients:
Estimate Std. Error t value Pr(>|t|)
x 1.20712 0.04066 29.69 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.3658 on 219 degrees of freedom
Multiple R-squared: 0.801, Adjusted R-squared: 0.8001
F-statistic: 881.5 on 1 and 219 DF, p-value: < 2.2e-16 Below is a plot of the data with the model-with-intercept in red and the model-without-intercept in blue. | {
"source": [
"https://stats.stackexchange.com/questions/26176",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10028/"
]
} |
26,247 | Based on a sample of $n$ survival times, I would like to estimate the probability of surviving time $t$, for some specific $t$, using the Kaplan-Meier estimator. Is it possible to do this in R? Please note that $t$ is not necessarily an event time. | You can use the output of the survfit function from the survival package and give that to stepfun . library(survival)  # provides survfit(), Surv() and the veteran data
km <- survfit(Surv(time, status)~1, data=veteran)
survest <- stepfun(km$time, c(1, km$surv)) Now survest is a function that can be evaluated at any time. > survest(0:100)
[1] 1.0000000 0.9854015 0.9781022 0.9708029 0.9635036 0.9635036 0.9635036
[8] 0.9416058 0.9124088 0.9124088 0.8978102 0.8905109 0.8759124 0.8613139
[15] 0.8613139 0.8467153 0.8394161 0.8394161 0.8175182 0.8029197 0.7883212
[22] 0.7737226 0.7664234 0.7664234 0.7518248 0.7299270 0.7299270 0.7225540
[29] 0.7225540 0.7151810 0.7004350 0.6856890 0.6856890 0.6783160 0.6783160
[36] 0.6709430 0.6635700 0.6635700 0.6635700 0.6635700 0.6635700 0.6635700
[43] 0.6561970 0.6488240 0.6414510 0.6340780 0.6340780 0.6340780 0.6267050
[50] 0.6193320 0.6193320 0.5972130 0.5750940 0.5677210 0.5529750 0.5529750
[57] 0.5456020 0.5456020 0.5456020 0.5382290 0.5382290 0.5308560 0.5308560
[64] 0.5234830 0.5234830 0.5234830 0.5234830 0.5234830 0.5234830 0.5234830
[71] 0.5234830 0.5234830 0.5161100 0.5087370 0.5087370 0.5087370 0.5087370
[78] 0.5087370 0.5087370 0.5087370 0.4939910 0.4939910 0.4866180 0.4866180
[85] 0.4791316 0.4791316 0.4791316 0.4716451 0.4716451 0.4716451 0.4640380
[92] 0.4640380 0.4564308 0.4564308 0.4564308 0.4412164 0.4412164 0.4412164
[99] 0.4412164 0.4257351 0.4179945 | {
"source": [
"https://stats.stackexchange.com/questions/26247",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7064/"
]
} |
26,300 | Correlation does not imply causation, as there could be many explanations for the correlation. But does causation imply correlation? Intuitively, I would think that the presence of causation means there is necessarily some correlation. But my intuition has not always served me well in statistics. Does causation imply correlation? | As many of the answers above have stated, causation does not imply linear correlation . Since a lot of the correlation concepts come from fields that rely heavily on linear statistics, usually correlation is seen as equal to linear correlation. The Wikipedia article is a decent source for this; I really like this image: Look at some of the figures in the bottom row, for instance the parabola-ish shape in the 4th example. This is kind of what happens in @StasK's answer (with a little bit of noise added). Y can be fully caused by X, but if the numeric relationship is non-linear and symmetric (as in the parabola), you can still have a correlation of 0. The word you are looking for is mutual information : this is sort of the general non-linear version of correlation. In that case, your statement would be true: causation implies high mutual information . | {
"source": [
"https://stats.stackexchange.com/questions/26300",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10543/"
]
} |
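A small base-R illustration of the point made in the answer above (question 26,300): y is completely determined by x, yet the linear correlation is essentially zero, while a crude binned plug-in estimate of the mutual information is clearly positive. The 10-by-10 binning is an arbitrary choice for illustration.
set.seed(1)
x <- runif(10000, -1, 1)
y <- x^2                      # y is fully caused by x
cor(x, y)                     # ~0: no *linear* correlation
tab <- table(cut(x, 10), cut(y, 10))
p   <- tab / sum(tab)
px  <- rowSums(p); py <- colSums(p)
sum(p[p > 0] * log(p[p > 0] / outer(px, py)[p > 0]))   # mutual information > 0 (nats)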
26,437 | In the year 2000, Judea Pearl published Causality . What controversies surround this work? What are its major criticisms? | Some authors dislike Pearl's focus on the directed acyclic graph (DAG) as the way in which to view causality. Pearl essentially argues that any causal system can be considered as a non-parametric structural equation model (NPSEM), in which the value of each node is taken as a function of its parents and some individual error term; the error terms between different nodes may in general be correlated, to represent common causes. Cartwright's book Hunting Causes and Using Them , for example, gives an example involving a car engine, which she claims cannot be modelled in the NPSEM framework. Pearl disputes this in his review of Cartwright's book. Others caution that the use of DAGs can be misleading, in that the arrows lend an apparent authority to a chosen model as having causal implications, when this may not be the case at all. See Dawid's Beware of the DAG . For example, the three DAGs $A \rightarrow B \rightarrow C$ , $A \leftarrow B \rightarrow C$ and $A \leftarrow B \leftarrow C$ all induce the same probabilistic model under Pearl's d-separation criterion, which is that A is independent of C given B. They are therefore indistinguishable based upon observational data. However they have quite different causal interpretations , so if we wish to learn about the causal relationships here we would need more than simply observational data, whether that be the results of interventional experiments, prior information about the system, or something else. | {
"source": [
"https://stats.stackexchange.com/questions/26437",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/858/"
]
} |
26,450 | It seems that through various related questions here, there is consensus that the "95%" part of what we call a "95% confidence interval" refers to the fact that if we were to exactly replicate our sampling and CI-computation procedures many times, 95% of thusly computed CIs would contain the population mean. It also seems to be the consensus that this definition does not permit one to conclude from a single 95%CI that there is a 95% chance that the mean falls somewhere within the CI. However, I don't understand how the former doesn't imply the latter insofar as, having imagined many CIs 95% of which contain the population mean, shouldn't our uncertainty (with regards to whether our actually-computed CI contains the population mean or not) force us to use the base-rate of the imagined cases (95%) as our estimate of the probability that our actual case contains the CI? I've seen posts argue along the lines of "the actually-computed CI either contains the population mean or it doesn't, so its probability is either 1 or 0", but this seems to imply a strange definition of probability that is dependent on unknown states (i.e. a friend flips fair coin, hides the result, and I am disallowed from saying there is a 50% chance that it's heads). Surely I'm wrong, but I don't see where my logic has gone awry... | Part of the issue is that the frequentist definition of a probability doesn't allow a nontrivial probability to be applied to the outcome of a particular experiment, but only to some fictitious population of experiments from which this particular experiment can be considered a sample. The definition of a CI is confusing as it is a statement about this (usually) fictitious population of experiments, rather than about the particular data collected in the instance at hand. So part of the issue is one of the definition of a probability: The idea of the true value lying within a particular interval with probability 95% is inconsistent with a frequentist framework. Another aspect of the issue is that the calculation of the frequentist confidence doesn't use all of the information contained in the particular sample relevant to bounding the true value of the statistic. My question "Are there any examples where Bayesian credible intervals are obviously inferior to frequentist confidence intervals" discusses a paper by Edwin Jaynes which has some really good examples that really highlight the difference between confidence intervals and credible intervals. One that is particularly relevant to this discussion is Example 5, which discusses the difference between a credible and a confidence interval for estimating the parameter of a truncated exponential distribution (for a problem in industrial quality control). In the example he gives, there is enough information in the sample to be certain that the true value of the parameter lies nowhere in a properly constructed 90% confidence interval! This may seem shocking to some, but the reason for this result is that confidence intervals and credible intervals are answers to two different questions, from two different interpretations of probability. The confidence interval is the answer to the request: "Give me an interval that will bracket the true value of the parameter in $100p$% of the instances of an experiment that is repeated a large number of times." The credible interval is an answer to the request: "Give me an interval that brackets the true value with probability $p$ given the particular sample I've actually observed. 
" To be able to answer the latter request, we must first adopt either (a) a new concept of the data generating process or (b) a different concept of the definition of probability itself. The main reason that any particular 95% confidence interval does not imply a 95% chance of containing the mean is because the confidence interval is an answer to a different question, so it is only the right answer when the answer to the two questions happens to have the same numerical solution. In short, credible and confidence intervals answer different questions from different perspectives; both are useful, but you need to choose the right interval for the question you actually want to ask. If you want an interval that admits an interpretation of a 95% (posterior) probability of containing the true value, then choose a credible interval (and, with it, the attendant conceptualization of probability), not a confidence interval. The thing you ought not to do is to adopt a different definition of probability in the interpretation than that used in the analysis. Thanks to @cardinal for his refinements! Here is a concrete example, from David MaKay's excellent book "Information Theory, Inference and Learning Algorithms" (page 464): Let the parameter of interest be $\theta$ and the data $D$, a pair of points $x_1$ and $x_2$ drawn independently from the following distribution: $p(x|\theta) = \left\{\begin{array}{cl} 1/2 & x = \theta,\\1/2 & x = \theta + 1, \\ 0 & \mathrm{otherwise}\end{array}\right.$ If $\theta$ is $39$, then we would expect to see the datasets $(39,39)$, $(39,40)$, $(40,39)$ and $(40,40)$ all with equal probability $1/4$. Consider the confidence interval $[\theta_\mathrm{min}(D),\theta_\mathrm{max}(D)] = [\mathrm{min}(x_1,x_2), \mathrm{max}(x_1,x_2)]$. Clearly this is a valid 75% confidence interval because if you re-sampled the data, $D = (x_1,x_2)$, many times then the confidence interval constructed in this way would contain the true value 75% of the time. Now consider the data $D = (29,29)$. In this case the frequentist 75% confidence interval would be $[29, 29]$. However, assuming the model of the generating process is correct, $\theta$ could be 28 or 29 in this case, and we have no reason to suppose that 29 is more likely than 28, so the posterior probability is $p(\theta=28|D) = p(\theta=29|D) = 1/2$. So in this case the frequentist confidence interval is clearly not a 75% credible interval as there is only a 50% probability that it contains the true value of $\theta$, given what we can infer about $\theta$ from this particular sample . Yes, this is a contrived example, but if confidence intervals and credible intervals were not different, then they would still be identical in contrived examples. Note the key difference is that the confidence interval is a statement about what would happen if you repeated the experiment many times, the credible interval is a statement about what can be inferred from this particular sample. | {
"source": [
"https://stats.stackexchange.com/questions/26450",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/364/"
]
} |
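The MacKay-style example in the answer above (question 26,450) is easy to check by simulation in base R; the true value of theta and the number of replicates are arbitrary.
set.seed(7)
theta <- 39
x1 <- theta + rbinom(1e5, 1, 0.5)      # each observation is theta or theta + 1
x2 <- theta + rbinom(1e5, 1, 0.5)
lo <- pmin(x1, x2); hi <- pmax(x1, x2)
covered <- lo <= theta & theta <= hi
mean(covered)                 # ~0.75: unconditional coverage of [min, max]
mean(covered[x1 == x2])       # ~0.50: coverage given the kind of sample actually seen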
26,528 | I want to use Lasso or ridge regression for a model with more than 50,000 variables. I want to do so using a software package in R. How can I estimate the shrinkage parameter ($\lambda$)? Edits: Here is the point I got up to: set.seed (123)
Y <- runif (1000)
Xv <- sample(c(1,0), size= 1000*1000, replace = T)
X <- matrix(Xv, nrow = 1000, ncol = 1000)
mydf <- data.frame(Y, X)
require(MASS)
lm.ridge(Y ~ ., mydf)
plot(lm.ridge(Y ~ ., mydf,
lambda = seq(0,0.1,0.001))) My question is: How do I know which $\lambda$ is best for my model? | The function cv.glmnet from the R package glmnet does automatic cross-validation on a grid of $\lambda$ values used for $\ell_1$-penalized regression problems. In particular, for the lasso. The glmnet package also supports the more general elastic net penalty, which is a combination of $\ell_1$ and $\ell_2$ penalization. As of version 1.7.3. of the package taking the $\alpha$ parameter equal to 0 gives ridge regression (at least, this functionality was not documented until recently). Cross-validation is an estimate of the expected generalization error for each $\lambda$ and $\lambda$ can sensibly be chosen as the minimizer of this estimate. The cv.glmnet function returns two values of $\lambda$. The minimizer, lambda.min , and the always larger lambda.1se , which is a heuristic choice of $\lambda$ producing a less complex model, for which the performance in terms of estimated expected generalization error is within one standard error of the minimum. Different choices of loss functions for measuring the generalization error are possible in the glmnet package. The argument type.measure specifies the loss function. Alternatively, the R package mgcv contains extensive possibilities for estimation with quadratic penalization including automatic selection of the penalty parameters. Methods implemented include generalized cross-validation and REML, as mentioned in a comment. More details can be found in the package authors book: Wood, S.N. (2006) Generalized Additive Models: an introduction with R, CRC. | {
"source": [
"https://stats.stackexchange.com/questions/26528",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7244/"
]
} |
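A short sketch of the cross-validation described in the answer above (question 26,528), on simulated data of a more modest size; with 50,000 columns the same calls apply, only more slowly. alpha = 1 gives the lasso and alpha = 0 ridge; the data-generating model here is purely illustrative.
library(glmnet)
set.seed(123)
n <- 200; p <- 1000
X <- matrix(rnorm(n * p), n, p)
y <- drop(X[, 1:5] %*% rep(1, 5)) + rnorm(n)
cvfit <- cv.glmnet(X, y, alpha = 1)   # lasso; use alpha = 0 for ridge
plot(cvfit)                           # CV error over the lambda grid
cvfit$lambda.min                      # lambda minimizing the CV estimate
cvfit$lambda.1se                      # the more conservative one-standard-error choice
coef(cvfit, s = "lambda.1se")         # coefficients at that lambda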
26,676 | While I know that there are a series of functions for generating heat maps in R, the problem is that I'm unable to produce visually appealing maps. For example, the images below are good examples of heat maps I want to avoid. The first clearly lacks detail, while the other one (based on the same points) is too detailed to be useful. Both plots have been generated by the density() function in the spatstat R package. How can I get more "flow" into my plots? What I'm aiming for is more of the look the results of the commercial SpatialKey ( screenshot ) software is able to produce. Any hints, algorithms, packages or lines of code that could take me in this direction? | There are two things that will impact the smoothness of the plot, the bandwidth used for your kernel density estimate and the breaks you assign colors to in the plot. In my experience, for exploratory analysis I just adjust the bandwidth until I get a useful plot. Demonstration below. library(spatstat)
set.seed(3)
X <- rpoispp(10)
par(mfrow = c(2,2))
plot(density(X, 1))
plot(density(X, 0.1))
plot(density(X, 0.05))
plot(density(X, 0.01)) Simply changing the default color scheme won't help any, nor will changing the resolution of the pixels (if anything the default resolution is too precise, and you should reduce the resolution and make the pixels larger). Although you may want to change the default color scheme for aesthetic purposes, it is intended to be highly discriminating. Things you can do to help the color are change the scale level to logarithms (will really only help if you have a very inhomogenous process), change the color palette to vary more at the lower end (bias in terms of the color ramp specification in R), or adjust the legend to have discrete bins instead of continuous. Examples of bias in the legend adapted from here , and I have another post on the GIS site explaining coloring the discrete bins in a pretty simple example here . These won't help though if the pattern is over or under smoothed though to begin with. Z <- density(X, 0.1)
logZ <- eval.im(log(Z))
bias_palette <- colorRampPalette(c("blue", "magenta", "red", "yellow", "white"), bias=2, space="Lab")
norm_palette <- colorRampPalette(c("white","red"))
par(mfrow = c(2,2))
plot(Z)
plot(logZ)
plot(Z, col=bias_palette(256))
plot(Z, col=norm_palette(5)) To make the colors transparent in the last image (where the first color bin is white) one can just generate the color ramp and then replace the RGB specification with transparent colors. Example below using the same data as above. library(spatstat)
set.seed(3)
X <- rpoispp(10)
Z <- density(X, 0.1)
A <- rpoispp(100) #points other places than density
norm_palette <- colorRampPalette(c("white","red"))
pal_opaque <- norm_palette(5)
pal_trans <- norm_palette(5)
pal_trans[1] <- "#FFFFFF00" #was originally "#FFFFFF"
par(mfrow = c(1,3))
plot(A, Main = "Opaque Density")
plot(Z, add=T, col = pal_opaque)
plot(A, Main = "Transparent Density")
plot(Z, add=T, col = pal_trans)
pal_trans2 <- paste(pal_opaque,"50",sep = "")
plot(A, Main = "All slightly transparent")
plot(Z, add=T, col = pal_trans2) | {
"source": [
"https://stats.stackexchange.com/questions/26676",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3401/"
]
} |
26,722 | Is there a way in R (a built-in function) to calculate the transition matrix for a Markov Chain from a set of observations? For example, taking a data set like the following and calculate the first order transition matrix? dat<-data.frame(replicate(20,sample(c("A", "B", "C","D"), size = 100, replace=TRUE))) | I am not immediately aware of a "built-in" function (e.g., in base or similar), but we can do this very easily and efficiently in a couple of lines of code. Here is a function that takes a matrix (not a data frame) as an input and produces either the transition counts ( prob=FALSE ) or, by default ( prob=TRUE ), the estimated transition probabilities. # Function to calculate first-order Markov transition matrix.
# Each *row* corresponds to a single run of the Markov chain
trans.matrix <- function(X, prob=T)
{
tt <- table( c(X[,-ncol(X)]), c(X[,-1]) )
if(prob) tt <- tt / rowSums(tt)
tt
} If you need to call it on a data frame you can always do trans.matrix(as.matrix(dat)) If you're looking for some third-party package, then Rseek or the R search site may provide additional resources. | {
"source": [
"https://stats.stackexchange.com/questions/26722",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2040/"
]
} |
26,762 | I'm reviewing a paper which has the following biological experiment. A device is used to expose cells to varying amounts of fluid shear stress. As greater shear stress is applied to the cells, more of them start to detach from the substrate. At each level of shear stress, they count the cells that remain attached, and since they know the total number of cells that were attached at the beginning, they can calculate a fractional attachment (or detachment). If you plot the adherent fraction vs. shear stress, the result is a logistic curve. In theory, each individual cell is a single observation, but obviously there are thousands or tens of thousands of cells, so the data set would be gigantic if it were set up in the usual way (with each row being an observation). So, naturally, my question (as stated in the title) should make sense now. How do we do a logistic regression using the fractional outcome as the D.V.? Is there some automatic transform that can be done in glm? Along the same lines, if there were potentially 3 or more (fractional) measurements, how would one do this for a multinomial logistic regression? | The glm function in R allows 3 ways to specify the formula for a logistic regression model. The most common is that each row of the data frame represents a single observation and the response variable is either 0 or 1 (or a factor with 2 levels, or another variable with only 2 unique values). Another option is to use a 2 column matrix as the response variable with the first column being the counts of 'successes' and the second column being the counts of 'failures'. You can also specify the response as a proportion between 0 and 1, then specify another column as the 'weight' that gives the total number that the proportion is from (so a response of 0.3 and a weight of 10 is the same as 3 'successes' and 7 'failures'). Either of the last 2 ways would fit what you are trying to do; the last seems the most direct for how you describe your data. | {
"source": [
"https://stats.stackexchange.com/questions/26762",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10747/"
]
} |
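Both of the relevant glm() specifications from the answer above (question 26,762), sketched on made-up numbers; the shear-stress levels and attached-cell counts are purely illustrative.
stress   <- c(0.5, 1, 2, 4, 8, 16)     # hypothetical shear-stress levels
attached <- c(95, 90, 70, 40, 15, 5)   # cells still attached out of...
total    <- rep(100, 6)                # ...this many cells per level
# (1) two-column matrix of successes and failures
fit1 <- glm(cbind(attached, total - attached) ~ stress, family = binomial)
# (2) proportion as the response, totals supplied as weights; equivalent fit
fit2 <- glm(attached / total ~ stress, family = binomial, weights = total)
coef(fit1); coef(fit2)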
26,855 | As the general consensus seems to be to use mixed-models via lmer() in R instead of classical ANOVA (for the often cited reasons, like unbalanced designs, crossed random effects etc.), I would like to give it a try with my data. However I am worried that I would be able to "sell" this approach to my supervisor (who is expecting classical analysis with a p-value in the end) or later to the reviewers. Could you recommend some nice examples of published articles that used mixed-models or lmer() for different designs like repeated-measures or multiple within- and between-subject designs for the field biology, psychology, medicine? | Update 3 (May, 2013): Another really good paper on mixed models in Psychology was released in the Journal of Memory and Language (although I do not agree with the authors conclusions on how to obtain p -values, see package afex instead). It very nicely discusses on how to specify the random effects structure. Go read it! Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal . Journal of Memory and Language , 68(3), 255–278. doi:10.1016/j.jml.2012.11.001 Update 2 (July, 2012): A paper advocating the use in (Social) Psychology when there are crossed (e.g., participants and items) random effects. The big thing is: It shows how to obtain p-values using the pbkrtest package : Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology , 103(1), 54–69. doi:10.1037/a0028347 (only available as a Word .doc) Jake Westfall told me (per mail) that an alternative for obtaining p-values to the advocated Kenward-Rogers approximation (used in pbkrtest) is the (less optimal) Satterthwaite approximation, which can be found in the MixMod package using the anovaTab function. Small update to last update: My R package afex contains function mixed() to conveniently obtain p-values for all effects in a mixed model. Alternatively, the car package now also obtains p-values for mixed models in Anova() using test.statistic = "F" UPDATE1: Another paper describing lme4 Kliegl, R., Wei, P., Dambacher, M., Yan, M., & Zhou, X. (2011). Experimental effects and individual differences in linear mixed models: estimating the relationship between spatial, object, and attraction effects in visual attention. Frontiers in Quantitative Psychology and Measurement , 1, 238. doi:10.3389/fpsyg.2010.00238 Original Response: I do not have a number of examples, only one (see below), but know some paper you should cite from Psychology/Cognitive Sciences. The most important one is definitely: Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language , 59(4), 390–412. doi:10.1016/j.jml.2007.12.005 Another one from Baayen is: Baayen, R. H., & Milin, P. (2010). Analyzing Reaction Times. International Journal of Psychological Research , 3(2), 12–28. I actually totally liked his book, too, which also has a nice introductory chapter on mixed model (and is pretty cheap for a stats book): Baayen, R. H. (2008). Analyzing linguistic data : a practical introduction to statistics using R . Cambridge, UK; New York: Cambridge University Press. 
I would guess he also has a lot of papers using lme4 , but as my main interest is not psycholinguistics, you might want to check his homepage . From my field (reasoning), I know of this one paper that uses lme4 : Fugard, A. J. B., Pfeifer, N., Mayerhofer, B., & Kleiter, G. D. (2011). How people interpret conditionals: Shifts toward the conditional event. Journal of Experimental Psychology: Learning, Memory, and Cognition , 37(3), 635–648. doi:10.1037/a0022329 (although I have the feeling they use a likelihood ratio test to compare models which only differ in the fixed parameters, which I have heard is not the correct way; I think you should use AIC instead.) | {
"source": [
"https://stats.stackexchange.com/questions/26855",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10389/"
]
} |
27,112 | Why is it dangerous to initialize weights with zeros? Is there any simple example that demonstrates it? | edit see alfa's comment below. I'm not an expert on neural nets, so I'll defer to him. My understanding is different from the other answers that have been posted here. I'm pretty sure that backpropagation involves adding to the existing weights, not multiplying. The amount that you add is specified by the delta rule (in its simplest form, $\Delta w_{ij} = \eta\,(t_j - y_j)\,x_i$). Note that $w_{ij}$ doesn't appear on the right-hand side of the equation. My understanding is that there are at least two good reasons not to set the initial weights to zero: First, neural networks tend to get stuck in local minima, so it's a good idea to give them many different starting values. You can't do that if they all start at zero. Second, if the neurons start with the same weights, then all the neurons will follow the same gradient, and will always end up doing the same thing as one another. | {
"source": [
"https://stats.stackexchange.com/questions/27112",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8078/"
]
} |
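A tiny numerical illustration of the symmetry argument in the answer above (question 27,112): with every weight started at the same constant, the two hidden units of a one-hidden-layer network receive identical gradients, so they can never differentiate; all-zero weights have the same symmetry problem. The network size, the constant 0.5, and the squared-error loss are arbitrary choices.
sigmoid <- function(z) 1 / (1 + exp(-z))
x  <- c(0.3, -0.8); target <- 1         # one training example with 2 inputs
W1 <- matrix(0.5, nrow = 2, ncol = 2)   # input -> 2 hidden units, all equal
w2 <- c(0.5, 0.5)                       # hidden -> output, all equal
h     <- sigmoid(W1 %*% x)              # both hidden activations are identical
yhat  <- sigmoid(sum(w2 * h))
d_out <- (yhat - target) * yhat * (1 - yhat)
d_hid <- (w2 * d_out) * h * (1 - h)     # identical for both hidden units
d_hid %*% t(x)                          # gradient for W1: both rows are equal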
27,266 | Is there a way to simplify this equation? $$\dbinom{8}{1} + \dbinom{8}{2} + \dbinom{8}{3} + \dbinom{8}{4} + \dbinom{8}{5} + \dbinom{8}{6} + \dbinom{8}{7} + \dbinom{8}{8}$$ Or more generally, $$\sum_{k=1}^{n}\dbinom{n}{k}$$ | See http://en.wikipedia.org/wiki/Combination#Number_of_k-combinations_for_all_k which says $$ \sum_{k=0}^{n} \binom{n}{k} = 2^n$$ You can prove this using the binomial theorem where $x=y=1$. Now, since $\binom{n}{0} = 1$ for any $n$, it follows that $$ \sum_{k=1}^{n} \binom{n}{k} = 2^n - 1$$ In your case $n=8$, so the answer is $2^8 - 1 = 255$. | {
"source": [
"https://stats.stackexchange.com/questions/27266",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7282/"
]
} |
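A one-line numerical check of the identity in the answer above (question 27,266):
sum(choose(8, 1:8))   # 255
2^8 - 1               # 255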
27,300 | I'm new to feature selection and I was wondering how you would use PCA to perform feature selection. Does PCA compute a relative score for each input variable that you can use to filter out noninformative input variables? Basically, I want to be able to order the original features in the data by variance or amount of information contained. | The basic idea when using PCA as a tool for feature selection is to select variables according to the magnitude (from largest to smallest in absolute values) of their coefficients ( loadings ). You may recall that PCA seeks to replace $p$ (more or less correlated) variables by $k<p$ uncorrelated linear combinations (projections) of the original variables. Let us ignore how to choose an optimal $k$ for the problem at hand. Those $k$ principal components are ranked by importance through their explained variance, and each variable contributes with varying degree to each component. Using the largest variance criteria would be akin to feature extraction , where principal component are used as new features, instead of the original variables. However, we can decide to keep only the first component and select the $j<p$ variables that have the highest absolute coefficient; the number $j$ might be based on the proportion of the number of variables (e.g., keep only the top 10% of the $p$ variables), or a fixed cutoff (e.g., considering a threshold on the normalized coefficients). This approach bears some resemblance with the Lasso operator in penalized regression (or PLS regression). Neither the value of $j$, nor the number of components to retain are obvious choices, though. The problem with using PCA is that (1) measurements from all of the original variables are used in the projection to the lower dimensional space, (2) only linear relationships are considered, and (3) PCA or SVD-based methods, as well as univariate screening methods (t-test, correlation, etc.), do not take into account the potential multivariate nature of the data structure (e.g., higher order interaction between variables). About point 1, some more elaborate screening methods have been proposed, for example principal feature analysis or stepwise method, like the one used for ' gene shaving ' in gene expression studies. Also, sparse PCA might be used to perform dimension reduction and variable selection based on the resulting variable loadings. About point 2, it is possible to use kernel PCA (using the kernel trick ) if one needs to embed nonlinear relationships into a lower dimensional space. Decision trees , or better the random forest algorithm, are probably better able to solve Point 3. The latter allows to derive Gini- or permutation-based measures of variable importance . A last point: If you intend to perform feature selection before applying a classification or regression model, be sure to cross-validate the whole process (see §7.10.2 of the Elements of Statistical Learning , or Ambroise and McLachlan, 2002 ). As you seem to be interested in R solution, I would recommend taking a look at the caret package which includes a lot of handy functions for data preprocessing and variable selection in a classification or regression context. | {
"source": [
"https://stats.stackexchange.com/questions/27300",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5464/"
]
} |
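A compact R sketch of the loading-based screening described in the answer above (question 27,300); the simulated data, the use of the first component only, and the 'keep the top 10' cut-off are all arbitrary illustrative choices rather than recommendations.
set.seed(1)
n <- 100; p <- 50
X <- matrix(rnorm(n * p), n, p, dimnames = list(NULL, paste0("V", 1:p)))
pca <- prcomp(X, scale. = TRUE)
load1  <- pca$rotation[, 1]                  # loadings on the first component
ranked <- sort(abs(load1), decreasing = TRUE)
names(ranked)[1:10]                          # the 10 highest-loading variables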
27,332 | I am using R. I searched on Google and learnt that kpss.test() , PP.test() , and adf.test() are used to check the stationarity of a time series. But I am not a statistician and cannot interpret their results: > PP.test(x)
Phillips-Perron Unit Root Test
data: x
Dickey-Fuller = -30.649, Truncation lag parameter = 7, p-value = 0.01
> kpss.test(b$V1)
KPSS Test for Level Stationarity
data: b$V1
KPSS Level = 0.0333, Truncation lag parameter = 3, p-value = 0.1
Warning message:
In kpss.test(b$V1) : p-value greater than printed p-value
> adf.test(x)
Augmented Dickey-Fuller Test
data: x
Dickey-Fuller = -9.6825, Lag order = 9, p-value = 0.01
alternative hypothesis: stationary
Warning message:
In adf.test(x) : p-value smaller than printed p-value I am dealing with thousands of time series; kindly tell me how to check quantitatively whether a time series is stationary. | Testing whether a series is stationary versus non-stationary requires that you consider a sequence of alternative hypotheses , one for each listable Gaussian assumption. One has to understand that the Gaussian assumptions are all about the error process and have nothing to do with the observed series under evaluation. As correctly summarized by StasK, this could include violations of stationarity like a mean change, a variance change, or changes in the parameters of the model over time. For example, an upward-trending set of values is a prima facie example of a series whose level (Y) is not constant, while the residuals from a suitable model might be described as having a constant mean. Thus the original series is non-stationary in the mean but the residual series is stationary in its mean. If there are unmitigated mean violations in the residual series, like pulses, level shifts, seasonal pulses and/or local time trends, then the untreated residual series can be characterized as non-stationary in the mean, whereas a set of indicator variables could easily be detected and incorporated into the model to render the model residuals stationary in the mean. Now, if the original series exhibits non-stationary variance, it is quite reasonable to construct a filter/model that renders an error process with constant variance. Similarly, the residuals from a model might have non-constant variance, requiring one of three possible remedies: (1) weighted least squares (broadly overlooked by some analysts); (2) a power transformation to decouple the expected value from the variance of the errors, identifiable via a Box-Cox test; and/or (3) a GARCH model to account for an ARIMA structure evident in the squared residuals. Continuing, if the parameters or the form of the model change over time, then one is faced with the need to detect this characteristic and remedy it, either with data segmentation or with a TAR approach à la Tong. | {
"source": [
"https://stats.stackexchange.com/questions/27332",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10785/"
]
} |
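For the 'thousands of series' part of question 27,332 above, one practical sketch is to loop the tests over a list of series and collect the p-values; the two simulated series and the 5% cut-off are illustrative, and the usual caveats about running thousands of significance tests apply.
library(tseries)    # adf.test() and kpss.test()
set.seed(1)
series_list <- list(white = rnorm(200),
                    rwalk = cumsum(rnorm(200)))   # stand-ins for your series
res <- t(sapply(series_list, function(x)
  c(adf_p  = adf.test(x)$p.value,     # H0: unit root (non-stationary)
    kpss_p = kpss.test(x)$p.value)))  # H0: level stationarity
res
res[, "adf_p"] > 0.05 & res[, "kpss_p"] < 0.05   # flags the clearly non-stationary series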
27,345 | I'm rather evangelistic with regards to the use of likelihood ratios for representing the objective evidence for/against a given phenomenon. However, I recently learned that the Bayes factor serves a similar function in the context of Bayesian methods (i.e. the subjective prior is combined with the objective Bayes factor to yield an objectively updated subjective state of belief). I'm now trying to understand the computational and philosophical differences between a likelihood ratio and a Bayes factor. At the computational level, I understand that while the likelihood ratio is usually computed using the likelihoods that represent the maximum likelihood for each model's respective parameterization (either estimated by cross validation or penalized according to model complexity using AIC), apparently the Bayes factor somehow uses likelihoods that represent the likelihood of each model integrated over it's entire parameter space (i.e. not just at the MLE). How is this integration actually achieved typically? Does one really just try to calculate the likelihood at each of thousands (millions?) of random samples from the parameter space, or are there analytic methods to integrating the likelihood across the parameter space? Additionally, when computing the Bayes factor, does one apply correction for complexity (automatically via cross-validated estimation of likelihood or analytically via AIC) as one does with the likelihood ratio? Also, what are the philosophical differences between the likelihood ratio and the Bayes factor (n.b. I'm not asking about the philosophical differences between the likelihood ratio and Bayesian methods in general, but the Bayes factor as a representation of the objective evidence specifically). How would one go about characterizing the meaning of the Bayes factor as compared to the likelihood ratio? | apparently the Bayes factor somehow uses likelihoods that represent the likelihood of each model integrated over it's entire parameter space (i.e. not just at the MLE). How is this integration actually achieved typically? Does one really just try to calculate the likelihood at each of thousands (millions?) of random samples from the parameter space, or are there analytic methods to integrating the likelihood across the parameter space? First, any situation where you consider a term such as $P(D|M)$ for data $D$ and model $M$ is considered a likelihood model. This is often the bread and butter of any statistical analysis, frequentist or Bayesian, and this is the portion that your analysis is meant to suggest is either a good fit or a bad fit. So Bayes factors are not doing anything fundamentally different than likelihood ratios. It's important to put Bayes factors in their right setting. When you have two models, say, and you convert from probabilities to odds, then Bayes factors act like an operator on prior beliefs: $$ Posterior Odds = Bayes Factor * Prior Odds $$
$$ \frac{P(M_{1}|D)}{P(M_{2}|D)} = B.F. \times \frac{P(M_{1})}{P(M_{2})} $$ The real difference is that likelihood ratios are cheaper to compute and generally conceptually easier to specify. The likelihood at the MLE is just a point estimate of the Bayes factor numerator and denominator, respectively. Like most frequentist constructions, it can be viewed as a special case of Bayesian analysis with a contrived prior that's hard to get at. But mostly it arose because it's analytically tractable and easier to compute (in the era before approximate Bayesian computational approaches arose). To the point on computation, yes: you will evaluate the different likelihood integrals in the Bayesian setting with a large-scale Monte Carlo procedure in almost any case of practical interest. There are some specialized simulators, such as GHK, that work if you assume certain distributions, and if you make these assumptions, sometimes you can find analytically tractable problems for which fully analytic Bayes factors exist. But no one uses these; there is no reason to. With optimized Metropolis/Gibbs samplers and other MCMC methods, it's totally tractable to approach these problems in a fully data driven way and compute your integrals numerically. In fact, one will often do this hierarchically and further integrate the results over meta-priors that relate to data collection mechanisms, non-ignorable experimental designs, etc. I recommend the book Bayesian Data Analysis for more on this. Although, the author, Andrew Gelman, seems not to care too much for Bayes factors . As an aside, I agree with Gelman. If you're going to go Bayesian, then exploit the full posterior. Doing model selection with Bayesian methods is like handicapping them, because model selection is a weak and mostly useless form of inference. I'd rather know distributions over model choices if I can... who cares about quantizing it down to "model A is better than model B" sorts of statements when you do not have to? Additionally, when computing the Bayes factor, does one apply correction for complexity (automatically via cross-validated estimation of likelihood or analytically via AIC) as one does with the likelihood ratio? This is one of the nice things about Bayesian methods. Bayes factors automatically account for model complexity in a technical sense. You can set up a simple scenario with two models, $M_{1}$ and $M_{2}$ with assumed model complexities $d_{1}$ and $d_{2}$, respectively, with $d_{1} < d_{2}$ and a sample size $N$. Then if $B_{1,2}$ is the Bayes factor with $M_{1}$ in the numerator, under the assumption that $M_{1}$ is true one can prove that as $N\to\infty$, $B_{1,2}$ approaches $\infty$ at a rate that depends on the difference in model complexity, and that the Bayes factor favors the simpler model. More specifically, you can show that under all of the above assumptions, $$ B_{1,2} = \mathcal{O}(N^{\frac{1}{2}(d_{2}-d_{1})}) $$ I'm familiar with this derivation and the discussion from the book Finite Mixture and Markov Switching Models by Sylvia Frühwirth-Schnatter, but there are likely more directly statistical accounts that dive more into the epistemology underlying it. I don't know the details well enough to give them here, but I believe there are some fairly deep theoretical connections between this and the derivation of AIC. The Information Theory book by Cover and Thomas hinted at this at least. Also, what are the philosophical differences between the likelihood ratio and the Bayes factor (n.b. 
I'm not asking about the philosophical differences between the likelihood ratio and Bayesian methods in general, but the Bayes factor as a representation of the objective evidence specifically). How would one go about characterizing the meaning of the Bayes factor as compared to the likelihood ratio? The Wikipedia article's section on "Interpretation" does a good job of discussing this (especially the chart showing Jeffreys' strength of evidence scale). Like usual, there's not too much philosophical stuff beyond the basic differences between Bayesian methods and frequentist methods (which you seem already familiar with). The main thing is that the likelihood ratio is not coherent in a Dutch book sense. You can concoct scenarios where the model selection inference from likelihood ratios will lead one to accept losing bets. The Bayesian method is coherent, but operates on a prior which could be extremely poor and has to be chosen subjectively. Tradeoffs.. tradeoffs... FWIW, I think this kind of heavily parameterized model selection is not very good inference. I prefer Bayesian methods and I prefer to organize them more hierarchically, and I want the inference to center on the full posterior distribution if it is at all computationally feasible to do so. I think Bayes factors have some neat mathematical properties, but as a Bayesian myself, I am not impressed by them. They conceal the really useful part of Bayesian analysis, which is that it forces you to deal with your priors out in the open instead of sweeping them under the rug, and allows you to do inference on full posteriors. | {
"source": [
"https://stats.stackexchange.com/questions/27345",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/364/"
]
} |
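As a concrete illustration of the integration step discussed in the answer above, here is a toy R sketch (my own made-up example, not from the original answer). It compares a point-null binomial model against a model with a Uniform(0,1) prior on the success probability; the integrated likelihood of the second model is approximated by brute-force Monte Carlo, i.e. by averaging the likelihood over draws from the prior. The data values n and y are invented for illustration.

set.seed(1)
n <- 100; y <- 61                      # hypothetical data: 61 successes in 100 trials
marg1 <- dbinom(y, n, 0.5)             # M1: p fixed at 0.5, no integration needed
S <- 1e5
p.draws <- runif(S)                    # M2: p ~ Uniform(0,1)
marg2 <- mean(dbinom(y, n, p.draws))   # Monte Carlo estimate of the integrated likelihood
BF12 <- marg1/marg2                    # Bayes factor for M1 versus M2

In this conjugate toy case the integrated likelihood of M2 is exactly $1/(n+1)$, which gives a handy check on the Monte Carlo estimate; for realistic models one would replace the naive prior sampling with MCMC-based estimates of the marginal likelihood.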
27,426 | Have, let's say, the following data: 8232302 684531 116857 89724 82267 75988 63871
23718 1696 436 439 248 235 Want a simple way to fit this (and several other datasets) to a Pareto distribution. Ideally it would output the matching theoretical values, less ideally the parameters. | Well, if you have a sample $X_1, ..., X_n$ from a Pareto distribution with parameters $m>0$ and $\alpha>0$ (where $m$ is the lower bound parameter and $\alpha$ is the shape parameter), the log-likelihood of that sample is: $$n \log(\alpha) + n \alpha \log(m) - (\alpha+1) \sum_{i=1}^{n} \log(X_i) $$ This is monotonically increasing in $m$, so the maximizer is the largest value that is consistent with the observed data. Since the parameter $m$ defines the lower bound of the support for the Pareto distribution, the optimum is $$\hat{m} = \min_{i} X_i $$ which does not depend on $\alpha$. Next, using ordinary calculus tricks, the MLE for $\alpha$ must satisfy $$ \frac{n}{\alpha} + n \log( \hat{m} ) - \sum_{i=1}^{n} \log(X_i) = 0$$ Some simple algebra tells us the MLE of $\alpha$ is $$ \hat{\alpha} = \frac{n}{\sum_{i=1}^{n} \log(X_i/\hat{m})} $$ In many important senses (e.g. optimal asymptotic efficiency in that it achieves the Cramer-Rao lower bound), this is the best way to fit data to a Pareto distribution. The R code below calculates the MLE for a given data set, X . pareto.MLE <- function(X)
{
n <- length(X)              # sample size
m <- min(X)                 # MLE of the lower bound parameter
a <- n/sum(log(X)-log(m))   # MLE of the shape parameter
return( c(m,a) )
}
# example.
library(VGAM)
set.seed(1)
z = rpareto(1000, 1, 5)
pareto.MLE(z)
[1] 1.000014 5.065213 Edit: Based on the commentary by @cardinal and me below, we can also note that $\hat{\alpha}$ is the reciprocal of the sample mean of the $\log(X_i /\hat{m})$'s, which happen to have an exponential distribution. Therefore, if we have access to software that can fit an exponential distribution (which is more likely, since it seems to arise in many statistical problems), then fitting a Pareto distribution can be accomplished by transforming the data set in this way and fitting it to an exponential distribution on the transformed scale. | {
"source": [
"https://stats.stackexchange.com/questions/27426",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9680/"
]
} |
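Following up on the edit in the answer above: because the $\log(X_i/\hat{m})$ values are exponentially distributed, the shape estimate is simply the reciprocal of their sample mean. Using the same simulated vector z as in the example, this gives a one-line check on pareto.MLE:

alpha.hat <- 1/mean(log(z/min(z)))   # reciprocal of the mean of the log-excesses
alpha.hat                            # should reproduce pareto.MLE(z)[2]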
Say I have a multivariate normal $N(\mu, \Sigma)$ density. I want to get the second (partial) derivative w.r.t. $\mu$. I am not sure how to take the derivative of a matrix expression. Wikipedia says to take the derivative element by element inside the matrix. I am working with the Laplace approximation
$$\log{P}_{N}(\theta)=\log {P}_{N}-\frac{1}{2}{(\theta-\hat{\theta})}^{T}{\Sigma}^{-1}(\theta-\hat{\theta}) \>.$$ The mode is $\hat\theta=\mu$. I was given $${\Sigma}^{-1}=-\frac{{{\partial }^{2}}}{\partial {{\theta }^{2}}}\log p(\hat{\theta }|y),$$ how did this come about? What I have done: $$\log P(\theta|y) = -\frac{k}{2} \log 2 \pi - \frac{1}{2} \log \left| \Sigma \right| - \frac{1}{2} {(\theta-\hat \theta)}^{T}{\Sigma}^{-1}(\theta-\hat\theta)$$ So, I take derivative w.r.t to $\theta$, first off, there is a transpose, secondly, it is a matrix. So, I am stuck. Note: If my professor comes across this, I am referring to the lecture. | In chapter 2 of the Matrix Cookbook there is a nice review of matrix calculus stuff that gives a lot of useful identities that help with problems one would encounter doing probability and statistics, including rules to help differentiate the multivariate Gaussian likelihood. If you have a random vector ${\boldsymbol y}$ that is multivariate normal with mean vector ${\boldsymbol \mu}$ and covariance matrix ${\boldsymbol \Sigma}$, then use equation (86) in the matrix cookbook to find that the gradient of the log likelihood ${\bf L}$ with respect to ${\boldsymbol \mu}$ is $$\begin{align}
\frac{ \partial {\bf L} }{ \partial {\boldsymbol \mu}}
&= -\frac{1}{2} \left(
\frac{\partial \left( {\boldsymbol y} - {\boldsymbol \mu} \right)'
{\boldsymbol \Sigma}^{-1} \left( {\boldsymbol y} - {\boldsymbol \mu}\right)
}{\partial {\boldsymbol \mu}} \right) \nonumber \\
&= -\frac{1}{2}
\left( -2 {\boldsymbol \Sigma}^{-1} \left( {\boldsymbol y} - {\boldsymbol \mu}\right) \right) \nonumber \\
&= {\boldsymbol \Sigma}^{-1} \left( {\boldsymbol y} - {\boldsymbol \mu} \right)
\end{align}$$ I'll leave it to you to differentiate this again and find the answer to be $-{\boldsymbol \Sigma}^{-1}$. As "extra credit", use equations (57) and (61) to find that the gradient with respect to ${\boldsymbol \Sigma}$ is $$
\begin{align}
\frac{ \partial {\bf L} }{ \partial {\boldsymbol \Sigma}}
&= -\frac{1}{2} \left( \frac{ \partial \log(|{\boldsymbol \Sigma}|)}{\partial{\boldsymbol \Sigma}}
+ \frac{\partial \left( {\boldsymbol y} - {\boldsymbol \mu}\right)'
{\boldsymbol \Sigma}^{-1} \left( {\boldsymbol y}- {\boldsymbol \mu}\right)
}{\partial {\boldsymbol \Sigma}} \right)\\
&= -\frac{1}{2} \left( {\boldsymbol \Sigma}^{-1} -
{\boldsymbol \Sigma}^{-1}
\left( {\boldsymbol y} - {\boldsymbol \mu} \right)
\left( {\boldsymbol y} - {\boldsymbol \mu} \right)'
{\boldsymbol \Sigma}^{-1} \right)
\end{align}
$$ I've left out a lot of the steps, but I made this derivation using only the identities found in the matrix cookbook, so I'll leave it to you to fill in the gaps. I've used these score equations for maximum likelihood estimation, so I know they are correct :) | {
"source": [
"https://stats.stackexchange.com/questions/27436",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9177/"
]
} |
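A quick numerical sanity check of the first gradient above can be done in R with central finite differences; the particular values of mu, Sigma and y below are arbitrary choices for illustration.

set.seed(1)
k     <- 3
mu    <- c(1, 2, 3)
Sigma <- diag(3) + 0.5                  # an arbitrary positive-definite covariance matrix
y     <- c(0.5, 2.5, 2.0)
loglik <- function(m)                   # log density of N(m, Sigma) evaluated at y
  as.numeric(-0.5*(k*log(2*pi) + determinant(Sigma)$modulus +
                   t(y - m) %*% solve(Sigma) %*% (y - m)))
analytic <- solve(Sigma) %*% (y - mu)   # Sigma^{-1} (y - mu)
h <- 1e-6
numeric <- sapply(1:k, function(j) {    # central finite differences
  e <- rep(0, k); e[j] <- h
  (loglik(mu + e) - loglik(mu - e))/(2*h)
})
cbind(analytic, numeric)                # the two columns should agree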
27,443 | I was wondering how you would generate data from a Poisson regression equation in R? I'm kind of confused about how to approach the problem. So suppose we have two predictors $X_1$ and $X_2$ which are distributed $N(0,1)$, the intercept is 0, and both of the coefficients equal 1. Then my estimate is simply: $$\log(Y) = 0+ 1\cdot X_1 + 1\cdot X_2$$ But once I have calculated log(Y) - how do I generate Poisson counts based on that? What is the rate parameter for the Poisson distribution? If anyone could write a brief R script that generates Poisson regression samples that would be awesome! | The Poisson regression model assumes a Poisson distribution for $Y$ and uses the $\log$ link function. So, for a single explanatory variable $x$, it is assumed that $Y \sim P(\mu)$ (so that $E(Y) = V(Y) = \mu$) and that $\log(\mu) = \beta_0 + \beta_1 x$. Generating data according to that model easily follows. Here is an example which you can adapt according to your own scenario. > #sample size
> n <- 10
> #regression coefficients
> beta0 <- 1
> beta1 <- 0.2
> #generate covariate values
> x <- runif(n=n, min=0, max=1.5)
> #compute mu's
> mu <- exp(beta0 + beta1 * x)
> #generate Y-values
> y <- rpois(n=n, lambda=mu)
> #data set
> data <- data.frame(y=y, x=x)
> data
y x
1 4 1.2575652
2 3 0.9213477
3 3 0.8093336
4 4 0.6234518
5 4 0.8801471
6 8 1.2961688
7 2 0.1676094
8 2 1.1278965
9 1 1.1642033
10 4 0.2830910 | {
"source": [
"https://stats.stackexchange.com/questions/27443",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5464/"
]
} |
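A natural follow-up check (not part of the original answer) is to fit the model back to the simulated data with glm and confirm that the coefficients are roughly recovered; with only 10 observations the estimates will be noisy, so increasing n in the simulation above tightens the recovery considerably.

> #fit the Poisson regression model to the simulated data
> fit <- glm(y ~ x, data = data, family = poisson(link = "log"))
> #compare the estimates with beta0 = 1 and beta1 = 0.2
> coef(fit)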
27,495 | I've been very interested in data-mining and machine-learning for a while, partly because I majored in that area at school, but also because I am truly much more excited trying to solve problems that require a bit more thought than just programming knowledge and whose solution can have multiple forms. I don't have a researcher/scientist background, I come from a computer science background with an emphasis on data analysis, I have a Master's degree and not a PhD. I currently have a position related to data analysis, even if that is not the primary focus of what I'm doing, but I have at least some good exposure to it. As I was interviewing some time ago for a job with several companies, and got to talk with a few recruiters, I found a common pattern that people seem to think that you need to have a PhD to do machine learning , even if I may be generalizing a bit too much (some companies were not really looking especially for PhDs). While I think it's good to have a PhD in that area, I don't think this is absolutely necessary . I have some pretty decent knowledge of most real-world machine learning algorithms, have implemented most of them myself (either at school or on personal projects), and feel pretty confident when approaching problems involving machine-learning / data-mining and statistics in general. And I have some friends with a similar profile who seem very knowledgeable about this also, but also feel that in general companies are pretty shy about hiring in data-mining if you're not a PhD. I'd like to get some feedback, do you think a PhD is absolutely necessary to have a job very focused in that area? (I hesitated a bit before posting this question here, but since it seems to be an acceptable topic on meta , I've decided to post this question on which I've been thinking for a while.) | I believe actually the opposite of your conclusion is true. In The Disposable Academic , several pointers are given about the low wage premium in applied math, math, and computer science for PhD holders over master's degree holders. In part, this is because companies are realizing that master's degree holders usually have just as much theoretical depth, better programming skills, and are more pliable and can be trained for their company's specific tasks. It's not easy to get an SVM disciple, for instance, to appreciate your company's infrastructure that relies on decision trees, say. Often, when someone has dedicated tons of time to a particular machine learning paradigm, they have a hard time generalizing their productivity to other domains. Another problem is that a lot of machine learning jobs these days are all about getting things done, and not so much about writing papers or developing new methods. You can take a high risk approach to developing new mathematical tools, studying VC-dimensional aspects of your method, its underlying complexity theory, etc. But in the end, you might not get something that practitioners will care about. Meanwhile, look at something like poselets . Basically no new math arises from poselets at all. It's entirely unelegant, clunky, and lacks any mathematical sophistication. But it scales up to large data sets amazingly well and it's looking like it will be a staple in pose recognition (especially in computer vision) for some time to come. Those researchers did a great job and their work is to be applauded, but it's not something most people associate with a machine learning PhD. 
With a question like this, you'll get tons of different opinions, so by all means consider them all. I am currently a PhD student in computer vision, but I've decided to leave my program early with a master's degree, and I'll be working for an asset management company doing natural language machine learning, computational statistics, etc. I also considered ad-based data mining jobs at several large TV companies, and a few robotics jobs. In all of these domains, there are plenty of jobs for someone with mathematical maturity and a knack for solving problems in multiple programming languages. Having a master's degree is just fine. And, according to that Economist article, you'll be paid basically just as well as someone with a PhD. And if you work outside of academia, bonuses and getting to promotions faster than someone who spends extra years on a PhD can often mean your overall lifetime earnings are higher. As Peter Thiel once said, "Graduate school is like hitting the snooze button on the alarm clock of life..." | {
"source": [
"https://stats.stackexchange.com/questions/27495",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10683/"
]
} |
27,503 | I understand that to calculate the probability of two independent events both happening, one can simply multiply the probabilities. However, consider the following: a list of counties on the east coast of the U.S. Each county has a calculated probability of a hurricane making landfall next year. I am trying to calculate the probability of a hurricane making landfall across multiple counties at the same time, i.e. three counties being affected in a single year. However, many counties are either in the same state or in adjacent states, so they are not "independent". For example (county, probability): Cameron .045
Hidalgo .049
Willacy .023
Kenedy .064 For these counties, as a single hurricane may affect multiple counties, how should one think about calculating a combined probability of these four counties experiencing a landfall hurricane next year? Further, any thoughts on how to apply a decreasingly related probability towards independence? (two counties very far from each other, opposite ends of the eastern US, for example). | {
"source": [
"https://stats.stackexchange.com/questions/27503",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11016/"
]
} |
27,506 | I am interested in non-parametric methods for building confidence intervals for an estimator (e.g. the mean) using few samples (e.g. 10). I think I have read somewhere that smoothing the bootstrapped estimator values can improve the quality of the derived percentiles interval. However I could not find any online reference that explains how to tune the bandwidth of the smoothing step. | {
"source": [
"https://stats.stackexchange.com/questions/27506",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2150/"
]
} |
27,589 | If the interest is merely estimating the parameters of a model (pointwise and/or interval estimation) and the prior information is not reliable, i.e. weak (I know this is a bit vague, but I am trying to establish a scenario where the choice of a prior is difficult) ... Why would someone choose to use the Bayesian approach with 'noninformative' improper priors instead of the classical approach? | Two reasons one may go with a Bayesian approach even if you're using highly non-informative priors: (1) Convergence problems. There are some distributions (binomial, negative binomial and generalized gamma are the ones I'm most familiar with) that have convergence issues a non-trivial amount of the time. You can use a "Bayesian" framework, and in particular Markov chain Monte Carlo (MCMC) methods, to essentially plow through these convergence issues with computational power and get decent estimates from them. (2) Interpretation. A Bayesian estimate + 95% credible interval has a more intuitive interpretation than a frequentist estimate + 95% confidence interval, so some may prefer to simply report those. | {
"source": [
"https://stats.stackexchange.com/questions/27589",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
]
} |
27,651 | I always have a hard time explaining statistical techniques to audience with no statistical background. If I wanted to explain what GLM is to such audience (without throwing out statistical jargon), what would be the best or most effective way? I usually explain GLM with three parts -- (1) the random component which is response variable, (2) the systematic component which is linear predictors, and (3) the link function which is the "key" to connecting (1) and (2). Then I would give an example of linear or logistic regression and explain how the link function is selected based on the response variable. Hence it acts as the key connecting two components. | If the audience really has no statistical background, I think I would try to simplify the explanation quite a bit more. First, I would draw a coordinate plane on the board with a line on it, like so: Everyone at your talk will be familiar with the equation for a simple line, $\ y = mx + b $, because that's something that is learned in grade school. So I would display that alongside the drawing. However, I would write it backwards, like so: $\ mx + b = y $ I would say that this equation is an example of a simple linear regression. I would then explain how you (or a computer) could fit such an equation to a scatter plot of data points, like the one shown in this image: I would say that here, we are using the age of the organism that we are studying to predict how big it is, and that the resultant linear regression equation that we get (shown on the image) can be used to predict how big an organism is if we know its age. Returning to our general equation $\ mx + b = y $, I would say that x's are variables that can predict the y's, so we call them predictors . The y's are commonly called responses . Then I would explain again that this was an example of a simple linear regression equation, and that there are actually more complicated varieties. For example, in a variety called logistic regression , the y's are only allowed to be 1's or 0's. One might want to use this type of model if you are trying to predict a "yes" or "no" answer, like whether or not someone has a disease. Another special variety is something called Poisson regression , which is used to analyse "count" or "event" data (I wouldn't delve further into this unless really necessary). I would then explain that linear regression, logistic regression, and Poisson regression are really all special examples of a more general method, something called a "generalized linear model". The great thing about "generalized linear models" is that they allow us to use "response" data that can take any value (like how big an organism is in linear regression), take only 1's or 0's (like whether or not someone has a disease in logistic regression), or take discrete counts (like number of events in Poisson regression). I would then say that in these types of equations, the x's (predictors) are connected to the y's (responses) via something that statisticians call a "link function". We use these "link functions" in the instances in which the x's are not related to the y's in a linear manner. Anyway, those are my two cents on the issue! Maybe my proposed explanation sounds a bit hokey and dumb, but if the purpose of this exercise is just to get the "gist" across to the audience, perhaps an explanation like this isn't too bad. 
I think it's important that the concept be explained in an intuitive way and that you avoid throwing around words like "random component", "systematic component", "link function", "deterministic", "logit function", etc. If you're talking to people who truly have no statistical background, like a typical biologist or physician, their eyes are just going to glaze over at hearing those words. They don't know what a probability distribution is, they've never heard of a link function, and they don't know what a "logit" function is, etc. In your explanation to a non-statistical audience, I would also focus on when to use what variety of model. I might talk about how many predictors you are allowed to include on the left hand side of the equation (I've heard rules of thumb like no more than your sample size divided by ten). It would also be nice to include an example spread sheet with data and explain to the audience how to use a statistical software package to generate a model. I would then go through the output of that model step by step and try to explain what all the different letters and numbers mean. Biologists are clueless about this stuff and are more interested in learning what test to use when rather than actually gaining an understanding of the math behind the GUI of SPSS! I would appreciate any comments or suggestions regarding my proposed explanation, particularly if anyone notes errors or thinks of a better way to explain it! | {
"source": [
"https://stats.stackexchange.com/questions/27651",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9651/"
]
} |
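To go with the suggestion above about walking the audience through software output, here is a minimal R sketch; the data frame d and its columns (age, size, disease, counts) are hypothetical and only meant to mirror the examples in the answer.

lm(size ~ age, data = d)                          # simple linear regression
glm(disease ~ age, data = d, family = binomial)   # logistic regression (0/1 response)
glm(counts ~ age, data = d, family = poisson)     # Poisson regression (count response)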
27,662 | Econometrics has substantial overlap with traditional statistics, but often uses its own jargon about a variety of topics ("identification," "exogenous," etc.). I once heard an applied statistics professor in another field comment that frequently the terminology is different but the concepts are the same. Yet it also has its own methods and philosophical distinctions (Heckman's famous essay comes to mind). What terminology differences exist between econometrics and mainstream statistics, and where do the fields diverge to become different in more than just terminology? | There are some terminological differences where the same thing is called different names in different disciplines: Longitudinal data in biostatistics are repeated observations of the same individuals = panel data in econometrics. The model for a binary dependent variable in which the probability of 1 is modeled as $1/(1+\exp[-x'\beta])$ is called a logit model in econometrics, and logistic model in biostatistics. Biostatisticians tend to work with logistic regression in terms of odds ratios, as their $x$ s are often binary, so the odds ratios represent the relative frequencies of the outcome of interest in the two groups in the population. This is such a common interpretation that you will often see a continuous variable transformed into two categories (low vs high blood pressure) to make this interpretation easier. Statisticians' "estimating equations" are econometricians' "moment conditions". Statisticians' $M$ -estimates are econometricians' extremum estimators. There are terminological differences where the same term is used to mean different things in different disciplines: Fixed effects stand for the $x'\beta$ in the regression equation for ANOVA statisticians, and for a "within" estimator in longitudinal/panel data models for econometricians. (Random effects are cursed for econometricians, for good.) Robust inference means heteroskedasticity-corrected standard errors for economists (with extensions to clustered standard errors and/or autocorrelation-corrected standard errors) and methods robust to far outliers to statisticians. It seems that economists have a ridiculous idea that stratified samples are those in which probabilities of selection vary between observations. These should be called unequal probability samples. Stratified samples are those in which the population is split into pre-defined groups according to characteristics known before sampling takes place. Econometricians' "data mining" (at least in the 1980s literature) used to mean multiple testing and pitfalls related to it that have been wonderfully explained in Harrell's book . Computer scientists' (and statisticians') data mining procedures are non-parametric methods of finding patterns in the data, also known as statistical learning . Horvitz-Thompson estimator is a non-parametric estimator of a finite population total in sampling statistics that relies on fixed probabilities of selection, with variance determined by the second order selection probabilities. In econometrics, it had grown to denote inverse propensity weighting estimators that rely on a moderately long list of the standard causal inference assumptions (conditional independence, SUTVA, overlap, all that stuff that makes Rubin's counterfactuals work). Yeah, there is some sort of probability in the denominator in both, but understanding the estimator in one context gives you zero ability to understand the other context. 
I view the unique contributions of econometrics to be Ways to deal with endogeneity and poorly specified regression models, recognizing, as mpiktas has explained in another answer , that (i) the explanatory variables may themselves be random (and hence correlated with regression errors producing bias in parameter estimates), (ii) the models can suffer from omitted variables (which then become part of the error term), (iii) there may be unobserved heterogeneity of how economic agents react to the stimuli, thus complicating the standard regression models. Angrist & Pischke is a wonderful review of these issues, and statisticians will learn a lot about how to do regression analysis from it. At the very least, statisticians should learn and understand instrumental variables regression. More generally, economists want to make as few assumptions as possible about their models, so as to make sure that their findings do not hinge on something as ridiculous as multivariate normality. That's why GMM and empirical likelihood are hugely popular with economists, and never caught up in statistics (GMM was first described as minimum $\chi^2$ by Ferguson, and empirical likelihood, by Jon Rao, both famous statisticians, in the late 1960s). That's why economists run their regression with "robust" standard errors, and statisticians, with the default OLS $s^2 (X'X)^{-1}$ standard errors. There's been a lot of work in the time domain with regularly spaced processes -- that's how macroeconomic data are collected. The unique contributions include integrated and cointegrated processes and autoregressive conditional heteroskedasticity ( (G)ARCH ) methods. Being generally a micro person, I am less familiar with these. Overall, economists tend to look for strong interpretation of coefficients in their models. Statisticians would take a logistic model as a way to get to the probability of the positive outcome, often as a simple predictive device, and may also note the GLM interpretation with nice exponential family properties that it possesses, as well as connections with discriminant analysis. Economists would think about the utility interpretation of the logit model, and be concerned that only $\beta/\sigma$ is identified in this model, and that heteroskedasticity can throw it off. (Statisticians will be wondering what $\sigma$ are the economists talking about, of course.) Of course, a utility that is linear in its inputs is a very funny thing from the perspective of Microeconomics 101, although some generalizations to semi-concave functions are probably done in Mas-Collel. What economists generally tend to miss, but, IMHO, would benefit from, are aspects of multivariate analysis (including latent variable models as a way to deal with measurement errors and multiple proxies... statisticians are oblivious to these models, though, too), regression diagnostics (all these Cook's distances, Mallows' $C_p$ , DFBETA, etc.), analysis of missing data (Manski's partial identification is surely fancy, but the mainstream MCAR/MAR/NMAR breakdown and multiple imputation are more useful), and survey statistics. 
A lot of other contributions from the mainstream statistics have been entertained by econometrics and either adopted as a standard methodology, or passed by as a short term fashion: ARMA models of the 1960s are probably better known in econometrics than in statistics, as some graduate programs in statistics may fail to offer a time series course these days; shrinkage estimators/ridge regression of the 1970s have come and gone; the bootstrap of the 1980s is a knee-jerk reaction for any complicated situations, although economists need to be better aware of the limitations of the bootstrap ; the empirical likelihood of the 1990s has seen more methodology development from theoretical econometricians than from theoretical statisticians; computational Bayesian methods of the 2000s are being entertained in econometrics, but my feeling is that are just too parametric, too heavily model-based, to be compatible with the robustness paradigm I mentioned earlier. (EDIT: that was the view on the scene in 2012; by 2020, Bayesian models have become standard in empirical macro where people probably care a little less about robustness, and are making their presence heard in empirical micro, as well. They are just too easy to run these days to pass by.) Whether economists will find any use of the statistical learning/bioinformatics or spatio-temporal stuff that is extremely hot in modern statistics is an open call. | {
"source": [
"https://stats.stackexchange.com/questions/27662",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3488/"
]
} |
27,682 | What is the reason why we use natural logarithm (ln) rather than log to base 10 in specifying functions in econometrics? | In the context of linear regression in the social sciences, Gelman and Hill write[1]: We prefer natural logs (that is, logarithms base $e$) because, as
described above, coefficients on the natural-log scale are directly
interpretable as approximate proportional differences: with a
coefficient of 0.06, a difference of 1 in $x$ corresponds to an
approximate 6% difference in $y$, and so forth. [1] Andrew Gelman and Jennifer Hill (2007). Data Analysis using Regression and Multilevel/Hierarchical Models . Cambridge University Press: Cambridge; New York, pp. 60-61. | {
"source": [
"https://stats.stackexchange.com/questions/27682",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
]
} |
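The approximation quoted above is easy to verify numerically: a difference of 1 in $x$ multiplies $y$ by $e^{\beta}$, so the exact proportional change is $e^{\beta}-1$, which is close to $\beta$ only when $\beta$ is small. A quick R check with illustrative coefficient values:

exp(0.06) - 1   # about 0.062, close to the "6%" reading of a 0.06 coefficient
exp(0.25) - 1   # about 0.284, noticeably larger than 25%: the approximation degrades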
27,691 | I have two time-series: A proxy for the market risk premium (ERP; red line) The risk-free rate, proxied by a government bond (blue line) I want to test if the risk-free rate can explain the ERP. Hereby, I basically followed the advice of Tsay (2010, 3rd edition, p. 96): Financial Time Series: Fit the linear regression model and check serial correlations of the residuals. If the residual series is unit-root nonstationarity, take the first difference of both the dependent and explanatory variables. Doing the first step, I get the following results: Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 6.77019 0.25103 26.97 <2e-16 ***
Risk_Free_Rate -0.65320 0.04123 -15.84 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 As expected from the figure, the relation is negative and significant. However, the residuals are serially correlated: Therefore, I first difference both the dependent and explanatory variable. Here is what I get: Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.002077 0.016497 -0.126 0.9
Risk_Free_Rate -0.958267 0.053731 -17.834 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 And the ACF of the residuals looks like: This result looks great: First, the residuals are now uncorrelated. Second, the relation seems to be more negative now. Here are my questions (you probably wondered by now ;-) The first regression, I would have interpreted as (econometric problems aside) "if the riskfree rate rises by one percentage point, the ERP falls by 0.65 percentage points." Actually, after pondering about this for a while, I would interpret the second regression just the same (now resulting in a 0.96 percentage points fall though). Is this interpretation correct? It just feels weird that I transform my variables, but don't have to change my interpretation. If this, however, is correct, why do the results change? Is this just the result of econometric problems? If so, does anyone have an idea why my second regression seems to be even "better"? Normally, I always read that you can have spurious correlations that vanish after you do it correctly. Here, it seems the other way round. | Suppose that we have the model
$$\begin{equation*} y_t = \beta_0 + \beta_1 x_t + \beta_2 t + \epsilon_t. \end{equation*}$$
You say that these coefficients are easier to interpret. Let's subtract $y_{t-1}$ from the lefthand side and $\beta_0 + \beta_1 x_{t-1} + \beta_2 ({t-1}) + \epsilon_{t-1}$, which equals $y_{t-1}$, from the righthand side. We have
$$\begin{equation*} \Delta y_t = \beta_1 \Delta x_t + \beta_2 + \Delta \epsilon_t. \end{equation*}$$
The intercept in the difference equation is the time trend. And the coefficient on $\Delta x$ has the same interpretation as $\beta_1$ in the original model. If the errors were non-stationary such that
$$\begin{equation*} \epsilon_t = \sum_{s=0}^{t-1}{\nu_s}, \end{equation*}$$
where $\nu_s$ is white noise, then the differenced error is white noise. If the errors have a stationary AR(p) distribution, say, then the differenced error term would have a more complicated distribution and, notably, would retain serial correlation. Or if the original $\epsilon$ are already white noise (an AR(1) with a correlation coefficient of 0, if you like), then differencing induces serial correlation between the errors. For these reasons, it is important to only difference processes that are non-stationary due to unit roots and to use detrending for so-called trend-stationary ones. (A unit root causes the variance of a series to change and, in fact, to explode over time; the expected value of this series is constant, however. A trend-stationary process has the opposite properties.) | {
"source": [
"https://stats.stackexchange.com/questions/27691",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7964/"
]
} |
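The algebra in the answer above is easy to reproduce by simulation; the R sketch below uses made-up parameter values and a trend-stationary data-generating process, and shows which quantities the differenced regression estimates (the slope on $\Delta x$ targets $\beta_1$ and the intercept targets the trend coefficient $\beta_2$).

set.seed(1)
n <- 500
x <- rnorm(n)
t <- 1:n
y <- 2 + 0.5*x + 0.03*t + rnorm(n)   # beta0 = 2, beta1 = 0.5, beta2 = 0.03
fit.diff <- lm(diff(y) ~ diff(x))
coef(fit.diff)                       # intercept ~ 0.03 (the trend), slope ~ 0.5 (beta1)
# note: the original errors here are white noise, so the differenced errors are MA(1),
# illustrating the point that differencing a trend-stationary series induces serial
# correlation; detrending would be the preferred remedy in this case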
27,724 | I am actually reviewing a manuscript where the authors compare 5-6 logit regression models with AIC. However, some of the models have interaction terms without including the individual covariate terms. Does it ever make sense to do this? For example (not specific to logit models): M1: Y = X1 + X2 + X1*X2
M2: Y = X1 + X2
M3: Y = X1 + X1*X2 (missing X2)
M4: Y = X2 + X1*X2 (missing X1)
M5: Y = X1*X2 (missing X1 & X2) I've always been under the impression that if you have the interaction term X1*X2 you also need X1 + X2. Therefore, models 1 and 2 would be fine but models 3-5 would be problematic (even if AIC is lower). Is this correct? Is it a rule or more of a guideline? Does anyone have a good reference that explains the reasoning behind this? I just want to make sure I don't miscommunicate anything important in the review. | Most of the time this is a bad idea - the main reason is that it no longer makes the model invariant to location shifts. For example, suppose you have a single outcome $y_i$ and two predictors $x_i$ and $z_i$ and specify the model: $$ y_i = \beta_0 + \beta_1 x_{i} z_i + \varepsilon $$ If you were to center the predictors by their means, $x_i z_i$ becomes $$ (x_i - \overline{x})(z_i - \overline{z}) = x_i z_i - x_{i} \overline{z} - z_{i} \overline{x} + \overline{x} \overline{z}$$ So, you can see that the main effects have been reintroduced into the model. I've given a heuristic argument here, but this does present a practical issue. As noted in Faraway(2005) on page 114, an additive change in scale changes the model inference when the main effects are left out of the model, whereas this does not happen when the lower order terms are included. It is normally undesirable to have arbitrary things like a location shift cause a fundamental change in the statistical inference (and therefore the conclusions of your inquiry), as can happen when you include polynomial terms or interactions in a model without the lower order effects. Note: There may be special circumstances where you would only want to include the interaction, if the $x_i z_i$ has some particular substantive meaning or if you only observe the product and not the individual variables $x_i, z_i$. But, in that case, one may as well think of the predictor $a_i = x_i z_i$ and proceed with the model $$ y_i = \alpha_0 + \alpha_1 a_i + \varepsilon_i $$ rather than thinking of $a_i$ as an interaction term. | {
"source": [
"https://stats.stackexchange.com/questions/27724",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8289/"
]
} |
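The location-shift point in the answer above can be demonstrated directly in R; the simulation below uses arbitrary coefficients, shifts $x$ by a constant, and compares the interaction-only model with the full model.

set.seed(1)
n <- 500
x <- rnorm(n); z <- rnorm(n)
y <- 1 + x + z + 2*x*z + rnorm(n)
coef(lm(y ~ I(x*z)))            # interaction-only model
coef(lm(y ~ I((x + 10)*z)))     # same model after shifting x: the coefficient changes
coef(lm(y ~ x*z))               # full model with main effects and interaction
coef(lm(y ~ I(x + 10)*z))       # after the shift the interaction coefficient is unchanged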
27,730 | I've been using $K$-fold cross-validation a few times now to evaluate the performance of some learning algorithms, but I've always been puzzled as to how I should choose the value of $K$. I've often seen and used a value of $K = 10$, but this seems totally arbitrary to me, and I now just use $10$ by habit instead of thinking it over. To me it seems that you're getting better granularity as you increase the value of $K$, so ideally you should make your $K$ very large, but there is also a risk of bias. I'd like to know what the value of $K$ should depend on, and how I should be thinking about this when I evaluate my algorithm. Does it change anything if I use the stratified version of cross-validation? | The choice of $k = 10$ is somewhat arbitrary. Here's how I decide $k$: first of all, in order to lower the variance of the CV result, you can and should repeat/iterate the CV with new random splits. This makes the argument of high $k$ => more computation time largely irrelevant, as you want to calculate many models anyway. I tend to think mainly of the total number of models calculated (in analogy to bootstrapping). So I may decide on 100 x 10-fold CV or 200 x 5-fold CV. @ogrisel already explained that usually a large $k$ means less (pessimistic) bias. (Some exceptions are known, particularly for $k = n$, i.e. leave-one-out.) If possible, I use a $k$ that is a divisor of the sample size, or of the size of the groups in the sample that should be stratified. Too large a $k$ means that only a low number of sample combinations is possible, thus limiting the number of iterations that are different. For leave-one-out: $\binom{n}{1} = n = k$ different model/test sample combinations are possible. Iterations don't make sense at all. E.g. with $n = 20$ and $k = 10$: $\binom{20}{2} = 190 = 19 \cdot k$ different model/test sample combinations exist. You may consider going through all possible combinations here, as 19 iterations of $k$-fold CV or a total of 190 models is not very much. These thoughts have more weight with small sample sizes. With more samples available, $k$ doesn't matter very much. The possible number of combinations soon becomes large enough that the (say) 100 iterations of 10-fold CV do not run a great risk of being duplicates. Also, more training samples usually means that you are at a flatter part of the learning curve, so the difference between the surrogate models and the "real" model trained on all $n$ samples becomes negligible. | {
"source": [
"https://stats.stackexchange.com/questions/27730",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10683/"
]
} |
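A minimal R sketch of the repeated/iterated k-fold splits recommended above (the values of n, k and the number of repetitions are placeholders to adjust to your own data):

n <- 150; k <- 10; n.rep <- 100
set.seed(1)
folds <- replicate(n.rep, sample(rep_len(1:k, n)))
# folds is an n x n.rep matrix; column r gives the fold label of each observation
# in repetition r, i.e. one complete k-fold split per column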
27,750 | I have recently been reading a lot on this site (@Aniko, @Dikran Marsupial, @Erik) and elsewhere about the problem of overfitting occurring with cross validation (Smialowski et al. 2010 Bioinformatics; Hastie, Elements of Statistical Learning).
The suggestion is that any supervised feature selection (using correlation with class labels) performed outside of the model performance estimation using cross validation (or other model estimating method such as bootstrapping) may result in overfitting. This seems unintuitive to me - surely if you select a feature set and then evaluate your model using only the selected features using cross validation, then you are getting an unbiased estimate of generalized model performance on those features (this assumes the sample under study is representative of the population)? With this procedure one cannot of course claim an optimal feature set, but can one report the performance of the selected feature set on unseen data as valid? I accept that selecting features based on the entire data set may result in some data leakage between test and train sets. But if the feature set is static after initial selection, and no other tuning is being done, surely it is valid to report the cross-validated performance metrics? In my case I have 56 features and 259 cases and so #cases > #features. The features are derived from sensor data. Apologies if my question seems derivative, but this seems an important point to clarify. Edit: On implementing feature selection within cross validation on the data set detailed above (thanks to the answers below), I can confirm that selecting features prior to cross-validation in this data set introduced a significant bias. This bias/overfitting was greatest when doing so for a 3-class formulation, compared to a 2-class formulation.
I think the fact that I used stepwise regression for feature selection increased this overfitting; for comparison purposes, on a different but related data set I compared a sequential forward feature selection routine performed prior to cross-validation against results I had previously obtained with feature selection within CV. The results between both methods did not differ dramatically. This may mean that stepwise regression is more prone to overfitting than sequential FS or may be a quirk of this data set. | If you perform feature selection on all of the data and then cross-validate, then the test data in each fold of the cross-validation procedure was also used to choose the features and this is what biases the performance analysis. Consider this example. We generate some target data by flipping a coin 10 times and recording whether it comes down as heads or tails. Next, we generate 20 features by flipping the coin 10 times for each feature and write down what we get. We then perform feature selection by picking the feature that matches the target data as closely as possible and use that as our prediction. If we then cross-validate, we will get an expected error rate slightly lower than 0.5. This is because we have chosen the feature on the basis of a correlation over both the training set and the test set in every fold of the cross-validation procedure. However, the true error rate is going to be 0.5 as the target data is simply random. If you perform feature selection independently within each fold of the cross-validation, the expected value of the error rate is 0.5 (which is correct). The key idea is that cross-validation is a way of estimating the generalization performance of a process for building a model, so you need to repeat the whole process in each fold. Otherwise, you will end up with a biased estimate, or an under-estimate of the variance of the estimate (or both). HTH Here is some MATLAB code that performs a Monte-Carlo simulation of this setup, with 56 features and 259 cases, to match your example, the output it gives is: Biased estimator: erate = 0.429210 (0.397683 - 0.451737) Unbiased estimator: erate = 0.499689 (0.397683 - 0.590734) The biased estimator is the one where feature selection is performed prior to cross-validation, the unbiased estimator is the one where feature selection is performed independently in each fold of the cross-validation. This suggests that the bias can be quite severe in this case, depending on the nature of the learning task. NF = 56;
NC = 259;
NFOLD = 10;
NMC = 1e+4;
% perform Monte-Carlo simulation of biased estimator
erate = zeros(NMC,1);
for i=1:NMC
y = randn(NC,1) >= 0;
x = randn(NC,NF) >= 0;
% perform feature selection
err = mean(repmat(y,1,NF) ~= x);
[err,idx] = min(err);
% perform cross-validation
partition = mod(1:NC, NFOLD)+1;
y_xval = zeros(size(y));
for j=1:NFOLD
y_xval(partition==j) = x(partition==j,idx(1));
end
erate(i) = mean(y_xval ~= y);
plot(erate);
drawnow;
end
erate = sort(erate);
fprintf(1, ' Biased estimator: erate = %f (%f - %f)\n', mean(erate), erate(ceil(0.025*end)), erate(floor(0.975*end)));
% perform Monte-Carlo simulation of unbiased estimator
erate = zeros(NMC,1);
for i=1:NMC
y = randn(NC,1) >= 0;
x = randn(NC,NF) >= 0;
% perform cross-validation
partition = mod(1:NC, NFOLD)+1;
y_xval = zeros(size(y));
for j=1:NFOLD
% perform feature selection
err = mean(repmat(y(partition~=j),1,NF) ~= x(partition~=j,:));
[err,idx] = min(err);
y_xval(partition==j) = x(partition==j,idx(1));
end
erate(i) = mean(y_xval ~= y);
plot(erate);
drawnow;
end
erate = sort(erate);
fprintf(1, 'Unbiased estimator: erate = %f (%f - %f)\n', mean(erate), erate(ceil(0.025*end)), erate(floor(0.975*end))); | {
"source": [
"https://stats.stackexchange.com/questions/27750",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11030/"
]
} |
27,951 | I've read that using log scales when charting/graphing is appropriate in certain circumstances, like the y-axis in a time series chart. However, I've not been able to find a definitive explanation as to why that's the case, or when else it would be appropriate.
Please keep in mind, I'm not a statistician, so I may be missing the point altogether, and if that's the case I'd appreciate direction to remedial resources. | This is a very interesting question, and one that too few people think about. There are several different ways that a log scale can be appropriate. The first and most well known is that mentioned by Macro in his comment: log scales allow a large range to be displayed without small values being compressed down into the bottom of the graph. A different reason for preferring a log scaling is in circumstances where the data are more naturally expressed geometrically. An example is when the data represent concentration of a biological mediator. Concentrations cannot be negative and the variability almost invariably scales with the mean (i.e. there is heteroscedastic variance). Using a logarithmic scale or, equivalently, using the log concentration as the primary measure both 'fixes' the uneven variability and gives a scale that is unbounded on both ends. The concentrations are probably log-normally distributed and so a log scaling gives us a very convenient result that is arguably 'natural'. In pharmacology we use a logarithmic scale for drug concentrations far more often than not, and in many cases linear scales are only the product of non-pharmacologists dabbling with drugs ;-) Another good reason for a log scale, probably the one that you are interested in for time-series data, comes from the ability of a log scale to make fractional changes equivalent. Imagine a display of the long-term performance of your retirement investments. It (should) be growing roughly exponentially because tomorrow's interest depends on today's investment (roughly speaking). Thus even if the performance in percentage terms has been fairly constant, a graph of the funds will appear to have grown most rapidly at the right hand end. With a logarithmic scale a constant percentage change is seen as a constant vertical distance, so a constant growth rate is seen as a straight line. That is often a substantial advantage. Another slightly more esoteric reason for choosing a log scale comes in circumstances where values can be reasonably expressed either as x or 1/x. An example from my own research is vascular resistance, which can also be sensibly expressed as the reciprocal, vascular conductance. (It is also sensible in some circumstances to think of the diameter of the blood vessels, which scale as a power of resistance or conductance.) Neither of those measures has any more reality than the other and both can be found in research papers. If they are scaled logarithmically then they are simply the negative of each other and the choice of one or the other makes no substantive difference. (Vascular diameter will differ from resistance and conductance by a constant multiplier when they are all log scaled.) | {
"source": [
"https://stats.stackexchange.com/questions/27951",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10217/"
]
} |
27,958 | I have written a program which generates random data. If the program is working correctly, that data should follow a specific, known probability distribution. I would like to run the program, do some calculations on the result, and come up with a p-value. Before anybody else says it: I understand that hypothesis testing cannot detect when the program is operating correctly. It can only detect when it is operating incorrectly in a specific way. (And even then, the test "should" fail X% of the time, depending on what significance level you choose...) So, I am trying to gain an understanding of what tools might be appropriate. In particular: I can generate as much random data as I want. All I have to do is leave the program running long enough. So I'm not limited to any specific sample size. I'm interested in techniques which produce a p-value. So staring at a graph and saying "yes, that looks kinda linear" is not an interesting option. Unless there's some way of putting a hard number on the "wonkyness" of a graph. ;-) What I know so far: I've seen three main sorts of test mentioned which sound like they might be applicable: [Pearson] chi-squared test, Kolmogorov-Smirnov test and Anderson-Darling test. It appears that a chi-squared test is appropriate for discrete distributions, while the other two are more appropriate for continuous distributions. (?) Various sources hint that the AD test is "better" than the KS test, but fail to go into any further detail. Ultimately, all of these tests presumably detect "different ways" of deviating from the specified null distribution. But I don't really know what the differences are yet... In summary, I'm looking for some kind of general description of where each type of test is most applicable, and what sorts of problems it detects best. | Here is a general description of how the 3 methods mentioned work. The Chi-Squared method works by comparing the number of observations in a bin to the number expected to be in the bin based on the distribution. For discrete distributions the bins are usually the discrete possibilities or combinations of those. For continuous distributions you can choose cut points to create the bins. Many functions that implement this will automatically create the bins, but you should be able to create your own bins if you want to compare in specific areas. The disadvantage of this method is that differences between the theoretical distribution and the empirical data that still put the values in the same bin will not be detected. An example would be rounding: if theoretically the numbers between 2 and 3 should be spread throughout the range (we expect to see values like 2.34296), but in practice all those values are rounded to 2 or 3 (we don't even see a 2.5) and our bin includes the range from 2 to 3 inclusive, then the count in the bin will be similar to the theoretical prediction (this can be good or bad). If you want to detect this rounding you can just manually choose the bins to capture this. The KS test statistic is the maximum distance between the 2 Cumulative Distribution Functions being compared (often a theoretical and an empirical). If the 2 probability distributions only have 1 intersection point then 1 minus the maximum distance is the area of overlap between the 2 probability distributions (this helps some people visualize what is being measured).
Think of plotting on the same plot the theoretical distribution function and the EDF, then measuring the distance between the 2 "curves"; the largest difference is the test statistic, and it is compared against the distribution of values for this when the null is true. This captures differences in shape of the distribution, or 1 distribution shifted or stretched compared to the other. It does not have a lot of power based on single outliers (if you take the maximum or minimum in the data and send it to Infinity or Negative Infinity then the maximum effect it will have on the test stat is $\frac{1}{n}$). This test depends on you knowing the parameters of the reference distribution rather than estimating them from the data (your situation seems fine here). If you estimate the parameters from the same data then you can still get a valid test by comparing to your own simulations rather than the standard reference distribution. The Anderson-Darling test also uses the difference between the CDF curves like the KS test, but rather than using the maximum difference it uses a function of the total area between the 2 curves (it actually squares the differences, weights them so the tails have more influence, then integrates over the domain of the distributions). This gives more weight to outliers than KS and also gives more weight if there are several small differences (compared to 1 big difference that KS would emphasize). This may end up overpowering the test to find differences that you would consider unimportant (mild rounding, etc.). Like the KS test this assumes that you did not estimate parameters from the data. Here is a graph to show the general ideas of the last 2, based on this R code: set.seed(1)
tmp <- rnorm(25)
edf <- approxfun( sort(tmp), (0:24)/25, method='constant',
yleft=0, yright=1, f=1 )
par(mfrow=c(3,1), mar=c(4,4,0,0)+.1)
curve( edf, from=-3, to=3, n=1000, col='green' )
curve( pnorm, from=-3, to=3, col='blue', add=TRUE)
tmp.x <- seq(-3, 3, length=1000)
ediff <- function(x) pnorm(x) - edf(x)
m.x <- tmp.x[ which.max( abs( ediff(tmp.x) ) ) ]
ediff( m.x ) # KS stat
segments( m.x, edf(m.x), m.x, pnorm(m.x), col='red' ) # KS stat
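# For comparison, R's built-in KS test of the same sample against N(0,1); its statistic D
# is this maximum vertical distance between the two curves (up to the step-function
# convention used for edf above).
ks.test(tmp, "pnorm")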
curve( ediff, from=-3, to=3, n=1000 )
abline(h=0, col='lightgrey')
ediff2 <- function(x) (pnorm(x) - edf(x))^2/( pnorm(x)*(1-pnorm(x)) )*dnorm(x)
curve( ediff2, from=-3, to=3, n=1000 )
abline(h=0) The top graph shows an EDF of a sample from a standard normal compared to the CDF of the standard normal with a line showing the KS stat. The middle graph then shows the difference in the 2 curves (you can see where the KS stat occurs). The bottom is then the squared, weighted difference, the AD test is based on the area under this curve (assuming I got everything correct). Other tests look at the correlation in a qqplot, look at the slope in the qqplot, compare the mean, var, and other stats based on the moments. | {
"source": [
"https://stats.stackexchange.com/questions/27958",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10914/"
]
} |
28,029 | I'm new to data mining and I'm trying to train a decision tree against a data set which is highly unbalanced. However, I'm having problems with poor predictive accuracy. The data consists of students studying courses, and the class variable is the course status which has two values - Withdrawn or Current. Age Ethnicity Gender Course ... Course Status In the data set there are many more instances which are Current than Withdrawn; Withdrawn instances account for only 2% of the total instances. I want to be able to build a model which can predict the probability that a person will withdraw in the future. However when testing the model against the training data, the accuracy of the model is terrible. I've had similar issues with decision trees where the data is dominated by one or two classes. What approach can I use to solve this problem and build a more accurate classifier? | This is an interesting and very frequent problem in classification - not just in decision trees but in virtually all classification algorithms. As you found empirically, a training set consisting of different numbers of representatives from either class may result in a classifier that is biased towards the majority class. When applied to a test set that is similarly imbalanced, this classifier yields an optimistic accuracy estimate. In an extreme case, the classifier might assign every single test case to the majority class, thereby achieving an accuracy equal to the proportion of test cases belonging to the majority class. This is a well-known phenomenon in binary classification (and it extends naturally to multi-class settings). This is an important issue, because an imbalanced dataset may lead to inflated performance estimates. This in turn may lead to false conclusions about the significance with which the algorithm has performed better than chance. The machine-learning literature on this topic has essentially developed three solution strategies. You can restore balance on the training set by undersampling the large class or by oversampling the small class, to prevent bias from arising in the first place. Alternatively, you can modify the costs of misclassification, as noted in a previous response, again to prevent bias. An additional safeguard is to replace the accuracy by the so-called balanced accuracy. It is defined as the arithmetic mean of the class-specific accuracies, $\phi := \frac{1}{2}\left(\pi^+ + \pi^-\right),$ where $\pi^+$ and $\pi^-$ represent the accuracy obtained on positive and negative examples, respectively. If the classifier performs equally well on either class, this term reduces to the conventional accuracy (i.e., the number of correct predictions divided by the total number of predictions). In contrast, if the conventional accuracy is above chance only because the classifier takes advantage of an imbalanced test set, then the balanced accuracy, as appropriate, will drop to chance (see sketch below). I would recommend considering at least two of the above approaches in conjunction. For example, you could oversample your minority class to prevent your classifier from acquiring a bias in favour of the majority class. Following this, when evaluating the performance of your classifier, you could replace the accuracy by the balanced accuracy. The two approaches are complementary. When applied together, they should help you both prevent your original problem and avoid false conclusions following from it.
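To make the contrast concrete, here is a small R sketch along the lines of the question (the class labels are made up, not the poster's data): a classifier that always predicts the majority class reaches 98% conventional accuracy on a 98:2 test set, yet its balanced accuracy is only 0.5, i.e. chance level.
truth <- factor(c(rep("Current", 98), rep("Withdrawn", 2)))
pred  <- factor(rep("Current", 100), levels = levels(truth))   # always predicts the majority class
acc   <- mean(pred == truth)                                   # conventional accuracy: 0.98
sens  <- mean(pred[truth == "Withdrawn"] == "Withdrawn")       # accuracy on the minority class: 0
spec  <- mean(pred[truth == "Current"]  == "Current")          # accuracy on the majority class: 1
c(accuracy = acc, balanced_accuracy = (sens + spec) / 2)       # balanced accuracy: 0.5, i.e. chance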
I would be happy to post some additional references to the literature if you would like to follow up on this. | {
"source": [
"https://stats.stackexchange.com/questions/28029",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10043/"
]
} |
28,406 | In document classification, is cosine similarity considered a classification or a clustering technique? But you need training data with the cosine similarity for creation of the centroid right? | No. Cosine similarity can be computed amongst arbitrary vectors. It is a similarity measure (which can be converted to a distance measure, and then be used in any distance based classifier, such as nearest neighbor classification.) $$\cos \varphi = \frac{a\cdot b}{\|a\| \, \|b\|} $$ Where $a$ and $b$ are whatever vectors you want to compare. If you want to do NN classification, you would use $a$ as your new document, and $b$ as your known sample documents, then classify the new document based on the most similar sample(s). Alternatively, you could compute a centroid for a whole class, but that would assume that the class is very consistent in itself, and that the centroid is a reasonable estimator for the cosine distances (I'm not sure about this!). NN classification is much easier for you, and less dependent on your corpus to be very consistent in itself. Say you have the topic "sports". Some documents will talk about Soccer, others about Basketball, others about American Football. The centroid will probably be quite meaningless. Keeping a number of good sample documents for NN classification will likely work much better. This happens commonly when one class consists of multiple clusters. It's an often misunderstood thing, classes do not necessarily equal clusters. Multiple classes may be one big cluster when they are hard to discern in the data. And on the other hand a class may well have multiple clusters if it is not very uniform. Clustering can work well for finding good sample documents from your training data, but there are other more appropriate methods. In a supervised context, supervised methods will always perform better than unsupervised. | {
"source": [
"https://stats.stackexchange.com/questions/28406",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11272/"
]
} |
28,431 | I am writing my PhD thesis and I've realized that I rely excessively on box plots to compare distributions. Which other alternatives do you like for achieving this task? I'd also like to ask whether you know any other resource, like the R gallery, from which I can get different ideas on data visualization. | I am going to elaborate my comment, as suggested by @gung. I will also include the violin plot suggested by @Alexander, for completeness. Some of these tools can be used for comparing more than two samples. # Required packages
library(sn)
library(aplpack)
library(vioplot)
library(moments)
library(beanplot)
# Simulate from a normal and skew-normal distributions
x = rnorm(250,0,1)
y = rsn(250,0,1,5)
# Separated histograms
hist(x)
hist(y)
# Combined histograms
hist(x, xlim=c(-4,4),ylim=c(0,1), col="red",probability=T)
hist(y, add=T, col="blue",probability=T)
# Boxplots
boxplot(x,y)
# Separated smoothed densities
plot(density(x))
plot(density(y))
# Combined smoothed densities
plot(density(x),type="l",col="red",ylim=c(0,1),xlim=c(-4,4))
points(density(y),type="l",col="blue")
# Stem-and-leaf plots
stem(x)
stem(y)
# Back-to-back stem-and-leaf plots
stem.leaf.backback(x,y)
# Violin plot (suggested by Alexander)
vioplot(x,y)
# QQ-plot
qqplot(x,y,xlim=c(-4,4),ylim=c(-4,4))
abline(0, 1, col="red")   # reference line y = x for comparing the two samples (qqline is meant for a single-sample normal QQ-plot)
# Kolmogorov-Smirnov test
ks.test(x,y)
# six-numbers summary
summary(x)
summary(y)
# moment-based summary
c(mean(x),var(x),skewness(x),kurtosis(x))
c(mean(y),var(y),skewness(y),kurtosis(y))
# Empirical ROC curve
xx = c(-Inf, sort(unique(c(x,y))), Inf)
sens = sapply(xx, function(t){mean(x >= t)})
spec = sapply(xx, function(t){mean(y < t)})
plot(0, 0, xlim = c(0, 1), ylim = c(0, 1), type = 'l')
segments(0, 0, 1, 1, col = 1)
lines(1 - spec, sens, type = 'l', col = 2, lwd = 1)
# Beanplots
beanplot(x,y)
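# Jittered strip chart (a further base-R option, shown here as an added sketch)
stripchart(list(x = x, y = y), method = "jitter", vertical = TRUE, pch = 1)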
# Empirical CDF
plot(ecdf(x))
lines(ecdf(y)) I hope this helps. | {
"source": [
"https://stats.stackexchange.com/questions/28431",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8112/"
]
} |
28,437 | Before it is pointed, I am aware that a very similar question was already asked . Still, I am in doubt regarding the concept. More specifically, it is mentioned by the most voted answer that: In terms of a simple rule of thumb , I'd suggest that you: Run factor analysis if you assume or wish to test a theoretical model of latent factors causing observed variables. Run principal components analysis If you want to simply reduce your correlated observed variables to a smaller set of important independent composite variables. Question 1: I am having difficulties on understanding based on the results I obtained from R where exactly I am inputing my theoretical model of latent factors . I am using the functions from statsmethods . On both factanal() and princomp() the inputs were the same: A table where each row represented one data point and the columns consisted of different attributes I was interested on reducing. Thus, this add to my confusion on where is this pre assumed model play its role. I noticed that for factor analysis function I used parallel analysis also suggested by the site using the nScree() function to determine the number of factors and I specified if I wanted a varimax (orthogonal) or promax (oblique) rotation. Is that what is it mean by the model? Being able to choose the amount of factors and the type of rotation? The results being provided as visual graphs for both PCA and EFA also doesn't seem to highlight this difference which adds to my confusion. Where does this distinction can be observed on them? PCA EFA Question 2: -- Answered I bought a book to study about this from Richard L. Gorsuch. On this book there is something that the author caught attention on the difference between PCA (Principal Component Analysis) and EFA (Exploratory Factor Analysis): It is mentioned that PCA is for population while EFA is for sample . Is that true? I didn't see that being mentioned on any discussion I read so far. Is it irrelevant? Question 3: I noticed that all those methods seems to impose the normal distribution constraint. I also read that for larger sets this constraint can be ignored. Is that true or PCA, EFA and CFA are sensible to distribution constraint violations? Question 4: Where from the results of PCA and EFA should I note that one is talking about latent factors (EFA) and the other is just clustering on components (factors) the variables? The outputs from R looks the same to me. Is it just the way I perceive what the factors being shown as output? I noted that both show me the table where I can see which I can observe which of my variables are expressed the most of my factors. What is the difference on the interpretation I should have on which variable belongs to which factor in respect to PCA and EFA? EFA is saying those with higher expression seems to be more explained by that latent factor while PCA is trying to say that factor is holding those variables from what is it observed? Question 5 Finally the last question is regarding CFA (Confirmatory Factor Analysis). On the same function website the following image is being shown: I read that CFA is usually followed after EFA for hypothesis testing. In that sense, EFA tells you which are the latent factors (which are the output factors) and then you use CFA assuming those factors you observed from EFA for hypothesis testing? Question 6 For EFA one of the available rotations on the literature is direct oblimium. I heard that it can accounts for both promax and varimax so 'it takes the best of two words'. 
Is that true? I am also trying to find a function that employs them on R, since the one suggested on the site does not. I would be happy to get any suggestion on this one. I hope it is noted that this question is way more specific on the doubts regarding EFA and PCA and also adds to CFA so not to get closed for being repeated on the subject. If at least one of the questions is answered I am more than happy too as to clarify the confusion in my head. Thank you. | I am going to elaborate my comment, as suggested by @gung. I will also include the violin plot suggested by @Alexander, for completeness. Some of these tools can be used for comparing more than two samples. # Required packages
library(sn)
library(aplpack)
library(vioplot)
library(moments)
library(beanplot)
# Simulate from a normal and skew-normal distributions
x = rnorm(250,0,1)
y = rsn(250,0,1,5)
# Separated histograms
hist(x)
hist(y)
# Combined histograms
hist(x, xlim=c(-4,4),ylim=c(0,1), col="red",probability=T)
hist(y, add=T, col="blue",probability=T)
# Boxplots
boxplot(x,y)
# Separated smoothed densities
plot(density(x))
plot(density(y))
# Combined smoothed densities
plot(density(x),type="l",col="red",ylim=c(0,1),xlim=c(-4,4))
points(density(y),type="l",col="blue")
# Stem-and-leaf plots
stem(x)
stem(y)
# Back-to-back stem-and-leaf plots
stem.leaf.backback(x,y)
# Violin plot (suggested by Alexander)
vioplot(x,y)
# QQ-plot
qqplot(x,y,xlim=c(-4,4),ylim=c(-4,4))
qqline(x,y,col="red")
# Kolmogorov-Smirnov test
ks.test(x,y)
# six-numbers summary
summary(x)
summary(y)
# moment-based summary
c(mean(x),var(x),skewness(x),kurtosis(x))
c(mean(y),var(y),skewness(y),kurtosis(y))
# Empirical ROC curve
xx = c(-Inf, sort(unique(c(x,y))), Inf)
sens = sapply(xx, function(t){mean(x >= t)})
spec = sapply(xx, function(t){mean(y < t)})
plot(0, 0, xlim = c(0, 1), ylim = c(0, 1), type = 'l')
segments(0, 0, 1, 1, col = 1)
lines(1 - spec, sens, type = 'l', col = 2, lwd = 1)
# Beanplots
beanplot(x,y)
# Empirical CDF
plot(ecdf(x))
lines(ecdf(y)) I hope this helps. | {
"source": [
"https://stats.stackexchange.com/questions/28437",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9887/"
]
} |
28,449 | I've written some code to calculate Poisson confidence limits (a) using Chi-squared, and (b) from first principles, using Poisson's probability mass function equation. However, the two sets of results don't agree. For example, for Lambda=10 and 95% confidence limits, I get: 95%:
Chi squared [4.80, 18.39]
Exact [3, 17] The discrepancy is worse for wider confidence limits: 3 sigma:
Chi squared [3.08, 23.64]
Exact [1, 21] Note that my 'exact' result shows the first k outside the confidence limits (in other words, the range [4,16] is completely inside the 95% range). My 95% Chi squared result above agrees with several online Poisson limits calculators. Statpages , for example, also gives [4.80, 18.39]. However, the exact result also appears to be correct. For 95%, the online calculator at Stattrek appears to give the same results as my exact [3,17]. More precisely, here are cumulative probabilities for lambda=10 taken from a table here : 0 - 2 events: negligible
3 1.0%
4 2.9%
...
15 95.1%
16 97.3%
17 98.6% So, the 95% confidence limits are for 4 to 16 events, inclusive, which agrees with my program output, which says that <= 3 events, or >= 17 events, are outside the 95% limits. Have I got this wrong somewhere? Is it just that Lambda=10 is too small for the chi-squared method to be exact? If I increase Lambda to 100 I get: 95%:
Chi squared [81.36, 121.63]
Exact [80, 120]
3 sigma:
Chi squared [72.65, 133.83]
Exact [70, 131] It essentially makes no difference. I can live with the fact that the inexact result is continuous, but not with the inaccuracy. EDIT Thanks for the comments, everyone. As I understand it, the basic answer is that they both provide confidence limits, but they're different, and I shouldn't expect them to be the same, and should just live with it - correct? For background, this is for analysing healthcare providers, and finding out if any differ significantly from the average. The important thing here (for me, anyway) is not to point the finger at somebody and say that they're outside the 2 or 3 SD limits, when another analysis could show that they're actually inside the limits. For the same reason, I don't care that a discrete method doesn't give me exact 95% coverage - I just need to positively identify outliers. My own background isn't stats, but I do understand the exact method, and I'm happy that it gives the "right" answer (notwithstanding the fact that the processes aren't really appropriate for Poisson). However, I don't understand the Poisson/Chi-squared transformation, and I'm not happy with it for this application, because it 'incorrectly' adds outliers at the low end of the range (not to mention missing 'real' outliers at the top). However, it is universally used for exactly this application. Would it be fair for me to say that the exact method is better for this application, and the approximation is simply that, and it is incorrect? | I am going to elaborate my comment, as suggested by @gung. I will also include the violin plot suggested by @Alexander, for completeness. Some of these tools can be used for comparing more than two samples. # Required packages
library(sn)
library(aplpack)
library(vioplot)
library(moments)
library(beanplot)
# Simulate from a normal and skew-normal distributions
x = rnorm(250,0,1)
y = rsn(250,0,1,5)
# Separated histograms
hist(x)
hist(y)
# Combined histograms
hist(x, xlim=c(-4,4),ylim=c(0,1), col="red",probability=T)
hist(y, add=T, col="blue",probability=T)
# Boxplots
boxplot(x,y)
# Separated smoothed densities
plot(density(x))
plot(density(y))
# Combined smoothed densities
plot(density(x),type="l",col="red",ylim=c(0,1),xlim=c(-4,4))
points(density(y),type="l",col="blue")
# Stem-and-leaf plots
stem(x)
stem(y)
# Back-to-back stem-and-leaf plots
stem.leaf.backback(x,y)
# Violin plot (suggested by Alexander)
vioplot(x,y)
# QQ-plot
qqplot(x,y,xlim=c(-4,4),ylim=c(-4,4))
qqline(x,y,col="red")
# Kolmogorov-Smirnov test
ks.test(x,y)
# six-numbers summary
summary(x)
summary(y)
# moment-based summary
c(mean(x),var(x),skewness(x),kurtosis(x))
c(mean(y),var(y),skewness(y),kurtosis(y))
# Empirical ROC curve
xx = c(-Inf, sort(unique(c(x,y))), Inf)
sens = sapply(xx, function(t){mean(x >= t)})
spec = sapply(xx, function(t){mean(y < t)})
plot(0, 0, xlim = c(0, 1), ylim = c(0, 1), type = 'l')
segments(0, 0, 1, 1, col = 1)
lines(1 - spec, sens, type = 'l', col = 2, lwd = 1)
# Beanplots
beanplot(x,y)
# Empirical CDF
plot(ecdf(x))
lines(ecdf(y)) I hope this helps. | {
"source": [
"https://stats.stackexchange.com/questions/28449",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11302/"
]
} |
28,474 | I have what is probably a simple question, but it is baffling me right now, so I am hoping you can help me out. I have a least squares regression model, with one independent variable and one dependent variable. The relationship is not significant. Now I add a second independent variable. Now the relationship between the first independent variable and the dependent variable becomes significant. How does this work? This is probably demonstrating some issue with my understanding, but I do not see how adding this second independent variable can make the first significant. | Although collinearity (of predictor variables) is a possible explanation, I would like to suggest it is not an illuminating explanation because we know collinearity is related to "common information" among the predictors, so there is nothing mysterious or counter-intuitive about the side effect of introducing a second correlated predictor into the model. Let us then consider the case of two predictors that are truly orthogonal: there is absolutely no collinearity among them. A remarkable change in significance can still happen. Designate the predictor variables $X_1$ and $X_2$ and let $Y$ name the predictand. The regression of $Y$ against $X_1$ will fail to be significant when the variation in $Y$ around its mean is not appreciably reduced when $X_1$ is used as the independent variable. When that variation is strongly associated with a second variable $X_2$, however, the situation changes. Recall that multiple regression of $Y$ against $X_1$ and $X_2$ is equivalent to a two-step procedure: (1) separately regress $Y$ and $X_1$ against $X_2$; (2) regress the $Y$ residuals against the $X_1$ residuals. The residuals from the first step have removed the effect of $X_2$. When $X_2$ is closely correlated with $Y$, this can expose a relatively small amount of variation that had previously been masked. If this variation is associated with $X_1$, we obtain a significant result. All this might perhaps be clarified with a concrete example. To begin, let's use R to generate two orthogonal independent variables along with some independent random error $\varepsilon$: n <- 32
set.seed(182)
u <-matrix(rnorm(2*n), ncol=2)
u0 <- cbind(u[,1] - mean(u[,1]), u[,2] - mean(u[,2]))
x <- svd(u0)$u
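crossprod(x)   # check: t(x) %*% x is the 2x2 identity up to rounding, so the columns of x are orthonormal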
eps <- rnorm(n) (The svd step assures the two columns of matrix x (representing $X_1$ and $X_2$) are orthogonal, ruling out collinearity as a possible explanation of any subsequent results.) Next, create $Y$ as a linear combination of the $X$'s and the error. I have adjusted the coefficients to produce the counter-intuitive behavior: y <- x %*% c(0.05, 1) + eps * 0.01 This is a realization of the model $Y \sim_{iid} N(0.05 X_1 + 1.00 X_2, 0.01^2)$ with $n=32$ cases. Look at the two regressions in question. First , regress $Y$ against $X_1$ only: > summary(lm(y ~ x[,1]))
...
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.002576 0.032423 -0.079 0.937
x[, 1] 0.068950 0.183410 0.376 0.710 The high p-value of 0.710 shows that $X_1$ is completely non-significant. Next , regress $Y$ against $X_1$ and $X_2$: > summary(lm(y ~ x))
...
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.002576 0.001678 -1.535 0.136
x1 0.068950 0.009490 7.265 5.32e-08 ***
x2 1.003276 0.009490 105.718 < 2e-16 *** Suddenly, in the presence of $X_2$, $X_1$ is strongly significant, as indicated by the near-zero p-values for both variables. We can visualize this behavior by means of a scatterplot matrix of the variables $X_1$, $X_2$, and $Y$ along with the residuals used in the two-step characterization of multiple regression above. Because $X_1$ and $X_2$ are orthogonal, the $X_1$ residuals will be the same as $X_1$ and therefore need not be redrawn. We will include the residuals of $Y$ against $X_2$ in the scatterplot matrix, giving this figure: lmy <- lm(y ~ x[,2])
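# A numerical check of the two-step characterization above: regressing the residuals of Y
# (after removing X2) on X1 reproduces the multiple-regression coefficient for X1,
# about 0.0690, because X1 is orthogonal to X2.
coef(summary(lm(residuals(lmy) ~ x[,1])))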
d <- data.frame(X1=x[,1], X2=x[,2], Y=y, RY=residuals(lmy))
plot(d) Here is a rendering of it (with a little prettification): This matrix of graphics has four rows and four columns, which I will count down from the top and from left to right. Notice: The $(X_1, X_2)$ scatterplot in the second row and first column confirms the orthogonality of these predictors: the least squares line is horizontal and correlation is zero. The $(X_1, Y)$ scatterplot in the third row and first column exhibits the slight but completely insignificant relationship reported by the first regression of $Y$ against $X_1$. (The correlation coefficient, $\rho$, is only $0.07$). The $(X_2, Y)$ scatterplot in the third row and second column shows the strong relationship between $Y$ and the second independent variable. (The correlation coefficient is $0.996$). The fourth row examines the relationships between the residuals of $Y$ (regressed against $X_2$) and other variables: The vertical scale shows that the residuals are (relatively) quite small: we couldn't easily see them in the scatterplot of $Y$ against $X_2$. The residuals are strongly correlated with $X_1$ ($\rho = 0.80$). The regression against $X_2$ has unmasked this previously hidden behavior. By construction, there is no remaining correlation between the residuals and $X_2$. There is little correlation between $Y$ and these residuals ($\rho = 0.09$). This shows how the residuals can behave entirely differently than $Y$ itself. That's how $X_1$ can suddenly be revealed as a significant contributor to the regression. Finally, it is worth remarking that the two estimates of the $X_1$ coefficient (both equal to $0.06895$, not far from the intended value of $0.05$) agree only because $X_1$ and $X_2$ are orthogonal. Except in designed experiments, it is rare for orthogonality to hold exactly. A departure from orthogonality usually causes the coefficient estimates to change. | {
"source": [
"https://stats.stackexchange.com/questions/28474",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11314/"
]
} |
28,715 | How would you test or check that sampling is IID (Independent and Identically Distributed)? Note that I do not mean Gaussian and Identically Distributed, just IID. An idea that comes to my mind is to repeatedly split the sample in two sub-samples of equal size, perform the Kolmogorov-Smirnov test and check that the distribution of the p-values is uniform. Any comment on that approach, and any suggestion is welcome. Clarification after starting bounty: I am looking for a general test that can be applied to non-time-series data. | What you conclude about whether data is IID comes from outside information, not the data itself. You as the scientist need to determine whether it is reasonable to assume the data are IID based on how the data was collected and other outside information. Consider some examples. Scenario 1: We generate a set of data independently from a single distribution that happens to be a mixture of 2 normals. Scenario 2: We first generate a gender variable from a binomial distribution, then within males and females we independently generate data from a normal distribution (but the normals are different for males and females), then we delete or lose the gender information. In scenario 1 the data is IID and in scenario 2 the data is clearly not identically distributed (different distributions for males and females), but the 2 distributions for the 2 scenarios are indistinguishable from the data; you have to know things about how the data was generated to determine the difference. Scenario 3: I take a simple random sample of people living in my city and administer a survey and analyse the results to make inferences about all people in the city. Scenario 4: I take a simple random sample of people living in my city and administer a survey and analyze the results to make inferences about all people in the country. In scenario 3 the subjects would be considered independent (simple random sample of the population of interest), but in scenario 4 they would not be considered independent because they were selected from a small subset of the population of interest and the geographic closeness would likely impose dependence. But the 2 datasets are identical; it is the way that we intend to use the data that determines if they are independent or dependent in this case. So there is no way to test using only the data to show that data is IID; plots and other diagnostics can show some types of non-IID behaviour, but lack of these does not guarantee that the data is IID. You can also compare to specific assumptions (IID normal is easier to disprove than just IID). Any test is still just a rule out, but failure to reject the tests never proves that it is IID. Decisions about whether you are willing to assume that IID conditions hold need to be made based on the science of how the data was collected, how it relates to other information, and how it will be used. Edits: Here is another set of examples for non-identical data. Scenario 5: the data is residuals from a regression where there is heteroscedasticity (the variances are not equal). Scenario 6: the data is from a mixture of normals with mean 0 but different variances. In scenario 5 we can clearly see that the residuals are not identically distributed if we plot the residuals against fitted values or other variables (predictors, or potential predictors), but the residuals themselves (without the outside info) would be indistinguishable from scenario 6. | {
"source": [
"https://stats.stackexchange.com/questions/28715",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10849/"
]
} |
28,730 | I have a (mixed) model in which one of my predictors should a priori only be quadratically related to the predictor (due to the experimental manipulation). Hence, I would like to add only the quadratic term to the model. Two things keep me from doing so: I think I read somehwere that you should always include the lower order polynomial when fitting higher order polynomials. I forgot where I found it and in the literature I looked at (e.g., Faraway, 2002; Fox, 2002) I cannot find a good explanation. When I add both, the linear and quadratic term, both are significant. When I add only one of them, they are not significant. However, a linear relation of predictor and data is not interpretable. The context of my question is specifically a mixed-model using lme4 , but I would like to get answers that could explain why it is or why it is not okay to inlcude a higher order polynomial and not the lower order polynomial. If necessary I can provide the data. | 1. Why include the linear term? It is illuminating to notice that a quadratic relationship can be written in two ways: $$y = a_0 + a_1 x + a_2 x^2 = a_2(x - b)^2 + c$$ (where, equating coefficients, we find $-2a_2 b = a_1$ and $a_2 b^2 + c = a_0$ ). The value $x=b$ corresponds to a global extremum of the relationship (geometrically, it locates the vertex of a parabola). If you do not include the linear term $a_1 x$ , the possibilities are reduced to $$y = a_0 + a_2 x^2 = a_2(x - 0)^2 + c$$ (where now, obviously, $c = a_0$ and it is assumed the model contains a constant term $a_0$ ). That is, you force $b=0$ . In light of this, question #1 comes down to whether you are certain that the global extremum must occur at $x=0$ . If you are, then you may safely omit the linear term $a_1 x$ . Otherwise, you must include it. 2. How to understand changes in significance as terms are included or excluded? This is discussed in great detail in a related thread at https://stats.stackexchange.com/a/28493 . In the present case, the significance of $a_2$ indicates there is curvature in the relationship and the significance of $a_1$ indicates that $b$ is nonzero: it sounds like you need to include both terms (as well as the constant, of course). | {
"source": [
"https://stats.stackexchange.com/questions/28730",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/442/"
]
} |
29,038 | I am thinking of building a model predicting a ratio $a/b$, where $a \le b$ and $a > 0$ and $b > 0$. So, the ratio would be between $0$ and $1$. I could use linear regression, although it doesn't naturally limit to 0..1. I have no reason to believe the relationship is linear, but of course it is often used anyway, as a simple first model. I could use a logistic regression, although it is normally used to predict the probability of a two-state outcome, not to predict a continuous value from the range 0..1. Knowing nothing more, would you use linear regression, logistic regression, or hidden option c ? | You should choose "hidden option c", where c is beta regression. This is a type of regression model that is appropriate when the response variable is distributed as Beta . You can think of it as analogous to a generalized linear model . It's exactly what you are looking for. There is a package in R called betareg which deals with this. I don't know if you use R , but even if you don't you could read the 'vignettes' anyway, they will give you general information about the topic in addition to how to implement it in R (which you wouldn't need in that case). Edit (much later): Let me make a quick clarification. I interpret the question as being about the ratio of two, positive, real values. If so, (and they are distributed as Gammas) that is a Beta distribution. However, if $a$ is a count of 'successes' out of a known total, $b$, of 'trials', then this would be a count proportion $a/b$, not a continuous proportion, and you should use binomial GLM (e.g., logistic regression). For how to do it in R, see e.g. How to do logistic regression in R when outcome is fractional (a ratio of two counts)? Another possibility is to use linear regression if the ratios can be transformed so as to meet the assumptions of a standard linear model, although I would not be optimistic about that actually working. | {
"source": [
"https://stats.stackexchange.com/questions/29038",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2849/"
]
} |
29,039 | I'm trying to make overlaid ROC curves to represent successive improvements in model performance when particular predictors are added one at a time to the model. I want one ROC curve for each of about 5 nested models (which I will define manually), all overlaid in one plot. For example: #outcome var
y = c(rep(0,50), rep(1, 50))
#predictors
x1 = y + rnorm(100, sd = 1)
x2 = y + rnorm(100, sd = 4)
#correlations of predictors with outcome
cor(x1, y)
cor(x2, y)
library(Epi)
ROC(form = y ~ x1, plot = "ROC")
ROC(form = y ~ x1 + x2, plot = "ROC") I'd want the two ROC curves on the same plot (and ideally without the distracting model info in the background). Any ggplot/graphics gurus willing to lend a hand? | You should choose "hidden option c", where c is beta regression. This is a type of regression model that is appropriate when the response variable is distributed as Beta . You can think of it as analogous to a generalized linear model . It's exactly what you are looking for. There is a package in R called betareg which deals with this. I don't know if you use R , but even if you don't you could read the 'vignettes' anyway, they will give you general information about the topic in addition to how to implement it in R (which you wouldn't need in that case). Edit (much later): Let me make a quick clarification. I interpret the question as being about the ratio of two, positive, real values. If so, (and they are distributed as Gammas) that is a Beta distribution. However, if $a$ is a count of 'successes' out of a known total, $b$, of 'trials', then this would be a count proportion $a/b$, not a continuous proportion, and you should use binomial GLM (e.g., logistic regression). For how to do it in R, see e.g. How to do logistic regression in R when outcome is fractional (a ratio of two counts)? Another possibility is to use linear regression if the ratios can be transformed so as to meet the assumptions of a standard linear model, although I would not be optimistic about that actually working. | {
"source": [
"https://stats.stackexchange.com/questions/29039",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11511/"
]
} |
29,044 | Ok, I have a logistic regression and have used the predict() function to develop a probability curve based on my estimates. ## LOGIT MODEL:
library(car)
mod1 = glm(factor(won) ~ as.numeric(bid), data=mydat, family=binomial(link="logit"))
## PROBABILITY CURVE:
all.x <- expand.grid(won=unique(won), bid=unique(bid))
y.hat.new <- predict(mod1, newdata=all.x, type="response")
plot(bid<-000:1000,predict(mod1,newdata=data.frame(bid<-c(000:1000)),type="response"), lwd=5, col="blue", type="l") This is great but I'm curious about plotting the confidence intervals for the probabilities. I've tried plot.ci() but had no luck. Can anyone point me to some ways to get this done, preferably with the car package or base R. | The code you used estimates a logistic regression model using the glm function. You didn't include data, so I'll just make some up. set.seed(1234)
mydat <- data.frame(
won=as.factor(sample(c(0, 1), 250, replace=TRUE)),
bid=runif(250, min=0, max=1000)
)
mod1 <- glm(won~bid, data=mydat, family=binomial(link="logit")) A logistic regression model models the relationship between a binary response variable and, in this case, one continuous predictor. The result is a logit-transformed probability as a linear relation to the predictor. In your case, the outcome is a binary response corresponding to winning or not winning at gambling and it is being predicted by the value of the wager. The coefficients from mod1 are given in logged odds (which are difficult to interpret), according to: $$\text{logit}(p)=\log\left(\frac{p}{(1-p)}\right)=\beta_{0}+\beta_{1}x_{1}$$ To convert logged odds to probabilities, we can translate the above to $$p=\frac{\exp(\beta_{0}+\beta_{1}x_{1})}{(1+\exp(\beta_{0}+\beta_{1}x_{1}))}$$ You can use this information to set up the plot. First, you need a range of the predictor variable: plotdat <- data.frame(bid=(0:1000)) Then using predict , you can obtain predictions based on your model preddat <- predict(mod1, newdata=plotdat, se.fit=TRUE) Note that the fitted values can also be obtained via mod1$fitted By specifying se.fit=TRUE , you also get the standard error associated with each fitted value. The resulting data.frame is a matrix with the following components: the fitted predictions ( fit ), the estimated standard errors ( se.fit ), and a scalar giving the square root of the dispersion used to compute the standard errors ( residual.scale ). In the case of a binomial logit, the value will be 1 (which you can see by entering preddat$residual.scale in R ). If you want to see an example of what you've calculated so far, you can type head(data.frame(preddat)) . The next step is to set up the plot. I like to set up a blank plotting area with the parameters first: with(mydat, plot(bid, won, type="n",
ylim=c(0, 1), ylab="Probability of winning", xlab="Bid")) Now you can see where it is important to know how to calculate the fitted probabilities. You can draw the line corresponding to the fitted probabilities following the second formula above. Using the preddat data.frame you can convert the fitted values to probabilities and use that to plot a line against the values of your predictor variable. with(preddat, lines(0:1000, exp(fit)/(1+exp(fit)), col="blue")) Finally, to answer your question, the confidence intervals can be added to the plot by calculating the probability for the fitted values +/- 1.96 times the standard error: with(preddat, lines(0:1000, exp(fit+1.96*se.fit)/(1+exp(fit+1.96*se.fit)), lty=2))
with(preddat, lines(0:1000, exp(fit-1.96*se.fit)/(1+exp(fit-1.96*se.fit)), lty=2)) The resulting plot (from the randomly generated data) should look something like this: For expediency's sake, here's all the code in one chunk: set.seed(1234)
mydat <- data.frame(
won=as.factor(sample(c(0, 1), 250, replace=TRUE)),
bid=runif(250, min=0, max=1000)
)
mod1 <- glm(won~bid, data=mydat, family=binomial(link="logit"))
plotdat <- data.frame(bid=(0:1000))
preddat <- predict(mod1, newdata=plotdat, se.fit=TRUE)
with(mydat, plot(bid, won, type="n",
ylim=c(0, 1), ylab="Probability of winning", xlab="Bid"))
with(preddat, lines(0:1000, exp(fit)/(1+exp(fit)), col="blue"))
with(preddat, lines(0:1000, exp(fit+1.96*se.fit)/(1+exp(fit+1.96*se.fit)), lty=2))
with(preddat, lines(0:1000, exp(fit-1.96*se.fit)/(1+exp(fit-1.96*se.fit)), lty=2)) (Note: This is a heavily edited answer in an attempt to make it more relevant to stats.stackexchange.) | {
"source": [
"https://stats.stackexchange.com/questions/29044",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3310/"
]
} |
29,121 | How would you explain intuitively what a unit root is, in the context of the unit root test? I'm thinking of ways of explaining it much like the ones I've found in this question . The thing with the unit root is that I know (little, by the way) that the unit root test is used to test for stationarity in a time series, but that's about it. How would you go about explaining it to the layperson, or to a person who has studied a very basic probability and statistics course? UPDATE I accepted whuber's answer as it is the one that most reflects what I asked here. But I urge everybody who comes here to read Patrick's and Michael's answers also, as they are the natural "next step" in understanding the Unit Root. They use mathematics, but in a very intuitive way. | He had just come to the bridge; and not looking where he was going,
he tripped over something, and the fir-cone jerked out of his
paw into the river. "Bother," said Pooh, as it floated slowly under the bridge, and he went back to get another fir-cone which had a rhyme
to it. But then he thought that he would just look at the river
instead, because it was a peaceful sort of day, so he lay down and
looked at it, and it slipped slowly away beneath him . . . and
suddenly, there was his fir-cone slipping away too. "That's funny," said Pooh. "I dropped it on the other side," said Pooh, "and it came out on this side! I wonder if it would do it again?" A.A. Milne, The House at Pooh Corner (Chapter VI. In which Pooh invents a new game and eeyore joins in.) Here is a picture of the flow along the surface of the water: The arrows show the direction of flow and are connected by streamlines. A fir cone will tend to follow the streamline in which it falls. But it doesn't always do it the same way each time, even when it's dropped in the same place in the stream: random variations along its path, caused by turbulence in the water, wind, and other whims of nature kick it onto neighboring stream lines. Here, the fir cone was dropped near the upper right corner. It more or less followed the stream lines--which converge and flow away down and to the left--but it took little detours along the way. An "autoregressive process" (AR process) is a sequence of numbers thought to behave like certain flows. The two-dimensional illustration corresponds to a process in which each number is determined by its two preceding values--plus a random "detour." The analogy is made by interpreting each successive pair in the sequence as coordinates of a point in the stream. Instant by instant, the stream's flow changes the fir cone's coordinates in the same mathematical way given by the AR process. We can recover the original process from the flow-based picture by writing the coordinates of each point occupied by the fir cone and then erasing all but the last number in each set of coordinates. Nature--and streams in particular--is richer and more varied than the flows corresponding to AR processes. Because each number in the sequence is assumed to depend in the same fixed way on its predecessors--apart from the random detour part--the flows that illustrate AR processes exhibit limited patterns. They can indeed seem to flow like a stream, as seen here. They can also look like the swirling around a drain. The flows can occur in reverse, seeming to gush outwards from a drain. And they can look like mouths of two streams crashing together: two sources of water flow directly at one another and then split away to the sides. But that's about it. You can't have, say, a flowing stream with eddies off to the sides. AR processes are too simple for that. In this flow, the fir cone was dropped at the lower right corner and quickly carried into the eddy in the upper right, despite the slight random changes in position it underwent. But it will never quite stop moving, due to those same random movements which rescue it from oblivion. The fir cone's coordinates move around a bit--indeed, they are seen to oscillate, on the whole, around the coordinates of the center of the eddy. In the first stream flow, the coordinates progressed inevitably along the center of the stream, which quickly captured the cone and carried it away faster than its random detours could slow it down: they trend in time. By contrast, circling around an eddy exemplifies a stationary process in which the fir cone is captured; flowing away down the stream, in which the cone flows out of sight--trending--is non-stationary. Incidentally, when the flow for an AR process moves away downstream, it also accelerates. It gets faster and faster as the cone moves along it. 
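To see the two regimes without the stream metaphor, here is a short R sketch (coefficients chosen arbitrarily for illustration): an AR(1) series with coefficient 0.8 keeps returning to its centre, like the cone circling the eddy, while a series with a unit root — a random walk — wanders away without returning.
set.seed(1)
n <- 500
eddy  <- arima.sim(model = list(ar = 0.8), n = n)   # stationary: its root, 0.8, is less than 1 in size
stick <- cumsum(rnorm(n))                           # unit root: a pure random walk
par(mfrow = c(2, 1))
plot.ts(eddy,  ylab = "AR(1), root 0.8")
plot.ts(stick, ylab = "random walk (unit root)")
# With the tseries package installed, tseries::adf.test(stick) runs an augmented
# Dickey-Fuller test of the unit-root hypothesis on the wandering series.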
The nature of an AR flow is determined by a few special, "characteristic," directions, which are usually evident in the stream diagram: streamlines seem to converge towards or come from these directions. One can always find as many characteristic directions as there are coefficients in the AR process: two in these illustrations. Associated with each characteristic direction is a number, its "root" or "eigenvalue." When the size of the number is less than unity, the flow in that characteristic direction is towards a central location. When the size of the root is greater than unity, the flow accelerates away from a central location. Movement along a characteristic direction with a unit root--one whose size is $1$ --is dominated by the random forces affecting the cone. It is a "random walk." The cone can wander away slowly but without accelerating. (Some of the figures display the values of both roots in their titles.) Even Pooh--a bear of very little brain--would recognize that the stream will capture his fir cone only when all the flow is toward one eddy or whirlpool; otherwise, on one of those random detours the cone will eventually find itself under the influence of that part of the flow with a root greater than $1$ in magnitude, whence it will wander off downstream and be lost forever. Consequently, an AR process can be stationary if and only if all characteristic values are less than unity in size . Economists are perhaps the greatest analysts of time series and employers of the AR process technology. Their series of data typically do not accelerate out of sight. They are concerned, therefore, only whether there is a characteristic direction whose value may be as large as $1$ in size: a "unit root." Knowing whether the data are consistent with such a flow can tell the economist much about the potential fate of his pooh stick: that is, about what will happen in the future. That's why it can be important to test for a unit root. A fine Wikipedia article explains some of the implications. Pooh and his friends found an empirical test of stationarity: Now one day Pooh and Piglet and Rabbit and Roo were all playing
Poohsticks together. They had dropped their sticks in when Rabbit
said "Go!" and then they had hurried across to the other side of
the bridge, and now they were all leaning over the edge, waiting to
see whose stick would come out first. But it was a long time coming,
because the river was very lazy that day, and hardly seemed to mind
if it didn't ever get there at all. "I can see mine!" cried Roo. "No, I can't, it's something else. Can you see yours, Piglet? I thought I could see
mine, but I couldn't. There it is! No, it isn't. Can you see yours,
Pooh?" "No," said Pooh. "I expect my stick's stuck," said Roo. "Rabbit, my stick's stuck. Is your stick stuck, Piglet?" "They always take longer than you think," said Rabbit. This passage, from 1928, could be construed as the very first "Unit Roo test." | {
"source": [
"https://stats.stackexchange.com/questions/29121",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11090/"
]
} |
29,126 | Let $X_1, X_2, \dotsc, X_k$ be an i.i.d. sample of a random variable $X$ . I plot these in a histogram and would like to include a confidence interval for the height of each histogram bar. Do you know how to go about doing it? | He had just come to the bridge; and not looking where he was going,
he tripped over something, and the fir-cone jerked out of his
paw into the river. "Bother," said Pooh, as it floated slowly under the bridge, and he went back to get another fir-cone which had a rhyme
to it. But then he thought that he would just look at the river
instead, because it was a peaceful sort of day, so he lay down and
looked at it, and it slipped slowly away beneath him . . . and
suddenly, there was his fir-cone slipping away too. "That's funny," said Pooh. "I dropped it on the other side," said Pooh, "and it came out on this side! I wonder if it would do it again?" A.A. Milne, The House at Pooh Corner (Chapter VI. In which Pooh invents a new game and eeyore joins in.) Here is a picture of the flow along the surface of the water: The arrows show the direction of flow and are connected by streamlines. A fir cone will tend to follow the streamline in which it falls. But it doesn't always do it the same way each time, even when it's dropped in the same place in the stream: random variations along its path, caused by turbulence in the water, wind, and other whims of nature kick it onto neighboring stream lines. Here, the fir cone was dropped near the upper right corner. It more or less followed the stream lines--which converge and flow away down and to the left--but it took little detours along the way. An "autoregressive process" (AR process) is a sequence of numbers thought to behave like certain flows. The two-dimensional illustration corresponds to a process in which each number is determined by its two preceding values--plus a random "detour." The analogy is made by interpreting each successive pair in the sequence as coordinates of a point in the stream. Instant by instant, the stream's flow changes the fir cone's coordinates in the same mathematical way given by the AR process. We can recover the original process from the flow-based picture by writing the coordinates of each point occupied by the fir cone and then erasing all but the last number in each set of coordinates. Nature--and streams in particular--is richer and more varied than the flows corresponding to AR processes. Because each number in the sequence is assumed to depend in the same fixed way on its predecessors--apart from the random detour part--the flows that illustrate AR processes exhibit limited patterns. They can indeed seem to flow like a stream, as seen here. They can also look like the swirling around a drain. The flows can occur in reverse, seeming to gush outwards from a drain. And they can look like mouths of two streams crashing together: two sources of water flow directly at one another and then split away to the sides. But that's about it. You can't have, say, a flowing stream with eddies off to the sides. AR processes are too simple for that. In this flow, the fir cone was dropped at the lower right corner and quickly carried into the eddy in the upper right, despite the slight random changes in position it underwent. But it will never quite stop moving, due to those same random movements which rescue it from oblivion. The fir cone's coordinates move around a bit--indeed, they are seen to oscillate, on the whole, around the coordinates of the center of the eddy. In the first stream flow, the coordinates progressed inevitably along the center of the stream, which quickly captured the cone and carried it away faster than its random detours could slow it down: they trend in time. By contrast, circling around an eddy exemplifies a stationary process in which the fir cone is captured; flowing away down the stream, in which the cone flows out of sight--trending--is non-stationary. Incidentally, when the flow for an AR process moves away downstream, it also accelerates. It gets faster and faster as the cone moves along it. 
The nature of an AR flow is determined by a few special, "characteristic," directions, which are usually evident in the stream diagram: streamlines seem to converge towards or come from these directions. One can always find as many characteristic directions as there are coefficients in the AR process: two in these illustrations. Associated with each characteristic direction is a number, its "root" or "eigenvalue." When the size of the number is less than unity, the flow in that characteristic direction is towards a central location. When the size of the root is greater than unity, the flow accelerates away from a central location. Movement along a characteristic direction with a unit root--one whose size is $1$ --is dominated by the random forces affecting the cone. It is a "random walk." The cone can wander away slowly but without accelerating. (Some of the figures display the values of both roots in their titles.) Even Pooh--a bear of very little brain--would recognize that the stream will capture his fir cone only when all the flow is toward one eddy or whirlpool; otherwise, on one of those random detours the cone will eventually find itself under the influence of that part of the flow with a root greater than $1$ in magnitude, whence it will wander off downstream and be lost forever. Consequently, an AR process can be stationary if and only if all characteristic values are less than unity in size . Economists are perhaps the greatest analysts of time series and employers of the AR process technology. Their series of data typically do not accelerate out of sight. They are concerned, therefore, only whether there is a characteristic direction whose value may be as large as $1$ in size: a "unit root." Knowing whether the data are consistent with such a flow can tell the economist much about the potential fate of his pooh stick: that is, about what will happen in the future. That's why it can be important to test for a unit root. A fine Wikipedia article explains some of the implications. Pooh and his friends found an empirical test of stationarity: Now one day Pooh and Piglet and Rabbit and Roo were all playing
Poohsticks together. They had dropped their sticks in when Rabbit
said "Go!" and then they had hurried across to the other side of
the bridge, and now they were all leaning over the edge, waiting to
see whose stick would come out first. But it was a long time coming,
because the river was very lazy that day, and hardly seemed to mind
if it didn't ever get there at all. "I can see mine!" cried Roo. "No, I can't, it's something else. Can you see yours, Piglet? I thought I could see
mine, but I couldn't. There it is! No, it isn't. Can you see yours,
Pooh?" "No," said Pooh. "I expect my stick's stuck," said Roo. "Rabbit, my stick's stuck. Is your stick stuck, Piglet?" "They always take longer than you think," said Rabbit. This passage, from 1928, could be construed as the very first "Unit Roo test." | {
"source": [
"https://stats.stackexchange.com/questions/29126",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10990/"
]
} |
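To make the stationarity discussion above concrete, here is a minimal R sketch (added for illustration; it is not part of the answer above, and the AR(2) coefficients, sample size, and seed are arbitrary example values). It contrasts a stationary AR(2) process, whose characteristic roots are smaller than 1 in size, with a unit-root random walk:
set.seed(1)
a <- c(1.2, -0.35)                         # example AR(2): x_t = 1.2 x_{t-1} - 0.35 x_{t-2} + e_t
companion <- matrix(c(a, 1, 0), 2, 2, byrow = TRUE)
Mod(eigen(companion)$values)               # characteristic roots 0.7 and 0.5, both < 1: stationary ("eddy")
x.stationary <- arima.sim(model = list(ar = a), n = 300)
x.unitroot <- cumsum(rnorm(300))           # pure random walk: a single root equal to 1
matplot(cbind(x.stationary, x.unitroot), type = "l", lty = 1,
        xlab = "time", ylab = "value")     # the first oscillates around 0, the second wanders
# a formal test could be applied with, e.g., tseries::adf.test(x.unitroot)
In the stream metaphor, the first series keeps circling its eddy while the second drifts off downstream.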
29,130 | In the context of neural networks, what is the difference between the learning rate and weight decay? | The learning rate is a parameter that determines how much an updating step influences the current value of the weights. Weight decay, by contrast, is an additional term in the weight update rule that causes the weights to decay exponentially toward zero if no other update is scheduled. So let's say that we have a cost or error function $E(\mathbf{w})$ that we want to minimize. Gradient descent tells us to modify the weights $\mathbf{w}$ in the direction of steepest descent in $E$:
\begin{equation}
w_i \leftarrow w_i-\eta\frac{\partial E}{\partial w_i},
\end{equation}
where $\eta$ is the learning rate, and if it's large you will have a correspondingly large modification of the weights $w_i$ (in general it shouldn't be too large, otherwise you'll overshoot the local minimum in your cost function). In order to effectively limit the number of free parameters in your model so as to avoid over-fitting, it is possible to regularize the cost function. An easy way to do that is by introducing a zero mean Gaussian prior over the weights, which is equivalent to changing the cost function to $\widetilde{E}(\mathbf{w})=E(\mathbf{w})+\frac{\lambda}{2}\mathbf{w}^2$. In practice this penalizes large weights and effectively limits the freedom in your model. The regularization parameter $\lambda$ determines how you trade off the original cost $E$ with the large weights penalization. Applying gradient descent to this new cost function we obtain:
\begin{equation}
w_i \leftarrow w_i-\eta\frac{\partial E}{\partial w_i}-\eta\lambda w_i.
\end{equation}
The new term $-\eta\lambda w_i$ coming from the regularization causes the weight to decay in proportion to its size. | {
"source": [
"https://stats.stackexchange.com/questions/29130",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8401/"
]
} |
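As a toy numerical illustration of the two update rules above (not from the original answer; the quadratic cost and the values of $\eta$ and $\lambda$ are arbitrary choices), one can compare plain gradient descent with the weight-decay version in R:
grad <- function(w) 2 * (w - 3)            # gradient of the toy cost E(w) = (w - 3)^2
eta <- 0.1                                 # learning rate
lambda <- 0.5                              # weight-decay (regularization) strength
w.plain <- 10; w.decay <- 10
for (i in 1:100) {
  w.plain <- w.plain - eta * grad(w.plain)                           # plain update
  w.decay <- w.decay - eta * grad(w.decay) - eta * lambda * w.decay  # update with weight decay
}
c(plain = w.plain, decayed = w.decay)      # approximately 3.0 and 2.4
The plain run converges to the unregularized minimum at 3, while the decayed run settles at 6/(2 + lambda) = 2.4, the minimum of the penalized cost $\widetilde{E}$: the penalty pulls the weight toward zero.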
29,131 | I have recently been involved in a project that needs me to analyze the survival time of objects. Therefore, I plan to use the rms package to build a Cox model. The problem is, since the dataset I have is so big (about 450,000 instances, and each has 9 covariables), the R environment fails to handle this. Does anyone have suggestions as to how to fit these models? | {
"source": [
"https://stats.stackexchange.com/questions/29131",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11055/"
]
} |
29,135 | Since the time variable can be treated as a normal feature in classification, why not using more powerful classification methods (such as, C4.5, SVM) to predict the occurrence of an event? Why lots of people still use the classic but old Cox model? In case of the right-censoring data, since the time would change for an instances, so I think same object with different time values could be treated as two different instances in classification. Is this OK? Is there are some highly-cited paper on this topic? Thank you! | {
"source": [
"https://stats.stackexchange.com/questions/29135",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11055/"
]
} |
29,325 | What is the difference between linear regression and logistic regression? When would you use each? | Linear regression uses the general linear equation $Y=b_0+\sum(b_i X_i)+\epsilon$ where $Y$ is a continuous dependent variable and the independent variables $X_i$ are usually continuous (but can also be binary, e.g. when the linear model is used in a t-test) or take values in other discrete domains. $\epsilon$ is a term for the variance that is not explained by the model and is usually just called "error". Individual dependent values denoted by $Y_j$ can be expressed by modifying the equation a little: $Y_j=b_0 + \sum(b_i X_{ij})+\epsilon_j$. Logistic regression is another generalized linear model (GLM) procedure using the same basic formula, but instead of the continuous $Y$, it regresses for the probability of a categorical outcome. In its simplest form, this means that we consider just one outcome variable and two states of that variable: either 0 or 1. The equation for the probability of $Y=1$ looks like this:
$$
P(Y=1) = {1 \over 1+e^{-(b_0+\sum{(b_iX_i)})}}
$$ Your independent variables $X_i$ can be continuous or binary. The regression coefficients $b_i$ can be exponentiated to give you the change in odds of $Y$ per change in $X_i$, i.e., $Odds={P(Y=1) \over P(Y=0)}={P(Y=1) \over 1-P(Y=1)}$ and ${\Delta Odds}= e^{b_i}$. $\Delta Odds$ is called the odds ratio, $Odds(X_i+1)\over Odds(X_i)$. In English, you can say that the odds of $Y=1$ increase by a factor of $e^{b_i}$ per unit change in $X_i$. Example: If you wanted to see how body mass index predicts blood cholesterol (a continuous measure), you'd use linear regression as described at the top of my answer. If you wanted to see how BMI predicts the odds of being a diabetic (a binary diagnosis), you'd use logistic regression. | {
"source": [
"https://stats.stackexchange.com/questions/29325",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6329/"
]
} |
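A small simulated R example in the spirit of the BMI illustration above (added here for concreteness; the data-generating values are made up) showing where lm and glm with a binomial family come in:
set.seed(42)
bmi <- rnorm(200, mean = 27, sd = 4)
chol <- 120 + 3 * bmi + rnorm(200, sd = 15)           # continuous outcome
diabetes <- rbinom(200, 1, plogis(-10 + 0.35 * bmi))  # binary outcome
fit.lin <- lm(chol ~ bmi)                             # linear regression
fit.log <- glm(diabetes ~ bmi, family = binomial)     # logistic regression
coef(fit.lin)["bmi"]                                  # expected change in cholesterol per unit of BMI
exp(coef(fit.log)["bmi"])                             # odds ratio for diabetes per unit of BMI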
29,327 | Five months ago, jbowman posted a very useful answer to estimate the break point in a broken stick model with random effects in R. I rarely use programming constructs like ifelse, and I would like to estimate two break points. I suppose I need to write two more basis functions like b1 and b2, but I don't know how. Can someone please tell me how to do that in R? Thanks! jbowman's code: library(lme4)
str(sleepstudy)
#Basis functions
bp = 4
b1 <- function(x, bp) ifelse(x < bp, bp - x, 0)
b2 <- function(x, bp) ifelse(x < bp, 0, x - bp)
#Wrapper for Mixed effects model with variable break point
foo <- function(bp)
{
mod <- lmer(Reaction ~ b1(Days, bp) + b2(Days, bp) + (b1(Days, bp) + b2(Days, bp) | Subject), data = sleepstudy)
deviance(mod)
}
search.range <- c(min(sleepstudy$Days)+0.5,max(sleepstudy$Days)-0.5)
foo.opt <- optimize(foo, interval = search.range)
bp <- foo.opt$minimum
bp
[1] 6.071932
mod <- lmer(Reaction ~ b1(Days, bp) + b2(Days, bp) + (b1(Days, bp) + b2(Days, bp) | Subject), data = sleepstudy) | {
"source": [
"https://stats.stackexchange.com/questions/29327",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11604/"
]
} |
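One possible way to extend the single-break-point wrapper quoted in this question to two break points is sketched below. This is only an illustrative guess, not taken from the original thread: the hinge parametrization, the simplified random-effects structure (Days | Subject), the ML fit via REML = FALSE, and the starting values c(3, 6) are all assumptions made here.
library(lme4)
foo2 <- function(bps, data = sleepstudy) {
  bp1 <- min(bps); bp2 <- max(bps)                   # keep the break points ordered
  d <- transform(data,
                 h1 = pmax(0, Days - bp1),           # hinge basis for the first break
                 h2 = pmax(0, Days - bp2))           # hinge basis for the second break
  mod <- lmer(Reaction ~ Days + h1 + h2 + (Days | Subject),
              data = d, REML = FALSE)
  deviance(mod)
}
opt <- optim(c(3, 6), foo2, method = "L-BFGS-B",
             lower = min(sleepstudy$Days) + 0.5,
             upper = max(sleepstudy$Days) - 0.5)
sort(opt$par)                                        # estimated break points
In practice one would check convergence warnings from lmer at each candidate pair, and possibly profile the deviance over a grid of (bp1, bp2) rather than relying on optim alone.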
29,345 | Textbooks typically have nice example plots of the basis for uniform splines when they're explaining the topic. Something like a row of little triangles for a linear spline, or a row of little humps for a cubic spline. This is a typical example: http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_introcom_a0000000525.htm I'm wondering if there is an easy way to generate a plot of the spline basis using standard R functions (like bs or ns). I guess there's some simple piece of matrix arithmetic combined with a trivial R program which will spit out a pretty plot of a spline basis in an elegant way. I just can't think of it! | Try this, as an example for B-splines (bs comes from the splines package, so load it first): library(splines)
x <- seq(0, 1, by=0.001)
spl <- bs(x,df=6)
plot(spl[,1]~x, ylim=c(0,max(spl)), type='l', lwd=2, col=1,
xlab="Cubic B-spline basis", ylab="")
for (j in 2:ncol(spl)) lines(spl[,j]~x, lwd=2, col=j) Giving this: | {
"source": [
"https://stats.stackexchange.com/questions/29345",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8991/"
]
} |
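The question also mentioned ns; a compact variant of the same idea (not part of the original answer) plots the natural cubic spline basis with matplot instead of a loop:
library(splines)
x <- seq(0, 1, by = 0.001)
nspl <- ns(x, df = 6)                      # natural cubic spline basis
matplot(x, nspl, type = "l", lty = 1, lwd = 2,
        xlab = "Natural cubic spline basis", ylab = "")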