51,601 | Is it appropriate to examine an interaction effect that is almost statistically significant? | You might consider switching to mixed effects modelling, which in some cases provides superior power over ANOVA.
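For instance, a minimal sketch of such a mixed-effects fit, assuming a hypothetical repeated-measures design with fixed factors A and B, a random intercept per subject, and invented column names in a data frame df (this uses lmerTest/lme4, which is only one possible choice, not necessarily what the answerer had in mind):
library(lmerTest)  # loads lme4 and adds F-tests with denominator-df approximations
fit <- lmer(score ~ A * B + (1 | subject), data = df)
anova(fit)  # tests the A:B interaction within the mixed-effects model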
51,602 | Is it appropriate to examine an interaction effect that is almost statistically significant? | I agree with the others that you certainly can explore this interaction, but, if it's not significant, you might not have much power to aid your analysis.
51,603 | Is it appropriate to examine an interaction effect that is almost statistically significant? | To say something similar to the other answers in slightly different words:
I would do the following:
Report that the (hopefully expected) interaction is almost or marginally significant, or that there is a trend towards significance (these expressions are all common, at least in psychology). Then state that I therefore inspect this interaction further with follow-up simple main effects analyses or contrasts.
It is absolutely no problem to do so if your main hypotheses lie within this interaction. As said before, omnibus tests of interaction do not have the highest power.
See also here.
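For instance, a minimal sketch of such follow-up simple main effects and contrasts, assuming a hypothetical fitted model fit (e.g., from lm or aov) with crossed factors A and B; it uses the emmeans package, which the answer itself does not name:
library(emmeans)
emm <- emmeans(fit, ~ A | B)   # estimated marginal means of A within each level of B
pairs(emm, adjust = "holm")    # pairwise contrasts of A at each level of B
joint_tests(fit, by = "B")     # joint F-tests within each level of B (simple main effects)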
51,604 | Get the number of parameters of a linear model | Try something like:
> x <- replicate(2, rnorm(100))
> y <- 1.2*x[,1]+rnorm(100)
> summary(lm.fit <- lm(y~x))
> length(lm.fit$coefficients)
[1] 3
> # or
> length(coef(lm.fit))
[1] 3
You can have a better idea of what an R object includes with
> str(lm.fit)
51,605 | Get the number of parameters of a linear model | A more general approach is to use the logLik() function. It returns an object with the attribute df that gives the fitted model's degrees of freedom. The benefit of this approach is that it works with many other model classes (including glm). In the case of ordinary linear regression (lm) this corresponds to the number of parameters + 1 for the estimate of the error variance.
From the logLik documentation:
For "lm" fits it is assumed that the scale has been estimated (by maximum likelihood or REML), and all the constants in the log-likelihood are included.
You can get the number of observations this way too.
> X1 <- rnorm(10)
> X2 <- rnorm(10)
> Y <- X1 + X2 + rnorm(10)
> model <- lm(Y~X1+X2)
> ll <- logLik(model)
> attributes(ll)
$nall
[1] 10
$nobs
[1] 10
$df
[1] 4
$class
[1] "logLik" | Get the number of parameters of a linear model | A more general approach is to use the logLik() function. It returns an object with the attribute df that gives the fitted models degrees of freedom. The benefit of this approach is that it works wit | Get the number of parameters of a linear model
A more general approach is to use the logLik() function. It returns an object with the attribute df that gives the fitted models degrees of freedom. The benefit of this approach is that it works with many other model classes (including glm). In the case of ordinary linear regression (lm) this corresponds to the number of parameters + 1 for the estimate of the error variance.
From the logLik documentation:
For "lm" fits it is assumed that the scale has been estimated (by maximum likelihood or REML), and all the constants in the log-likelihood are included.
You can get the number of observations this way too.
> X1 <- rnorm(10)
> X2 <- rnorm(10)
> Y <- X1 + X2 + rnorm(10)
> model <- lm(Y~X1+X2)
> ll <- logLik(model)
> attributes(ll)
$nall
[1] 10
$nobs
[1] 10
$df
[1] 4
$class
[1] "logLik" | Get the number of parameters of a linear model
A more general approach is to use the logLik() function. It returns an object with the attribute df that gives the fitted models degrees of freedom. The benefit of this approach is that it works wit |
51,606 | Get the number of parameters of a linear model | Maybe it's a little bit hackish, but you can do:
n <- length(coefficients(model))
51,607 | Get the number of parameters of a linear model | I think you could use the component lm.fit$rank or else subtract lm.fit$df.residual from the sample size to get what you want. (I assume you want the number of free parameters.)
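For instance, a short sketch of both suggestions, reusing the lm.fit object from the earlier answer (nobs() simply returns the number of observations used in the fit):
lm.fit$rank                        # number of estimated (non-aliased) coefficients
nobs(lm.fit) - lm.fit$df.residual  # sample size minus residual df gives the same count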
51,608 | Get the number of parameters of a linear model | The R function logLik seems to be an attractive solution for extracting model degrees of freedom in general, since it can be applied to many model objects, including lm, glm, nls, and Arima, to name a few.
But the df of attributes(logLik(obj)) seems to be 1 larger than the true value. So use it with caution.
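For instance, a quick check of this off-by-one behaviour, reusing the lm.fit object from the earlier answer; for lm the extra degree of freedom is the estimated error variance:
length(coef(lm.fit))           # 3 regression coefficients
attributes(logLik(lm.fit))$df  # 4: the regression coefficients plus the error variance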
51,609 | Get the number of parameters of a linear model | Here is another solution if your model is nls, with an example dataset in R:
# Fit a nls model (this is a light response curve of photosynthesis, in this case)
mod<-nls(response~(alpha*x+Pm-sqrt(((alpha*x+Pm)^2)-4*alpha*x*Pm*theta))/(2*theta)+Rd, data = Data, start=list(alpha=0.05, theta=0.5, Rd=-1, Pm=30))
# There are four parameters, Pm, alpha, theta and Rd.
length(mod$m$getPars())
As @Sam pointed out, logLik gives the number of parameters +1.
attributes(logLik(mod))$df
Here is the example dataset, so you can reproduce the commands above.
Data<-structure(list(x = c(-8, -3.97194388777555, 0.0561122244488974, 4.08416833667335, 8.11222444889779, 12.1402805611222, 16.1683366733467, 20.1963927855711, 24.2244488977956, 28.25250501002, 32.2805611222445, 36.3086172344689, 40.3366733466934, 44.3647294589178, 48.3927855711423, 52.4208416833667, 56.4488977955912, 60.4769539078156, 64.5050100200401, 68.5330661322645, 72.561122244489, 76.5891783567134, 80.6172344689379, 84.6452905811623, 88.6733466933868, 92.7014028056112, 96.7294589178357, 100.75751503006, 104.785571142285, 108.813627254509, 112.841683366733, 116.869739478958, 120.897795591182, 124.925851703407, 128.953907815631, 132.981963927856, 137.01002004008, 141.038076152305, 145.066132264529, 149.094188376753, 153.122244488978, 157.150300601202, 161.178356713427, 165.206412825651, 169.234468937876, 173.2625250501, 177.290581162325, 181.318637274549, 185.346693386774, 189.374749498998, 193.402805611222, 197.430861723447, 201.458917835671, 205.486973947896, 209.51503006012, 213.543086172345, 217.571142284569, 221.599198396794, 225.627254509018, 229.655310621242, 233.683366733467, 237.711422845691, 241.739478957916, 245.76753507014, 249.795591182365, 253.823647294589, 257.851703406814, 261.879759519038, 265.907815631263, 269.935871743487, 273.963927855711, 277.991983967936, 282.02004008016, 286.048096192385, 290.076152304609, 294.104208416834, 298.132264529058, 302.160320641283, 306.188376753507, 310.216432865731, 314.244488977956, 318.27254509018, 322.300601202405, 326.328657314629, 330.356713426854, 334.384769539078, 338.412825651303, 342.440881763527, 346.468937875751, 350.496993987976, 354.5250501002, 358.553106212425, 362.581162324649, 366.609218436874, 370.637274549098, 374.665330661323, 378.693386773547, 382.721442885772, 386.749498997996, 390.77755511022, 394.805611222445, 398.833667334669, 402.861723446894, 406.889779559118, 410.917835671343, 414.945891783567, 418.973947895792, 423.002004008016, 427.03006012024, 431.058116232465, 435.086172344689, 439.114228456914, 443.142284569138, 447.170340681363, 451.198396793587, 455.226452905812, 459.254509018036, 463.28256513026, 467.310621242485, 471.338677354709, 475.366733466934, 479.394789579158, 483.422845691383, 487.450901803607, 491.478957915832, 495.507014028056, 499.535070140281, 503.563126252505, 507.591182364729, 511.619238476954, 515.647294589178, 519.675350701403, 523.703406813627, 527.731462925852, 531.759519038076, 535.787575150301, 539.815631262525, 543.843687374749, 547.871743486974, 551.899799599198, 555.927855711423, 559.955911823647, 563.983967935872, 568.012024048096, 572.040080160321, 576.068136272545, 580.09619238477, 584.124248496994, 588.152304609218, 592.180360721443, 596.208416833667, 600.236472945892, 604.264529058116, 608.292585170341, 612.320641282565, 616.348697394789, 620.376753507014, 624.404809619238, 628.432865731463, 632.460921843687, 636.488977955912, 640.517034068136, 644.545090180361, 648.573146292585, 652.60120240481, 656.629258517034, 660.657314629259, 664.685370741483, 668.713426853707, 672.741482965932, 676.769539078156, 680.797595190381, 684.825651302605, 688.85370741483, 692.881763527054, 696.909819639279, 700.937875751503, 704.965931863727, 708.993987975952, 713.022044088176, 717.050100200401, 721.078156312625, 725.10621242485, 729.134268537074, 733.162324649299, 737.190380761523, 741.218436873747, 745.246492985972, 749.274549098196, 753.302605210421, 757.330661322645, 761.35871743487, 765.386773547094, 769.414829659319, 773.442885771543, 777.470941883768, 781.498997995992, 
785.527054108216, 789.555110220441, 793.583166332665, 797.61122244489, 801.639278557114, 805.667334669339, 809.695390781563, 813.723446893788, 817.751503006012, 821.779559118236, 825.807615230461, 829.835671342685, 833.86372745491, 837.891783567134, 841.919839679359, 845.947895791583, 849.975951903808, 854.004008016032, 858.032064128256, 862.060120240481, 866.088176352705, 870.11623246493, 874.144288577154, 878.172344689379, 882.200400801603, 886.228456913828, 890.256513026052, 894.284569138277, 898.312625250501, 902.340681362725, 906.36873747495, 910.396793587174, 914.424849699399, 918.452905811623, 922.480961923848, 926.509018036072, 930.537074148297, 934.565130260521, 938.593186372745, 942.62124248497, 946.649298597194, 950.677354709419, 954.705410821643, 958.733466933868, 962.761523046092, 966.789579158317, 970.817635270541, 974.845691382765, 978.87374749499, 982.901803607214, 986.929859719439, 990.957915831663, 994.985971943888, 999.014028056112, 1003.04208416834, 1007.07014028056, 1011.09819639279, 1015.12625250501, 1019.15430861723, 1023.18236472946, 1027.21042084168, 1031.23847695391, 1035.26653306613, 1039.29458917836, 1043.32264529058, 1047.35070140281, 1051.37875751503, 1055.40681362725, 1059.43486973948, 1063.4629258517, 1067.49098196393, 1071.51903807615, 1075.54709418838, 1079.5751503006, 1083.60320641283, 1087.63126252505, 1091.65931863727, 1095.6873747495, 1099.71543086172, 1103.74348697395, 1107.77154308617, 1111.7995991984, 1115.82765531062, 1119.85571142285, 1123.88376753507, 1127.91182364729, 1131.93987975952, 1135.96793587174, 1139.99599198397, 1144.02404809619, 1148.05210420842, 1152.08016032064, 1156.10821643287, 1160.13627254509, 1164.16432865731, 1168.19238476954, 1172.22044088176, 1176.24849699399, 1180.27655310621, 1184.30460921844, 1188.33266533066, 1192.36072144289, 1196.38877755511, 1200.41683366733, 1204.44488977956, 1208.47294589178, 1212.50100200401, 1216.52905811623, 1220.55711422846, 1224.58517034068, 1228.61322645291, 1232.64128256513, 1236.66933867735, 1240.69739478958, 1244.7254509018, 1248.75350701403, 1252.78156312625, 1256.80961923848, 1260.8376753507, 1264.86573146293, 1268.89378757515, 1272.92184368737, 1276.9498997996, 1280.97795591182, 1285.00601202405, 1289.03406813627, 1293.0621242485, 1297.09018036072, 1301.11823647295, 1305.14629258517, 1309.17434869739, 1313.20240480962, 1317.23046092184, 1321.25851703407, 1325.28657314629, 1329.31462925852, 1333.34268537074, 1337.37074148297, 1341.39879759519, 1345.42685370741, 1349.45490981964, 1353.48296593186, 1357.51102204409, 1361.53907815631, 1365.56713426854, 1369.59519038076, 1373.62324649299, 1377.65130260521, 1381.67935871743, 1385.70741482966, 1389.73547094188, 1393.76352705411, 1397.79158316633, 1401.81963927856, 1405.84769539078, 1409.87575150301, 1413.90380761523, 1417.93186372745, 1421.95991983968, 1425.9879759519, 1430.01603206413, 1434.04408817635, 1438.07214428858, 1442.1002004008, 1446.12825651303, 1450.15631262525, 1454.18436873747, 1458.2124248497, 1462.24048096192, 1466.26853707415, 1470.29659318637, 1474.3246492986, 1478.35270541082, 1482.38076152305, 1486.40881763527, 1490.43687374749, 1494.46492985972, 1498.49298597194,1502.52104208417, 1506.54909819639, 1510.57715430862, 1514.60521042084, 1518.63326653307, 1522.66132264529, 1526.68937875751, 1530.71743486974, 1534.74549098196, 1538.77354709419, 1542.80160320641, 1546.82965931864, 1550.85771543086, 1554.88577154309, 1558.91382765531, 1562.94188376754, 1566.96993987976, 1570.99799599198, 1575.02605210421, 1579.05410821643, 
1583.08216432866, 1587.11022044088, 1591.13827655311, 1595.16633266533, 1599.19438877756, 1603.22244488978, 1607.250501002, 1611.27855711423, 1615.30661322645, 1619.33466933868, 1623.3627254509, 1627.39078156313, 1631.41883767535, 1635.44689378758, 1639.4749498998, 1643.50300601202, 1647.53106212425, 1651.55911823647, 1655.5871743487, 1659.61523046092, 1663.64328657315, 1667.67134268537, 1671.6993987976, 1675.72745490982,
1679.75551102204, 1683.78356713427, 1687.81162324649, 1691.83967935872,
1695.86773547094, 1699.89579158317, 1703.92384769539, 1707.95190380762,
1711.97995991984, 1716.00801603206, 1720.03607214429, 1724.06412825651,
1728.09218436874, 1732.12024048096, 1736.14829659319, 1740.17635270541,
1744.20440881764, 1748.23246492986, 1752.26052104208, 1756.28857715431,
1760.31663326653, 1764.34468937876, 1768.37274549098, 1772.40080160321,
1776.42885771543, 1780.45691382766, 1784.48496993988, 1788.5130260521,
1792.54108216433, 1796.56913827655, 1800.59719438878, 1804.625250501,
1808.65330661323, 1812.68136272545, 1816.70941883768, 1820.7374749499,
1824.76553106212, 1828.79358717435, 1832.82164328657, 1836.8496993988,
1840.87775551102, 1844.90581162325, 1848.93386773547, 1852.9619238477,
1856.98997995992, 1861.01803607214, 1865.04609218437, 1869.07414829659,
1873.10220440882, 1877.13026052104, 1881.15831663327, 1885.18637274549,
1889.21442885772, 1893.24248496994, 1897.27054108216, 1901.29859719439,
1905.32665330661, 1909.35470941884, 1913.38276553106, 1917.41082164329,
1921.43887775551, 1925.46693386774, 1929.49498997996, 1933.52304609218,
1937.55110220441, 1941.57915831663, 1945.60721442886, 1949.63527054108,
1953.66332665331, 1957.69138276553, 1961.71943887776, 1965.74749498998,
1969.7755511022, 1973.80360721443, 1977.83166332665, 1981.85971943888,
1985.8877755511, 1989.91583166333, 1993.94388777555, 1997.97194388778,
2002), response = c(-3.58957478518253, -3.62025086478834, -1.04832718629931,
-0.932815570713867, -2.0059547584204, -1.80378256401687, -1.07078839210859,
-1.47563839178252, 0.172463665814555, -0.247936953593207, -0.547987522372114,
-0.942197740134927, -0.173128438044605, -1.07747254510461, 1.10668446828936,
1.05530666799013, 0.777604622119702, 1.06274622733744, 0.900238206583803,
1.31412394039238, 1.89499625411371, 1.04033412499542, 1.38191631121636,
0.999431819175433, 2.68812421375804, 1.47394328081264, 2.72714724358873,
1.71016394833754, 0.81098268882713, 4.46594845284442, 3.61870598680648,
3.79687265260293, 3.3731838954602, 1.43858183778185, 2.61595066682473,
3.76691451510184, 3.13442724430605, 3.10293620326479, 4.45945367139878,
3.76865197544365, 3.65420270503196, 4.08016861916108, 4.67570693185897,
5.00811845024315, 5.6912658729918, 6.48591088940087, 4.50595287440596,
6.43194883116461, 5.21375289859846, 6.06886948554962, 5.85835327159824,
4.9625308106984, 4.89022500473493, 7.12834025120892, 6.94879254991973,
6.21082185779307, 7.62504959939155, 6.23255482453099, 6.6760218279274,
8.20691580592652, 7.57507394269, 6.06526163497662, 8.75382401626834,
5.70620745514469, 7.26383229655605, 7.46498556138395, 8.7515167860644,
7.91466462562348, 7.58703158895482, 9.11188767524796, 8.4186401599397,
6.89096958793887, 7.89158602926514, 10.4612794167006, 7.58318779363375,
9.84544304325303, 10.6353526028036, 9.19575959965921, 9.83902692183846,
9.37218617742756, 11.2655524091541, 10.0589316149929, 8.61325152269854,
10.5773880802015, 10.5060419723174, 11.0722463547765, 10.3359011103665,
10.026378680973, 11.0063599845354, 11.0079405342041, 10.5867493518446,
10.8943760523027, 12.1443513080492, 11.8396074471423, 10.3124495936648,
13.9727593750822, 10.7840244636707, 12.1751132761597, 10.0114338279253,
12.7398881495442, 11.3496328681914, 12.7336858883009, 11.0014003683233,
12.6329145237188, 12.8418626240243, 11.3677960547355, 11.5889466372296,
12.0209030882168, 12.2848054559541, 12.6322237596396, 11.3575397626941,
13.0492956855345, 13.6665727062986, 12.2082939516701, 13.0091518201203,
14.3681537231903, 12.4880551714586, 13.5355568479074, 15.6635883385552,
12.8784669688002, 13.8963858223423, 12.8161474815643, 13.6667437747109,
12.8263133597971, 14.7562225192782, 13.1809926403271, 14.0891781138646,
12.5346214636793, 14.1004036308871, 15.8435075654244, 14.358625443445,
15.8909481383511, 14.4402749451377, 14.1360376888721, 13.3905410466758,
14.7537155862745, 14.4835085007599, 14.1315006601281, 14.4150998388548,
15.6952736181339, 13.6437863278995, 17.2083398797518, 14.9719864616975,
16.8432168022786, 15.1937342978635, 15.1806060147844, 16.3096405343474,
16.3108213264068, 14.4242808390335, 15.8595948000895, 15.5877173710543,
15.5005730489912, 17.375526668469, 15.5740671948281, 16.1243289084692,
15.9749299593302, 17.4147022645292, 18.3700381963525, 13.8143770413523,
17.9351262044202, 16.7714876710299, 17.0384639772996, 17.0307757081918,
15.6030920144074, 16.581042152256, 18.8792908358865, 17.998114032928,
19.1840884966728, 17.1930308829494, 18.9704837093962, 17.5398954859783,
15.6751901660724, 17.7071141172369, 16.8992272557539, 16.2076829263534,
16.5355407990975, 17.4297211354828, 17.1911852119539, 17.2638441341946,
18.7667030206444, 16.9370746309351, 18.1716182139223, 17.6984658296715,
17.8432253646, 18.6164049373694, 19.4441317654593, 19.0489224386334,
16.3878174080612, 18.0512428503099, 19.1803487982935, 17.7734002014158,
20.131130771463, 16.151222737491, 19.0662238624346, 18.9637572654357,
20.4006961448895, 17.7452159814725, 18.1767897518064, 19.6895978265634,
17.2415335849138, 19.205432003556, 20.9033294515514, 16.61019386946,
18.6486223450292, 19.3495539638913, 17.763309794548, 18.589914109576,
21.3750173075524, 19.0790221140521, 19.073590595151, 20.3110876727817,
19.1894218140841, 20.2372395516702, 19.1312943191063, 17.8507423410755,
19.9857184946213, 18.3289008834065, 16.564668684551, 17.2726209464131,
20.0768841859564, 19.7105832275701, 20.959237751695, 20.6413358795587,
19.3538018851944, 19.3803009186455, 18.9965213444349, 19.9312882190988,
20.2498165546154, 20.6602172892069, 19.6600862738064, 20.7052435265234,
19.7988958134332, 19.369849065903, 20.7005328602028, 21.3901352510254,
19.9494464318285, 18.5158234507894, 21.0145796229888, 20.3935414006186,
20.228895878692, 19.4396050963157, 18.9481819487733, 21.8170718923232,
22.6484071413006, 19.404428374176, 19.574407890811, 19.4134174307073,
22.6210955210436, 18.7184436222435, 20.5595496522691, 21.6974872718944,
20.9363644838996, 21.0687129490903, 21.0730189950955, 22.9666927093666,
22.0189027581993, 22.1601832780259, 20.168790844715, 20.4220194493454,
20.9451564541633, 20.1454319823462, 22.4361464351762, 20.6198647651334,
21.4646217438031, 20.5212888882417, 21.3233246747502, 20.7726040755242,
21.3588194110876, 22.7473931004582, 21.7283608509904, 21.9963790126902,
21.3213711223197, 22.5202653835615, 22.6692559419185, 22.9911437265527,
21.0362741019696, 21.2546278761312, 21.664584144617, 22.7666620244984,
20.256702413919, 23.6864315079695, 25.1766563370701, 21.4323283908586,
22.2962186929496, 20.3888136631929, 21.9484525615365, 20.7708407269101,
22.0665761310048, 20.8305234503782, 23.3305723015105, 22.2899098775578,
22.9720194868776, 22.5367848999167, 21.6382285092653, 22.6582172780793,
22.1156796214891, 24.2288369587372, 24.380280200404, 23.2915320073326,
22.3614159703802, 23.6251133797724, 24.1562618798916, 22.9911339744999,
21.7891891596172, 24.2161382746419, 23.1962083357621, 22.0959042015395,
22.8241885064288, 23.3231637970656, 24.0340390959745, 23.5064604248494,
22.4648350821346, 23.8485522024163, 24.6314343012963, 22.9591922937084,
24.5796619452326, 25.2338435690998, 22.0696906772769, 23.6679180689824,
22.5138604024011, 23.62164299563, 22.3916895742358, 23.0102593680772,
24.6826004252832, 21.8474956782121, 23.5836084839005, 23.6096237989221,
24.0581957504008, 22.7617053199355, 23.7904812246471, 23.8525951859381,
23.7702560619677, 23.6740561490702, 23.6727772337761, 23.5782892540828,
24.8012234487132, 22.0448723900358, 26.2446733566806, 24.8901137698615,
22.4107383673891, 22.3840252340722, 24.8813300530568, 22.8290655247652,
23.6816776347442, 23.862147975553, 24.9440352220054, 25.602724064874,
23.4397180626159, 25.2979283706869, 24.0699252609402, 23.8621362261058,
24.8933828032982, 25.4178762176794, 24.0836079308251, 25.3557779622394,
26.3252940303689, 22.0335051185389, 24.0425149395311, 23.7462684205931,
24.6982077315874, 25.3143328826626, 24.6013450624622, 24.7563487352452,
25.9257328576421, 25.179000436827, 25.8003241661114, 25.0022285204079,
24.9192596758005, 25.8822418156569, 25.1998009463868, 23.6441535434575,
24.9426082712576, 24.9539001217398, 22.4577044619626, 23.0437260725105,
25.9689982595835, 25.9549342010041, 24.5707096652445, 25.6550469220972,
25.2431319260493, 23.7512180651784, 22.9751208621517, 25.3155008103098,
25.0914281968248, 24.1433425470407, 25.6845257893322, 25.207267799716,
24.3853301121691, 25.133367186717, 25.0426003422979, 24.9313250719437,
25.8037876224851, 25.4260286817899, 26.3335050818361, 24.9988357868418,
26.7810824981553, 24.6721890010742, 25.4994779162627, 28.2205742456111,
25.6593981734269, 26.8141269785953, 24.325998577855, 26.772250209721,
26.6486130860901, 25.2148338608099, 24.435128109376, 25.5111215965461,
24.7745034027075, 24.7478101468244, 24.9726113562946, 24.2103141988101,
26.6316537321719, 25.3866804819055, 25.2535960824473, 24.9324610909463,
26.4330171406269, 24.9555194105079, 24.8680345245781, 27.314004407881,
25.8497922647969, 25.4365960643182, 27.0640153839225, 26.4115799971681,
24.5066151411807, 24.0770545981929, 24.5770973417194, 25.6158328694023,
23.7228591099721, 25.2506608858009, 26.3845249599514, 24.986755107786,
26.0229796277567, 27.9314117903394, 26.720855107423, 26.2748970068974,
26.9619543149989, 28.1177017951961, 26.0980959165123, 26.4789358277429,
27.2640757074367, 27.2474159343653, 25.9992993819392, 26.0604493049298,
26.6130593710637, 25.7460954198758, 25.4886951704501, 27.249597964836,
25.8789271815857, 26.1229507075738, 24.0907215160597, 26.4228821021775,
27.7405264571259, 24.266723432942, 25.0542999074318, 24.1121125081467,
26.935296245382, 26.4950565230244, 25.4617820698142, 25.0477114029224,
27.6099550887943, 24.9703729596582, 26.354887191175, 28.5678615888521,
24.7591852732502, 26.0070564298366, 28.1832364680811, 27.4758598980069,
25.8492469607027, 25.3129416645233, 26.4844595022583, 27.4346593674058,
25.0199478369712, 27.4787736031246, 26.6510807143448, 25.9521368094847,
27.4912196718334, 26.4381208497156, 23.7514049354602, 27.6911406664994,
25.7380847181156, 27.3970753479465, 25.2686534213865, 26.625449280584,
27.6915237551073, 28.2585398441531, 25.0055625526628, 28.9395902046147,
27.1152633304668, 27.4458034523227, 28.4665155499299, 25.7325216430656,
27.4268908314492, 26.0789361712655, 26.5857269918754, 28.0991123511731,
28.833576055686, 27.9601071193266, 26.1630868619856, 29.3301360779517,
28.4273979734602)), .Names = c("x", "response"), row.names = c(NA,
-500L), class = "data.frame")
51,610 | Proportion data with number of trials known (and separation?): GLM or beta regression? | The data you have are really a classic binomial setting and you can use binomial GLMs to model the data, either using the standard maximum likelihood (ML) estimator or using a bias-reduced (BR) estimator (Firth's penalized estimator). I wouldn't use beta regression here.
For treatment b you have 49 successes and 0 failures.
subset(dataset, treatment == "b")
## treatment success fail n
## 6 b 10 0 10
## 7 b 10 0 10
## 8 b 9 0 9
## 9 b 10 0 10
## 10 b 10 0 10
Hence there is quasi-complete separation leading to an infinite ML estimator while the BR estimator is guaranteed to be finite. Note that the BR estimator provides a principled solution for the separation while the trimming of the proportion data is rather ad hoc.
For fitting the binomial GLM with ML and BR you can use the glm() function and combine it with the brglm2 package (Kosmidis & Firth, 2020, Biometrika, doi:10.1093/biomet/asaa052). An easier way to specify the binomial response is to use a matrix where the first column contains the successes and the second column the failures:
ml <- glm(cbind(success, fail) ~ treatment, data = dataset, family = binomial)
summary(ml)
## Call:
## glm(formula = cbind(success, fail) ~ treatment, family = binomial,
## data = dataset)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.81457 -0.33811 0.00012 0.54942 1.86737
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.6325 0.3001 2.108 0.0351 *
## treatmentb 20.4676 3308.1100 0.006 0.9951
## treatmentc 1.0257 0.4888 2.099 0.0359 *
## treatmentd 0.3119 0.4351 0.717 0.4734
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 43.36 on 19 degrees of freedom
## Residual deviance: 13.40 on 16 degrees of freedom
## AIC: 56.022
##
## Number of Fisher Scoring iterations: 18
The corresponding BR estimator can be obtained by using the "brglmFit" method (provided in brglm2) instead of the default "glm.fit" method:
library("brglm2")
br <- glm(cbind(success, fail) ~ treatment, data = dataset, family = binomial,
method = "brglmFit")
summary(br)
## Call:
## glm(formula = cbind(success, fail) ~ treatment, family = binomial,
## data = dataset, method = "brglmFit")
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.7498 -0.2890 0.4368 0.6030 1.9096
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.6190 0.2995 2.067 0.03875 *
## treatmentb 3.9761 1.4667 2.711 0.00671 **
## treatmentc 0.9904 0.4834 2.049 0.04049 *
## treatmentd 0.3041 0.4336 0.701 0.48304
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 43.362 on 19 degrees of freedom
## Residual deviance: 14.407 on 16 degrees of freedom
## AIC: 57.029
##
## Type of estimator: AS_mixed (mixed bias-reducing adjusted score equations)
## Number of Fisher Scoring iterations: 1
Note that the coefficient estimates, standard errors, and z statistics for the non-separated treatments only change very slightly. The main difference is the finite estimate for treatment b.
Because the BR adjustment is so slight, you can still employ the usual inference (Wald, likelihood ratio, etc.) and information criteria. In R with brglm2 this is particularly easy because the br model fitted above still inherits from glm. As an illustration, we can assess the overall significance of the treatment factor:
br0 <- update(br, . ~ 1)
AIC(br0, br)
## df AIC
## br0 1 79.98449
## br 4 57.02927
library("lmtest")
lrtest(br0, br)
## Likelihood ratio test
##
## Model 1: cbind(success, fail) ~ 1
## Model 2: cbind(success, fail) ~ treatment
## #Df LogLik Df Chisq Pr(>Chisq)
## 1 1 -38.992
## 2 4 -24.515 3 28.955 2.288e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Or we can look at all pairwise contrasts (aka Tukey contrasts) of the four treatments:
library("multomp")
summary(glht(br, linfct = mcp(treatment = "Tukey")))
## Simultaneous Tests for General Linear Hypotheses
##
## Multiple Comparisons of Means: Tukey Contrasts
##
## Fit: glm(formula = cbind(success, fail) ~ treatment, family = binomial,
## data = dataset, method = "brglmFit")
##
## Linear Hypotheses:
## Estimate Std. Error z value Pr(>|z|)
## b - a == 0 3.9761 1.4667 2.711 0.0286 *
## c - a == 0 0.9904 0.4834 2.049 0.1513
## d - a == 0 0.3041 0.4336 0.701 0.8864
## c - b == 0 -2.9857 1.4851 -2.010 0.1641
## d - b == 0 -3.6720 1.4696 -2.499 0.0517 .
## d - c == 0 -0.6863 0.4922 -1.394 0.4743
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- single-step method)
Finally, for the standard ML estimator you can also use the quasibinomial family as you did in your post, although in this data set you don't have overdispersion (rather a little bit of underdispersion). However, the brglm2 package does not support this at the moment. But given that there is no evidence for overdispersion I would not be concerned about this here. | Proportion data with number of trials known (and separation?): GLM or beta regression? | The data you have are really a classic binomial setting and you can use binomial GLMs to model the data, either using the standard maximum likelihood (ML) estimator or using a bias-reduced (BR) estima | Proportion data with number of trials known (and separation?): GLM or beta regression?
The data you have are really a classic binomial setting and you can use binomial GLMs to model the data, either using the standard maximum likelihood (ML) estimator or using a bias-reduced (BR) estimator (Firth's penalized estimator). I wouldn't use beta regression here.
For treatment b you have 49 successes and 0 failures.
subset(dataset, treatment == "b")
## treatment success fail n
## 6 b 10 0 10
## 7 b 10 0 10
## 8 b 9 0 9
## 9 b 10 0 10
## 10 b 10 0 10
Hence there is quasi-complete separation leading to an infinite ML estimator while the BR estimator is guaranteed to be finite. Note that the BR estimator provides a principled solution for the separation while the trimming of the proportion data is rather ad hoc.
For fitting the binomial GLM with ML and BR you can use the glm() function and combine it with the brglm2 package (Kosmidis & Firth, 2020, Biometrika, doi:10.1093/biomet/asaa052). An easier way to specify the binomial response is to use a matrix where the first column contains the successes and the second column the failures:
ml <- glm(cbind(success, fail) ~ treatment, data = dataset, family = binomial)
summary(ml)
## Call:
## glm(formula = cbind(success, fail) ~ treatment, family = binomial,
## data = dataset)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.81457 -0.33811 0.00012 0.54942 1.86737
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.6325 0.3001 2.108 0.0351 *
## treatmentb 20.4676 3308.1100 0.006 0.9951
## treatmentc 1.0257 0.4888 2.099 0.0359 *
## treatmentd 0.3119 0.4351 0.717 0.4734
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 43.36 on 19 degrees of freedom
## Residual deviance: 13.40 on 16 degrees of freedom
## AIC: 56.022
##
## Number of Fisher Scoring iterations: 18
The corresponding BR estimator can be obtained by using the "brglmFit" method (provided in brglm2) instead of the default "glm.fit" method:
library("brglm2")
br <- glm(cbind(success, fail) ~ treatment, data = dataset, family = binomial,
method = "brglmFit")
summary(br)
## Call:
## glm(formula = cbind(success, fail) ~ treatment, family = binomial,
## data = dataset, method = "brglmFit")
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.7498 -0.2890 0.4368 0.6030 1.9096
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.6190 0.2995 2.067 0.03875 *
## treatmentb 3.9761 1.4667 2.711 0.00671 **
## treatmentc 0.9904 0.4834 2.049 0.04049 *
## treatmentd 0.3041 0.4336 0.701 0.48304
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 43.362 on 19 degrees of freedom
## Residual deviance: 14.407 on 16 degrees of freedom
## AIC: 57.029
##
## Type of estimator: AS_mixed (mixed bias-reducing adjusted score equations)
## Number of Fisher Scoring iterations: 1
Note that the coefficient estimates, standard errors, and t statistics for the non-separated treatments only change very slightly. The main difference is the finite estimate for treatment b.
Because the BR adjustment is so slight, you can still employ the usual inference (Wald, likelihood ratio, etc.) and information criteria etc. In R with brglm2 this is particularly easy because the br model fitted above still inherits from glm. As an illustration we can assess the overall significance of the treatment factor:
br0 <- update(br, . ~ 1)
AIC(br0, br)
## df AIC
## br0 1 79.98449
## br 4 57.02927
library("lmtest")
lrtest(br0, br)
## Likelihood ratio test
##
## Model 1: cbind(success, fail) ~ 1
## Model 2: cbind(success, fail) ~ treatment
## #Df LogLik Df Chisq Pr(>Chisq)
## 1 1 -38.992
## 2 4 -24.515 3 28.955 2.288e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Or we can look at all pairwise contrasts (aka Tukey contrasts) of the four treatments:
library("multomp")
summary(glht(br, linfct = mcp(treatment = "Tukey")))
## Simultaneous Tests for General Linear Hypotheses
##
## Multiple Comparisons of Means: Tukey Contrasts
##
## Fit: glm(formula = cbind(success, fail) ~ treatment, family = binomial,
## data = dataset, method = "brglmFit")
##
## Linear Hypotheses:
## Estimate Std. Error z value Pr(>|z|)
## b - a == 0 3.9761 1.4667 2.711 0.0286 *
## c - a == 0 0.9904 0.4834 2.049 0.1513
## d - a == 0 0.3041 0.4336 0.701 0.8864
## c - b == 0 -2.9857 1.4851 -2.010 0.1641
## d - b == 0 -3.6720 1.4696 -2.499 0.0517 .
## d - c == 0 -0.6863 0.4922 -1.394 0.4743
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## (Adjusted p values reported -- single-step method)
Finally, for the standard ML estimator you can also use the quasibinomial family as you did in your post, although in this data set you don't have overdispersion (rather a little bit of underdispersion). However, the brglm2 package does not support this at the moment. But given that there is no evidence for overdispersion I would not be concerned about this here. | Proportion data with number of trials known (and separation?): GLM or beta regression?
The data you have are really a classic binomial setting and you can use binomial GLMs to model the data, either using the standard maximum likelihood (ML) estimator or using a bias-reduced (BR) estima |
51,611 | Proportion data with number of trials known (and separation?): GLM or beta regression? | It isn't that beta regression on its own solved the problem. It's that you adjusted the data with a line of code:
y.doubleprime = (y.prime*(length(y.prime)-1) + 0.5)/length(y.prime)
so that the beta regression software didn't have to deal with the exact 0 or 1 proportions that it (like logistic regression) can't handle directly. A vignette on the betareg package says that's "a useful transformation in practice ... where n is the sample size." I'm not sure whether that adjustment based on n should be done for each set of observations as you seem to (I'm not fluent in tidyverse) or if that should be based on the entire set of observations. Presumably you followed the recommendation of the paper you linked.
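To see what that single line does, here is a tiny self-contained R illustration (the proportions are made up, and y.prime simply plays the role of the observed proportions; the squeeze is the one commonly attributed to Smithson & Verkuilen in the betareg vignette):
y.prime <- c(0.4, 1, 0.9, 0, 1)                 # made-up proportions, including exact 0 and 1
n <- length(y.prime)
y.doubleprime <- (y.prime * (n - 1) + 0.5) / n   # the adjustment discussed above
y.doubleprime                                    # every value now lies strictly inside (0, 1)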
In this thread one of the package authors describes the proper use of weights in the betareg() function: "make sure that your weights are scaled such that sum(weights) corresponds to the number of independent observations." That seems to be what you did.
As you "score mortality not on individuals, but on groups of individuals as a proportion (the denominator, i.e., number of trials, is known)," there's an alternative that would allow the Firth penalization. Just put your data into a longer form with one row per alive/dead (0/1) observation.
For example, you would expand your first batch of treatment a observations into 6 rows with outcome 0 and 4 with outcome 1. Then the outcome would be a set of 0/1 values as the logistf package expects. If you want to keep track of which batch of a treatment was involved (your example suggests 5 separate batches for each of the 4 treatments) you could annotate that in each row, too.
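A rough sketch of that reshaping in R, assuming a data frame dataset with columns treatment, success and fail as in the other answers (whether "success" codes dead or alive is up to you):
long <- do.call(rbind, lapply(seq_len(nrow(dataset)), function(i) {
  data.frame(batch     = i,                                   # keeps track of which batch the row came from
             treatment = dataset$treatment[i],
             outcome   = rep(c(1, 0), c(dataset$success[i], dataset$fail[i])))
}))
head(long)                                                    # one 0/1 row per individual
## library(logistf); logistf(outcome ~ treatment, data = long)   # Firth-penalized fit on the long data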
Added in response to another answer:
I wasn't aware of the brglm2 package recommended in another answer and agree that is probably the simplest and most appropriate way to proceed. Make sure you understand the nature of the penalization you use, however. For example, the original Firth method penalizes both the intercept and the regression coefficients. Thus probability estimates from such a model might be biased even though you get better coefficient estimates. Modified approaches are now available in the logistf and brglm2 packages. | Proportion data with number of trials known (and separation?): GLM or beta regression? | It isn't that beta regression on its own solved the problem. It's that you adjusted the data with a line of code:
y.doubleprime = (y.prime*(length(y.prime)-1) + 0.5)/length(y.prime)
so that the beta | Proportion data with number of trials known (and separation?): GLM or beta regression?
It isn't that beta regression on its own solved the problem. It's that you adjusted the data with a line of code:
y.doubleprime = (y.prime*(length(y.prime)-1) + 0.5)/length(y.prime)
so that the beta regression software didn't have to deal with the exact 0 or 1 proportions that it (like logistic regression) can't handle directly. A vignette on the betareg package says that's "a useful transformation in practice ... where n is the sample size." I'm not sure whether that adjustment based on n should be done for each set of observations as you seem to (I'm not fluent in tidyverse) or if that should be based on the entire set of observations. Presumably you followed the recommendation of the paper you linked.
In this thread one of the package authors describes the proper use of weights in the betareg() function: "make sure that your weights are scaled such that sum(weights) corresponds to the number of independent observations." That seems to be what you did.
As you "score mortality not on individuals, but on groups of individuals as a proportion (the denominator, i.e., number of trials, is known)," there's an alternative that would allow the Firth penalization. Just put your data into a longer form with one row per alive/dead (0/1) observation.
For example, you would expand your first batch of treatment a observations into 6 rows with outcome 0 and 4 with outcome 1. Then the outcome would be a set of 0/1 values as the logistf package expects. If you want to keep track of which batch of a treatment was involved (your example suggests 5 separate batches for each of the 4 treatments) you could annotate that in each row, too.
Added in response to another answer:
I wasn't aware of the brglm2 package recommended in another answer and agree that is probably the simplest and most appropriate way to proceed. Make sure you understand the nature of the penalization you use, however. For example, the original Firth method penalizes both the intercept and the regression coefficients. Thus probability estimates from such a model might be biased even though you get better coefficient estimates. Modified approaches are now available in the logistf and brglm2 packages. | Proportion data with number of trials known (and separation?): GLM or beta regression?
It isn't that beta regression on its own solved the problem. It's that you adjusted the data with a line of code:
y.doubleprime = (y.prime*(length(y.prime)-1) + 0.5)/length(y.prime)
so that the beta |
51,612 | Proportion data with number of trials known (and separation?): GLM or beta regression? | There's already a good answer from Achim that points out that this is really not a good scenario for beta-regression, because you seem to be in a binomial sampling situation and beta-regression will simply be an approximation to that.
If one uses a binomial likelihood, the only question is how to appropriately deal with all 0s or all 1s. Software for standard logistic regression is not really an option, because it will not converge (the maximum likelihood estimator for some log-odds ratios will be $\pm \infty$, which leads to the algorithm stopping with some really large estimates and standard errors without converging). There are several alternative options:
(0) If your scenario is really as simple as your example and you just have a single proportion to estimate per group and there are no covariates or experimental structure (like randomization, several treatments occurring together within different experiments etc.), you can of course just work with Clopper-Pearson confidence intervals and median-unbiased estimates for each proportion on its own.
(1) I assume there's some more complexity to your real problem, where some kind of regression approach is needed. In that case, exact logistic regression can give you exact confidence intervals (of course sometimes the upper or lower limit of those will be $\pm \infty$) and median-unbiased estimates. There are fewer software options for that than for other approaches (e.g. PROC LOGISTIC in SAS covers it, as well as StatXact, while R - as far as I know - only has a close approximation to it through an MCMC approach via the elrm package*). In the absence of covariates and anything else to take into account, exact logistic regression will end up doing the same as option (0).
(2) Firth penalized likelihood method (corresponding to Bayesian maximum-a-posteriori estimation with Jeffreys prior on all coefficients incl. the intercept), which at times can result in some weird estimates (but pretty sensible confidence intervals).
(3) Bayesian logistic regression with some suitable priors. This can often behave slightly better than options (1) and (2), gives you more flexibility and even lets you reflect any particular prior knowledge (but doesn't force you to, you can keep your priors pretty vague or weakly informative).
(4) Hierarchical models (whether Bayesian or not) that assume that information should be borrowed between some units (i.e. shrinkage towards common parameters).
There is plenty of good software for (3) and (4), e.g. in R there are the brms and rstanarm packages.
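As a hedged sketch (the counts below are just the 49-out-of-49 pattern from this thread, and the rstanarm call assumes a data frame dataset with success/fail counts per treatment), option (0) is one line of base R and option (3) is a few lines with rstanarm:
binom.test(x = 49, n = 49)        # exact Clopper-Pearson interval for a single proportion
## library("rstanarm")
## fit <- stan_glm(cbind(success, fail) ~ treatment, family = binomial(),
##                 data = dataset, prior = normal(0, 2.5), prior_intercept = normal(0, 5))
## posterior_interval(fit, prob = 0.95)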
* Regarding the elrm package that "claims" to do exact logistic regression via a MCMC approach, I have tried it in the past and was not sure that it approximates the exact solution really well. E.g. here is a simple example where it provides numerically rather different results than the SAS implementation that I would tend to trust a lot:
library(elrm)
elrmfit1 = elrm(y/n ~ treatment,
interest = ~treatment,
r=2, iter = 100000, burnIn = 500,
dataset = data.frame(y=c(0, 10),
n=c(10,10),
treatment=factor(c(0,1))))
summary(elrmfit1)
That produces an estimate of the log-odds ratio of 3.9839 with 95% CI from 2.119672 to Inf. In contrast, PROC LOGISTIC in SAS produces a log-odds ratio of 4.7723 with 95% CI 2.7564 to Infinity. | Proportion data with number of trials known (and separation?): GLM or beta regression? | There's already a good answer from Achim that points out that this is really not a good scenario for beta-regression, because you seem to be in a binomial sampling situation and beta-regression will s | Proportion data with number of trials known (and separation?): GLM or beta regression?
There's already a good answer from Achim that points out that this is really not a good scenario for beta-regression, because you seem to be in a binomial sampling situation and beta-regression will simply be an approximation to that.
If one uses a binomial likelihood, the only question is how to appropriately deal with all 0s or all 1s. Software for standard logistic regression is not really an option, because it will not converge (the maximum likelihood estimator for some log-odds ratios will be $\pm \infty$, which leads to the algorithm stopping with some really large estimates and standard errors without converging). There are several alternative options:
(0) If your scenario is really as simple as your example and you just have a single proportion to estimate per group and there are no covariates or experimental structure (like randomization, several treatments occurring together within different experiments etc.), you can of course just work with Clopper-Pearson confidence intervals and median-unbiased estimates for each proportion on its own.
(1) I assume there's some more complexity to your real problem, where some kind of regression approach is needed. In that case, exact logistic regression can give you exact confidence intervals (of course sometimes the upper or lower limit of those will be $\pm \infty$) and median-unbiased estimates. There are fewer software options for that than for other approaches (e.g. PROC LOGISTIC in SAS covers it, as well as StatXact, while R - as far as I know - only has a close approximation to it through an MCMC approach via the elrm package*). In the absence of covariates and anything else to take into account, exact logistic regression will end up doing the same as option (0).
(2) Firth penalized likelihood method (corresponding to Bayesian maximum-a-posteriori estimation with Jeffreys prior on all coefficients incl. the intercept), which at times can result in some weird estimates (but pretty sensible confidence intervals).
(3) Bayesian logistic regression with some suitable priors. This can often behave slightly better than options (1) and (2), gives you more flexibility and even lets you reflect any particular prior knowledge (but doesn't force you to, you can keep your priors pretty vague or weakly informative).
(4) Hierarchical models (whether Bayesian or not) that assume that information should be borrowed between some units (i.e. shrinkage towards common parameters).
There is plenty of good software for (3) and (4), e.g. in R there are the brms and rstanarm packages.
* Regarding the elrm package that "claims" to do exact logistic regression via a MCMC approach, I have tried it in the past and was not sure that it approximates the exact solution really well. E.g. here is a simple example where it provides numerically rather different results than the SAS implementation that I would tend to trust a lot:
library(elrm)
elrmfit1 = elrm(y/n ~ treatment,
interest = ~treatment,
r=2, iter = 100000, burnIn = 500,
dataset = data.frame(y=c(0, 10),
n=c(10,10),
treatment=factor(c(0,1))))
summary(elrmfit1)
That produces an estimate of the log-odds ratio of 3.9839 with 95% CI from 2.119672 to Inf. In contrast, PROC LOGISTIC in SAS produces a log-odds ratio of 4.7723 with 95% CI 2.7564 to Infinity. | Proportion data with number of trials known (and separation?): GLM or beta regression?
There's already a good answer from Achim that points out that this is really not a good scenario for beta-regression, because you seem to be in a binomial sampling situation and beta-regression will s |
51,613 | What core topics would all statisticians be required to know? | I agree with @Bayequentist but would go further.
"Statistician" is a broader term than just people with a PhD in statistics. My PhD is in psychometrics, but I've worked as a statistician for more than 20 years. (When talking to statisticians, I call myself a data analyst).
I know less theory than a lot of people (probably less than almost all the regular answerers on this site) but I've got a lot of practical experience with data.
So, e.g. I'd expect anyone called a statistician to know something about various kinds of regression. But what about them? Need we all be able to prove various theorems? I don't think so. Need we all be able to take a messy data set and figure out how to model it (and know how to interact with the subject-matter expert)? I don't think so.
(I can do the second, but not the first). | What core topics would all statisticians be required to know? | I agree with @Bayequentist but would go further.
"Statistician" is a broader term than just people with a PhD in statistics. My PhD is in psychometrics, but I've worked as a statistician for more than | What core topics would all statisticians be required to know?
I agree with @Bayequentist but would go further.
"Statistician" is a broader term than just people with a PhD in statistics. My PhD is in psychometrics, but I've worked as a statistician for more than 20 years. (When talking to statisticians, I call myself a data analyst).
I know less theory than a lot of people (probably less than almost all the regular answerers on this site) but I've got a lot of practical experience with data.
So, e.g. I'd expect anyone called a statistician to know something about various kinds of regression. But what about them? Need we all be able to prove various theorems? I don't think so. Need we all be able to take a messy data set and figure out how to model it (and know how to interact with the subject-matter expert)? I don't think so.
(I can do the second, but not the first). | What core topics would all statisticians be required to know?
I agree with @Bayequentist but would go further.
"Statistician" is a broader term than just people with a PhD in statistics. My PhD is in psychometrics, but I've worked as a statistician for more than |
51,614 | What core topics would all statisticians be required to know? | Statisticians are an extremely diverse set of professionals/researchers, so the set of core topics that all of them should know is actually quite small. If you visit websites of statistics graduate programs in the US, you'll see that the common core topics are as follows: probability theory; theory of statistical inference; theory of (generalized) linear models; and basic computer programming skills. | What core topics would all statisticians be required to know? | Statisticians are an extremely diverse set of professionals/researchers, so the set of core topics that all of them should know is actually quite small. If you visit websites of statistics graduate pr | What core topics would all statisticians be required to know?
Statisticians are an extremely diverse set of professionals/researchers, so the set of core topics that all of them should know is actually quite small. If you visit websites of statistics graduate programs in the US, you'll see that the common core topics are as follows: probability theory; theory of statistical inference; theory of (generalized) linear models; and basic computer programming skills. | What core topics would all statisticians be required to know?
Statisticians are an extremely diverse set of professionals/researchers, so the set of core topics that all of them should know is actually quite small. If you visit websites of statistics graduate pr |
51,615 | What core topics would all statisticians be required to know? | Applied statisticians should know
conditional probability inside and out; this is the source of a great deal of misunderstandings about p-values and type I assertion probability $\alpha$ as well as holding back more usage of the Bayesian paradigm
experimental design, sources of bias and variability
measurement properties and how to optimize them
how to translate subject matter knowledge into model specification
which modeling assumptions matter the most, and which type of model flexibility should be prioritized (e.g., in many situations nonlinearity is more damaging than non-additivity)
how to specify and interpret details of regression models (at least up to specification of nonlinear interaction terms)
model uncertainty and how it damages inference, and understanding that trying multiple models can destroy inference
understand that getting a Bayesian posterior probability of normality is better than trying to use the data to decide whether to assume normality or not
how to specify flexible models so that you don't need to worry so much about model uncertainty
instead of learning all the standard statistical tests and ANOVA, learn how to accomplish them through modeling (this includes standard nonparametric tests such as Wilcoxon and Kruskal-Wallis) | What core topics would all statisticians be required to know? | Applied statisticians should know
conditional probability inside and out; this is the source of a great deal of misunderstandings about p-values and type I assertion probability $\alpha$ as well as h | What core topics would all statisticians be required to know?
Applied statisticians should know
conditional probability inside and out; this is the source of a great deal of misunderstandings about p-values and type I assertion probability $\alpha$ as well as holding back more usage of the Bayesian paradigm
experimental design, sources of bias and variability
measurement properties and how to optimize them
how to translate subject matter knowledge into model specification
which modeling assumptions matter the most, and which type of model flexibility should be prioritized (e.g., in many situations nonlinearity is more damaging than non-additivity)
how to specify and interpret details of regression models (at least up to specification of nonlinear interaction terms)
model uncertainty and how it damages inference, and understanding that trying multiple models can destroy inference
understand that getting a Bayesian posterior probability of normality is better than trying to use the data to decide whether to assume normality or not
how to specify flexible models so that you don't need to worry so much about model uncertainty
instead of learning all the standard statistical tests and ANOVA, learn how to accomplish them through modeling (this includes standard nonparametric tests such as Wilcoxon and Kruskal-Wallis) | What core topics would all statisticians be required to know?
Applied statisticians should know
conditional probability inside and out; this is the source of a great deal of misunderstandings about p-values and type I assertion probability $\alpha$ as well as h |
51,616 | What core topics would all statisticians be required to know? | A statistician, meaning a mathematician who specializes in statistics, has in my experience a basic set of theoretical knowledge in:
Probability Theory (the most important one)
Mathematical Inference
Mathematical optimization
Regression
Basic Programming
Exploratory Data Research
Stochastic methods
Keep in mind that, even if it does not look as complex as other fields of mathematics at face value, it is deeply rooted in measure theory, so it is still a specialization of an already specialized science, namely mathematics.
Now, recently, due to technological advancement, computer knowledge has become a very sought-after skill in the field, so knowledge of Big Data and basic Data Science is becoming the 8th item in the list above.
Every statistics degree with a rigorous basis of theoretical knowledge would contain every element in the list above. | What core topics would all statisticians be required to know? | A statistician, meaning a mathematician who specializes in statistics, has in my experience a basic set of theoretical knowledge in:
Probability Theory (the most important one)
Mathematical Inference
M | What core topics would all statisticians be required to know?
A statistician, meaning a mathematician who specializes in statistics, has in my experience a basic set of theoretical knowledge in:
Probability Theory (the most important one)
Mathematical Inference
Mathematical optimization
Regression
Basic Programming
Exploratory Data Research
Stochastic methods
Keep in mind that, even if it does not look as complex as other fields of mathematics at face value, it is deeply rooted in measure theory, so it is still a specialization of an already specialized science, namely mathematics.
Now, recently, due to technological advancement, computer knowledge has become a very sought-after skill in the field, so knowledge of Big Data and basic Data Science is becoming the 8th item in the list above.
Every statistics degree with a rigorous basis of theoretical knowledge would contain every element in the list above. | What core topics would all statisticians be required to know?
A statistician, meaning a mathematician who specializes in statistics, has in my experience a basic set of theoretical knowledge in:
Probability Theory (the most important one)
Mathematical Inference
M |
51,617 | What kind of t-test should I use for testing the significance of a Professor's IQ? | You don't need any significance tests. You already have the IQ of the professor in question, namely, 125, which is not equal to 100. The situation that you seem to have this situation confused with is when you have a sample of people from some population and you want to make an inference about the mean IQ of the population. Saying that the sample mean is "significantly" different from 100 means that you've decided that the population mean isn't 100. But there's no population to make inferences about here, since the mean IQ of the population you're comparing the professor to is already known. | What kind of t-test should I use for testing the significance of a Professor's IQ? | You don't need any significance tests. You already have the IQ of the professor in question, namely, 125, which is not equal to 100. The situation that you seem to have this situation confused with is | What kind of t-test should I use for testing the significance of a Professor's IQ?
You don't need any significance tests. You already have the IQ of the professor in question, namely, 125, which is not equal to 100. The situation that you seem to have this situation confused with is when you have a sample of people from some population and you want to make an inference about the mean IQ of the population. Saying that the sample mean is "significantly" different from 100 means that you've decided that the population mean isn't 100. But there's no population to make inferences about here, since the mean IQ of the population you're comparing the professor to is already known. | What kind of t-test should I use for testing the significance of a Professor's IQ?
You don't need any significance tests. You already have the IQ of the professor in question, namely, 125, which is not equal to 100. The situation that you seem to have this situation confused with is |
51,618 | What kind of t-test should I use for testing the significance of a Professor's IQ? | If the professor scored significantly below average, than that would be worth reporting. Therefore you are interested in both ends of the tails and need to perform a two tailed test.
Usually you compare your p with some alpha. That alpha is agreed upon to be 0.05 just as testing is agreed to be two tailed.
These are two reasons for two tails. | What kind of t-test should I use for testing the significance of a Professor's IQ? | If the professor scored significantly below average, than that would be worth reporting. Therefore you are interested in both ends of the tails and need to perform a two tailed test.
Usually you compa | What kind of t-test should I use for testing the significance of a Professor's IQ?
If the professor scored significantly below average, than that would be worth reporting. Therefore you are interested in both ends of the tails and need to perform a two tailed test.
Usually you compare your p with some alpha. That alpha is agreed upon to be 0.05 just as testing is agreed to be two tailed.
These are two reasons for two tails. | What kind of t-test should I use for testing the significance of a Professor's IQ?
If the professor scored significantly below average, than that would be worth reporting. Therefore you are interested in both ends of the tails and need to perform a two tailed test.
Usually you compa |
51,619 | Is it fine to slightly overfit, if its giving you good predictive power? | How do you know that your model is overfitted? If an "overfitted" model (let us call it model A) is giving you truly better predictive power (no cheating, honest out of sample assessment) than some benchmark model that you think is non-overfitted (call it model B), I would suspect that model B is actually underfitted while model A is
less-underfitted than model B or
non-overfitted or perhaps
slightly overfitted (but not as severely as model B is underfitted).
So I would say it is fine to use model A in place of model B if you have to choose one of the two.
Regarding whether to keep both predictors or drop one, I would suggest making the choice based on out-of-sample performance assessment. If a model containing both of them gives better forecasts, choose it. | Is it fine to slightly overfit, if its giving you good predictive power? | How do you know that your model is overfitted? If an "overfitted" model (let us call it model A) is giving you truly better predictive power (no cheating, honest out of sample assessment) than some be | Is it fine to slightly overfit, if its giving you good predictive power?
How do you know that your model is overfitted? If an "overfitted" model (let us call it model A) is giving you truly better predictive power (no cheating, honest out of sample assessment) than some benchmark model that you think is non-overfitted (call it model B), I would suspect that model B is actually underfitted while model A is
less-underfitted than model B or
non-overfitted or perhaps
slightly overfitted (but not as severely as model B is underfitted).
So I would say it is fine to use model A in place of model B if you have to choose one of the two.
Regarding whether to keep both predictors or drop one, I would suggest making the choice based on out-of-sample performance assessment. If a model containing both of them gives better forecasts, choose it. | Is it fine to slightly overfit, if its giving you good predictive power?
How do you know that your model is overfitted? If an "overfitted" model (let us call it model A) is giving you truly better predictive power (no cheating, honest out of sample assessment) than some be |
51,620 | Is it fine to slightly overfit, if its giving you good predictive power? | Sounds like your issue is collinearity rather than overfitting, as user1320502 suggested in a comment.
Do you know where these two variables came from? For example, if one is $x$ and one is $x^2$, centering the variables may help.
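A quick way to see the effect of centering on an $x$, $x^2$ pair (purely illustrative numbers):
set.seed(1)
x  <- rnorm(100, mean = 10)        # a predictor whose values sit far from zero
cor(x, x^2)                        # close to 1: x and x^2 are nearly collinear
xc <- x - mean(x)
cor(xc, xc^2)                      # much smaller after centering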
If all you care about is predicting, collinearity is not a direct problem. But if you look at other things (confidence intervals, etc), collinearity will affect things.
You might look at similar questions like:
How to solve collinearity problems in OLS regression?
How to prevent collinearity? (the question is about preventing collinearity, but see Aleksandr Blekh's answer which talks about dealing with collinearity if you didn't prevent it in the first place.)
Importance of multiple linear regression assumptions when building predictive regression models (see the OP's comment on Michael Bishop's answer: nested CV) | Is it fine to slightly overfit, if its giving you good predictive power? | Sounds like your issue is collinearity rather than overfitting, as user1320502 suggested in a comment.
Do you know where these two variables came from? For example, if one is $x$ and one is $x^2$, cen | Is it fine to slightly overfit, if its giving you good predictive power?
Sounds like your issue is collinearity rather than overfitting, as user1320502 suggested in a comment.
Do you know where these two variables came from? For example, if one is $x$ and one is $x^2$, centering the variables may help.
If all you care about is predicting, collinearity is not a direct problem. But if you look at other things (confidence intervals, etc), collinearity will affect things.
You might look at similar questions like:
How to solve collinearity problems in OLS regression?
How to prevent collinearity? (the question is about preventing collinearity, but see Aleksandr Blekh's answer which talks about dealing with collinearity if you didn't prevent it in the first place.)
Importance of multiple linear regression assumptions when building predictive regression models (see the OP's comment on Michael Bishop's answer: nested CV) | Is it fine to slightly overfit, if its giving you good predictive power?
Sounds like your issue is collinearity rather than overfitting, as user1320502 suggested in a comment.
Do you know where these two variables came from? For example, if one is $x$ and one is $x^2$, cen |
51,621 | In Bayesian linear regression, why do we assume parameter prior has zero mean? | Encoding assumptions about the data isn't quite the role of the prior distribution in a Bayesian model.
The prior does not reflect any assumptions about the data: It mathematically captures any assumptions the analyst makes about model parameters before observing the data. The posterior distribution reflects both the prior and observed data.
Take this quote from your question:
This means $x^Tw$ has mean zero, so $f(x)$ is zero. Why would you want to have a model with mean zero? You want to predict $y$!
Unless we are assuming the data $y$ has mean zero, otherwise, this doesn't make sense to me.
That $f(x)=0$ is only true using the prior distribution for $w$. It will not be true if one uses the posterior distribution for $w$.
Whether a zero-mean prior accurately encodes your assumptions is another story; but whatever assumptions it does capture do not apply to the data. | In Bayesian linear regression, why do we assume parameter prior has zero mean? | Encoding assumptions about the data isn't quite the role of the prior distribution in a Bayesian model.
The prior does not reflect any assumptions about the data: It mathematically captures any assump | In Bayesian linear regression, why do we assume parameter prior has zero mean?
Encoding assumptions about the data isn't quite the role of the prior distribution in a Bayesian model.
The prior does not reflect any assumptions about the data: It mathematically captures any assumptions the analyst makes about model parameters before observing the data. The posterior distribution reflects both the prior and observed data.
Take this quote from your question:
This means $x^Tw$ has mean zero, so $f(x)$ is zero. Why would you want to have a model with mean zero? You want to predict $y$!
Unless we are assuming the data $y$ has mean zero, otherwise, this doesn't make sense to me.
That $f(x)=0$ is only true using the prior distribution for $w$. It will not be true if one uses the posterior distribution for $w$.
Whether a zero-mean prior accurately encodes your assumptions is another story; but whatever assumptions it does capture do not apply to the data. | In Bayesian linear regression, why do we assume parameter prior has zero mean?
Encoding assumptions about the data isn't quite the role of the prior distribution in a Bayesian model.
The prior does not reflect any assumptions about the data: It mathematically captures any assump |
51,622 | In Bayesian linear regression, why do we assume parameter prior has zero mean? | You can use any prior that you want. It does not have to be normal, it can have different mean, or it can have no mean (Cauchy)... It is your subjective choice that you make before seeing the data.
Recall that posterior is likelihood times prior
$$ \underbrace{P(\theta|D)}_\text{posterior} \propto \underbrace{P(D|\theta)}_\text{likelihood} \times \underbrace{P(\theta)}_\text{prior} $$
As mentioned by others, the prior expresses your initial, out-of-data beliefs about your model, which are then updated using the observed data. So you can set the mean of your prior to zero, and if the observed data provide enough information to move it to another value, the posterior will be different from zero. This idea is often used on purpose; for example, Spiegelhalter (2004) describes how different priors may help to test different hypotheses against the data and facilitate decision-making, where a zero-mean prior can serve as a "sceptical" prior.
As a picture is (sometimes) worth a thousand words, you can check and run yourself one of the examples in JavaScript library bayes.js for MCMC sampling. The example illustrates simple model for estimating $\mu$ and $\sigma$ for normal distribution (i.e. intercept-only regression if you prefer to think of it like this), where model is defined as follows:
$$ x_i \sim \mathrm{Normal}(\mu, \sigma) $$
$$ \mu \sim \mathrm{Normal}(0, 100) $$
$$ \sigma \sim \mathrm{Uniform}(0, 100) $$
You can run the example to convince yourself that not many MCMC iterations are needed for the algorithm to converge and for the posterior mean to move from the prior value of zero to something around $185$. The prior mean is not what we want the posterior mean to be, but what we think of our model before seeing the data; in many cases, before seeing the data you do not know whether the regression parameters have any effect, i.e. whether they differ from zero, so zero is often a reasonable choice.
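If you prefer R to JavaScript, the same point can be made with a few lines of conjugate-normal arithmetic (all numbers below are made up, and the data standard deviation is treated as known for simplicity):
set.seed(1)
sigma <- 20                                  # treat the data sd as known
y     <- rnorm(30, mean = 185, sd = sigma)
m0 <- 0; s0 <- 100                           # prior: mu ~ Normal(0, 100)
post_var  <- 1 / (1 / s0^2 + length(y) / sigma^2)
post_mean <- post_var * (m0 / s0^2 + sum(y) / sigma^2)
c(prior = m0, posterior = post_mean, sample_mean = mean(y))   # the posterior mean sits near the data, not near 0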
Spiegelhalter, D. J. (2004). Incorporating Bayesian ideas into health-care evaluation. Statistical Science, 156-174. | In Bayesian linear regression, why do we assume parameter prior has zero mean? | You can use any prior that you want. It does not have to be normal, it can have different mean, or it can have no mean (Cauchy)... It is your subjective choice that you make before seeing the data.
Re | In Bayesian linear regression, why do we assume parameter prior has zero mean?
You can use any prior that you want. It does not have to be normal, it can have different mean, or it can have no mean (Cauchy)... It is your subjective choice that you make before seeing the data.
Recall that posterior is likelihood times prior
$$ \underbrace{P(\theta|D)}_\text{posterior} \propto \underbrace{P(D|\theta)}_\text{likelihood} \times \underbrace{P(\theta)}_\text{prior} $$
As mentioned by others, the prior expresses your initial, out-of-data beliefs about your model, which are then updated using the observed data. So you can set the mean of your prior to zero, and if the observed data provide enough information to move it to another value, the posterior will be different from zero. This idea is often used on purpose; for example, Spiegelhalter (2004) describes how different priors may help to test different hypotheses against the data and facilitate decision-making, where a zero-mean prior can serve as a "sceptical" prior.
As a picture is (sometimes) worth a thousand words, you can check and run yourself one of the examples in JavaScript library bayes.js for MCMC sampling. The example illustrates simple model for estimating $\mu$ and $\sigma$ for normal distribution (i.e. intercept-only regression if you prefer to think of it like this), where model is defined as follows:
$$ x_i \sim \mathrm{Normal}(\mu, \sigma) $$
$$ \mu \sim \mathrm{Normal}(0, 100) $$
$$ \sigma \sim \mathrm{Uniform}(0, 100) $$
You can run the example to convince yourself that not many MCMC iterations are needed for the algorithm to converge and for the posterior mean to move from the prior value of zero to something around $185$. The prior mean is not what we want the posterior mean to be, but what we think of our model before seeing the data; in many cases, before seeing the data you do not know whether the regression parameters have any effect, i.e. whether they differ from zero, so zero is often a reasonable choice.
Spiegelhalter, D. J. (2004). Incorporating Bayesian ideas into health-care evaluation. Statistical Science, 156-174. | In Bayesian linear regression, why do we assume parameter prior has zero mean?
You can use any prior that you want. It does not have to be normal, it can have different mean, or it can have no mean (Cauchy)... It is your subjective choice that you make before seeing the data.
Re |
51,623 | In Bayesian linear regression, why do we assume parameter prior has zero mean? | @user13985 IDK what you mean by no prediction. In your particular model you can derive the posterior distribution if you use a normal prior.
If $\mathbf{y}\sim \text{N}(\mathbf{X}\beta,\mathbf{R})$ and $\beta \sim \text{N}(\mathbf{a},\mathbf{B})$ then the posterior is $\beta|\mathbf{y} \sim \text{N}(\mu, \Sigma)$ where $$\mu = \Sigma\left(\mathbf{X}^\intercal\mathbf{R}^{-1}\mathbf{y} + \mathbf{B}^{-1}\mathbf{a}\right)\quad\text{and}\quad\Sigma = \left(\mathbf{X}^\intercal\mathbf{R}^{-1}\mathbf{X} + \mathbf{B}^{-1}\right)^{-1}$$
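A small numeric check of these formulas in R (everything below is made up, with $\mathbf{R}=\sigma^2 I$, $\mathbf{B}=\tau^2 I$ and $\mathbf{a}=0$):
set.seed(42)
n <- 50
X <- cbind(1, rnorm(n))                       # design matrix with an intercept column
beta_true <- c(2, 3)
sigma2 <- 1; tau2 <- 10
y <- as.vector(X %*% beta_true + rnorm(n, sd = sqrt(sigma2)))
Rinv <- diag(n) / sigma2
Binv <- diag(2) / tau2
a    <- c(0, 0)                               # zero-mean prior
Sigma <- solve(t(X) %*% Rinv %*% X + Binv)
mu    <- Sigma %*% (t(X) %*% Rinv %*% y + Binv %*% a)
cbind(bayes = as.vector(mu), ols = coef(lm(y ~ X - 1)))   # the zero-mean prior shrinks the estimates only slightly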
You can see that the posterior mean is a combination of the observed data $y$ and the prior mean $a$. Using a zero mean places more emphasis on the data $y$, using a non-zero mean shifts the posterior mean. So I don't see what you mean by "destroys the model". The prior is basically your prior belief in the variables, if you don't know anything about the variables, let the data speak for itself by using a zero mean prior. | In Bayesian linear regression, why do we assume parameter prior has zero mean? | @user13985 IDK what you mean by no prediction. In your particular model you can derive the posterior distribution if you use a normal prior.
If $\mathbf{y}\sim \text{N}(\mathbf{X}\beta,\mathbf{R})$ a | In Bayesian linear regression, why do we assume parameter prior has zero mean?
@user13985 IDK what you mean by no prediction. In your particular model you can derive the posterior distribution if you use a normal prior.
If $\mathbf{y}\sim \text{N}(\mathbf{X}\beta,\mathbf{R})$ and $\beta \sim \text{N}(\mathbf{a},\mathbf{B})$ then the posterior is $\beta|\mathbf{y} \sim \text{N}(\mu, \Sigma)$ where $$\mu = \Sigma\left(\mathbf{X}^\intercal\mathbf{R}^{-1}\mathbf{y} + \mathbf{B}^{-1}\mathbf{a}\right)\quad\text{and}\quad\Sigma = \left(\mathbf{X}^\intercal\mathbf{R}^{-1}\mathbf{X} + \mathbf{B}^{-1}\right)^{-1}$$
You can see that the posterior mean is a combination of the observed data $y$ and the prior mean $a$. Using a zero mean places more emphasis on the data $y$, using a non-zero mean shifts the posterior mean. So I don't see what you mean by "destroys the model". The prior is basically your prior belief in the variables, if you don't know anything about the variables, let the data speak for itself by using a zero mean prior. | In Bayesian linear regression, why do we assume parameter prior has zero mean?
@user13985 IDK what you mean by no prediction. In your particular model you can derive the posterior distribution if you use a normal prior.
If $\mathbf{y}\sim \text{N}(\mathbf{X}\beta,\mathbf{R})$ a |
51,624 | In Bayesian linear regression, why do we assume parameter prior has zero mean? | Perhaps one way to help motivate this is that selecting a normal prior with mean 0 is equivalent to ridge regression (i.e. adding an L2 penalty on your estimated coefficient). Ridge regression has been proven to be helpful in regression, therefore adding a normal prior with mean 0 should be helpful too. | In Bayesian linear regression, why do we assume parameter prior has zero mean? | Perhaps one way to help motivate this is that selecting a normal prior with mean 0 is equivalent to ridge regression (i.e. adding an L2 penalty on your estimated coefficient). Ridge regression has bee | In Bayesian linear regression, why do we assume parameter prior has zero mean?
Perhaps one way to help motivate this is that selecting a normal prior with mean 0 is equivalent to ridge regression (i.e. adding an L2 penalty on your estimated coefficient). Ridge regression has been proven to be helpful in regression, therefore adding a normal prior with mean 0 should be helpful too. | In Bayesian linear regression, why do we assume parameter prior has zero mean?
Perhaps one way to help motivate this is that selecting a normal prior with mean 0 is equivalent to ridge regression (i.e. adding an L2 penalty on your estimated coefficient). Ridge regression has bee |
51,625 | Converting a Model from square feet to square meter | The intercept has exactly the same units as your response variable, here house price, and is thus unaffected by changing the units of area.
Note incidentally that while the prediction of a negative price for a property with zero area (and with zero values for any other predictors) will be outside the range of the data, an intercept of that magnitude may signal that your regression functional form is a poor choice, as also hinted by @Mark L. Stone. But much depends on what the currency is. Perhaps 44850 is small change in your unstated currency. More generally, I wouldn't expect a simple straight-line model to be automatically a good choice for house price and area. But if you want advice on that front, please show us your data and ask a new question.
The slope associated with area has units (units of currency) / (units of area) and so to convert, given a change to square metres, you must multiply by (feet/metre)$^2$. The conversion factor lies beyond statistics and is easy to Google, but the exact definitions 12 inches $=$ 1 foot and 25.4 mm $=$ 1 inch render it subject to your favourite means for simple calculations.
It is a pity that statistics seems to be not often taught together with thinking about dimensions and units of measurement. For one splendid article with several insights see
Finney, D. J. 1977. Dimensions of statistics. Journal of the Royal Statistical Society Series C (Applied Statistics) 26(3): 285–289. http://doi.org/10.2307/2346969 (Finney 1917$-$2018) | Converting a Model from square feet to square meter | The intercept has exactly the same units as your response variable, here house price, and is thus unaffected by changing the units of area.
Note incidentally that while the prediction of a negative p | Converting a Model from square feet to square meter
The intercept has exactly the same units as your response variable, here house price, and is thus unaffected by changing the units of area.
Note incidentally that while the prediction of a negative price for a property with zero area (and with zero values for any other predictors) will be outside the range of the data, an intercept of that magnitude may signal that your regression functional form is a poor choice, as also hinted by @Mark L. Stone. But much depends on what the currency is. Perhaps 44850 is small change in your unstated currency. More generally, I wouldn't expect a simple straight-line model to be automatically a good choice for house price and area. But if you want advice on that front, please show us your data and ask a new question.
The slope associated with area has units (units of currency) / (units of area) and so to convert, given a change to square metres, you must multiply by (feet/metre)$^2$. The conversion factor lies beyond statistics and is easy to Google, but the exact definitions 12 inches $=$ 1 foot and 25.4 mm $=$ 1 inch render it subject to your favourite means for simple calculations.
It is a pity that statistics seems to be not often taught together with thinking about dimensions and units of measurement. For one splendid article with several insights see
Finney, D. J. 1977. Dimensions of statistics. Journal of the Royal Statistical Society Series C (Applied Statistics) 26(3): 285–289. http://doi.org/10.2307/2346969 (Finney 1917$-$2018) | Converting a Model from square feet to square meter
The intercept has exactly the same units as your response variable, here house price, and is thus unaffected by changing the units of area.
Note incidentally that while the prediction of a negative p |
51,626 | Converting a Model from square feet to square meter | One way to figure this out (one that works with models more complex than linear regression and conversions more complex than square feet/meters, like Fahrenheit to Celsius) is to write your model and conversion as equations, and do the substitution.
So you have a linear model $y = f(x) = mx + b$, where $x$ is in square feet. What you want is a model $y = g(x')$, where $x'$ is in square meters.
What you now need is the relationship between $x$ and $x'$. For square feet to square meters, this would be $x = 10.7639 \cdot x'$ -- Be careful about the direction of change. What we want to do is take the desired quantity (the area in square meters) and convert it to the quantity we have the model for (the area in square feet).
Now all we have to do is substitute the value of $x$ in the original predictive model with the value of $x$ obtained from our conversion formula. $y = m(10.7639 \cdot x') + b = 10.7639 \cdot m \cdot x' + b$, where $m$ and $b$ are their original (square-foot) values.
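Numerically, using the square-foot slope quoted elsewhere in this thread (280.76, treated here purely as an example value):
m_ft       <- 280.76                # price per square foot (example value from the thread)
ft2_per_m2 <- (1 / 0.3048)^2        # 1 ft = 0.3048 m exactly, so about 10.7639 ft^2 per m^2
m_m2       <- m_ft * ft2_per_m2
m_m2                                # roughly 3022 per square metre; the intercept b is unchanged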
This sort of manipulation becomes easier and has a built-in check if you keep in mind dimensional analysis. That is, instead of working with plain numbers, you work with both numbers and units, and consider the units to be part of the values. In the dimensional analysis approach $x$ would would come with the unit '$ft^2$', $m$ would have '$USD/ft^2$' and $b$ would be in '$USD$'. Thus the dimensional analysis of the original equation would be $y = (USD/ft^2) \cdot (ft^2) + USD$. Cancel factors, and permit addition of like units, and you end up with $y = USD$, as you should. In the new formulation, the conversion factor has units of $ft^2/m^2$, so the dimensional analysis of the final equation ($y = 10.7639 \cdot m \cdot x' + b$) would be $y = (ft^2/m^2) \cdot (USD/ft^2) \cdot (m^2) + USD$. Again, cancel like factors through division/multiplication, and allow their combination through addition, and you end up with $y = USD$. If you messed up, then things wouldn't cancel cleanly. | Converting a Model from square feet to square meter | One way to figure this out (one that works with models more complex than linear regression and conversions more complex than square feet/meters, like Fahrenheit to Celsius) is to write your model and | Converting a Model from square feet to square meter
One way to figure this out (one that works with models more complex than linear regression and conversions more complex than square feet/meters, like Fahrenheit to Celsius) is to write your model and conversion as equations, and do the substitution.
So you have a linear model $y = f(x) = mx + b$, where $x$ is in square feet. What you want is a model $y = g(x')$, where $x'$ is in square meters.
What you now need is the relationship between $x$ and $x'$. For square feet to square meters, this would be $x = 10.7639 \cdot x'$ -- Be careful about the direction of change. What we want to do is take the desired quantity (the area in square meters) and convert it to the quantity we have the model for (the area in square feet).
Now all we have to do is substitute the value of $x$ in the original predictive model with the value of $x$ obtained from our conversion formula. $y = m(10.7639 \cdot x') + b = 10.7639 \cdot m \cdot x' + b$, where $m$ and $b$ are their original (square-foot) values.
This sort of manipulation becomes easier and has a built-in check if you keep in mind dimensional analysis. That is, instead of working with plain numbers, you work with both numbers and units, and consider the units to be part of the values. In the dimensional analysis approach $x$ would would come with the unit '$ft^2$', $m$ would have '$USD/ft^2$' and $b$ would be in '$USD$'. Thus the dimensional analysis of the original equation would be $y = (USD/ft^2) \cdot (ft^2) + USD$. Cancel factors, and permit addition of like units, and you end up with $y = USD$, as you should. In the new formulation, the conversion factor has units of $ft^2/m^2$, so the dimensional analysis of the final equation ($y = 10.7639 \cdot m \cdot x' + b$) would be $y = (ft^2/m^2) \cdot (USD/ft^2) \cdot (m^2) + USD$. Again, cancel like factors through division/multiplication, and allow their combination through addition, and you end up with $y = USD$. If you messed up, then things wouldn't cancel cleanly. | Converting a Model from square feet to square meter
One way to figure this out (one that works with models more complex than linear regression and conversions more complex than square feet/meters, like Fahrenheit to Celsius) is to write your model and |
51,627 | Converting a Model from square feet to square meter | There are 0.092903 square meters in 1 square foot, so the slope should be 280.76/0.092903 (about 3022 per square metre) and the intercept remains the same | Converting a Model from square feet to square meter | There are 0.092903 square meters in 1 square foot, so the slope should be 280.76/0.092903 (about 3022 per square metre) and the intercept remains the same | Converting a Model from square feet to square meter
There are 0.092903 square meters in 1 square foot, so the slope should be 280.76/0.092903 (about 3022 per square metre) and the intercept remains the same | Converting a Model from square feet to square meter
There are 0.092903 square meters in 1 square foot, so the slope should be 280.76/0.092903 (about 3022 per square metre) and the intercept remains the same
51,628 | Converting a Model from square feet to square meter | I can answer this. In this case, there will not be any change in the intercept. However, only the slope changes with the change in the magnitude. This house costs 280.76 per sqft. Hence it would cost 3022.08 per sq.metre assuming that there is no currency change and keeping in mind the scale of change from sqft to sq.metre (1 sq.ft = 0.092903 sq.metres) | Converting a Model from square feet to square meter | I can answer this. In this case, there will not be any change in the intercept. However, only the slope changes with the change in the magnitude. This house costs 280.76 per sqft. Hence it would cost 30 | Converting a Model from square feet to square meter
I can answer this. In this case, there will not be any change in the intercept. However, only the slope changes with the change in the magnitude. This house costs 280.76 per sqft. Hence it would cost 3022.08 per sq.metre assuming that there is no currency change and keeping in mind the scale of change from sqft to sq.metre (1 sq.ft = 0.092903 sq.metres) | Converting a Model from square feet to square meter
I can answer this. In this case, there will not be any change in the intercept. However, only the slope changes with the change in the magnitude. This house costs 280.76 per sqft. Hence it would cost 30
51,629 | How Can I Calculate Standard Deviation (step-by-step) in R? [closed] | > a <- c(179,160,136,227)
> sd(a)
[1] 38.57892
> sqrt(sum((a-mean(a))^2/(length(a)-1)))
[1] 38.57892
``` | How Can I Calculate Standard Deviation (step-by-step) in R? [closed] | > a <- c(179,160,136,227)
> sd(a)
[1] 38.57892
> sqrt(sum((a-mean(a))^2/(length(a)-1)))
[1] 38.57892
``` | How Can I Calculate Standard Deviation (step-by-step) in R? [closed]
> a <- c(179,160,136,227)
> sd(a)
[1] 38.57892
> sqrt(sum((a-mean(a))^2/(length(a)-1)))
[1] 38.57892
``` | How Can I Calculate Standard Deviation (step-by-step) in R? [closed]
> a <- c(179,160,136,227)
> sd(a)
[1] 38.57892
> sqrt(sum((a-mean(a))^2/(length(a)-1)))
[1] 38.57892
``` |
51,630 | How Can I Calculate Standard Deviation (step-by-step) in R? [closed] | So, you want to calculate the standard deviation step-by-step. Firstly, you should calculate the sum of the squared differences of all data points from the mean.
Have a variable called count and set it to the value 0.
To do that, you loop through the data set with a variable, say i, and subtract the mean from i each time. The mean can be calculated as mean(dataset).
Add the result of every loop iteration to count, by count = count + (i-mean)^2
Now, divide the count variable by length(dataset) - 1
The result is the variance. So, for calculating the standard deviation, you have to square root the above value.
In R, you do this as: sqrt(variance)
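Putting those steps together as runnable R (using the small data set from the previous answer):
dataset <- c(179, 160, 136, 227)
m     <- mean(dataset)
count <- 0
for (i in dataset) {
  count <- count + (i - m)^2        # accumulate squared deviations from the mean
}
variance <- count / (length(dataset) - 1)
sqrt(variance)                      # 38.57892, the same value sd(dataset) returns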
Finally, the result you get after applying the square root is the Standard Deviation. | How Can I Calculate Standard Deviation (step-by-step) in R? [closed] | So, you want to calculate the standard deviation step-by-step. So, firstly, you should calculate the sum of the differences of all data points with the mean.
Have a variable called count and set it to | How Can I Calculate Standard Deviation (step-by-step) in R? [closed]
So, you want to calculate the standard deviation step-by-step. Firstly, you should calculate the sum of the squared differences of all data points from the mean.
Have a variable called count and set it to the value 0.
To do that, you loop through the data set with a variable, say i, and subtract the mean from i each time. The mean can be calculated as mean(dataset).
Add the result of every loop iteration to count, by count = count + (i-mean)^2
Now, divide the count variable by length(dataset) - 1
The result is the variance. So, for calculating the standard deviation, you have to square root the above value.
In R, you do this as: sqrt(variance)
Finally, the result you get after applying the square root is the Standard Deviation. | How Can I Calculate Standard Deviation (step-by-step) in R? [closed]
So, you want to calculate the standard deviation step-by-step. So, firstly, you should calculate the sum of the differences of all data points with the mean.
Have a variable called count and set it to |
51,631 | Can the coefficient of determination $R^2$ be more than one? What is its upper bound? | The best upper bound is $1$, no matter what the values of $R_1^2$ and $R_2^2$ may be.
The following discussion explains why, in three increasingly detailed ways. The first explanation gives geometric intuition, leading to a simple example. The second one translates that into a procedure to generate specific datasets that give rise to this example. The third one generalizes this procedure to show how any mathematically possible value of $R^2$ can be achieved, given arbitrary values of $R_1^2$ and $R_2^2$.
I adopt a notation in which the independent variables are named $x_1$ and $x_2$ (rather than $x$ and $z$), so that the distinction between the independent and dependent variables remains clear.
(A comment by the alert @f coppens compels me to add that these results change when one or more of the regressions does not include a constant term, because then the relationship between $R^2$ and the correlation coefficients changes. The methods used to obtain these results continue to work. Interested readers may enjoy deriving a more general answer for that situation.)
For the simple regressions (1) and (2), the $R_i^2$ are the squares of the correlation coefficients between $x_i$ and $y$. Relationships among correlation coefficients are just angular relationships among unit vectors in disguise, because the correlation coefficient of two variables $x$ and $y$ (considered as column $n$-vectors) is the dot product of their normalized (unit-length) versions, which in turn is the cosine of the angle between them.
In these geometric terms, the question asks
How close can a vector $y$ come to the plane generated by $x_1$ and $x_2$, given the angles between $y$ and the $x_i$?
Evidently $y$ can actually be in that plane, provided you put $y$ at a given angle $\theta_1$ with $x_1$ and then place $x_2$ at a given angle $\theta_2$ with $y$. When that happens, the $R^2$ for regression (3) is $1$, demonstrating that no upper bound better than $1$ is possible.
Geometric thinking is no longer considered rigorous, but it leads us to a rigorous example. Start with two orthogonal unit vectors $u$ and $v$, each of which is orthogonal to a vector of ones (so that we can accommodate a constant term in all three regressions). Given $R_1^2$ and $R_2^2$, let $\rho_i$ be choices of their square roots, so that $\rho_i^2 = R_i^2$. To place vectors $x_1$, $y$, and $x_2$ at the required angles, set
$$\eqalign{
&x_1 &= u\\&y&=\rho_1 u + \sqrt{1-\rho_1^2} v\\ &x_2 &= (\rho_1\rho_2-\sqrt{1-\rho_1^2}\sqrt{1-\rho_2^2})u + (\rho_1\sqrt{1-\rho_2^2}+\rho_2\sqrt{1-\rho_1^2})v.}$$
Since $u\cdot u = v\cdot v = 1$ and $u\cdot v = 0$, you can verify that $x_2\cdot x_2 = 1$ as required,
$$y\cdot x_1 = \rho_1,$$
and
$$\eqalign{
y\cdot x_2 &= \rho_1\left(\rho_1\rho_2-\sqrt{1-\rho_1^2}\sqrt{1-\rho_2^2}\right) + \sqrt{1-\rho_1^2}\left(\rho_1\sqrt{1-\rho_2^2}+\rho_2\sqrt{1-\rho_1^2}\right) \\
&= \rho_2,}$$
as intended.
For a completely concrete example with $n\ge 3$ observations, start with any two $n$-vectors $u_0$ and $v_0$ which are linearly independent and linearly independent of the $n$-vector $\mathbf{1}=(1,1\ldots, 1)$. Apply the Gram-Schmidt process to the sequence $\mathbf{1}, u_0, v_0$ to produce an orthonormal basis $\mathbf{1}/\sqrt{n}, u, v$. Use the $u$ and $v$ that result. For instance, for $n=3$ you might start with $u_0 = (1,0,0)$ and $v_0=(0,1,0)$. The Gram-Schmidt orthogonalization of them yields $u = (2,-1,-1)/\sqrt{6}$ and $v=(0,1,-1)/\sqrt{2}$. Apply the preceding formulas to these for any given $R_1^2$ and $R_2^2$ you desire. This will result in a dataset consisting of the $3$-vectors $x_1$, $x_2$, and $y$ with the specified values of $R_1^2, R_2^2$, and $R^2 = 1$.
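Here is a quick R check of this recipe (a sketch, not part of the original answer; the values of $R_1^2$ and $R_2^2$ are arbitrary choices):

```r
# Construct a 3-observation dataset with prescribed simple R^2 values and R^2 = 1 overall
R1sq <- 0.49; R2sq <- 0.25
r1 <- sqrt(R1sq); r2 <- sqrt(R2sq)
u  <- c(2, -1, -1) / sqrt(6)            # orthonormal vectors orthogonal to (1,1,1)
v  <- c(0,  1, -1) / sqrt(2)
x1 <- u
y  <- r1 * u + sqrt(1 - r1^2) * v
x2 <- (r1 * r2 - sqrt(1 - r1^2) * sqrt(1 - r2^2)) * u +
      (r1 * sqrt(1 - r2^2) + r2 * sqrt(1 - r1^2)) * v
c(summary(lm(y ~ x1))$r.squared,        # 0.49
  summary(lm(y ~ x2))$r.squared,        # 0.25
  summary(lm(y ~ x1 + x2))$r.squared)   # 1
```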
A similar approach, starting with mutually orthonormal vectors $u_0, v_0, w_0$, can be used to construct examples in which $R^2$ achieves any specified value in the interval $[\max(R_1^2, R_2^2), 1]$. Order the $x_i$ so that $R_1^2 \ge R_2^2$. Writing $y = \alpha u_0 + \beta v_0 + \gamma w_0$, $x_1 = u_0$, and $x_2 = \rho_{12}u_0 + \sqrt{1-\rho_{12}^2}v_0$, compute that $\rho_1 = \alpha$ and $\rho_2 = \alpha \rho_{12} + \beta \sqrt{1-\rho_{12}^2}$. From this, and the fact that $\alpha^2+\beta^2+\gamma^2=1$, solve and find that
$$\beta = \frac{\rho_2 - \rho_1\rho_{12}}{\sqrt{1-\rho_{12}^2}}$$
and $\gamma = \sqrt{1-\alpha^2 - \beta^2}$. For this square root to exist, $\beta$ needs to be small, but that can be guaranteed by choosing $\rho_{12}$ (the correlation between the two independent variables $x_1$ and $x_2$) sufficiently close to $\rho_2/\rho_1$ (which is possible because the absolute value of this ratio does not exceed $1$), because $\beta$ approaches zero continuously as $\rho_{12}$ approaches that ratio.
The cognoscenti will recognize the relationship between the formula for $\beta$ and a certain partial correlation coefficient.
51,632 | Can the coefficient of determination $R^2$ be more than one? What is its upper bound? | @whuber: negative $R^2$ is possible in a regression model without an intercept.
In a regression model with an intercept, the definition of $R^2$ is based on a decomposition of the total sum of squares, i.e. $\sum_i (Y_i - \bar{Y})^2$, where $\bar{Y}$ is the average of $Y$, the dependent variable. (Note that $R^2$ is not defined as the square of the correlation coefficient, because the latter is only defined between two variables; therefore, only in the case with one independent variable (and an intercept) is the squared correlation coefficient equal to the $R^2$.)
The decomposition of the total sum of squares (TSS) goes as follows: $TSS=\sum_i (Y_i - \bar{Y})^2=\sum_i (Y_i - \hat{Y}_i + \hat{Y}_i - \bar{Y})^2$, where $\hat{Y}_i$ are the predictions of the regression model. It follows that $TSS=\sum_i (Y_i - \hat{Y}_i )^2 + \sum_i (\hat{Y}_i - \bar{Y})^2 +2 \sum_i (Y_i - \hat{Y}_i ) (\hat{Y}_i - \bar{Y})$.
In a model with an intercept, it can be shown that the third term on the right hand side is zero, but this can not be shown for a model without an intercept (see e.g. D.N. Gujaratti, ''Basic Econometrics'' or W.H. Greene, ''Econometric Analysis'').
In the model with an intercept
As the third term can be shown to be zero, $TSS=\sum_i (Y_i - \hat{Y}_i )^2 + \sum_i (\hat{Y}_i - \bar{Y})^2$. $\sum_i (Y_i - \hat{Y}_i )^2$ is the sum of the squared residuals (the residual sum of squares, RSS) and $\sum_i (\hat{Y}_i - \bar{Y})^2$ is the sum of squared differences between the predictions and the average value of $Y$; this is called the explained sum of squares (ESS). So we find, in a model with an intercept, that $TSS=RSS+ESS$, or that $1=\frac{RSS}{TSS}+\frac{ESS}{TSS}$.
And from this they define the $R^2$ as $R^2\stackrel{def}{=}1-\frac{RSS}{TSS}$.
By the above, for a model with an intercept, it follows that $R^2=\frac{ESS}{TSS}$. As $ESS$ and $TSS$ are sums of squares, it follows that $R^2 \ge 0$.
In the model without an intercept.
In the model without an intercept it can not be shown that $\sum_i (Y_i - \hat{Y}_i ) (\hat{Y}_i - \bar{Y})=0$. Therefore $TSS=RSS+ESS+2\sum_i (Y_i - \hat{Y}_i ) (\hat{Y}_i - \bar{Y})$. So $R^2\stackrel{def}{=}1-\frac{RSS}{TSS}=\frac{ESS+2\sum_i (Y_i - \hat{Y}_i ) (\hat{Y}_i - \bar{Y})}{TSS}$. The second term in the numerator can be negative, so the sign of $R^2$ can not be shown to be positive.
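A small R illustration of this point (a sketch, not part of the original answer): with $R^2$ computed as $1-RSS/TSS$, a no-intercept fit to data with a large mean can give a negative value.

```r
set.seed(1)
x <- rnorm(50)
y <- 5 + rnorm(50)              # large mean, no real relationship with x
fit <- lm(y ~ x - 1)            # regression through the origin
rss <- sum(residuals(fit)^2)
tss <- sum((y - mean(y))^2)
1 - rss / tss                   # strongly negative: the fit is worse than using ybar
```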
51,633 | Can the coefficient of determination $R^2$ be more than one? What is its upper bound? | Summing up the first two equations you find $2y_i=a_1+a_2+b_1x_i+b_2z_i+e_{i,1}+e_{i,2}$, if you subtract twice the third equation then you find $0=a_1+a_2+b_1x_i+b_2z_i+e_{i,1}+e_{i,2}-2a-2bx_i-2cz_i-2e_i$ or $(a_1+a_2-2a+e_{i,1}+e_{i,2}-2e_i)+(b_1-2b)x_i+(b_2-2c)z_i=0$.
But from $(a_1+a_2-2a+e_{i,1}+e_{i,2}-2e_i)=0$ it does not follow that $a=\frac{a_1+a_2}{2}$ ?
Note also that the $a$'s are constants while the $e$'s are vectors.
51,634 | Can the coefficient of determination $R^2$ be more than one? What is its upper bound? | I think I formalized the concern I had.
Summing up 1 and 2 and transforming both sides you get:
$y_i-(a_1+b_1*x_i) + y_i-(a_2+b_2*z_i)= e_{i,1}+e_{i,2} \sim N(0,\sigma_1^2+\sigma_2^2)$. Using the OLS estimates, you obtain, $\sum_{i} (e_{i,1}+e_{i,2})^2= \sum_{i} (y_i-(\hat{a_1}+\hat{b_1}*x_i) + y_i-(\hat{a_2}+\hat{b_2}*z_i))^2= \sum_{i} (y_i-(\hat{a_1}+\hat{b_1}*x_i))^2+\sum_{i}(y_i-(\hat{a_2}+\hat{b_2}*z_i))^2+2*\sum_{i}(y_i-(\hat{a_1}+\hat{b_1}*x_i))(y_i-(\hat{a_2}+\hat{b_2}*z_i))= \sum_ie_{i,1}^2+\sum_ie_{i,2}^2+2cov(e_1,e_2)\sim N(0,\sigma_1^2+\sigma_2^2)\, iff\, cov(e_1,e_2)=0$
Therefore, the summation works iff the covariance between the errors is zero, in which case you
could jump from line 1 to line 3 directly.
51,635 | Is it always required to achieve stationarity before performing any time-series analysis? | ARIMA models are not stationary, ARMAs are. ARIMA includes the integration terms, e.g. a random walk model is ARIMA(0,1,0) and it's not stationary.
There's a couple of different ways to exponentially smooth, here's EWMA and a different version. Neither of them requires stationarity.
Here's an example in MATLAB, fitting ARIMA(0,1,1) to the S&P 500 index in 2014 and smoothing it. You can see that the series is clearly non-stationary, and both approaches work fine.
d=fetch(fred,'sp500','1-jan-2014','31-dec-2014');
y = d.Data(:,2);
idx = ~isnan(y);          % find the non-missing prices once, so y and dt stay aligned
y = y(idx);
dt = d.Data(idx,1);
subplot(2,1,1)
plot(dt,y,'r')
datetick
% arima(0,1,1)
fitA=estimate(arima(0,1,1),log(y));
R=infer(fitA,log(y));
hold on
plot(dt,exp(log(y)-R),'k')
legend('Actual', ...
'ARIMA(0,1,1)', 'location','best');
ylabel('S&P 500');
xlabel('Date');
title('S&P 500 Smoothing');plot(dt,y,'r')
% exponential smooth
alpha = 0.45;
ewma = ones(size(y));
ewma(1) = log(y(1));
for i=2:length(y)
ewma(i) = alpha*log(y(i))+(1-alpha)*ewma(i-1);
end
subplot(2,1,2)
plot(dt,y,'r')
datetick
hold on
plot(dt, exp(ewma),'b');
legend('Actual', ...
'Exponential Smoothing', 'location','best');
ylabel('S&P 500');
xlabel('Date');
51,636 | Is it always required to achieve stationarity before performing any time-series analysis? | exponential smoothing models do not assume stationary data.
Citation: see Hyndman and Athanasopoulos:
"every ETS [exponential smoothing] model is non-stationary"
51,637 | Is it always required to achieve stationarity before performing any time-series analysis? | Exponential Smoothing model is a particular form of an ARIMA model. Instead of identifying the patterns, outliers and trends, you assume them.
51,638 | Is it always required to achieve stationarity before performing any time-series analysis? | Structural time series models do not assume stationarity either.
These models, a la Andrew Harvey, are estimated via Kalman-filter-type algorithms.
http://www.stat.yale.edu/~lc436/papers/Harvey_Peters1990.pdf
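A minimal base-R illustration of such a model (a sketch, not part of the original answer): StructTS fits a structural local-level model to the annual Nile flow series by maximum likelihood, using the Kalman filter internally.

```r
fit <- StructTS(Nile, type = "level")   # local-level structural model
fit$coef                                # estimated level and observation variances
plot(Nile)
lines(fitted(fit), col = "red")         # filtered level estimate
```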
51,639 | Is it always required to achieve stationarity before performing any time-series analysis? | As it stands, I do not even agree that stationarity "needs to be achieved" before performing time-series analysis with an ARIMA process. It needs to be clarified what the goal of the analysis is.
For example, there are tons of posts on this site relating to testing the null $\rho=1$ in the model $Y_t=\rho Y_{t-1}+\epsilon_t$, which, under the null, is an ARIMA(0,1,0) model, i.e., the Dickey-Fuller test. This shows that it is possible to do hypothesis testing in ARIMA models.
As another example, take the DF distribution itself:
$$
T(\hat{\rho}-1)\Rightarrow\frac{W(1)^2-1}{2\int_0^1W(r)^2d r},
$$
where $W$ is standard Brownian motion. To be sure, a "non-standard" random variable, but one which, for example, shows that $\hat{\rho}-1=\mathcal{O}_P(T^{-1})$. Thus, OLS consistently estimates the parameter of the process - even at a faster than the usual $\sqrt{T}$-rate!
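A small simulation sketch of this faster-than-$\sqrt{T}$ consistency (not from the original answer; the sample size and seed are arbitrary):

```r
set.seed(1)
df_stat <- function(n) {
  y <- cumsum(rnorm(n))                        # random walk, i.e. ARIMA(0,1,0)
  rho_hat <- sum(y[-1] * y[-n]) / sum(y[-n]^2) # OLS of y_t on y_{t-1}, no intercept
  n * (rho_hat - 1)                            # scaled estimation error
}
sims <- replicate(5000, df_stat(500))
quantile(sims, c(0.05, 0.5, 0.95))             # a skewed, non-normal limit law;
                                               # rho_hat itself sits within O(1/n) of 1
```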
On the other hand, if you mean that things like the central limit theorem or a law of large numbers are required to work, then, yes, stationarity may be important.
51,640 | What to do when your likelihood function has a double product with small values near zero - log transform doesn't work? | This is just a small addition to @whuber's answer (+1).
To compute $$\log \sum \exp (x_i)$$ without overflowing or underflowing, a "log-sum-trick" is often used, see e.g. here for description. The trick is to compute $$\max(x_i) + \log \sum \exp (x_i - \max(x_i))$$ instead. This will prevent overflowing, and if some terms underflow then they are irrelevant anyway.
Using @whuber's example of $x_i=1\ldots 1000$, here is a demonstration in Matlab. Direct computation
x = 1:1000;
log(sum(exp(x)))
yields Inf, but
a = max(x) + log(sum(exp(x - max(x))));
display(num2str(a, 20))
yields 1000.4586751453871329, correct to 17 places, as in @whuber's answer.
51,641 | What to do when your likelihood function has a double product with small values near zero - log transform doesn't work? | I recently had to deal with the same issue when computing conditional probabilities involving numbers on the order of $10^{-10000}$ (because normalizing the probability distribution would have required a great deal of unnecessary calculations). The heart of this difficulty, which comes up repeatedly in statistical computation, concerns taking logarithms of sums which themselves can overflow or underflow the computing platform's form of numeric representation (typically IEEE doubles). In the present case (to abstract away all irrelevant detail) we may write
$$L = \prod_i \left(\sum_j x_{ij}\right)$$
where the products $x_{i1} = \prod_{s=1}^{S_i}L_{is}(y\space|\space \rho_A)\phi$ and $x_{i2} = \prod_{s=1}^{S_i}L_{is}(y\space|\space \rho_B)(1-\phi)$ are themselves best computed using logarithms. Naturally we would like to compute $L$ as
$$L = \exp(\log(L));\quad \log(L) = \sum_i \log \left(\sum_j x_{ij}\right)$$
and in most applications, having $\log(L)$ is all that is really needed. There is the rub: how to compute logarithms of sums? Dropping the (now superfluous) subscript $i$, and assuming the $x_j$ will be computed using logarithms, we are asking how to calculate expressions like
$$\log\sum_j \exp(\log(x_j))$$
without over- or underflow.
(Note, too, that in many statistical applications it suffices to obtain a natural logarithm to an accuracy of only one or two decimal places, so high precision is not usually needed.)
The solution is to keep track of the magnitudes of the arguments. When $x_j \gg x_k$, then (to within a relative error of $x_k/x_j$) it is the case that $x_j + x_k = x_j$.
Often this magnitude-tracking can be done by estimating the order of magnitude of the sum (in some preliminary approximate calculation, for instance) and simply ignoring any term which is much smaller in magnitude. For more generality and flexibility, we can examine the relative orders of magnitude as we go along. This works best when summands have the most similar orders of magnitude, so that as little precision as possible is lost in each partial sum. Provided, then, that we arrange to sum the terms from smallest to largest, we may repeatedly apply this useful approximation and, in the end, lose no meaningful precision.
In many statistical applications (such as computing log likelihoods, which frequently involve Gamma and Beta functions) it is handy to have built-in functions to return the logarithms of factorials $n!$ and binomial coefficients $\binom{n}{k}$. Most statistical and numerical computing platforms have such functions--even Excel does (GAMMALN). Use these wherever possible.
Here is R code to illustrate how this could be implemented. So that it will serve as (executable) pseudocode, only simple Fortran-like expressions are used: they will port easily to almost any computing environment.
log.sum <- function(y, precision=log(10^17)) {
#
# Returns the logarithm of sum(exp(y)) without overflow or underflow.
# Achieves a relative error approximately e^(-precision).
#
# Some precision may be lost in computing `log(1+exp(*))`, depending on
# the computing platform.
#
if (length(y) == 0) return(1)
log.plus <- function(a, b) ifelse(abs(b-a) > precision,
max(b,a), max(b,a) + log(1 + exp(-abs(b-a))))
y <- sort(y)
x <- y[1]
for (z in y[-1]) x <- log.plus(x, z)
return (x)
}
For example, let's compute $\log\sum_{i=0}^{1000}\exp(i) = 1000.45867514538708\ldots$ in R with and without this method:
x <- seq(0, 1000, by=1); print(log(sum(exp(x))), digits=18)
[1] Inf
Summation simply overflows. However,
print(log.sum(x), digits=18)
[1] 1000.45867514538713
is accurate to 17 full decimal places, which is all one can expect of IEEE doubles.
51,642 | How to deal with Z-score greater than 3? | Let me repeat (and correct) what I've said in my comment and reply to your edit.
You have to transform from $X$ to $Z$ in order to use a z-score table. Since a z-score table contains a small finite subset of values, you often must settle for an approximation. So you could also settle for $P(Z<-3)\approx 0$ and $P(Z< 3)\approx 1$ (NB: $P(Z>3)\approx 1$ was a typo, sorry.)
As to $P(-1.25<Z<3.75)$, I'll use this z-score table:
$$P(-1.25<Z<3.75)=P(Z<3.75)-P(Z<-1.25)\approx 1-0.1056=0.8944$$
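A one-line software check of the table lookup above (a sketch, not part of the original answer):

```r
pnorm(3.75) - pnorm(-1.25)   # about 0.8943, in line with the table-based 0.8944
```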
51,643 | How to deal with Z-score greater than 3? | The standard normal ranges from $-\infty$ to $\infty$.
Your problem appears to be that your table doesn't go further.
Your question should therefore be modified to ask "*How do I deal with the fact that my table doesn't go as high as my $Z$ value?*"
[Note that in your last paragraph, you have become confused. The region you're evaluating probability for is $Z<3.75$ but the boundary value of $Z$ you're trying to look up in the table (the $3.75$) is $>3$, as in your title.]
It seems like not having the value in your table would be a problem, but it's a very small one $-$ since your answer for $P(0<Z<3.75)$ can't be smaller than $P(0<Z<3)\approx 0.4999$ and can't be larger than $P(0<Z<\infty)=0.5$, you shouldn't have much difficulty narrowing the answer down to 3 significant figures of accuracy even so.
Additional accuracy (though I really don't think you need it) can be obtained by many methods. Here are three:
i) finding better tables (these seem to be of the same form as the ones you're apparently using)
ii) using a package that will evaluate standard normal cdfs for you. I just used R (simply typing pnorm(3.75) to obtain $P(-\infty<Z<3.75)$).
iii) using numerical integration to approximate the area between 3 and 3.75. For example, via Simpson's rule, a single interval (3 points) gives 0.0017 (the correct answer is 0.0013 to 4dp). Alternatively, because the density is convex in this region (indeed, as whuber points out in comments, convex for $Z>1$ and $Z< -1$), the integral will be bounded below by the midpoint rule and above by the trapezoidal rule, which usefully bounds where the answer can lie
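For instance, a quick R check of points (ii) and (iii) above (a sketch, not part of the original answer):

```r
a <- 3; b <- 3.75
exact     <- pnorm(b) - pnorm(a)                 # about 0.00126
midpoint  <- (b - a) * dnorm((a + b) / 2)        # lower bound: the density is convex here
trapezoid <- (b - a) * (dnorm(a) + dnorm(b)) / 2 # upper bound
c(midpoint = midpoint, exact = exact, trapezoid = trapezoid)
# midpoint < exact < trapezoid, bracketing the answer
```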
But, really, just using the limits provided by 3 and $\infty$ is plenty, I imagine.
51,644 | How to deal with Z-score greater than 3? | For $z >0$, the right tail of the standard normal distribution
(that is, the area to the right of $z$), which is often denoted by
$Q(z)$, is bounded as follows:
$$\frac{\exp(-z^2/2)}{\sqrt{2\pi}}\left (\frac{1}{z} - \frac{1}{z^3}\right )
\ < \ Q(z) \ < \ \frac{\exp(-z^2/2)}{\sqrt{2\pi}}\left (\frac{1}{z}\right ).$$
See, for example, this answer on math.SE for a proof. The bounds blow up/down to $\pm \infty$ as $z \to 0$ but are quite useful in the
regions not covered by typical tables of the cumulative distribution function
of the standard normal random variable. For example,
$$0.0000873 < Q(3.75) < 0.0000940$$ while the actual value is slightly larger than $0.0000884$
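A quick numerical check of these bounds in R (a sketch, not part of the original answer):

```r
z <- 3.75
phi <- exp(-z^2 / 2) / sqrt(2 * pi)     # standard normal density at z
lower <- phi * (1 / z - 1 / z^3)        # about 0.0000873
exact <- pnorm(z, lower.tail = FALSE)   # Q(z), about 0.0000884
upper <- phi / z                        # about 0.0000940
c(lower = lower, exact = exact, upper = upper)
```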
51,645 | How to deal with Z-score greater than 3? | The z (i.e., normal) distribution is not bounded. $\mathcal N(\mu=70,\sigma=4)$ is not standard normal either - that refers to $\mathcal N(0,1)$. If you're wondering what the p value is for z = 3.75, you can find it in R with pnorm(3.75). (You could also use pnorm(85,70,4).) The result is p = 0.9999116.
If you want an exact p value, I think you're going to have an easier time getting it in R or some other statistical software than by dealing with the quantile function directly...but FWIW, here's that equation:
$$F^{-1}(p)
= \mu + \sigma\Phi^{-1}(p)
= \mu + \sigma\sqrt2\,\operatorname{erf}^{-1}(2p - 1), \quad p\in(0,1)$$
In the above, $\rm erf$ refers to the error function.
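A quick R illustration of this quantile function (a sketch, not part of the original answer):

```r
p <- pnorm(85, mean = 70, sd = 4)   # 0.9999116
qnorm(p, mean = 70, sd = 4)         # recovers 85, i.e. mu + sigma * qnorm(p)
```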
In light of your comments and edit to the question, I think I should decline to provide more than this as a "hint" in following our policy on self-study questions.
51,646 | transformation to normality of the dependent variable in multiple regression | I'd planned to link to an answer with a good list (with discussion) of the multiple regression assumptions, but I can't find a completely suitable one for what I had in mind. There are plenty of discussions of the issues (especially in comments), but not quite everything I think is needed in one place.
The regression model looks like this:
$$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} + \varepsilon_i$$
Most of the regression assumptions relate to the error component(s) of the model.
So, time for the multiple regression assumptions. [Formal hypothesis testing of the assumptions is generally not recommended - it mostly answers the wrong question, for starters. Diagnostic displays (residual plots for example) are commonly used.]
This is a typical way to organize the list, but depending on how you frame things, people may add more or put them together a bit differently. Approximately in order of importance:
0. To fit a regression doesn't require these assumptions, except perhaps (arguably) the first. The assumptions potentially matter when doing hypothesis tests and producing confidence intervals and - most importantly - prediction intervals (for which several of them matter a fair bit).
The model for the mean is correct ("Linearity"). The model is assumed to be linear in the (supplied) predictors and linear in the parameters*. (NB A quadratic model, or even a sinusoidal model, for example, can still be linear in the predictors, if you supply the right ones.)
*(and in most situations, that all the important terms are included)
This might be checked by examining residuals against fitted values, or against any independent variables that might have non-linear relationships; added-variable plots could be used to see whether any variables not in the model are important.
The $x$'s are observed without error
This generally isn't something you can assess by looking at the data set itself; it will usually proceed from knowing something about the variables and how they're collected. A person's height might be treated as fixed (even though the measurement of it is subject to both variation over time and measurement error) - the variation is very small, but for example a person's blood pressure is typically much more variable - if you measured a second time a little later, it might be quite different.
Constant error variance ("homoskedasticity").
This would normally be assessed either: (i) by looking at residuals against fitted (to check for variance related to the mean), or against variates that the error variance is particularly expected to be related to; or (ii) looking at some function of squared residuals (as the best available measure of observation variance) against the same things.
For example, one of the default diagnostic displays for R's linear regression is a plot of $\sqrt{|r_i|}$ vs fitted values, where $r_i$ is the standardized residual, which would be the fourth root of the squared standardized residuals. This transformation is mostly used to make the distribution less skewed, facilitating comparisons without being dominated by the largest values, but it also serves a purpose in not making relatively moderate changes in spread look as dramatic as they might with, say, squared residuals.
Independence. The errors are assumed to be independent of each other (and of the $x$'s).
There are many ways that errors can exhibit dependence; you generally need some prior expectation of the form of dependence to assess it. If the data are observed over time (or along some spatial dimension), serial dependence would be an obvious thing to check for (perhaps via a sample autocorrelation function plot).
The errors are assumed to be normal (with zero mean).
The assumption about zero mean overall is uncheckable, since any non-zero mean is absorbed into the intercept (constant) term. A locally nonzero mean would show up in the plot of residuals vs fitted values as a lack of fit. The assumption of normality might be assessed (for example) via a Q-Q plot.
In larger samples, the last assumption becomes much less important, except for prediction intervals (where it always matters for the usual normal-theory inference).
Note that the collection of dependent variables ($Y$'s) is not assumed to be normal. At any given combination of $x$-values (IVs) they are normal, but the whole sample of $Y$'s will then be a mixture of normals with different means ... and - depending on the particular collection of combinations of independent-variable values - that might be very non-normal.
Which is to say, there's no point looking at the distribution of the DV to assess the normality assumption, because that's not what is assumed normal. The error term is assumed normal for the most usual forms of inference, and you estimate it by the residuals.
Note that it's not required to assume normality even to perform inference; there are numerous alternatives that allow inference either via hypothesis tests (e.g. a permutation test) or confidence intervals (e.g. bootstrap intervals or intervals based on nonparametric correlation between residuals and predictor) and the relationship between the two forms of inference; there are also different parametric assumptions that can be accommodated with linear regression (e.g. fitting a Poisson or gamma GLM with identity link).
Examples of non-normal theory fits:
(a) One is illustrated here -- the red line in the plot there is the linear regression fitted using a Gamma GLM (a parametric assumption); tests of coefficients are easy to obtain from GLM output; this approach also generalizes to "multiple regression" easily.
(b) This answer shows estimated lines based on nonparametric correlations; tests and intervals can be generated for those.
A big problem with transforming to achieve normality
Let's say all the other regression assumptions are reasonable, apart from the normality assumption.
Then you apply some nonlinear transformation in the hopes of making the residuals look more normal.
Suddenly, your previously linear relationships are no longer linear.
Suddenly your spread of points about the fit is no longer constant.
Two assumptions that may matter much more than normality are no longer appropriate. | transformation to normality of the dependent variable in multiple regression | I'd planned to link to an answer with a good list (with discussion) of the regression assumptions an answer with the multiple regression assumptions, but I can't find a completely suitable one for wha | transformation to normality of the dependent variable in multiple regression
I'd planned to link to an answer with a good list (with discussion) of the regression assumptions an answer with the multiple regression assumptions, but I can't find a completely suitable one for what I had in mind. There are plenty of discussions of the issues (especially in comments), but not quite everything I think is needed in one place.
The regression model looks like this:
Most of the regression assumptions relate to the error component(s) of the model.
So, time for the multiple regression assumptions. [Formal hypothesis testing of the assumptions is generally not recommended - it mostly answers the wrong question, for starters. Diagnostic displays (residual plots for example) are commonly used.]
This is a typical way to organize the list, but depending on how you frame things, people may add more or put them together a bit differently. Approximately in order of importance:
0. To fit a regression doesn't require these assumptions, except perhaps (arguably) the first. The assumptions potentially matter when doing hypothesis tests and producing confidence intervals and - most importantly - prediction intervals (for which several of them matter a fair bit).
The model for the mean is correct ("Linearity"). The model is assumed to be linear in the (supplied) predictors and linear in the parameters*. (NB A quadratic model, or even a sinusoidal model, for example, can still be linear in the predictors, if you supply the right ones.)
*(and in most situations, that all the important terms are included)
This might be checked by examining residuals against fitted values, or against any independent variables that might have non-linear relationships; added-variable plots could be used to see whether any variables not in the model are important.
The $x$'s are observed without error
This generally isn't something you can assess by looking at the data set itself; it will usually proceed from knowing something about the variables and how they're collected. A person's height might be treated as fixed (even though the measurement of it is subject to both variation over time and measurement error) - the variation is very small, but for example a person's blood pressure is typically much more variable - if you measured a second time a little later, it might be quite different.
Constant error variance ("homoskedasticity").
This would normally be assessed either: (i) by looking at residuals against fitted (to check for variance related to the mean), or against variates that the error variance is particularly expected to be related to; or (ii) looking at some function of squared residuals (as the best available measure of observation variance) against the same things.
For example, one of the default diagnostic displays for R's linear regression is a plot of $\sqrt{|r_i|}$ vs fitted values, where $r_i$ is the standardized residual (so the vertical axis is the fourth root of the squared standardized residual). This transformation is mostly used to make the distribution less skew, facilitating comparisons without being dominated by the largest values, but it also serves a purpose in not making relatively moderate changes in spread look very dramatic, as they might with, say, squared residuals.
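In R this is the "scale-location" display; a minimal sketch (again on an arbitrary built-in data set):
fit <- lm(dist ~ speed, data = cars)  # illustrative fit
plot(fit, which = 3)                  # sqrt(|standardized residuals|) against fitted values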
Independence. The errors are assumed to be independent of each other (and of the $x$'s).
There are many ways that errors can exhibit dependence; you generally need some prior expectation of the form of dependence to assess it. If the data are observed over time (or along some spatial dimension), serial dependence would be an obvious thing to check for (perhaps via a sample autocorrelation function plot).
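For data collected over time, a quick sketch of that check might look like this (the data here are simulated with serially dependent errors, purely for illustration):
set.seed(1)
t <- 1:100
y <- 2 + 0.1 * t + arima.sim(list(ar = 0.6), n = 100)  # trend plus AR(1) errors
fit <- lm(y ~ t)
acf(residuals(fit))  # sample autocorrelation function of the residuals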
The errors are assumed to be normal (with zero mean).
The assumption about zero mean overall is uncheckable, since any non-zero mean is absorbed into the intercept (constant) term. A locally nonzero mean would show up in the residuals-vs-fitted plot as a lack of fit. The assumption of normality might be assessed (for example) via a Q-Q plot.
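A Q-Q plot of the standardized residuals is easy to produce; for example:
fit <- lm(dist ~ speed, data = cars)  # illustrative fit
qqnorm(rstandard(fit))                # normal Q-Q plot of the standardized residuals
qqline(rstandard(fit))
# plot(fit, which = 2) gives essentially the same display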
In larger samples, the last assumption becomes much less important, except for prediction intervals (where it always matters for the usual normal-theory inference).
Note that the collection of dependent variables ($Y$'s) is not assumed to be normal. At any given combination of $x$-values (IVs) they are normal, but the whole sample of $Y$'s will then be a mixture of normals with different means ... and - depending on the particular collection of combinations of independent-variable values - that might be very non-normal.
Which is to say, there's no point looking at the marginal distribution of the DV to assess the normality assumption, because that's not what is assumed normal. The error term is assumed normal for the most usual forms of inference, and you estimate the errors by the residuals.
Note that it's not required to assume normality even to perform inference; there are numerous alternatives that allow inference either via hypothesis tests (e.g. a permutation test) or confidence intervals (e.g. bootstrap intervals, or intervals based on a nonparametric correlation between residuals and predictor, or via the relationship between the two forms of inference); there are also different parametric assumptions that can be accommodated with linear regression (e.g. fitting a Poisson or Gamma GLM with identity link).
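As a rough sketch of two of those alternatives (the data are simulated and the parameter values arbitrary): a Gamma GLM that keeps the linear-in-the-mean model via an identity link, and a simple case-resampling (percentile) bootstrap interval for an ordinary least-squares slope.
set.seed(42)
n <- 100
x <- runif(n, 1, 10)
y <- rgamma(n, shape = 5, rate = 5 / (2 + 3 * x))       # skewed response with mean 2 + 3x
gfit <- glm(y ~ x, family = Gamma(link = "identity"))   # linear mean, Gamma errors
summary(gfit)$coefficients
slopes <- replicate(2000, {                             # case-resampling bootstrap of the OLS slope
  i <- sample(n, replace = TRUE)
  coef(lm(y[i] ~ x[i]))[2]
})
quantile(slopes, c(0.025, 0.975))                       # percentile bootstrap interval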
Examples of non-normal theory fits:
(a) One is illustrated here -- the red line in the plot there is the linear regression fitted using a Gamma GLM (a parametric assumption); tests of coefficients are easy to obtain from GLM output; this approach also generalizes to "multiple regression" easily.
(b) This answer shows estimated lines based on nonparametric correlations; tests and intervals can be generated for those.
A big problem with transforming to achieve normality
Let's say all the other regression assumptions are reasonable, apart from the normality assumption.
Then you apply some nonlinear transformation in the hopes of making the residuals look more normal.
Suddenly, your previously linear relationships are no longer linear.
Suddenly your spread of points about the fit is no longer constant.
Two assumptions that may matter much more than normality are no longer appropriate. | transformation to normality of the dependent variable in multiple regression
I'd planned to link to an answer with a good list (with discussion) of the regression assumptions an answer with the multiple regression assumptions, but I can't find a completely suitable one for wha |
51,647 | Q: what book on Bayesian statistics, preferably with R? [duplicate] | Peter D. Hoff. A First Course in Bayesian Statistical Methods. Springer (2010)
Also
Andrew Gelman et al. Bayesian Data Analysis (3rd ed.). CRC (2013)
The Gelman book isn't constrained to R but also uses Stan, a probabilistic programming language similar to BUGS or JAGS. I believe earlier editions of the book used BUGS instead of Stan, which is probably very similar.
And finally:
John Kruschke. Doing Bayesian Data Analysis: A tutorial with R and BUGS. Academic Press (2011)
More BUGS than R, but probably the most pragmatic of the three books I've suggested. Don't let the cover deter you, this is a perfectly respectable text. | Q: what book on Bayesian statistics, preferably with R? [duplicate] | Peter D. Hoff. A First Course in Bayesian Statistical Methods. Springer (2010)
Also
Andrew Gelman et al. Bayesian Data Analysis (3rd ed.). CRC (2013)
The Gelman book isn't constrained to R but als | Q: what book on Bayesian statistics, preferably with R? [duplicate]
Peter D. Hoff. A First Course in Bayesian Statistical Methods. Springer (2010)
Also
Andrew Gelman et al. Bayesian Data Analysis (3rd ed.). CRC (2013)
The Gelman book isn't constrained to R but also uses Stan, a probabilistic programming language similar to BUGS or JAGS. I believe earlier editions of the book used BUGS instead of Stan, which is probably very similar.
And finally:
John Kruschke. Doing Bayesian Data Analysis: A tutorial with R and BUGS. Academic Press (2011)
More BUGS than R, but probably the most pragmatic of the three books I've suggested. Don't let the cover deter you, this is a perfectly respectable text. | Q: what book on Bayesian statistics, preferably with R? [duplicate]
Peter D. Hoff. A First Course in Bayesian Statistical Methods. Springer (2010)
Also
Andrew Gelman et al. Bayesian Data Analysis (3rd ed.). CRC (2013)
The Gelman book isn't constrained to R but als |
51,648 | Q: what book on Bayesian statistics, preferably with R? [duplicate] | Fortuitous timing, as Bayesian Data Analysis, 3rd ed was just released. It's a good general-purpose text, with an emphasis on hierarchical methods, a section on advanced computation (that is, Markov chain Monte Carlo), and an appendix on Gelman's Bayesian inference tool, rstan.
The text focuses on statistics rather than programming, though, so perhaps this answer does not fit your R needs. That said, I've been able to recreate the text examples in R simply based on his clear prose descriptions. | Q: what book on Bayesian statistics, preferably with R? [duplicate] | Fortuitous timing, as Bayesian Data Analysis, 3rd ed was just released. It's a good general-purpose text, with an emphasis on hierarchical methods, a section on advanced computation (that is, Markov | Q: what book on Bayesian statistics, preferably with R? [duplicate]
Fortuitous timing, as Bayesian Data Analysis, 3rd ed was just released. It's a good general-purpose text, with an emphasis on hierarchical methods, a section on advanced computation (that is, Markov chain Monte Carlo), and an appendix on Gelman's Bayesian inference tool, rstan.
The text focuses on statistics rather than programming, though, so perhaps this answer does not fit your R needs. That said, I've been able to recreate the text examples in R simply based on his clear prose descriptions. | Q: what book on Bayesian statistics, preferably with R? [duplicate]
Fortuitous timing, as Bayesian Data Analysis, 3rd ed was just released. It's a good general-purpose text, with an emphasis on hierarchical methods, a section on advanced computation (that is, Markov |
51,649 | Q: what book on Bayesian statistics, preferably with R? [duplicate] | Both are introductory, but useful imho:
Bayesian Computation With R, by Jim Albert
Applied Bayesian Statistics, With R and OpenBUGS Examples, by Mary Kathryn Cowles | Q: what book on Bayesian statistics, preferably with R? [duplicate] | Both are introductory, but useful imho:
Bayesian Computation With R, by Jim Albert
Applied Bayesian Statistics, With R and OpenBUGS Examples, by Mary Kathryn Cowles | Q: what book on Bayesian statistics, preferably with R? [duplicate]
Both are introductory, but useful imho:
Bayesian Computation With R, by Jim Albert
Applied Bayesian Statistics, With R and OpenBUGS Examples, by Mary Kathryn Cowles | Q: what book on Bayesian statistics, preferably with R? [duplicate]
Both are introductory, but useful imho:
Bayesian Computation With R, by Jim Albert
Applied Bayesian Statistics, With R and OpenBUGS Examples, by Mary Kathryn Cowles
51,650 | How to prepare variables with mild skew for multiple regression? | There's no requirement that this data be normal for regression, only the residuals of the model. So, do your regression and check the residuals and then see if you need to transform anything. | How to prepare variables with mild skew for multiple regression? | There's no requirement that this data be normal for regression, only the residuals of the model. So, do your regression and check the residuals and then see if you need to transform anything. | How to prepare variables with mild skew for multiple regression?
There's no requirement that this data be normal for regression, only the residuals of the model. So, do your regression and check the residuals and then see if you need to transform anything. | How to prepare variables with mild skew for multiple regression?
There's no requirement that this data be normal for regression, only the residuals of the model. So, do your regression and check the residuals and then see if you need to transform anything. |
51,651 | How to prepare variables with mild skew for multiple regression? | Your original data looks fine. I've seen datasets where the skew was far more extreme than that. Do your regression and check the diagnostics (in particular, see if your estimated trend makes sense and there is no strong evidence of non-additivity) before making any transformations. | How to prepare variables with mild skew for multiple regression? | Your original data looks fine. I've seen datasets where the skew was far more extreme than that. Do your regression and check the diagnostics (in particular, see if your estimated trend makes sense an | How to prepare variables with mild skew for multiple regression?
Your original data looks fine. I've seen datasets where the skew was far more extreme than that. Do your regression and check the diagnostics (in particular, see if your estimated trend makes sense and there is no strong evidence of non-additivity) before making any transformations. | How to prepare variables with mild skew for multiple regression?
Your original data looks fine. I've seen datasets where the skew was far more extreme than that. Do your regression and check the diagnostics (in particular, see if your estimated trend makes sense an |
51,652 | How to prepare variables with mild skew for multiple regression? | If you're looking for a transformation of the data, you might want to consider the Box-Cox transformation which is reviewed in this article. | How to prepare variables with mild skew for multiple regression? | If you're looking for a transformation of the data, you might want to consider the Box-Cox transformation which is reviewed in this article. | How to prepare variables with mild skew for multiple regression?
If you're looking for a transformation of the data, you might want to consider the Box-Cox transformation which is reviewed in this article. | How to prepare variables with mild skew for multiple regression?
If you're looking for a transformation of the data, you might want to consider the Box-Cox transformation which is reviewed in this article. |
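A minimal sketch of what that might look like in R, using boxcox() from the MASS package on a toy simulated model (the data-generating values are arbitrary and purely illustrative):
library(MASS)
set.seed(1)
x <- runif(100, 1, 10)
y <- exp(0.5 + 0.2 * x + rnorm(100, sd = 0.3))  # positively skewed response
bc <- boxcox(lm(y ~ x))                         # profile log-likelihood over a range of lambda values
bc$x[which.max(bc$y)]                           # the lambda maximizing it; a value near 0 suggests a log transform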
51,653 | A multitrait-multimethod matrix and data set | I worked on this some the other day when you posted your same question to stack overflow. What I will provide won't be a finished solution, but hopefully it will give you enough ideas to finish the presentation on your own.
This is what I could produce in SPSS, I have posted some code here using the same logic in R with ggplot2, but I'm not as familiar with ggplot2 as I am with SPSS so it is a bit off from producing something as close to floor ready as I can with SPSS.
As I said in the comment to your SO post, in a grammar of graphics style you can refer to the methods as panels (facets or small-multiples are other synonyms) and the traits as defining location along the X and Y axis. Even though they are nominal categories (so their order is arbitrary) we can still treat them the same way we do continuous variables in a scatterplot. That is, we can assign observations X and Y locations in a Cartesian coordinate system defined by the categories.
So the shape your data needs to be in (in either SPSS or R) to produce this graphic is as follows (this is a read data statement for SPSS, but this should be readily transferable to a variety of languages).
data list free / Method_X Method_Y Traits_X Traits_Y (4A1) Corr (F3.2).
begin data
1 1 a a .89
1 1 a b .51
1 1 b b .89
1 1 a c .38
1 1 b c .37
1 1 c c .76
1 2 a a .57
1 2 b a .22
1 2 c a .09
1 2 a b .22
1 2 b b .57
1 2 c b .10
1 2 a c .11
1 2 b c .11
1 2 c c .46
2 2 a a .93
2 2 a b .68
2 2 b b .94
2 2 a c .59
2 2 b c .58
2 2 c c .84
1 3 a a .56
1 3 b a .22
1 3 c a .11
1 3 a b .23
1 3 b b .58
1 3 c b .12
1 3 a c .11
1 3 b c .11
1 3 c c .45
2 3 a a .67
2 3 b a .42
2 3 c a .33
2 3 a b .43
2 3 b b .66
2 3 c b .34
2 3 a c .34
2 3 b c .32
2 3 c c .58
3 3 a a .94
3 3 a b .67
3 3 b b .92
3 3 a c .58
3 3 b c .60
3 3 c c .85
end data.
Now, for my graph I want to define one more variable (the variable used to color the blocks) and add some meta-data which propagates to the graph in SPSS.
value labels Method_X Method_Y
1 'Method 1'
2 'Method 2'
3 'Method 3'.
compute type = 0.
if method_x = method_y and traits_x = traits_y type = 1.
if method_x = method_y and traits_x <> traits_y type = 2.
if method_x <> method_y and traits_x = traits_y type = 3.
if method_x <> method_y and traits_x <> traits_y type = 4.
value labels type
1 'reliability'
2 'validity'
3 'heterotrait-monomethod'
4 'heterotrait-heteromethod'.
Now the fun part, generating the graph. SPSS's graphics language, GPL, is not as intuitive as what Hadley has written for ggplot2, but I can help break it down some. Basically everything is superfluous for our discussion here except the GUIDE statements and below (so just focus on those for now).
GGRAPH
/GRAPHDATASET NAME="graphdataset" VARIABLES=Traits_Y Traits_X Method_Y Method_X corr type
MISSING=LISTWISE REPORTMISSING=NO
/GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
SOURCE: s=userSource(id("graphdataset"))
DATA: Traits_X=col(source(s), name("Traits_X"), unit.category())
DATA: Traits_Y=col(source(s), name("Traits_Y"), unit.category())
DATA: Method_Y=col(source(s), name("Method_Y"), unit.category())
DATA: Method_X=col(source(s), name("Method_X"), unit.category())
DATA: type=col(source(s), name("type"), unit.category())
DATA: corr=col(source(s), name("corr"))
GUIDE: axis(dim(1), null())
GUIDE: axis(dim(2))
GUIDE: axis(dim(3), opposite())
GUIDE: axis(dim(4))
SCALE: cat(dim(2), reverse())
SCALE: cat(aesthetic(aesthetic.color.interior), map(("1", color.black), ("2", color.darkgrey), ("3", color.lightgrey), ("4",color.white)))
ELEMENT: polygon(position(Traits_X*Traits_Y*Method_X*Method_Y), color.interior(type), label(corr))
END GPL.
The element statement in essence specifies where things go in the plot (and what gets assigned which colors and labels). In this example the variable Traits_X gets mapped to the x axis (dim(1)), Traits_Y goes to the y axis (dim(2)), Method_X gets mapped to the panels going horizontally (dim(3)), and Method_Y gets mapped to the panels running vertically (dim(4)). Everything else just has to do with aesthetics in the plot (what gets what color and what label goes where).
Not all chart elements are directly exposed in SPSS syntax (you often have to hack the chart template to have certain aspects produced in a particular manner), but post-hoc editing gets you a long way in this instance. The extent I am able to reproduce the above chart (without going to great extremes) is inserted at the beginning of the question.
Two things I cannot do in SPSS (without doing hacky things with inserting text boxes on my own) are superscripts/subscripts and having label text in different colors (the labels are there, just black). These are things I would just print the graph to PDF and edit some more in Inkscape or Illustrator. I know you can do subscripts and superscripts in R labels, but one thing to note is this would break the grammar I have previously provided, as the categorical Y axis change between panels.
I could do the dashed boxes fairly easily in SPSS's editor (as well as the other text), but the arrow I could not. I know you wanted a solution in R, and I'm sure most of this logic can be ported to R code (using whatever packages you want).
A note, in some of the comments it appears Tyler and me were confused about what MTMM is (at least I was). This page by David Kenny goes into more detail on what the method is and how to estimate such models. | A multitrait-multimethod matrix and data set | I worked on this some the other day when you posted your same question to stack overflow. What I will provide won't be a finished solution, but hopefully it will give you enough ideas to finish the pr | A multitrait-multimethod matrix and data set
I worked on this some the other day when you posted your same question to stack overflow. What I will provide won't be a finished solution, but hopefully it will give you enough ideas to finish the presentation on your own.
This is what I could produce in SPSS, I have posted some code here using the same logic in R with ggplot2, but I'm not as familiar with ggplot2 as I am with SPSS so it is a bit off from producing something as close to floor ready as I can with SPSS.
As I said in the comment to your SO post, in a grammar of graphics style you can refer to the methods as panels (facets or small-multiples are other synonyms) and the traits as defining location along the X and Y axis. Even though they are nominal categories (so their order is arbitrary) we can still treat them the same way we do continuous variables in a scatterplot. That is, we can assign observations X and Y locations in a Cartesian coordinate system defined by the categories.
So the shape your data needs to be in (in either SPSS or R) to produce this graphic is as follows (this is a read data statement for SPSS, but this should be readily transferable to a variety of languages).
data list free / Method_X Method_Y Traits_X Traits_Y (4A1) Corr (F3.2).
begin data
1 1 a a .89
1 1 a b .51
1 1 b b .89
1 1 a c .38
1 1 b c .37
1 1 c c .76
1 2 a a .57
1 2 b a .22
1 2 c a .09
1 2 a b .22
1 2 b b .57
1 2 c b .10
1 2 a c .11
1 2 b c .11
1 2 c c .46
2 2 a a .93
2 2 a b .68
2 2 b b .94
2 2 a c .59
2 2 b c .58
2 2 c c .84
1 3 a a .56
1 3 b a .22
1 3 c a .11
1 3 a b .23
1 3 b b .58
1 3 c b .12
1 3 a c .11
1 3 b c .11
1 3 c c .45
2 3 a a .67
2 3 b a .42
2 3 c a .33
2 3 a b .43
2 3 b b .66
2 3 c b .34
2 3 a c .34
2 3 b c .32
2 3 c c .58
3 3 a a .94
3 3 a b .67
3 3 b b .92
3 3 a c .58
3 3 b c .60
3 3 c c .85
end data.
Now, for my graph I want to define one more variable (the variable used to color the blocks) and add some meta-data which propagates to the graph in SPSS.
value labels Method_X Method_Y
1 'Method 1'
2 'Method 2'
3 'Method 3'.
compute type = 0.
if method_x = method_y and traits_x = traits_y type = 1.
if method_x = method_y and traits_x <> traits_y type = 2.
if method_x <> method_y and traits_x = traits_y type = 3.
if method_x <> method_y and traits_x <> traits_y type = 4.
value labels type
1 'reliability'
2 'validity'
3 'heterotrait-monomethod'
4 'heterotrait-heteromethod'.
Now the fun part, generating the graph. SPSS's graphics language, GPL, is not as intuitive as what Hadley has written for ggplot2, but I can help break it down some. Basically everything is superfluous for our discussion here except the GUIDE statements and below (so just focus on those for now).
GGRAPH
/GRAPHDATASET NAME="graphdataset" VARIABLES=Traits_Y Traits_X Method_Y Method_X corr type
MISSING=LISTWISE REPORTMISSING=NO
/GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
SOURCE: s=userSource(id("graphdataset"))
DATA: Traits_X=col(source(s), name("Traits_X"), unit.category())
DATA: Traits_Y=col(source(s), name("Traits_Y"), unit.category())
DATA: Method_Y=col(source(s), name("Method_Y"), unit.category())
DATA: Method_X=col(source(s), name("Method_X"), unit.category())
DATA: type=col(source(s), name("type"), unit.category())
DATA: corr=col(source(s), name("corr"))
GUIDE: axis(dim(1), null())
GUIDE: axis(dim(2))
GUIDE: axis(dim(3), opposite())
GUIDE: axis(dim(4))
SCALE: cat(dim(2), reverse())
SCALE: cat(aesthetic(aesthetic.color.interior), map(("1", color.black), ("2", color.darkgrey), ("3", color.lightgrey), ("4",color.white)))
ELEMENT: polygon(position(Traits_X*Traits_Y*Method_X*Method_Y), color.interior(type), label(corr))
END GPL.
The element statement in essence specifies where things go in the plot (and what gets assigned which colors and labels). In this example the variable Traits_X gets mapped to the x axis (dim(1)), Traits_Y goes to the y axis (dim(2)), Method_X gets mapped to the panels going horizontally (dim(3)), and Method_Y gets mapped to the panels running vertically (dim(4)). Everything else just has to do with aesthetics in the plot (what gets what color and what label goes where).
Not all chart elements are directly exposed in SPSS syntax (you often have to hack the chart template to have certain aspects produced in a particular manner), but post-hoc editing gets you a long way in this instance. The extent I am able to reproduce the above chart (without going to great extremes) is inserted at the beginning of the question.
Two things I cannot do in SPSS (without doing hacky things with inserting text boxes on my own) are superscripts/subscripts and having label text in different colors (the labels are there, just black). These are things I would just print the graph to PDF and edit some more in Inkscape or Illustrator. I know you can do subscripts and superscripts in R labels, but one thing to note is this would break the grammar I have previously provided, as the categorical Y axis change between panels.
I could do the dashed boxes fairly easily in SPSS's editor (as well as the other text), but the arrow I could not. I know you wanted a solution in R, and I'm sure most of this logic can be ported to R code (using whatever packages you want).
A note, in some of the comments it appears Tyler and me were confused about what MTMM is (at least I was). This page by David Kenny goes into more detail on what the method is and how to estimate such models. | A multitrait-multimethod matrix and data set
I worked on this some the other day when you posted your same question to stack overflow. What I will provide won't be a finished solution, but hopefully it will give you enough ideas to finish the pr |
51,654 | A multitrait-multimethod matrix and data set | I built on @Andy W's R-code and hope my changes are useful to someone else.
I mainly changed it, so that it
obeys the new syntax (no more opts) in ggplot2, so no more warnings
adds the correlations as text
now correlation text size reflects its effect size
colour scheme shows the type of correlation (hetero/mono-trait/method).
put the legend in the empty upper right triangle
The function also contains my way for creating the data in the right format from a dataframe or correlation table. This depends on having your trait and method encoded in the variable name and you'd probably want to extract CFA loadings for a more solid look at the matter. In my case I first wanted to eyeball the correlations with a bit more visual structure. If you have your correlations/loadings in long format already it should be easy to adapt the function or to cast the long to wide.
Edit:
I put this in a package on Github. You can get it using devtools::install_github("rubenarslan/formr"), the function is then formr::mtmm.
## function for rendering a multi trait multi method matrix
mtmm = function (
variables, # data frame of variables that are supposed to be correlated
reliabilities = NULL, # reliabilities: column 1: scale, column 2: rel. coefficient
split_regex = "\\.", # regular expression to separate construct and method from the variable name. the first two matched groups are chosen
cors = NULL
) {
library(stringr); library(Hmisc); library(reshape2); library(ggplot2)
if(is.null(cors))
cors = cor(variables, use="pairwise.complete.obs") # select variables
var.names = colnames(cors)
corm = melt(cors)
corm = corm[ corm[,'Var1']!=corm[,'Var2'] , ] # substitute the 1s with the scale reliabilities here
if(!is.null(reliabilities)) {
rel = reliabilities
names(rel) = c('Var1','value')
rel$Var2 = rel$Var1
rel = rel[which(rel$Var1 %in% var.names), c('Var1','Var2','value')]
corm = rbind(corm,rel)
}
if(any(is.na(str_split_fixed(corm$Var1,split_regex,n = 2))))
{
print(unique(str_split_fixed(corm$Var1,split_regex,n = 2)))
stop ("regex broken")
}
corm[, c('trait_X','method_X')] = str_split_fixed(corm$Var1,split_regex,n = 2) # regex matching our column naming schema to extract trait and method
corm[, c('trait_Y','method_Y')] = str_split_fixed(corm$Var2,split_regex,n = 2)
corm[,c('var1.s','var2.s')] <- t(apply(corm[,c('Var1','Var2')], 1, sort)) # sort pairs to find dupes
corm[which(
corm[ ,'trait_X']==corm[,'trait_Y']
& corm[,'method_X']!=corm[,'method_Y']),'type'] = 'monotrait-heteromethod (validity)'
corm[which(
corm[ ,'trait_X']!=corm[,'trait_Y']
& corm[,'method_X']==corm[,'method_Y']), 'type'] = 'heterotrait-monomethod'
corm[which(
corm[ ,'trait_X']!=corm[,'trait_Y']
& corm[,'method_X']!=corm[,'method_Y']), 'type'] = 'heterotrait-heteromethod'
corm[which(
corm[, 'trait_X']==corm[,'trait_Y']
& corm[,'method_X']==corm[,'method_Y']), 'type'] = 'monotrait-monomethod (reliability)'
corm$trait_X = factor(corm$trait_X)
corm$trait_Y = factor(corm$trait_Y,levels=rev(levels(corm$trait_X)))
corm$method_X = factor(corm$method_X)
corm$method_Y = factor(corm$method_Y,levels=levels(corm$method_X))
corm = corm[order(corm$method_X,corm$trait_X),]
corm = corm[!duplicated(corm[,c('var1.s','var2.s')]), ] # remove dupe pairs
#building ggplot
mtmm_plot <- ggplot(data= corm) + # the melted correlation matrix
geom_tile(aes(x = trait_X, y = trait_Y, fill = type)) +
geom_text(aes(x = trait_X, y = trait_Y, label = str_replace(round(value,2),"0\\.", ".") ,size=log(value^2))) + # the correlation text
facet_grid(method_Y ~ method_X) +
ylab("")+ xlab("")+
theme_bw(base_size = 18) +
theme(panel.background = element_rect(colour = NA),
panel.grid.minor = element_blank(),
axis.line = element_line(),
strip.background = element_blank(),
panel.grid = element_blank(),
legend.position = c(1,1),
legend.justification = c(1, 1)
) +
scale_fill_brewer('Type') +
scale_size("Absolute size",guide=F) +
scale_colour_gradient(guide=F)
mtmm_plot
}
data.mtmm = data.frame(
'Ach.self report' = rnorm(200),'Pow.self report'= rnorm(200),'Aff.self report'= rnorm(200),
'Ach.peer report' = rnorm(200),'Pow.peer report'= rnorm(200),'Aff.peer report'= rnorm(200),
'Ach.diary' = rnorm(200),'Pow.diary'= rnorm(200),'Aff.diary'= rnorm(200))
reliabilities = data.frame(scale = names(data.mtmm), rel = runif(length(names(data.mtmm))))
mtmm(data.mtmm, reliabilities = reliabilities) | A multitrait-multimethod matrix and data set | I built on @Andy W's R-code and hope my changes are useful to someone else.
I mainly changed it, so that it
obeys the new syntax (no more opts) in ggplot2, so no more warnings
adds the correlations | A multitrait-multimethod matrix and data set
I built on @Andy W's R-code and hope my changes are useful to someone else.
I mainly changed it, so that it
obeys the new syntax (no more opts) in ggplot2, so no more warnings
adds the correlations as text
now correlation text size reflects its effect size
colour scheme shows the type of correlation (hetero/mono-trait/method).
put the legend in the empty upper right triangle
The function also contains my way for creating the data in the right format from a dataframe or correlation table. This depends on having your trait and method encoded in the variable name and you'd probably want to extract CFA loadings for a more solid look at the matter. In my case I first wanted to eyeball the correlations with a bit more visual structure. If you have your correlations/loadings in long format already it should be easy to adapt the function or to cast the long to wide.
Edit:
I put this in a package on Github. You can get it using devtools::install_github("rubenarslan/formr"), the function is then formr::mtmm.
## function for rendering a multi trait multi method matrix
mtmm = function (
variables, # data frame of variables that are supposed to be correlated
reliabilities = NULL, # reliabilities: column 1: scale, column 2: rel. coefficient
split_regex = "\\.", # regular expression to separate construct and method from the variable name. the first two matched groups are chosen
cors = NULL
) {
library(stringr); library(Hmisc); library(reshape2); library(ggplot2)
if(is.null(cors))
cors = cor(variables, use="pairwise.complete.obs") # select variables
var.names = colnames(cors)
corm = melt(cors)
corm = corm[ corm[,'Var1']!=corm[,'Var2'] , ] # substitute the 1s with the scale reliabilities here
if(!is.null(reliabilities)) {
rel = reliabilities
names(rel) = c('Var1','value')
rel$Var2 = rel$Var1
rel = rel[which(rel$Var1 %in% var.names), c('Var1','Var2','value')]
corm = rbind(corm,rel)
}
if(any(is.na(str_split_fixed(corm$Var1,split_regex,n = 2))))
{
print(unique(str_split_fixed(corm$Var1,split_regex,n = 2)))
stop ("regex broken")
}
corm[, c('trait_X','method_X')] = str_split_fixed(corm$Var1,split_regex,n = 2) # regex matching our column naming schema to extract trait and method
corm[, c('trait_Y','method_Y')] = str_split_fixed(corm$Var2,split_regex,n = 2)
corm[,c('var1.s','var2.s')] <- t(apply(corm[,c('Var1','Var2')], 1, sort)) # sort pairs to find dupes
corm[which(
corm[ ,'trait_X']==corm[,'trait_Y']
& corm[,'method_X']!=corm[,'method_Y']),'type'] = 'monotrait-heteromethod (validity)'
corm[which(
corm[ ,'trait_X']!=corm[,'trait_Y']
& corm[,'method_X']==corm[,'method_Y']), 'type'] = 'heterotrait-monomethod'
corm[which(
corm[ ,'trait_X']!=corm[,'trait_Y']
& corm[,'method_X']!=corm[,'method_Y']), 'type'] = 'heterotrait-heteromethod'
corm[which(
corm[, 'trait_X']==corm[,'trait_Y']
& corm[,'method_X']==corm[,'method_Y']), 'type'] = 'monotrait-monomethod (reliability)'
corm$trait_X = factor(corm$trait_X)
corm$trait_Y = factor(corm$trait_Y,levels=rev(levels(corm$trait_X)))
corm$method_X = factor(corm$method_X)
corm$method_Y = factor(corm$method_Y,levels=levels(corm$method_X))
corm = corm[order(corm$method_X,corm$trait_X),]
corm = corm[!duplicated(corm[,c('var1.s','var2.s')]), ] # remove dupe pairs
#building ggplot
mtmm_plot <- ggplot(data= corm) + # the melted correlation matrix
geom_tile(aes(x = trait_X, y = trait_Y, fill = type)) +
geom_text(aes(x = trait_X, y = trait_Y, label = str_replace(round(value,2),"0\\.", ".") ,size=log(value^2))) + # the correlation text
facet_grid(method_Y ~ method_X) +
ylab("")+ xlab("")+
theme_bw(base_size = 18) +
theme(panel.background = element_rect(colour = NA),
panel.grid.minor = element_blank(),
axis.line = element_line(),
strip.background = element_blank(),
panel.grid = element_blank(),
legend.position = c(1,1),
legend.justification = c(1, 1)
) +
scale_fill_brewer('Type') +
scale_size("Absolute size",guide=F) +
scale_colour_gradient(guide=F)
mtmm_plot
}
data.mtmm = data.frame(
'Ach.self report' = rnorm(200),'Pow.self report'= rnorm(200),'Aff.self report'= rnorm(200),
'Ach.peer report' = rnorm(200),'Pow.peer report'= rnorm(200),'Aff.peer report'= rnorm(200),
'Ach.diary' = rnorm(200),'Pow.diary'= rnorm(200),'Aff.diary'= rnorm(200))
reliabilities = data.frame(scale = names(data.mtmm), rel = runif(length(names(data.mtmm))))
mtmm(data.mtmm, reliabilities = reliabilities) | A multitrait-multimethod matrix and data set
I built on @Andy W's R-code and hope my changes are useful to someone else.
I mainly changed it, so that it
obeys the new syntax (no more opts) in ggplot2, so no more warnings
adds the correlations |
51,655 | A multitrait-multimethod matrix and data set | It looks like I forgot to link to the original resource I used to construct this picture, that was used as an illustration for an old course (I tend to prefer B&W pictures :-). I know nothing about the data, and that was not of primary interest at the time I used it (it was done with Omnigraffle for Mac).
If the question is about how to reach such figures, you can try to generate correlation matrices on your own, using the excellent psych package. (Be sure to check William Revelle's website.) However, for well-established data you could probably refer to
Brown, TA (2006). Confirmatory Factor Analysis for Applied
Research. The Guilford Press.
See data for Table 6.1. Some context (pp. 214-216):
In this illustration, the researcher wishes to examine the construct
validity of the DSM-IV Cluster A personality disorders, which are
enduring patterns of symptoms characterized by odd or eccentric
behaviors (American Psychiatric Association, 1994). Cluster A is
comprised of three personality disorder constructs: (1) paranoid (an
enduring pattern of distrust and suspicion such that others' motives
are interpreted as malevolent); (2) schizoid (an enduring pattern of
detachment from social relationships and restricted range of emotional
expression); and (3) schizotypal (an enduring pattern of acute
discomfort in social relationships, cognitive and perceptual
distortions, and behavioral eccentricities). In a sample of 500
patients, each of these three traits is measured by three assessment
methods: (1) a self-report inventory of personality disorders; (2)
dimensional ratings from a structured clinical interview of
personality disorders; and (3) observational ratings made by
paraprofessional staff. Thus, Table 6.1 is a 3 (T) x 3 (M) matrix,
arranged such that the correlations among the different traits
(personality disorders: paranoid, schizotypal, schizoid) are nested
within each method (assessment type: inventory, clinical interview,
observer ratings).
The result should look like this:
If you are using R, you might be interested in looking into the mtmm() function from the psy package (which can be used to assess convergent and discriminant validity within a single measurement instrument as well), as already mentioned in earlier replies of mine: How to compute correlation between/within groups of variables?, Which package to use for convergent and discriminant validity in R? | A multitrait-multimethod matrix and data set | It looks like I forgot to link to the original resource I used to construct this picture, that was used as an illustration for an old course (I tend to prefer B&W pictures :-). I know nothing about th | A multitrait-multimethod matrix and data set
It looks like I forgot to link to the original resource I used to construct this picture, that was used as an illustration for an old course (I tend to prefer B&W pictures :-). I know nothing about the data, and that was not of primary interest at the time I used it (it was done with Omnigraffle for Mac).
If the question is about how to reach such figures, you can try to generate correlation matrices on your own, using the excellent psych package. (Be sure to check William Revelle's website.) However, for well-established data you could probably refer to
Brown, TA (2006). Confirmatory Factor Analysis for Applied
Research. The Guilford Press.
See data for Table 6.1. Some context (pp. 214-216):
In this illustration, the researcher wishes to examine the construct
validity of the DSM-IV Cluster A personality disorders, which are
enduring patterns of symptoms characterized by odd or eccentric
behaviors (American Psychiatric Association, 1994). Cluster A is
comprised of three personality disorder constructs: (1) paranoid (an
enduring pattern of distrust and suspicion such that others' motives
are interpreted as malevolent); (2) schizoid (an enduring pattern of
detachment from social relationships and restricted range of emotional
expression); and (3) schizotypal (an enduring pattern of acute
discomfort in social relationships, cognitive and perceptual
distortions, and behavioral eccentricities). In a sample of 500
patients, each of these three traits is measured by three assessment
methods: (1) a self-report inventory of personality disorders; (2)
dimensional ratings from a structured clinical interview of
personality disorders; and (3) observational ratings made by
paraprofessional staff. Thus, Table 6.1 is a 3 (T) x 3 (M) matrix,
arranged such that the correlations among the different traits
(personality disorders: paranoid, schizotypal, schizoid) are nested
within each method (assessment type: inventory, clinical interview,
observer ratings).
The result should look like this:
If you are using R, you might be interested in looking into the mtmm() function from the psy package (which can be used to assess convergent and discriminant validity within a single measurement instrument as well), as already mentioned in earlier replies of mine: How to compute correlation between/within groups of variables?, Which package to use for convergent and discriminant validity in R? | A multitrait-multimethod matrix and data set
It looks like I forgot to link to the original resource I used to construct this picture, that was used as an illustration for an old course (I tend to prefer B&W pictures :-). I know nothing about th |
51,656 | Is it "rows by columns" or "columns by rows"? | "That depends." Rows are usually considered observations, and columns are variables. So I would say widgets by color level in your context. But it really depends on which are your dependent and independent variables (or how you're interpreting the data). | Is it "rows by columns" or "columns by rows"? | "That depends." Rows are usually considered observations, and columns are variables. So I would say widgets by color level in your context. But it really depends on which are your dependent and ind | Is it "rows by columns" or "columns by rows"?
"That depends." Rows are usually considered observations, and columns are variables. So I would say widgets by color level in your context. But it really depends on which are your dependent and independent variables (or how you're interpreting the data). | Is it "rows by columns" or "columns by rows"?
"That depends." Rows are usually considered observations, and columns are variables. So I would say widgets by color level in your context. But it really depends on which are your dependent and ind |
51,657 | Is it "rows by columns" or "columns by rows"? | Of course you can view this table either way by transposing it. Conventionally, in a database rows represent objects and columns contain their attributes, whence this presentation would typically be viewed as a list of widgets, not a list of color levels. | Is it "rows by columns" or "columns by rows"? | Of course you can view this table either way by transposing it. Conventionally, in a database rows represent objects and columns contain their attributes, whence this presentation would typically be | Is it "rows by columns" or "columns by rows"?
Of course you can view this table either way by transposing it. Conventionally, in a database rows represent objects and columns contain their attributes, whence this presentation would typically be viewed as a list of widgets, not a list of color levels. | Is it "rows by columns" or "columns by rows"?
Of course you can view this table either way by transposing it. Conventionally, in a database rows represent objects and columns contain their attributes, whence this presentation would typically be |
51,658 | Is it "rows by columns" or "columns by rows"? | Typically, we talk about a r(ow) X (c)olumn matrix, from Linear Algebra. So, a matrix with 2 rows and 3 columns is a 2 X 3 matrix. By that logic, I'd call your data frame a "Widgets by Color" table. | Is it "rows by columns" or "columns by rows"? | Typically, we talk about a r(ow) X (c)olumn matrix, from Linear Algebra. So, a matrix with 2 rows and 3 columns is a 2 X 3 matrix. By that logic, I'd call your data frame a "Widgets by Color" table. | Is it "rows by columns" or "columns by rows"?
Typically, we talk about a r(ow) X (c)olumn matrix, from Linear Algebra. So, a matrix with 2 rows and 3 columns is a 2 X 3 matrix. By that logic, I'd call your data frame a "Widgets by Color" table. | Is it "rows by columns" or "columns by rows"?
Typically, we talk about a r(ow) X (c)olumn matrix, from Linear Algebra. So, a matrix with 2 rows and 3 columns is a 2 X 3 matrix. By that logic, I'd call your data frame a "Widgets by Color" table. |
51,659 | Is it "rows by columns" or "columns by rows"? | for a table like that, I say it the same way I'd say "n by k" for a matrix with n rows and k columns (i.e. rows first). | Is it "rows by columns" or "columns by rows"? | for a table like that, I say it the same way I'd say "n by k" for a matrix with n rows and k columns (i.e. rows first). | Is it "rows by columns" or "columns by rows"?
for a table like that, I say it the same way I'd say "n by k" for a matrix with n rows and k columns (i.e. rows first). | Is it "rows by columns" or "columns by rows"?
for a table like that, I say it the same way I'd say "n by k" for a matrix with n rows and k columns (i.e. rows first). |
51,660 | Convolution of Poisson with Binomial distribution? | Let's start by looking at a single pulse and figure out the distribution of the number of photons in that pulse that get through the filter. To do this, let $N$ denote the initial number of photons in the pulse and let $X$ denote the number of photons that make it through the filter. Then you have the model:
$$\begin{align}
N &\sim \text{Pois}(\lambda), \\[6pt]
X|N &\sim \text{Bin}(N,\theta). \\[6pt]
\end{align}$$
The marginal distribution of $X$ is obtained using the law of total probability, to wit:
$$\begin{align}
p_X(x) \equiv \mathbb{P}(X=x)
&= \sum_{n=0}^\infty \mathbb{P}(X=x|N=n) \cdot \mathbb{P}(N=n) \\[6pt]
&= \sum_{n=0}^\infty \text{Bin}(x|n,\theta) \cdot \text{Pois}(n|\lambda) \\[6pt]
&= \sum_{n=x}^\infty \frac{n!}{x! (n-x)!} \theta^x (1-\theta)^{n-x} \cdot \frac{\lambda^n}{n!} e^{-\lambda} \\[6pt]
&= \frac{(\theta \lambda)^x}{x!} e^{-\theta \lambda} \sum_{n=x}^\infty \frac{((1-\theta)\lambda)^{n-x}}{(n-x)!} e^{-(1-\theta)\lambda} \\[6pt]
&= \frac{(\theta \lambda)^x}{x!} e^{-\theta \lambda} \sum_{r=0}^\infty \frac{((1-\theta)\lambda)^r}{r!} e^{-(1-\theta)\lambda} \\[6pt]
&= \text{Pois}(x| \theta \lambda) \sum_{r=0}^\infty \text{Pois}(r| (1-\theta)\lambda) \\[12pt]
&= \text{Pois}(x| \theta \lambda). \\[6pt]
\end{align}$$
This gives us the marginal distribution $X \sim \text{Pois}(\theta \lambda)$ for the number of photons that make it through the filter in a single pulse. This is called "thinning" the Poisson variable/process --- it leads to another Poisson variable/process but with the mean parameter reduced proportionately to the thinning. The result shown here can also be proved using the generating functions for the distribution; see e.g., here.
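A quick simulation is one way to see the thinning result numerically (the values $\lambda = 20$ and $\theta = 0.3$ are arbitrary choices for illustration):
set.seed(1)
lambda <- 20; theta <- 0.3
N <- rpois(1e5, lambda)                    # photons arriving in each pulse
X <- rbinom(1e5, size = N, prob = theta)   # photons that pass the filter
c(mean(X), var(X), theta * lambda)         # mean and variance both close to theta*lambda
emp <- as.numeric(table(factor(X, levels = 0:15))) / 1e5
round(rbind(empirical = emp, poisson = dpois(0:15, theta * lambda)), 3)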
Now suppose we have $k$ independent pulses of the same type (i.e., with the same parameters) and let $X_1,...,X_k \sim \text{Pois}(\theta \lambda)$ denote the number of photons that go through the filter from each of these pulses. Then the total number of photons that make it through the filter is:
$$S_k = X_1 + \cdots + X_k.$$
The marginal distribution of $S_k$ is a $k$-fold convolution of the $\text{Pois}(\theta \lambda)$ distribution, which is:
$$S_k \sim \text{Pois}(k \theta \lambda).$$
This is the distribution of the number of photons that make it through the filter from $k$ pulses with mean-photons $\lambda$ and filter penetration probability $\theta$. | Convolution of Poisson with Binomial distribution? | Let's start by looking at a single pulse and figure out the distribution of the number of photons in that pulse that get through the filter. To do this, let $N$ denote the initial number of photons i | Convolution of Poisson with Binomial distribution?
Let's start by looking at a single pulse and figure out the distribution of the number of photons in that pulse that get through the filter. To do this, let $N$ denote the initial number of photons in the pulse and let $X$ denote the number of photons that make it through the filter. Then you have the model:
$$\begin{align}
N &\sim \text{Pois}(\lambda), \\[6pt]
X|N &\sim \text{Bin}(N,\theta). \\[6pt]
\end{align}$$
The marginal distribution of $X$ is obtained using the law of total probability, to wit:
$$\begin{align}
p_X(x) \equiv \mathbb{P}(X=x)
&= \sum_{n=0}^\infty \mathbb{P}(X=x|N=n) \cdot \mathbb{P}(N=n) \\[6pt]
&= \sum_{n=0}^\infty \text{Bin}(x|n,\theta) \cdot \text{Pois}(n|\lambda) \\[6pt]
&= \sum_{n=x}^\infty \frac{n!}{x! (n-x)!} \theta^x (1-\theta)^{n-x} \cdot \frac{\lambda^n}{n!} e^{-\lambda} \\[6pt]
&= \frac{(\theta \lambda)^x}{x!} e^{-\theta \lambda} \sum_{n=x}^\infty \frac{((1-\theta)\lambda)^{n-x}}{(n-x)!} e^{-(1-\theta)\lambda} \\[6pt]
&= \frac{(\theta \lambda)^x}{x!} e^{-\theta \lambda} \sum_{r=0}^\infty \frac{((1-\theta)\lambda)^r}{r!} e^{-(1-\theta)\lambda} \\[6pt]
&= \text{Pois}(x| \theta \lambda) \sum_{r=0}^\infty \text{Pois}(r| (1-\theta)\lambda) \\[12pt]
&= \text{Pois}(x| \theta \lambda). \\[6pt]
\end{align}$$
This gives us the marginal distribution $X \sim \text{Pois}(\theta \lambda)$ for the number of photons that make it through the filter in a single pulse. This is called "thinning" the Poisson variable/process --- it leads to another Poisson variable/process but with the mean parameter reduced proportionately to the thinning. The result shown here can also be proved using the generating functions for the distribution; see e.g., here.
Now suppose we have $k$ independent pulses of the same type (i.e., with the same parameters) and let $X_1,...,X_k \sim \text{Pois}(\theta \lambda)$ denote the number of photons that go through the filter from each of these pulses. Then the total number of photons that make it through the filter is:
$$S_k = X_1 + \cdots + X_k.$$
The marginal distribution of $S_k$ is a $k$-fold convolution of the $\text{Pois}(\theta \lambda)$ distribution, which is:
$$S_k \sim \text{Pois}(k \theta \lambda).$$
This is the distribution of the number of photons that make it through the filter from $k$ pulses with mean-photons $\lambda$ and filter penetration probability $\theta$. | Convolution of Poisson with Binomial distribution?
Let's start by looking at a single pulse and figure out the distribution of the number of photons in that pulse that get through the filter. To do this, let $N$ denote the initial number of photons i |
51,661 | Convolution of Poisson with Binomial distribution? | Intuition
You can view it intuitively as follows.
The Poisson distribution describes the number of counts for a Poisson process taking some time $T$ (like your pulse taking some time $T$ with photons being emitted randomly with a specific rate).
You could randomly designate each event/case/photon as $X_i = 0$ or $X_i = 1$ (in the image below this is shown as black/white circles on a line).
Effectively this is the same as generating two separate independent Poisson processes (each taking times $T_0, T_1$, with $T_0+T_1 = T$) and then mixing the points.
You can verify that this is correct by the following thought: the sum of two independent Poisson variables is another Poisson variable, and each point will have probability $T_i/T$ of being of type $i$.
So the number of cases with $X_i = 1$ is another Poisson-distributed variable.
related: Probability of compound Poisson process | Convolution of Poisson with Binomial distribution? | Intuition
You can view it intuitively as follows.
The Poisson distribution describes the number of counts for a Poisson process taking some time $T$ (like your pulse taking some time $T$ with photon | Convolution of Poisson with Binomial distribution?
Intuition
You can view it intuitively as follows.
The Poisson distribution describes the number of counts for a Poisson process taking some time $T$ (like your pulse taking some time $T$ with photons being emitted randomly with a specific rate).
You could randomly designate each event/case/photon as $X_i = 0$ or $X_i = 1$ (in the image below this is shown as black/white circles on a line).
Effectively this is the same as generating two separate independent Poisson processes (each taking times $T_0, T_1$, with $T_0+T_1 = T$) and then mixing the points.
You can verify that this is correct by the following thought: the sum of two independent Poisson variables is another Poisson variable, and each point will have probability $T_i/T$ of being of type $i$.
So the number of cases with $X_i = 1$ is another Poisson-distributed variable.
related: Probability of compound Poisson process | Convolution of Poisson with Binomial distribution?
Intuition
You can view it intuitively as follows.
The Poisson distribution describes the number of counts for a Poisson process taking some time $T$ (like your pulse taking some time $T$ with photon |
51,662 | Convolution of Poisson with Binomial distribution? | This sounds like a compound Poisson distribution. You have a Poisson distributed number $N$ of binomial trials $X_i$, each trial coming from one incoming photon. Each photon either passes through or not - as long as the absorption probability is constant, each such choice is a Bernoulli trial, i.e., $P(X_i=1)=p$. So in the end you have a binomial distribution for the total number of photons passing through, with the binomial parameter $N$ being Poisson distributed, or a Poisson-binomial compound. This may be helpful: Compound of Binomial and Poisson random variable. | Convolution of Poisson with Binomial distribution? | This sounds like a compound Poisson distribution. You have a Poisson distributed number $N$ of binomial trials $X_i$, each trial coming from one incoming photon. Each photon either passes through or n | Convolution of Poisson with Binomial distribution?
This sounds like a compound Poisson distribution. You have a Poisson distributed number $N$ of binomial trials $X_i$, each trial coming from one incoming photon. Each photon either passes through or not - as long as the absorption probability is constant, each such choice is a Bernoulli trial, i.e., $P(X_i=1)=p$. So in the end you have a binomial distribution for the total number of photons passing through, with the binomial parameter $N$ being Poisson distributed, or a Poisson-binomial compound. This may be helpful: Compound of Binomial and Poisson random variable. | Convolution of Poisson with Binomial distribution?
This sounds like a compound Poisson distribution. You have a Poisson distributed number $N$ of binomial trials $X_i$, each trial coming from one incoming photon. Each photon either passes through or n |
51,663 | What is the "grid" in Bayesian grid approximations? | Bayes theorem is
$$
p(\theta|X) = \frac{p(X|\theta)\,p(\theta)}{p(X)}
$$
where, by the law of total probability, for discrete distributions $p(X) = \sum_\theta \,p(X|\theta)\,p(\theta)$. So for the numerator, you multiply the likelihood by the prior, and for the denominator, you need to do the same for all the possible values of $\theta$ and sum them to normalize it.
It gets more complicated if we're dealing with continuous variables. If $\theta$ is continuous, "all" the values of theta mean infinitely many real numbers, we can't just sum them. In such a case, we need to take the integral
$$
p(X) = \int p(X|\theta)\,p(\theta)\, d\theta
$$
The problem is that this is not necessarily straightforward. That is why to do this we often use approximations like Laplace approximation, variational inference, MCMC sampling (see Monte Carlo integration), or other ways of numerically approximating the integral. Grid approximation is one of those methods. It approximates the integral with Riemann sum
$$
\int p(X|\theta)\,p(\theta)\, d\theta \approx \sum_{i \in G} p(X|\theta_i)\,p(\theta_i) \, \Delta\theta_i
$$
where $G$ is our grid and $\Delta\theta_i = \theta_i - \theta_{i-1}$. Notice that in the example from the book an evenly spaced grid (and a uniform prior) was used, so $\Delta\theta_i$ was constant and dropped out of the calculation when normalizing.
The grid is simply a set of points at which the function is evaluated for the sake of approximating the integral. The more points, the more precise the approximation. It should also cover the range of plausible values for $\theta$; e.g. if the distribution is roughly Gaussian, the grid should extend at least two or three standard deviations either side of the mean, so that it covers 95% or more of the probability mass. Picking the grid is a separate subject in itself (how large, uniform or not, etc.).
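A minimal one-dimensional sketch of the idea in R, assuming (purely for illustration) a binomial likelihood with 6 successes in 9 trials and a flat prior over the grid:
p_grid <- seq(0, 1, length.out = 100)                 # the grid of candidate parameter values
prior <- rep(1, 100)                                  # flat prior evaluated at each grid point
likelihood <- dbinom(6, size = 9, prob = p_grid)
unstd_posterior <- likelihood * prior                 # numerator of Bayes' theorem on the grid
posterior <- unstd_posterior / sum(unstd_posterior)   # normalize; the constant grid spacing cancels
plot(p_grid, posterior, type = "l")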
Using Riemann sum intuitively makes sense, as you can think of the integral as a sum over infinitely many elements
$$
\lim_{n \to \infty} \sum_{i=1}^n f(x_i) \Delta x_i = \int \,f(x) \,dx
$$
If the concepts of integral calculus and Riemann sums are not clear to you, I highly recommend the Khan academy videos explaining them in greater detail.
That said, there are many much better alternatives to grid approximation, so unless you are solving a simple, low-dimensional, problem, this is not something you should use. | What is the "grid" in Bayesian grid approximations? | Bayes theorem is
$$
p(\theta|X) = \frac{p(X|\theta)\,p(\theta)}{p(X)}
$$
where, by the law of total probability, for discrete distributions $p(X) = \sum_\theta \,p(X|\theta)\,p(\theta)$. So for the nu | What is the "grid" in Bayesian grid approximations?
Bayes theorem is
$$
p(\theta|X) = \frac{p(X|\theta)\,p(\theta)}{p(X)}
$$
where, by the law of total probability, for discrete distributions $p(X) = \sum_\theta \,p(X|\theta)\,p(\theta)$. So for the numerator, you multiply the likelihood by the prior, and for the denominator, you need to do the same for all the possible values of $\theta$ and sum them to normalize it.
It gets more complicated if we're dealing with continuous variables. If $\theta$ is continuous, "all" the values of theta mean infinitely many real numbers, we can't just sum them. In such a case, we need to take the integral
$$
p(X) = \int p(X|\theta)\,p(\theta)\, d\theta
$$
The problem is that this is not necessarily straightforward. That is why to do this we often use approximations like Laplace approximation, variational inference, MCMC sampling (see Monte Carlo integration), or other ways of numerically approximating the integral. Grid approximation is one of those methods. It approximates the integral with Riemann sum
$$
\int p(X|\theta)\,p(\theta)\, d\theta \approx \sum_{i \in G} p(X|\theta_i)\,p(\theta_i) \, \Delta\theta_i
$$
where $G$ is our grid and $\Delta\theta_i = \theta_i - \theta_{i-1}$. Notice that in the example from the book an evenly spaced grid (and a uniform prior) was used, so $\Delta\theta_i$ was constant and dropped out of the calculation when normalizing.
The grid is simply a set of points at which the function is evaluated for the sake of approximating the integral. The more points, the more precise the approximation. It should also cover the range of plausible values for $\theta$; e.g. if the distribution is roughly Gaussian, the grid should extend at least two or three standard deviations either side of the mean, so that it covers 95% or more of the probability mass. Picking the grid is a separate subject in itself (how large, uniform or not, etc.).
Using Riemann sum intuitively makes sense, as you can think of the integral as a sum over infinitely many elements
$$
\lim_{n \to \infty} \sum_{i=1}^n f(x_i) \Delta x_i = \int \,f(x) \,dx
$$
If the concepts of integral calculus and Riemann sums are not clear to you, I highly recommend the Khan academy videos explaining them in greater detail.
That said, there are many much better alternatives to grid approximation, so unless you are solving a simple, low-dimensional, problem, this is not something you should use. | What is the "grid" in Bayesian grid approximations?
Bayes theorem is
$$
p(\theta|X) = \frac{p(X|\theta)\,p(\theta)}{p(X)}
$$
where, by the law of total probability, for discrete distributions $p(X) = \sum_\theta \,p(X|\theta)\,p(\theta)$. So for the nu |
51,664 | What is the "grid" in Bayesian grid approximations? | Let me make up an example to make it easier to understand. It is not what the book says but it is the same idea and I think it will make it even easier.
Suppose you have independent samples $x_1,...,x_{20}$ from a $\textbf{Nor}(\mu,\sigma^2)$ distribution. You would like to find/draw the posterior distribution of $\mu$ and $\sigma$ given the data, with the priors (for example) $\mu\sim \textbf{Unif}(0,20)$ and $\sigma \sim \textbf{Exp}(0.5)$. Let us generate some fake data in R,
set.seed(2024)
data = rnorm(20, mean = 10, sd = 2)
The posterior distribution is over $\mu$ and $\sigma$, i.e. it is a two-dimensional distribution. Let $f(\mu,\sigma)$ denote the (unnormalized) posterior of the parameters given the data, as given by Bayes theorem. Therefore,
$$ f(\mu,\sigma) = (\text{constant}) \times \left[ \prod_{k=1}^{20} f_X(x_k) \right] g(\mu) \, h(\sigma) $$
Here $f_X(\cdot)$ is the PDF of $X\sim \textbf{Nor}(\mu,\sigma^2)$, $g(\cdot)$ is the PDF of $\mu$, and $h(\cdot)$ is the PDF of $\sigma$. Once we fix a specific choice of $\mu$ and $\sigma$ we can evaluate $f_X(x_k)$ to get a number.
Instead of using calculus we use a discrete approximation. We take $\mu$ and discretize it, say from $0$ to $20$, and then discretize $\sigma$, say from $0$ to $5$. Then we simply evaluate the value of the posterior at each of those grid points. Let us illustrate this with some code. First we define this posterior function in R,
f = function(mu,sigma){
prod(dnorm(data, mean = mu, sd = sigma))*dunif(mu, min = 0, max = 20)*dexp(sigma, rate = 0.5)
}
Next we generate a discretization.
mu = seq( from = 0, to = 20, length.out = 30)
sigma = seq( from = 0, to = 5, length.out = 30)
Now we can evaluate the posterior at each of those points. We will store all of those combinations in a matrix of possibilities.
posterior = matrix(NA, nrow = 30, ncol = 30)
for(n in 1:30){
for(m in 1:30){
posterior[n,m] = f(mu[n],sigma[m])
}}
Now do not forget to normalize your posterior! Here is the code which will accomplish this:
mu.thiccness = mu[2] - mu[1]
sigma.thiccness = sigma[2] - sigma[1]
posterior = posterior/sum( posterior*mu.thiccness*sigma.thiccness )
Note: There is a mistake in "Statistical Rethinking": the author only sums the values and does not take the thiccness (grid spacing) into account.
Now you can display your posterior as a matrix,
View(posterior)
But even better is to visualize it. The posterior here can be visualized as a two-dimensional distribution, i.e. a surface. Here is some code to generate this picture.
persp( mu, sigma, posterior,
theta = 30, phi = 20, col = "red",
shade = 0.5, ticktype = "detailed" )
From this picture you can see that the posterior is peaked at its most likely estimate. Exactly what should happen.
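As a small follow-up sketch (my addition, not part of the original answer): once you have the normalized grid posterior, you can also read off approximate marginal distributions and posterior means by summing over the other dimension, reusing the mu, sigma, posterior and thiccness objects defined above.
marginal.mu = rowSums(posterior)*sigma.thiccness       # approximate marginal posterior of mu
marginal.sigma = colSums(posterior)*mu.thiccness       # approximate marginal posterior of sigma
sum(mu*marginal.mu*mu.thiccness)                       # approximate posterior mean of mu (should land near 10)
sum(sigma*marginal.sigma*sigma.thiccness)              # approximate posterior mean of sigma (should land near 2)
Each marginal is again a discrete approximation, so it sums to one once multiplied by its own grid spacing.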
51,665 | What is the "grid" in Bayesian grid approximations? | Grid approximations let you compute a discrete posterior approximation
The Cartesian grid used in this grid approximation is for a one-dimensional parameter space, so it consists of a set of vertices at evenly spaced points over the parameter range. If you were to use a Cartesian grid for a two-dimensional parameter space it would look like a standard square lattice, and for a three-dimensional parameter space it would look like a standard cubic lattice.
The idea of using the grid is that it gives you a discrete prior distribution with support on a finite number of points in the parameter space, which makes it simple to compute the corresponding posterior (which is also a discrete distribution over the vertices in the grid). Remember that when you do computing, the computer can only handle a finite number of calculations, so a grid approximation to a continuum lets you compute an answer. The discrete prior on the vertices of the grid approximates a prior on the larger continuum over the parameter range. So long as the grid is sufficiently "fine" relative to the changes in the true prior and likelihood, it should give you a good approximation to the continuous posterior it is designed to approximate.
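Here is a minimal one-dimensional sketch of that idea (my own illustration with a made-up binomial example, not taken from this answer): a grid of vertices over a probability parameter, a discrete prior on those vertices, and the resulting discrete posterior.
p.grid = seq(0, 1, length.out = 101)             # the grid of vertices over the parameter range
prior = rep(1, length(p.grid))                   # a discrete (here flat) prior on the vertices
likelihood = dbinom(6, size = 9, prob = p.grid)  # likelihood at each vertex, for 6 successes in 9 trials
posterior = likelihood*prior
posterior = posterior/sum(posterior)             # discrete posterior over the vertices
plot(p.grid, posterior, type = "b")
Making the grid finer (more vertices) brings this discrete posterior closer to the continuous Beta posterior it approximates.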
51,666 | ML modelling where the output affects the DGP | is ML recommended in decision-making processes which inherently changes the nature of the environment?
What is the alternative, given that we have to make decisions? Would HL (Human Learning) work better? Or NL (No Learning)?
I work in retail forecasting, and our forecasts are used in automatic replenishment in supermarkets. If our forecasts are too low, then not enough stock gets sent to the supermarket, so sales will be constrained by deficient stock ("censored" sales), and subsequent forecasts that learn from these censored sales will be biased low, leading again to lower stocks. A vicious circle that could end in a "silent delisting", when the system thinks there is no demand for a product and essentially does not restock any more.
However, the exact same mechanism could happen if there were a human rather than a machine doing the forecasting. Thus, the solution is not to take ML out of the loop, but to understand the feedback loops that tie the ML results back into new ML training data, and mitigate any adverse effects. In my case, that means monitoring stockouts and removing censored sales from the training data.
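A minimal sketch of that last mitigation (my addition; the data frame and column names are hypothetical, not from the original answer):
sales$censored = sales$stockout_flag | sales$on_hand_at_open == 0   # flag days where demand was censored by missing stock
train = subset(sales, !censored)                                    # fit the forecast model on uncensored days only
Alternatively, the censored days can be kept and handled with a censoring-aware likelihood instead of being dropped.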
51,667 | ML modelling where the output affects the DGP | That's a common problem with training ML models using data generated from a set-up where a ML model is already part of what happens. E.g. with search engine data, the results shown on the first page (and even better in first spot) are much more likely to be clicked than those shown further down. Additionally, there can be odd feedback loops in this kind of set-up, so one needs to be careful about such problems.
One way to avoid getting no information on the results that are too far down the list of search-engine results (or, in inventory management, on things that are out of stock) is to occasionally randomly move some results up. In the case of inventory management that would mean, instead of stocking 0 of a product predicted to not sell, occasionally randomly ordering some to find out what happens. However, that might need some safeguards, or it could get embarrassing (e.g. large quantities of perishable Christmas food newly in stock in February, a pile of summer clothes at the start of winter, etc.).
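A tiny sketch of that kind of exploration (my addition, purely illustrative; the function and the 5% exploration rate are made up):
epsilon = 0.05
order.qty = function(forecast) {
  if (forecast > 0) return(forecast)              # exploit: follow the forecast
  if (runif(1) < epsilon) return(sample(1:3, 1))  # explore: occasionally order a little anyway
  0
}
The exploration rate trades off the cost of occasionally ordering unwanted stock against the value of the demand information gained.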
Additionally, in some settings you still get useful feedback even if too little stock was ordered, if you obtain the right data. E.g. if you track whether stock-outs occur or even how many people searched for a product (but found it to be out of stock).
In summary, it's definitely a problem to be aware of, and one that can be addressed in many ways.
51,668 | Is the use of cutoffs for dichotomisation of biomarkers really that bad? | It's really that bad, as detailed in many of Patrick's and my writings. Think about it these ways, for starters: If you categorize a marker the loss of information is so great that you will need to go and collect more markers to make up for the loss. Why not instead get the most information out of a single marker? Then there is the issue that it is easy to show that the threshold for a marker is a function of the continuous values of all the other predictors. See the Information Loss chapter in BBR. For example, if the sex of the patient is an important factor with regard to the likelihood of disease, you'll find that a different biomarker threshold is needed for females vs. males.
Categorization simplifies decision making but only by making it worse. Many things only seem to be simple precisely because they are wrong.
We are taught to assess goodness of fit of models. A model that assumes a piecewise flat relationship between a biomarker and outcome will be easily shown to have poor fit to the data. For example, look at the log-likelihood for a piecewise flat model (i.e., one using thresholding) and a flexible but everywhere-smooth model.
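A hedged sketch of that comparison on simulated data (my addition, not from the original answer):
set.seed(1)
n = 500
marker = rnorm(n)
y = rbinom(n, 1, plogis(-1 + 1.5*marker))                              # outcome depends smoothly on the marker
fit.cut = glm(y ~ I(marker > 0), family = binomial)                    # piecewise flat: dichotomized marker
fit.smooth = glm(y ~ splines::ns(marker, df = 4), family = binomial)   # flexible, everywhere-smooth fit
logLik(fit.cut); logLik(fit.smooth); AIC(fit.cut, fit.smooth)
On data like these the smooth model should show a clearly better log-likelihood, illustrating the lack of fit of the thresholded model.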
51,669 | Is the use of cutoffs for dichotomisation of biomarkers really that bad? | Although cutoffs in biomarker values are admittedly not helpful for modeling within a single population (+1 to Frank Harrell's answer on that basis), they make sense when, together with extensive subject-matter knowledge, they can delineate fundamentally different populations that benefit from different therapy. In such situations, testing can classify most cases with high probability (based on separate high and low cutoffs) while assigning intermediate cases to further testing or more cautious application of the findings. Breast cancer, the example of the OP in which tumors are evaluated for estrogen receptor alpha (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2), is such a situation.
This hexbin plot illustrates the 2-dimensional distribution of the expression of HER2 versus ER among 1093 primary breast cancers, based on RNA sequencing data from The Cancer Genome Atlas. Note the logarithmic scales on both axes.
Density plots for ER and HER2 make the class distinctions clearer.
ER shows a clearly bimodal distribution with modal values differing by 2 orders of magnitude and few tumors having intermediate values. HER2 values show a major peak encompassing about 90% of tumors, but about 10% show substantially larger expression, up to 20-fold more than that of the major peak.
Decades of clinical and laboratory research have shown that these distributions represent fundamentally different biological classes of breast cancer, driven by different tumorigenic processes and responding best to different types of therapy. For example, with respect to tumors having high expression of HER2 (also called ERBB2), Wikipedia summarizes:
Amplification, also known as the over-expression of the ERBB2 gene, occurs in approximately 15-30% of breast cancers. It is strongly associated with increased disease recurrence and a poor prognosis; however, drug agents targeting HER2 in breast cancer have significantly positively altered the otherwise poor-prognosis natural history of HER2-positive breast cancer... HER2 is the target of the monoclonal antibody trastuzumab (marketed as Herceptin). Trastuzumab is effective only in cancers where HER2 is over-expressed.
Similarly, patients with high ER expression can benefit from "endocrine therapy" that interrupts cellular signaling pathways mediated by estrogens. But cases at the lower left of the hexbin plot represent "triple-negative" breast cancers, low in all of ER, PR and HER2, that do not respond to therapies that target either estrogen signaling or HER2 signaling.* They need different therapies.
In practice, expression of ER, PR and HER2 is evaluated at the protein level by immunohistochemistry (IHC) rather than by the RNA sequencing that provided data for this plot. IHC makes at best semi-quantitative assessments of protein expression. For example, the Allred score cited by the OP combines the intensity and extent of IHC staining into an 8-point scale.
Nevertheless, given the large differences between high- and low-expressing tumors with respect to levels of these particular proteins, it's generally possible to assign a tumor to one of those biologically distinct classes based on IHC. The current recommendation of the College of American Pathologists (CAP) is to use the percentage of tumor-cell nuclei showing ER expression as the criterion, with tumors having <1% called ER-negative and those with >10% ER-positive. Tumors having 1% to 10% on that scale are reported as "ER Low Positive" with specific cautions about therapeutic choices that might be based on that finding. Similar guidelines for HER2 suggest an additional type of assay, in-situ hybridization, when IHC results are equivocal.
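For illustration only (my addition, not from the original answer and not a clinical rule), the reporting categories described above could be coded from a hypothetical percent-positive variable like this; exact boundary handling should follow the cited CAP guideline:
er.category = cut(pct.er.positive.nuclei, breaks = c(-Inf, 1, 10, Inf), right = FALSE,
                  labels = c("ER-negative", "ER Low Positive", "ER-positive"))
table(er.category)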
In that respect, behavior in practice follows what Harrell says on page 258 of Regression Modeling Strategies: "Physicians sometimes profess to desiring a binary decision model, but if given a probability they will rightfully apply different thresholds for treating different patients or for ordering other diagnostic tests." Based on clinical outcome studies, the CAP guidelines identify levels of expression that have high probability of correct classification, and recommend further tests and caution otherwise.
Finally, the classifications based on ER, PR, and HER2 expression are only part of what's considered in choosing therapy. Those are considered together with the tumor's histological appearance and size, whether it has spread to lymph nodes or beyond, further gene-expression or genetic tests, and patient preferences for breast conservation to select a combination of surgery, radiation, and pre-operative and adjuvant (post-surgical) drug therapies. The National Comprehensive Cancer Network publishes a 200+ page book of Guidelines for breast cancer therapy based on such considerations, documenting the level of evidence behind its recommendations, with 700+ references to the literature.
Data and code:
From Firebrowse, select BRCA for breast cancer, click on the mRNASeq bar, and select illuminahiseq_rnaseqv2-RSEM_genes_normalized. Restrict to primary tumors (sample type 01). Put into a data frame, here called brcaRNAseqPrimDF. Then in R:
library(ggplot2)
## for hexbin
ggplot(brcaRNAseqPrimDF, aes(x=ESR1.2099, y=ERBB2.2064)) + scale_x_log10() + scale_y_log10() + geom_hex(bins=50) + xlab("ER") + ylab("HER2") + theme(aspect.ratio = 1)
## for density plots
pHER2 <- ggplot(brcaRNAseqPrimDF,aes(x=ERBB2.2064))+scale_x_log10()+geom_density()+xlab("HER2")+geom_rug(alpha=0.2)
pESR1 <- ggplot(brcaRNAseqPrimDF,aes(x=ESR1.2099))+scale_x_log10()+geom_density()+xlab("ER")+geom_rug(alpha=0.2)
library(gridExtra)
grid.arrange(pESR1,pHER2,ncol=2)
ESR1 is Estrogen Receptor alpha. ERBB2 is the currently accepted name for HER2.
*For simplicity I'm not trying to incorporate the third classic immunohistochemical marker, the progesterone receptor, as essentially all cases with low ER also have low progesterone receptor expression (called PGR.5241 in the data).
51,670 | Can duplicate examples create multi-collinearity? | As Christoph Hanck suggested, duplicate examples will not cause any problem of multi-collinearity. The multi-collinearity "problem" is caused by duplicate "columns", not duplicate "rows", in the data matrix.
Intuitively, rows are data samples and columns are features/measures; multi-collinearity means we have redundant features/measures. Think of measuring a person's height in different units, e.g., in cm and in inches.
On the other hand, it is perfectly normal to have "duplicated samples": as the other answer suggested, there are many cases where people have the same gender and education status.
If we have a data matrix $X$ in which two columns of $X$ are the same, or one column of $X$ can be derived as a linear combination of other columns, then $X'X$ will not be full rank and multi-collinearity will occur.
Regularization will help with multi-collinearity. Even a small amount of regularization (L1 or L2) resolves the indeterminacy; with L2 regularization the matrix to invert becomes $X'X + \lambda I$, which is full rank.
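A quick demonstration of both points (my addition): a duplicated column makes $X'X$ singular, and a small ridge penalty restores invertibility.
set.seed(1)
x1 = rnorm(10)
X = cbind(1, x1, x1)                 # third column duplicates the second
qr(crossprod(X))$rank                # rank 2 < 3 columns: X'X is singular
solve(crossprod(X) + 0.01*diag(3))   # the ridge-regularized matrix is invertible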
Related posts
What is rank deficiency, and how to deal with it?
Least Squares Regression Step-By-Step Linear Algebra Computation
What algorithm is used in linear regression?
51,671 | Can duplicate examples create multi-collinearity? | There is no problem with duplicate rows. Consider some applied examples, such as "returns to education", i.e., how much do people earn given their education, gender, experience etc. There can, and often will, be people in the dataset with the same gender, years of education and years of labor market experience. So there is also no fix necessary.
Note that multicollinearity implies that the regressor matrix does not have full column rank, which in turn is what we need for the $X'X$ matrix involved in the OLS estimator to be invertible, or, put differently, for all coefficients to be identifiable.
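A quick check of that point (my addition): duplicating rows leaves the column rank of the regressor matrix, and hence the invertibility of $X'X$, untouched.
X = cbind(1, c(1, 2, 3, 2))   # the second and fourth rows are duplicates
qr(crossprod(X))$rank         # still 2, i.e. full column rank
Only duplicated (or linearly dependent) columns would reduce this rank.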
Finding duplicate rows, if still desired, might be done as follows:
set.seed(1)
n <- 10
education <- sample(10:15, n, replace=T)
gender <- sample(0:1, n, replace=T)
(X <- cbind(education, gender))
duplicated(X)
> (X <- cbind(education, gender))
education gender
[1,] 10 0
[2,] 13 0
[3,] 10 0
[4,] 11 1
[5,] 14 1
[6,] 12 1
[7,] 15 1
[8,] 11 0
[9,] 12 0
[10,] 12 0
> duplicated(X)
[1] FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE TRUE
51,672 | Simulating a Coin toss [duplicate] | Somewhat related example: One way to generate 10 tosses of a coin
with probability $0.4$ of heads is to use rbinom:
set.seed(123); rbinom(10, 1, .4)
[1] 0 1 0 1 1 0 0 1 0 0
Another way is to use the binomial inverse CDF (quantile) function qbinom to transform uniform random numbers from runif into the desired Bernoulli distribution.
set.seed(123); qbinom(runif(10), 1, .4)
[1] 0 1 0 1 1 0 0 1 0 0
This suggests that R uses qbinom with runif to
get rbinom--in this instance.
However, for success probabilities greater than $0.5,$
it seems that R uses a variant of this method, and results starting with the same seed differ.
set.seed(123); rbinom(10, 1, .6)
[1] 1 0 1 0 0 1 1 0 1 1
set.seed(123); qbinom(runif(10), 1, .6)
[1] 0 1 1 1 1 0 1 1 1 1
If you use the same seed and you access pseudorandom numbers (as with runif) in exactly the same order
to do exactly the same procedure, you will get the same
result.
Another example: Means of 100 samples of size 10 from
the population $\mathsf{Norm}(\mu = 50, \sigma = 7).$
set.seed(1234)
a = replicate(100, mean(rnorm(10, 50, 7)))
summary(a)
Min. 1st Qu. Median Mean 3rd Qu. Max.
44.64 48.41 49.69 49.81 51.37 55.41
set.seed(1234)
x = rnorm(1000, 50, 7)
MAT = matrix(x, byrow = T, nrow=100)
a = rowMeans(MAT)
summary(a)
Min. 1st Qu. Median Mean 3rd Qu. Max.
44.64 48.41 49.69 49.81 51.37 55.41
However, if we omit byrow=T in making the matrix,
R will use its default, which is to fill the matrix by columns, and we will get a different summary (except, of course, for the mean of means).
set.seed(1234)
x = rnorm(1000, 50, 7)
MAT = matrix(x, nrow=100)
a = rowMeans(MAT)
summary(a)
Min. 1st Qu. Median Mean 3rd Qu. Max.
42.80 48.47 49.80 49.81 51.20 55.27
51,673 | Simulating a Coin toss [duplicate] | An historical note about sample is that it got recently modified for being biased in some extreme situations (as commented by Chris Haug). In earlier versions of R such as 3.4.4, still running on my own laptop, the outcome of the above would be the same as a cdf inversion (and as the complement of the standard Uniform draw):
> set.seed(1)
> rbinom(10,1,0.5)
[1] 0 0 1 1 0 1 1 1 1 0
> set.seed(1)
> sample(c(0,1), 10, replace = TRUE)
[1] 0 0 1 1 0 1 1 1 1 0
> set.seed(1)
> qbinom(runif(10),1,0.5) #inverse cdf
[1] 0 0 1 1 0 1 1 1 1 0
> set.seed(1)
> 1*(runif(10)>0.5) #complement!
[1] 0 0 1 1 0 1 1 1 1 0
When checking the C code behind the R function rbinom, the adopted approach relies on a single Uniform when $np<30$:
/* inverse cdf logic for mean less than 30 */
repeat {
ix = 0;
f = qn;
u = unif_rand();
repeat {
if (u < f)
goto finis;
if (ix > 110)
break;
u -= f;
ix++;
f *= (g / ix - r);
and a much more involved resolution otherwise ($np\ge 30$), resolution including an accept-reject step,
/*------- np = n*p >= 30 : ----- */
repeat {
u = unif_rand() * p4;
v = unif_rand();
hence a random number of Uniforms. On the other hand, sample (in a version of 2017!) uses a cascade of C functions:
if (replace) {
int i, nc = 0;
for (i = 0; i < n; i++) if(n * p[i] > 0.1) nc++;
if (nc > 200)
walker_ProbSampleReplace(n, p, INTEGER(x), k, INTEGER(y));
else
ProbSampleReplace(n, p, INTEGER(x), k, INTEGER(y));
} else
ProbSampleNoReplace(n, p, INTEGER(x), k, INTEGER(y));
For instance, ProbSampleReplace is based on a single Uniform call:
/* compute the sample */
for (i = 0; i < nans; i++) {
rU = unif_rand();
for (j = 0; j < nm1; j++) {
if (rU <= p[j])
and the other ones as well, which is not to say that they return the same outcome as rbinom!
51,674 | Simulating a Coin toss [duplicate] | I tried getting to the source code for both, but couldn't find it.
I did, however, find the references for the building of the two algorithms. They do not use the same references, so it is reasonable that they do not generate them in a similar manner. Indeed, the sample() function briefly says it uses an easier way to handle random numbers, presumably through different generation.
51,675 | Coronavirus growth rate and its possibly spurious resemblance to vapor pressure model | "If all you have is a hammer, everything looks like a nail." The dataset you have is small, possibly underrepresented, and of unknown quality, since it is argued that many cases could have not been diagnosed. You observe exponential growth, a common phenomenon in many natural and artificial processes. The curve fits well, but I'd bet that other similar curves would also fit well.
Notice that the Antoine equation you mention is a very flexible one, since it can account for constant ($\alpha$), exponential ($b/T_L$), and linear ($c\log T_L$) growth curves. This makes it easy to fit to many datasets.
Moreover, with this kind of data, it may be harder to model it at early stages. Notice that you could fit a linear growth model to the earliest period. Later, a quadratic may fit just fine. Later still, an exponential would fit better, where the exact rate may be hard to catch, since by definition "the more it grows, the more it grows", and it may easily speed up quite rapidly. It may be easy to fit some curve to such data, but the best test of it would be a test of time, i.e. validating it on future data.
51,676 | Coronavirus growth rate and its possibly spurious resemblance to vapor pressure model | The growth of infected cases $y$ is more or less exponential but the growth rate $c$ is not constant.
$$ \frac{\partial y}{\partial t} \approx c y$$
For instance, note in the graph how the change in cases from day to day depends on the number of cases in a particular day, and the increase in cases is larger when the current cases are large. But, instead of a linear relation as with simple exponential growth, you get some curve that decreases in slope as $y$ becomes larger (or, equivalently, as time progresses; which of the two is the cause is not clear here).
There are many types of equations that model exponential growth where the growth rate $c$ is not constant. Many of these models look a lot the same when you are observing the growth for only a short period of time, because then the variation in growth is not large and is easily approximated by one model or the other. In our case, a simple polynomial fit is actually doing the best (in terms of the smallest sum of squared residuals).
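For example (my illustration, not from the original answer), logistic growth is one standard model of this kind, in which the relative growth rate declines as $y$ approaches a ceiling $K$:
$$ \frac{\partial y}{\partial t} = c \, y \left(1 - \frac{y}{K}\right), \qquad \frac{y^\prime}{y} = c\left(1 - \frac{y}{K}\right) \to 0 \text{ as } y \to K $$
Over a short early window this is nearly indistinguishable from plain exponential growth with rate $c$.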
But this fit should only be considered as an empirical relationship. There is not a strong underlying meaning, and if there is any meaning* then it is not tested by such fits and entirely hypothetical.
With these 18 data points, we only know that the relative growth has made some peak above 100% per day and after that decreased.
Is it because of some mechanism how the disease spreads, or is it because how the cases are reported (is the data clean)? When multiple interpretations are possible for the same curve (and the slight variations in residuals make not much difference in deciding which one makes a better explanation than the other) then we need more (and different) measurements in order to test the different interpretations.
Question: Given this limited data what can we infer about the corona virus growth rate and how can we reject the vapor pressure model as a mere spurious correlation.
There is indeed limited data to say which model is correct. However, for the case of the VPM model we can say that it is spurious and incorrect (we need not more data for this). We can say this using: logic of the mechanism (it doesn't make sense), expert knowledge, previous experience, the fact that curves look a lot the same on a small range (increasing the coincidence that curves look the same).
*You could say that a relation like $ \frac{\partial y}{\partial t} \approx c y^n$ somewhat makes sense as some sort of growth based on a power of $y$ (a toy model would be the growth of a circle where the increase in the area of the circle relates to the circumference of the circle).
Comparison with larger data range
When we use data with a larger range (e.g. this data from wikipedia, which has at the time of writing points of 27 days and days 5-23 correspond to your data) then we can see how your VPM curve could coincidentally seem to fit(/explain) the data.
The VPM model is in the small range (from 5 to 23 days) approximately similar to a linear/polynomial model:
compare:
$$\left[ log(y) \right ]^\prime = \frac{y^\prime }{y} \approx a + bt$$
with
$$\left[ log(y) \right ]^\prime = \frac{y^\prime }{y} \approx a/t^2 + b/t$$
the latter can be approximately linear in a small range (note that the VPM actually already fails for the small values of your 18 data points, which you can see well on the log-scale, but these small values count less strongly in the sum of squared residuals; what the VPM seems to do well is fit the little jump with the 100% increase; in hindsight, we can say that this should be considered just fitting noise)
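A purely illustrative sketch of that similarity (my addition; the coefficients are made up and are not fitted to the outbreak data):
t = 5:23
plot(t, 0.9 - 0.03*t, type = "l", ylim = c(0, 1), ylab = "relative growth per day")  # a + b*t
lines(t, -9/t^2 + 4.5/t, lty = 2)                                                    # a/t^2 + b/t
Over this short window the two growth-rate curves have a broadly similar decreasing shape, which is why such fits are hard to tell apart.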
Predictions/extrapolation
Note 1: I am well aware of spurious correlation. But with only 3 weeks of data, we may not be able to detect a different trend this early. Hence I am reporting the best fit regardless.
It is not very useful to fit curves in this way. It doesn't tell much information when there is not a good underlying theory.
In the image below two models are extrapolated and they give a final number of 45.3k or 47.8k cases and at that point (after only 2 or 3 days) the growth rate is zero (according to those models).
This extrapolation is not very useful. We don't know whether the model is truly like a curve that happens to fit well (and we could devise many other curves that would fit equally reasonable).
There may be more parameters involved that we do not take into account. The fits with those polynomials are not expressing what happens outside the range. It is not difficult to imagine that the growth will be for a long time nonzero, this is a scenario that is not modeled by a 'random' fit with a polynomial or some other method like a curve fitting tool that just checks a limited set (67) of various models (that may have nothing to do with the situation).
This optimistic extrapolation with polynomial models looks even more dramatic when we look at the absolute growth of cases. Currently, this is in the ten thousands per day. The trend in the last days does not show that this is going to decrease so quickly, and it seems like we are going to hit above 50 000 cases.
$$ \frac{\partial y}{\partial t} \approx c y$$
For instance, note in the graph how the change in c | Coronavirus growth rate and its possibly spurious resemblance to vapor pressure model
The growth of infected cases $y$ is more or less exponential but the growth rate $c$ is not constant.
$$ \frac{\partial y}{\partial t} \approx c y$$
For instance, note in the graph how the change in cases from day to day depends on the number of cases in a particular day and the increase in cases is larger when the current cases are large. But, instead of a linear relation as with simple exponential growth, you get some curve that decreases in slope as $y$ becomes larger (or equivalent when time is further, the cause is not clear here).
There are many type of equations that model exponential growth where the growth rate $c$ is not constant. Many of these models look a lot the same when you are observing the growth for only a short period of time. Because then the variation in growth is not large and is easily approximated by one or the other. In our case, a simple polynomial fit is actually doing the best (in terms of less sum of square residuals).
But this fit should only be considered as an empirical relationship. There is not a strong underlying meaning, and if there is any meaning* then it is not tested by such fits and entirely hypothetical.
With these 18 data points, we only know that the relative growth has made some peak above 100% per day and after that decreased.
Is it because of some mechanism how the disease spreads, or is it because how the cases are reported (is the data clean)? When multiple interpretations are possible for the same curve (and the slight variations in residuals make not much difference in deciding which one makes a better explanation than the other) then we need more (and different) measurements in order to test the different interpretations.
Question: Given this limited data what can we infer about the corona virus growth rate and how can we reject the vapor pressure model as a mere spurious correlation.
There is indeed limited data to say which model is correct. However, for the case of the VPM model we can say that it is spurious and incorrect (we need not more data for this). We can say this using: logic of the mechanism (it doesn't make sense), expert knowledge, previous experience, the fact that curves look a lot the same on a small range (increasing the coincidence that curves look the same).
*You could say that a relation like $ \frac{\partial y}{\partial t} \approx c y^n$ somewhat makes sense as some sort of growth based on a power of $y$ (a toy model would be the growth of a circle where the increase in the area of the circle relates to the circumference of the circle).
Comparison with larger data range
When we use data with a larger range (e.g. this data from wikipedia, which has at the time of writing points of 27 days and days 5-23 correspond to your data) then we can see how your VPM curve could coincidentally seem to fit(/explain) the data.
The VPM model is in the small range (from 5 to 23 days) approximately similar to a linear/polynomial model:
compare:
$$\left[ log(y) \right ]^\prime = \frac{y^\prime }{y} \approx a + bt$$
with
$$\left[ log(y) \right ]^\prime = \frac{y^\prime }{y} \approx a/t^2 + b/t$$
the latter can be approximately linear over a small range (note that the VPM actually already fails for the small values among your 18 data points, which you can see well on the log scale, but these small values count less strongly in the sum of squared residuals; what the VPM seems to do well is fit the little jump with the 100% increase; in hindsight, we can say that this should be considered just fitting noise)
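To see the near-linearity numerically, here is a minimal Python sketch, where the coefficients $a$ and $b$ of the VPM-style rate $a/t^2 + b/t$ are made up for illustration:
import numpy as np

# Made-up coefficients for the VPM-style growth rate a/t^2 + b/t
a, b = -4.0, 2.5
t = np.arange(5, 24, dtype=float)
vpm_rate = a / t**2 + b / t

# Best straight-line approximation over the same window of days 5..23
slope, intercept = np.polyfit(t, vpm_rate, 1)
linear_rate = intercept + slope * t

# Over this short window a straight line tracks the curve reasonably well
print("max |difference|:", np.max(np.abs(vpm_rate - linear_rate)))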
Predictions/extrapolation
Note 1: I am well aware of spurious correlation. But with only 3 weeks of data, we may not be able to detect a different trend this early. Hence I am reporting the best fit regardless.
It is not very useful to fit curves in this way. It doesn't give much information when there is not a good underlying theory.
In the image below two models are extrapolated and they give a final number of 45.3k or 47.8k cases and at that point (after only 2 or 3 days) the growth rate is zero (according to those models).
This extrapolation is not very useful. We don't know whether the model is truly like the curve that happens to fit well (and we could devise many other curves that would fit equally well).
There may be more parameters involved that we do not take into account. The fits with those polynomials do not express what happens outside the observed range. It is not difficult to imagine that the growth will remain nonzero for a long time, a scenario that is not modeled by a 'random' fit with a polynomial or some other method, like a curve fitting tool that just checks a limited set (67) of various models (that may have nothing to do with the situation).
This optimistic extrapolation with polynomial models looks even more dramatic when we look at the absolute growth of cases. Currently, this is in the ten thousands per day. The trend in the last days does not show that this is gonna decrease so quickly and it seems like we are gonna hit above 50 000 cases. | Coronavirus growth rate and its possibly spurious resemblance to vapor pressure model
The growth of infected cases $y$ is more or less exponential but the growth rate $c$ is not constant.
$$ \frac{\partial y}{\partial t} \approx c y$$
For instance, note in the graph how the change in c |
51,677 | why there is no βerrorβ term in survival analysis? | The distributional assumptions behind a relative risk model are hidden in the baseline hazard function $h_0(t)$. If you specify a form for this function, then you completely specify the distribution of your data.
For example, $h_0(t) = \phi \psi t^{\phi - 1}$ corresponds to the Weibull distribution. | why there is no βerrorβ term in survival analysis? | The distributional assumptions behind a relative risk model are hidden in the baseline hazard function $h_0(t)$. If you specify a form for this function, then you completely specify the distribution o | why there is no βerrorβ term in survival analysis?
The distributional assumptions behind a relative risk model are hidden in the baseline hazard function $h_0(t)$. If you specify a form for this function, then you completely specify the distribution of your data.
For example, $h_0(t) = \phi \psi t^{\phi - 1}$ corresponds to the Weibull distribution. | why there is no βerrorβ term in survival analysis?
The distributional assumptions behind a relative risk model are hidden in the baseline hazard function $h_0(t)$. If you specify a form for this function, then you completely specify the distribution o |
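A small numerical sketch of this in Python, with made-up values of $\phi$ and $\psi$: once $h_0(t) = \phi \psi t^{\phi - 1}$ is specified, the whole survival distribution follows, since $S(t) = \exp(-\int_0^t h_0(u)\,du)$, which here equals the Weibull survival function $\exp(-\psi t^{\phi})$:
import numpy as np

phi, psi = 1.5, 0.2                      # made-up Weibull parameters
t = np.linspace(1e-6, 5.0, 2001)
h0 = phi * psi * t ** (phi - 1)          # the specified baseline hazard

# Cumulative hazard via a trapezoidal integral, then S(t) = exp(-H(t))
H = np.concatenate(([0.0], np.cumsum((h0[1:] + h0[:-1]) / 2 * np.diff(t))))
S_numeric = np.exp(-H)
S_closed = np.exp(-psi * t ** phi)       # closed-form Weibull survival

print("max |difference|:", np.max(np.abs(S_numeric - S_closed)))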
51,678 | why there is no βerrorβ term in survival analysis? | There absolutely is an "error" in survival analysis.
You can define the "time to event" according to a probability model with some $$g(T) = b (X, t) + \epsilon(X,t)$$
where $g$ would usually be something like a log transform. Of course requiring $\epsilon$ to be normal, identically distributed, or even stationary is a rather strong assumption that just doesn't play out in real life. But if we allow $\epsilon$ to be quite general, the Cox proportional hazard model is a special case of the above display. Is this an abuse of notation? Maybe. Note we are not guaranteed any of the desirable properties of independence between the parameters. But if we think carefully about what an error is, it's not that it doesn't exist, it's just not a helpful notation to facilitate scientific investigation.
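A minimal simulation of that display in Python (coefficients and scale made up), for the fully parametric case where the error has a fixed extreme-value form so that $T$ is Weibull:
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.binomial(1, 0.5, size=n)           # a single binary covariate
b0, b1, sigma = 2.0, -0.5, 0.7             # made-up coefficients and scale

# Standard minimum-extreme-value errors: the log of an Exponential(1) draw
eps = np.log(rng.exponential(1.0, size=n))
log_t = b0 + b1 * x + sigma * eps          # "linear predictor + error" on the log scale
t = np.exp(log_t)                          # Weibull-distributed event times

print("median time, x=0:", np.median(t[x == 0]))
print("median time, x=1:", np.median(t[x == 1]))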
This "fully parametric" approach can be very efficient when it's true. A fully parametric "Weibull" model is actually a lot like a linear regression model for survival data, where the scale parameter is a lot like an error variance (dispersion parameter)
You could predict survival time for a given subject, subtract that from observed survival time, and this "residual" can be flexibly modeled using semiparametric splines to describe the distribution and mean-variance relationship. More commonly, we use the difference of predicted and observed cumulative hazard (Schoenfeld) residuals and their theoretical basis to infer the appropriateness of the proportional hazards assumption.
Theoretically, $\hat{S}^{-1}(T) \sim B(0,1)$. That is, the survival times under a quantile-transform, follow a stationary Brownian Bridge. So there is a relation between the probability model and a fundamentally random process. One could inspect diagnostic plots to assess the adequacy of $\hat{S}$ as an estimator of $S$. | why there is no βerrorβ term in survival analysis? | There absolutely is an "error" in survival analysis.
You can define the "time to event" according to a probability model with some $$g(T) = b (X, t) + \epsilon(X,t)$$
where $g$ would usually be some | why there is no βerrorβ term in survival analysis?
There absolutely is an "error" in survival analysis.
You can define the "time to event" according to a probability model with some $$g(T) = b (X, t) + \epsilon(X,t)$$
where $g$ would usually be something like a log transform. Of course requiring $\epsilon$ to be normal, identically distributed, or even stationary is a rather strong assumption that just doesn't play out in real life. But if we allow $\epsilon$ to be quite general, the Cox proportional hazard model is a special case of the above display. Is this an abuse of notation? Maybe. Note we are not guaranteed any of the desirable properties of independence between the parameters. But if we think carefully about what an error is, it's not that it doesn't exist, it's just not a helpful notation to facilitate scientific investigation.
This "fully parametric" approach can be very efficient when it's true. A fully parametric "Weibull" model is actually a lot like a linear regression model for survival data, where the scale parameter is a lot like an error variance (dispersion parameter)
You could predict survival time for a given subject, subtract that from observed survival time, and this "residual" can be flexibly modeled using semiparametric splines to describe the distribution and mean-variance relationship. More commonly, we use the difference of predicted and observed cumulative hazard (Schoenfeld) residuals and their theoretical basis to infer the appropriateness of the proportional hazards assumption.
Theoretically, $\hat{S}^{-1}(T) \sim B(0,1)$. That is, the survival times under a quantile-transform, follow a stationary Brownian Bridge. So there is a relation between the probability model and a fundamentally random process. One could inspect diagnostic plots to assess the adequacy of $\hat{S}$ as an estimator of $S$. | why there is no βerrorβ term in survival analysis?
There absolutely is an "error" in survival analysis.
You can define the "time to event" according to a probability model with some $$g(T) = b (X, t) + \epsilon(X,t)$$
where $g$ would usually be some |
51,679 | why there is no βerrorβ term in survival analysis? | Simple Linear Regression Model
\begin{equation}
Y_i=B_0+B_1 X_i+\epsilon_i
\end{equation}
Where
$Y_i$ is the value of the response variable in the ith trial
$\epsilon_i$ is a random error term with mean $E[\epsilon_i]=0$ and variance $\sigma^2[\epsilon_i]=\sigma^2$
\begin{equation}
E[Y_i ]=B_0+B_1 X_i
\end{equation}
Consider the simple linear regression model
\begin{equation}
Y_i=B_0+B_1 X_i+\epsilon_i\\ Y_i=0,1 \end{equation}
Where the outcome $Y_i$ is binary, taking on the value of either 0 or 1. The expected response $E[Y_i]$ has a special meaning in this case. Since $E[\epsilon_i]=0$ we have:
\begin{equation} E[Y_i ]=B_0+B_1 X_i
\end{equation}
Consider $Y_i$ to be a Bernoulli random variable for which we can state the probability distribution as follows:
\begin{equation} P(Y_i=1)=\pi_i \end{equation}
\begin{equation} P(Y_i=0)=1-\pi_i \end{equation}
\begin{equation}
E[Y_i ]=B_0+B_1 X_i= \pi_i
\end{equation}
Simple Logistic Regression Model
First, we require a formal statement of the simple logistic regression model. Recall that when the response variable is binary, taking on the values 1 and 0 with probabilities $\pi$ and $1-\pi$, respectively, $Y$ is a Bernoulli random variable with parameter $E[Y]=\pi$. We can state the simple logistic regression model in the following fashion:
$Y_i$ are independent Bernoulli random variables with expected
value $E[Y_i ] =\pi_i$, where:
\begin{equation}
E[Y_i ] =\pi_i= \frac{\exp(B_0+B_1 X_i)}{1+\exp(B_0+B_1 X_i)}
\end{equation}
Poisson Distribution
\begin{equation}
f(Y)=\frac{\mu^Y \exp(-\mu)}{Y!}
\end{equation}
$E[Y]=\mu$
$\sigma^2 [Y]=\mu$
Poisson Regression Model
The Poisson regression model, like any nonlinear regression model, can be stated as follows:
\begin{equation}
Y_i=E[Y_i ]+\epsilon_i, \qquad i=1,2,\dots,n
\end{equation}
The mean response for the $i$th case, denoted now by $\mu_i$ for simplicity, is assumed as always to be a function of the set of predictor variables $X_1,\dots,X_{p-1}$. We use the notation $\mu(X_i,B)$ to denote the function that relates the mean response $\mu_i$ to $X_i$, the values of the predictor variables for case $i$, and $B$, the values of the regression coefficients. Some commonly used functions for Poisson regression are:
\begin{equation}
\mu_i= \mu(X_i,B)=X_i' B
\end{equation}
\begin{equation} \mu_i= \mu(X_i,B)=\exp(X_i' B) \end{equation}
\begin{equation} \mu_i= \mu(X_i,B)=\log_e(X_i' B) \end{equation}
Models of this type are called Generalized Linear Models (GLMs).
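A small Python sketch of these mean functions with made-up coefficients: the covariates enter only through $\mu_i$, and the randomness comes from the assumed distribution itself rather than from a separate additive error:
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=5)
B0, B1 = 0.3, 0.8
eta = B0 + B1 * x                              # linear predictor X_i' B

mu_normal  = eta                               # identity link: E[Y] = X_i' B
pi_logit   = np.exp(eta) / (1 + np.exp(eta))   # logistic mean: E[Y] = pi_i
mu_poisson = np.exp(eta)                       # log link:      E[Y] = exp(X_i' B)

y_normal    = rng.normal(mu_normal, 1.0)       # Normal: mean plus a Normal error
y_bernoulli = rng.binomial(1, pi_logit)        # Bernoulli(pi_i): no additive error term
y_poisson   = rng.poisson(mu_poisson)          # Poisson(mu_i): no additive error term

print(np.round(pi_logit, 3), y_bernoulli)
print(np.round(mu_poisson, 3), y_poisson)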
Survival analysis
Consider an AFT model with one predictor X. The model can be expressed on the log scale as:
\begin{equation}
\log (T)= a_0+a_1 X+\epsilon
\end{equation}
Where $\epsilon$ is a random error following some distribution.
Distribution of $T$: Exponential, Weibull, Log-logistic or Lognormal
Corresponding distribution of $\log(T)$: Extreme value, Extreme value, Logistic or Normal (respectively)
but cox proportional hazard model, The distributional assumptions behind hidden in the baseline hazard function $h_0 (t)$ | why there is no βerrorβ term in survival analysis? | Simple Linear Regression Model
\begin{equation}
Y_i=B_0+B_1 X_i+Ξ΅_i
\end{equation}
Where
$Y_i$ is the value of the response variable in the ith trial
$Ξ΅_i $ is a random error term with mean $E[Ξ΅_i] | why there is no βerrorβ term in survival analysis?
Simple Linear Regression Model
\begin{equation}
Y_i=B_0+B_1 X_i+\epsilon_i
\end{equation}
Where
$Y_i$ is the value of the response variable in the ith trial
$\epsilon_i$ is a random error term with mean $E[\epsilon_i]=0$ and variance $\sigma^2[\epsilon_i]=\sigma^2$
\begin{equation}
E[Y_i ]=B_0+B_1 X_i
\end{equation}
Consider the simple linear regression model
\begin{equation}
Y_i=B_0+B_1 X_i+\epsilon_i\\ Y_i=0,1 \end{equation}
Where the outcome $Y_i$ is binary, taking on the value of either 0 or 1. The expected response $E[Y_i]$ has a special meaning in this case. Since $E[\epsilon_i]=0$ we have:
\begin{equation} E[Y_i ]=B_0+B_1 X_i
\end{equation}
Consider $Y_i$ to be a Bernoulli random variable for which we can state the probability distribution as follows:
\begin{equation} P(Y_i=1)=\pi_i \end{equation}
\begin{equation} P(Y_i=0)=1-\pi_i \end{equation}
\begin{equation}
E[Y_i ]=B_0+B_1 X_i= \pi_i
\end{equation}
Simple Logistic Regression Model
First, we require a formal statement of the simple logistic regression model. Recall that when the response variable is binary, taking on the values 1 and 0 with probabilities $\pi$ and $1-\pi$, respectively, $Y$ is a Bernoulli random variable with parameter $E[Y]=\pi$. We can state the simple logistic regression model in the following fashion:
$Y_i$ are independent Bernoulli random variables with expected
value $E[Y_i ] =\pi_i$, where:
\begin{equation}
E[Y_i ] =\pi_i= \frac{\exp(B_0+B_1 X_i)}{1+\exp(B_0+B_1 X_i)}
\end{equation}
Poisson Distribution
\begin{equation}
f(Y)=\frac{\mu^Y \exp(-\mu)}{Y!}
\end{equation}
$E[Y]=\mu$
$\sigma^2 [Y]=\mu$
Poisson Regression Model
The Poisson regression model, like any nonlinear regression model, can be stated as follows:
\begin{equation}
Y_i=E[Y_i ]+\epsilon_i, \qquad i=1,2,\dots,n
\end{equation}
The mean response for the $i$th case, denoted now by $\mu_i$ for simplicity, is assumed as always to be a function of the set of predictor variables $X_1,\dots,X_{p-1}$. We use the notation $\mu(X_i,B)$ to denote the function that relates the mean response $\mu_i$ to $X_i$, the values of the predictor variables for case $i$, and $B$, the values of the regression coefficients. Some commonly used functions for Poisson regression are:
\begin{equation}
\mu_i= \mu(X_i,B)=X_i' B
\end{equation}
\begin{equation} \mu_i= \mu(X_i,B)=\exp(X_i' B) \end{equation}
\begin{equation} \mu_i= \mu(X_i,B)=\log_e(X_i' B) \end{equation}
Models of this type are called Generalized Linear Models (GLMs).
Survival analysis
Consider an AFT model with one predictor X. The model can be expressed on the log scale as:
\begin{equation}
\log (T)= a_0+a_1 X+\epsilon
\end{equation}
Where $\epsilon$ is a random error following some distribution.
Distribution of $T$: Exponential, Weibull, Log-logistic or Lognormal
Corresponding distribution of $\log(T)$: Extreme value, Extreme value, Logistic or Normal (respectively)
but cox proportional hazard model, The distributional assumptions behind hidden in the baseline hazard function $h_0 (t)$ | why there is no βerrorβ term in survival analysis?
Simple Linear Regression Model
\begin{equation}
Y_i=B_0+B_1 X_i+Ξ΅_i
\end{equation}
Where
$Y_i$ is the value of the response variable in the ith trial
$Ξ΅_i $ is a random error term with mean $E[Ξ΅_i] |
51,680 | why there is no βerrorβ term in survival analysis? | This Answer is limited to frequentist statistics and statistical model without random effect.
In fact, statistical modeling is about finding the conditional distribution of the response variable conditional on fixed values of the covariates, i.e., the distribution of $Y|X=x$. When writing the statistical model, following these 3 steps will keep you from mathematical mistakes.
Find the form of the distribution of $Y|X=x$.
List the parameters that determine the distribution.
Write down how the covariates determine the parameters through the unknown constant parameters.
Example 1: Subject = 5-16 year old boys (indexed by $i$), response variable $Y$ = height, Covariate $X$ = age.
E1-1: Distribution form: $Y_i|X_i \sim Normal$
E1-2: Parameters for normal: mean $\mu_i$ and variance $\sigma_i^2$
E1-3: Functions for parameters: $\mu_i = \mu_0+\beta X_i$ and $\sigma_i^2=\sigma^2$
It is the same as $Y_i = \mu_0 +\beta X_i +\epsilon_i$ and $\epsilon_i \sim N(0,\sigma^2)$
Example 2: Subject = Men older than 65 years (indexed by $i$), response variable ($Y$) = death or alive in the next full year, covariate ($X$) = age.
E2-1: Distribution form: $Y_i=\text {death}|X_i$ follows Bernoulli with parameter $\pi_i$.
E2-2: Parameter for Bernoulli: $\pi_i$, the probability that i-th person dies in next year
E2-3: Function of parameters: $\pi_i = \frac{e^{\beta_0+\beta_1X_i}}{1+e^{\beta_0+\beta_1X_i}}$ or $log(\frac {\pi_i}{1-\pi_i}) = \beta_0+\beta_1X_i$
It is logistic regression and no $\epsilon$ after $\beta_0+\beta_1X_i$.
Example 3: 10-minute intervals at a specific street from 6:00 am to 9:00 am, $Y_i=$ # of cars that passed the street at a specific place, $X=int((\text{beginning time - 6:00 in minutes})/10)$
E3-1: Distribution: Poisson
E3-2: Parameter: $\lambda_i$
E3-3: Function of parameter: $\lambda_i = e^{\beta_0+\beta_1X_i}$ or $log(\lambda_i)=\beta_0+\beta_1X_i$
It is Poisson regression and also no $\epsilon$ after $\beta_0+\beta_1X_i$.
OP's question:
Distribution: Any probability distribution belonging to the proportional hazards family
Parameters: Depend on the distribution, but we do not need to know them because of the proportional hazards assumption.
Function of parameter: $$h_i(t) = h_0(t) \exp \left ( \sum_{k = 1}^p \beta_k z_{ik} \right )$$ Need to know the fact that hazard determines the probability distribution of the survival time.
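A quick numerical check of this in Python (baseline hazard and coefficient made up): scaling the hazard by $\exp(\beta z)$ is the same as raising the baseline survival function to the power $\exp(\beta z)$, so specifying the hazard fixes the whole distribution of the survival time:
import numpy as np

t = np.linspace(1e-6, 5, 2001)
h0 = 0.3 * np.sqrt(t)                 # an arbitrary made-up baseline hazard
beta, z = 0.7, 1.0
hr = np.exp(beta * z)                 # the relative-risk factor exp(beta * z)

def survival(hazard):
    # survival function from a hazard curve via a trapezoidal cumulative integral
    H = np.concatenate(([0.0], np.cumsum((hazard[1:] + hazard[:-1]) / 2 * np.diff(t))))
    return np.exp(-H)

S0 = survival(h0)                     # baseline survival
S1 = survival(h0 * hr)                # survival for a subject with covariate z

print("max |S1 - S0**hr|:", np.max(np.abs(S1 - S0 ** hr)))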
Obviously, there is no position for $\epsilon$.
If you still do not believe there is no $\epsilon$ in logistic, Poisson and Cox proportional hazard model, you can consider following two questions.
In the linear model, $\epsilon$ appears in the process of model establishment. In the final conclusions, we can and need to estimate the variance of $\epsilon$. We can also estimate $\epsilon$ itself by $Y-\hat Y$. We also know that $Var(\hat \beta) = (X'X)^{-1}\sigma^2$.
In the other 3 kinds of models, if you insist there is an $\epsilon$, why does it not appear in the process of model establishment? What is the effect of $\epsilon$ on the model? Did we, and could we, estimate anything related to $\epsilon$?
So if you insist there is an $\epsilon$ in those 3 kinds of models, then $\epsilon$ acts like a ghost: when you want it, it appears; when you do not, it disappears. But in mathematical statistics, this kind of ghost is not allowed in the model.
You may ask why it is acceptable that the baseline hazard function $\lambda_0(t)$ also appears in the model specification and disappears in the model fitting process and final results. The reason is that in the process of model establishment, $\lambda_0(t)$ is cancelled under the assumption of proportional hazards. If you are really interested in $\lambda_0(t)$, you can get its estimate, unlike $\epsilon$, which cannot be estimated.
Why does the linear model $Y\sim N(X\beta, \sigma^2)$ have an alternative expression $Y=X\beta + \epsilon$, $\epsilon \sim N(0,\sigma^2)$, while the other models do not have an alternative expression with $\epsilon$?
(will continue) | why there is no βerrorβ term in survival analysis? | This Answer is limited to frequentist statistics and statistical model without random effect.
In fact, the statistical modeling is to find the conditional distribution of response variable conditiona | why there is no βerrorβ term in survival analysis?
This Answer is limited to frequentist statistics and statistical model without random effect.
In fact, the statistical modeling is to find the conditional distribution of response variable conditional on fixed values of the covariates, i.e., distribution of $Y|X=x$. When writing the statistical model, abiding following 3 steps will keep you from mathematical mistakes.
Find the form of the distribution of $Y|X=x$.
List the parameters that determine the distribution.
Write down how the covariate determine the parameters through the unknown constant parameters.
Example 1: Subject = 5-16 year old boys (indexed by $i$), response variable $Y$ = height, Covariate $X$ = age.
E1-1: Distribution form: $Y_i|X_i \sim Normal$
E1-2: Parameters for normal: mean $\mu_i$ and variance $\sigma_i^2$
E1-3: Functions for parameters: $\mu_i = \mu_0+\beta X_i$ and $\sigma_i^2=\sigma^2$
It is the same as $Y_i = \mu_0 +\beta X_i +\epsilon_i$ and $\epsilon_i \sim N(0,\sigma^2)$
Example 2: Subject = Men older than 65 years (indexed by $i$), response variable ($Y$) = death or alive in the next full year, covariate ($X$) = age.
E2-1: Distribution form: $Y_i=\text {death}|X_i$ follows Bernoulli with parameter $\pi_i$.
E2-2: Parameter for Bernoulli: $\pi_i$, the probability that i-th person dies in next year
E2-3: Function of parameters: $\pi_i = \frac{e^{\beta_0+\beta_1X_i}}{1+e^{\beta_0+\beta_1X_i}}$ or $log(\frac {\pi_i}{1-\pi_i}) = \beta_0+\beta_1X_i$
It is logistic regression and no $\epsilon$ after $\beta_0+\beta_1X_i$.
Example 3: 10 minuets at a specific street from 6:00 am to 9:00 am, $Y_i=$ # of cars passed the street at specific place, $X=int((\text{begining time - 6:00 in minuets})/10)$
E3-1: Distribution: Poisson
E3-2: Parameter: $\lambda_i$
E3-3: Function of parameter: $\lambda_i = e^{\beta_0+\beta_1X_i}$ or $log(\lambda_i)=\beta_0+\beta_1X_i$
It is Poisson regression and also no $\epsilon$ after $\beta_0+\beta_1X_i$.
OP's question:
Distribution: Any probability distributions belong to proportional hazard family
Parameters: Depend on distribution, but we do not need to know because of proportional hazard assumption.
Function of parameter: $$h_i(t) = h_0(t) \exp \left ( \sum_{k = 1}^p \beta_k z_{ik} \right )$$ Need to know the fact that hazard determines the probability distribution of the survival time.
Obviously, there is no position for $\epsilon$.
If you still do not believe there is no $\epsilon$ in logistic, Poisson and Cox proportional hazard model, you can consider following two questions.
In the linear model, $\epsilon$ appears in the process of model establishment. In the final conclusions, we can and need to estimate the variance of $\epsilon$. We can also estimate $\epsilon$ itself by $Y-\hat Y$. We also know that $Var(\hat \beta) = (X'X)^{-1}\sigma^2$.
In the other 3 kinds of models, if you insist there is an $\epsilon$, why does it not appear in the process of model establishment? What is the effect of $\epsilon$ on the model? Did we, and could we, estimate anything related to $\epsilon$?
So if you insist there is an $\epsilon$ in those 3 kinds of models, then $\epsilon$ acts like a ghost: when you want it, it appears; when you do not, it disappears. But in mathematical statistics, this kind of ghost is not allowed in the model.
You may ask why it is acceptable that the baseline hazard function $\lambda_0(t)$ also appears in the model specification and disappears in the model fitting process and final results. The reason is that in the process of model establishment, $\lambda_0(t)$ is cancelled under the assumption of proportional hazards. If you are really interested in $\lambda_0(t)$, you can get its estimate, unlike $\epsilon$, which cannot be estimated.
Why does the linear model $Y\sim N(X\beta, \sigma^2)$ have an alternative expression $Y=X\beta + \epsilon$, $\epsilon \sim N(0,\sigma^2)$, while the other models do not have an alternative expression with $\epsilon$?
(will continue) | why there is no βerrorβ term in survival analysis?
This Answer is limited to frequentist statistics and statistical model without random effect.
In fact, the statistical modeling is to find the conditional distribution of response variable conditiona |
51,681 | How to determine the optimal threshold to achieve the highest accuracy | I suspect that the answer is "no", i.e., that there is no such way.
Here is an illustration, where we plot the predicted probabilities against the true labels:
Since the denominator $P+N$ in the formula for accuracy does not change, what you are trying to do is to shift the horizontal red line up or down (the height being the threshold you are interested in) in order to maximize the number of "positive" dots above the line plus the number of "negative" dots below the line. Where this optimal line lies depends entirely on the shape of the two point clouds, i.e., the conditional distribution of the predicted probabilities per true label.
Your best bet is likely a bisection search.
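For moderate sample sizes, a plain brute-force sweep over the observed scores also works; a rough Python sketch with made-up labels and probabilities:
import numpy as np

y = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1, 0])                # made-up true labels
p = np.array([.1, .4, .35, .8, .65, .7, .2, .9, .5, .3])    # made-up predicted probabilities

best_acc, best_th = -1.0, None
for th in np.concatenate(([0.0], np.unique(p))):            # candidate thresholds
    acc = np.mean((p > th) == (y == 1))                     # accuracy of the rule "positive if p > th"
    if acc > best_acc:
        best_acc, best_th = acc, th
print(best_th, best_acc)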
That said, I recommend you look at
Why is accuracy not the best measure for assessing classification models?
Is accuracy an improper scoring rule in a binary classification setting?
Classification probability threshold | How to determine the optimal threshold to achieve the highest accuracy | I suspect that the answer is "no", i.e., that there is no such way.
Here is an illustration, where we plot the predicted probabilities against the true labels:
Since the denominator $P+N$ in the form | How to determine the optimal threshold to achieve the highest accuracy
I suspect that the answer is "no", i.e., that there is no such way.
Here is an illustration, where we plot the predicted probabilities against the true labels:
Since the denominator $P+N$ in the formula for accuracy does not change, what you are trying to do is to shift the horizontal red line up or down (the height being the threshold you are interested in) in order to maximize the number of "positive" dots above the line plus the number of "negative" dots below the line. Where this optimal line lies depends entirely on the shape of the two point clouds, i.e., the conditional distribution of the predicted probabilities per true label.
Your best bet is likely a bisection search.
That said, I recommend you look at
Why is accuracy not the best measure for assessing classification models?
Is accuracy an improper scoring rule in a binary classification setting?
Classification probability threshold | How to determine the optimal threshold to achieve the highest accuracy
I suspect that the answer is "no", i.e., that there is no such way.
Here is an illustration, where we plot the predicted probabilities against the true labels:
Since the denominator $P+N$ in the form |
51,682 | How to determine the optimal threshold to achieve the highest accuracy | Agreeing with @StephanKolassa, I'll just look from an algorithmic perspective. You'll need to sort your samples with respect to produced probabilities, which is $O(n\log n)$, if you've $n$ data samples. Then, your true class labels will order like
$$0\ 0 \ 1\ 0\ 0\ 1 \ ...\ 1 \ 1\ 0\ 1 $$
Then, we'll put a separator $|$ at some position in this array; this'll represent your threshold. At most there are $n+1$ positions to put it. Even if you calculate the accuracy for each of these positions, you won't be worse than the sorting complexity. After getting the maximum accuracy, the threshold may just be chosen as the average of the neighboring samples. | How to determine the optimal threshold to achieve the highest accuracy | Agreeing with @StephanKolassa, I'll just look from an algorithmic perspective. You'll need to sort your samples with respect to produced probabilities, which is $O(n\log n)$, if you've $n$ data sample | How to determine the optimal threshold to achieve the highest accuracy
Agreeing with @StephanKolassa, I'll just look from an algorithmic perspective. You'll need to sort your samples with respect to produced probabilities, which is $O(n\log n)$, if you've $n$ data samples. Then, your true class labels will order like
$$0\ 0 \ 1\ 0\ 0\ 1 \ ...\ 1 \ 1\ 0\ 1 $$
Then, we'll put a separator $|$ at some position in this array; this'll represent your threshold. At most there are $n+1$ positions to put it. Even if you calculate the accuracy for each of these positions, you won't be worse than the sorting complexity. After getting the maximum accuracy, the threshold may just be chosen as the average of the neighboring samples. | How to determine the optimal threshold to achieve the highest accuracy
Agreeing with @StephanKolassa, I'll just look from an algorithmic perspective. You'll need to sort your samples with respect to produced probabilities, which is $O(n\log n)$, if you've $n$ data sample |
51,683 | How to determine the optimal threshold to achieve the highest accuracy | I implemented the solution proposed by Stephan Kolassa in python:
def opt_threshold_acc(y_true, y_pred):
A = list(zip(y_true, y_pred))
A = sorted(A, key=lambda x: x[1])
total = len(A)
tp = len([1 for x in A if x[0]==1])
tn = 0
th_acc = []
for x in A:
th = x[1]
if x[0] == 1:
tp -= 1
else:
tn += 1
acc = (tp + tn) / total
th_acc.append((th, acc))
return max(th_acc, key=lambda x: x[1]) | How to determine the optimal threshold to achieve the highest accuracy | I implemented the solution proposed by Stephan Kolassa in python:
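A quick check of the function on a small made-up example; it returns the threshold together with the accuracy achieved when classifying as positive any score strictly above it:
y_true = [0, 0, 1, 0, 1, 1, 1, 0]
y_pred = [0.1, 0.3, 0.35, 0.4, 0.6, 0.7, 0.8, 0.2]
print(opt_threshold_acc(y_true, y_pred))   # (0.3, 0.875) for this toy data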
def opt_threshold_acc(y_true, y_pred):
A = list(zip(y_true, y_pred))
A = sorted(A, key=lambda x: x[1])
total = len(A)
| How to determine the optimal threshold to achieve the highest accuracy
I implemented the solution proposed by Stephan Kolassa in python:
def opt_threshold_acc(y_true, y_pred):
A = list(zip(y_true, y_pred))
A = sorted(A, key=lambda x: x[1])
total = len(A)
tp = len([1 for x in A if x[0]==1])
tn = 0
th_acc = []
for x in A:
th = x[1]
if x[0] == 1:
tp -= 1
else:
tn += 1
acc = (tp + tn) / total
th_acc.append((th, acc))
return max(th_acc, key=lambda x: x[1]) | How to determine the optimal threshold to achieve the highest accuracy
I implemented the solution proposed by Stephan Kolassa in python:
def opt_threshold_acc(y_true, y_pred):
A = list(zip(y_true, y_pred))
A = sorted(A, key=lambda x: x[1])
total = len(A)
|
51,684 | How to understand confusion matrix for 3x3 | Based on the 3x3 confusion matrix in your example (assuming I'm understanding the labels correctly) the columns are the predictions and the rows must therefore be the actual values. The main diagonal (64, 237, 165) gives the correct predictions. That is, the cases where the actual values and the model predictions are the same.
The first row are the actual males. The model predicted 64 of these correctly and incorrectly predicted 46 of the males to be female and 139 of the males to be infants.
Looking at the male column, of the 128 males predicted by the model (sum of column M), 64 were actually males, while 12 were females incorrectly predicted to be males and 52 were infants incorrectly predicted to be males.
Analogous interpretations apply to the other columns and rows.
             Predicted
             M    F    I
Actual  M   64   46  139
        F   12  237   42
        I   52   79  165
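From this matrix the per-class precision and recall can be read off directly; a short Python sketch using the numbers above:
import numpy as np

# Rows = actual class, columns = predicted class, in the order M, F, I
cm = np.array([[ 64,  46, 139],
               [ 12, 237,  42],
               [ 52,  79, 165]])

correct = np.diag(cm)
precision = correct / cm.sum(axis=0)   # column totals: everything predicted as that class
recall    = correct / cm.sum(axis=1)   # row totals: everything actually in that class
accuracy  = correct.sum() / cm.sum()

for name, pr, rc in zip("MFI", precision, recall):
    print(f"{name}: precision = {pr:.3f}, recall = {rc:.3f}")
print(f"overall accuracy = {accuracy:.3f}")
For example, precision for M is 64/128 = 0.5, matching the 128 predicted males counted above.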
If this is a multinomial logistic regression model, then the model output would be predicted probabilities that each observation belongs to a particular class, rather than predicted classes. The links in @StephenKolassa's answer discuss the issue of scoring rules and you may want to consider what scoring rule will result in classifications that minimize a loss function tailored to your specific needs. | How to understand confusion matrix for 3x3 | Based on the 3x3 confusion matrix in your example (assuming I'm understanding the labels correctly) the columns are the predictions and the rows must therefore be the actual values. The main diagonal | How to understand confusion matrix for 3x3
Based on the 3x3 confusion matrix in your example (assuming I'm understanding the labels correctly) the columns are the predictions and the rows must therefore be the actual values. The main diagonal (64, 237, 165) gives the correct predictions. That is, the cases where the actual values and the model predictions are the same.
The first row are the actual males. The model predicted 64 of these correctly and incorrectly predicted 46 of the males to be female and 139 of the males to be infants.
Looking at the male column, of the 128 males predicted by the model (sum of column M), 64 were actually males, while 12 were females incorrectly predicted to be males and 52 were infants incorrectly predicted to be males.
Analogous interpretations apply to the other columns and rows.
             Predicted
             M    F    I
Actual  M   64   46  139
        F   12  237   42
        I   52   79  165
If this is a multinomial logistic regression model, then the model output would be predicted probabilities that each observation belongs to a particular class, rather than predicted classes. The links in @StephenKolassa's answer discuss the issue of scoring rules and you may want to consider what scoring rule will result in classifications that minimize a loss function tailored to your specific needs. | How to understand confusion matrix for 3x3
Based on the 3x3 confusion matrix in your example (assuming I'm understanding the labels correctly) the columns are the predictions and the rows must therefore be the actual values. The main diagonal |
51,685 | How to understand confusion matrix for 3x3 | True Positive, False Positive and similar counts and rates only make sense if there is a notion of "positive" and "negative" classes in your data. That is, only if you have exactly two classes. You have three classes, not two.
In your case, you can more or less reasonably discuss analogues, like "True Male" numbers: take the number of cases you correctly (!) classify as male and divide by the total number of males in the test sample.
Note that TPR, FPR, accuracy and similar KPIs have major problems if used to evaluate classification models. | How to understand confusion matrix for 3x3 | True Positive, False Positive and similar counts and rates only make sense if there is a notion of "positive" and "negative" classes in your data. That is, only if you have exactly two classes. You ha | How to understand confusion matrix for 3x3
True Positive, False Positive and similar counts and rates only make sense if there is a notion of "positive" and "negative" classes in your data. That is, only if you have exactly two classes. You have three classes, not two.
In your case, you can more or less reasonably discuss analogues, like "True Male" numbers: take the number of cases you correctly (!) classify as male and divide by the total number of males in the test sample.
Note that TPR, FPR, accuracy and similar KPIs have major problems if used to evaluate classification models. | How to understand confusion matrix for 3x3
True Positive, False Positive and similar counts and rates only make sense if there is a notion of "positive" and "negative" classes in your data. That is, only if you have exactly two classes. You ha |
51,686 | How to understand confusion matrix for 3x3 | TP, TN, FP, FN - in 3x3 matrix could be defined PER CLASS
In the above example:
For M class:
TP - real M predicted as M (64)
TN - real F predicted as F and real I predicted as I (237+165)
FP - real F and I predicted as M (12+52)
FN - real M predicted as F or I (46+139)
Then you can calculate Precision and Recall metrics (per class). | How to understand confusion matrix for 3x3 | TP, TN, FP, FN - in 3x3 matrix could be defined PER CLASS
In the above example:
For M class:
TP - real M predicted as M (64)
TN - real F predicted as F and real I predicted as I (237+165)
FP - real F | How to understand confusion matrix for 3x3
TP, TN, FP, FN - in 3x3 matrix could be defined PER CLASS
In the above example:
For M class:
TP - real M predicted as M (64)
TN - real F predicted as F and real I predicted as I (237+165)
FP - real F and I predicted as M (12+52)
FN - real M predicted as F or I (46+139)
Then you can calculate Precision and Recall metrics (per class). | How to understand confusion matrix for 3x3
TP, TN, FP, FN - in 3x3 matrix could be defined PER CLASS
In the above example:
For M class:
TP - real M predicted as M (64)
TN - real F predicted as F and real I predicted as I (237+165)
FP - real F |
51,687 | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities? | You are talking about Bayesian analysis, not Bayes theorem, but we know what you mean.
Let me hit you with an idea that is even more strange than the one you are thinking about. As long as you use your real prior density in constructing your model, then all Bayesian statistics are admissible, where admissibility is defined as the least risky way to make an estimate. This means that even in the K-T example, you will get an admissible statistic. This does exclude the case of degenerate priors.
K-T do not directly discuss the formation of priors, but the idea you are trying to get across is the idea of predictive accuracy with flawed prior distributions.
Now let's look at Bayesian predictions under two different knowledge sets.
For purposes of exposition, it is to be assumed that the American Congress,
exercising its well-noted wisdom, and due to heavy lobbying by the Society of
American Magicians, has decided to produce magical quarters. They authorize
the production of fair, double-sided and biased coins. The double-headed
and double-tailed coins are easy to evaluate, but the two-sided coins are not
without flipping them.
A decision is made to flip a coin eight times. From those flips, a
gamble will be made on how many heads will appear in the next eight flips. Coins either have a 2/3rds bias for heads, a 2/3rds bias for tails, or it is a perfectly fair coin. The coin that will be tossed is randomly selected from a large urn containing a representative sample of coins from the U.S. Mint.
There are two gamblers. One has no prior knowledge, but the other phoned the US Mint to determine the distribution of the coins that are produced. The first gambler gives 1/3rd probabilities for each case, but the knowledgeable gambler sets a fifty percent probability on a fair coin, and even chances for the other two cases from the remaining probability.
The referee tosses the coin, and six heads are shown. This is not equal to any possible parameter. The maximum likelihood estimator is .75 as is the minimum variance unbiased estimator. Although this is not a possible solution, it does not violate theory.
Now both Bayesian gamblers need to make predictions. For the ignorant gambler, the mass function for the next eight gambles is:
$$\Pr(k=K)=\begin{pmatrix}8\\ k\end{pmatrix}\left[.0427{\frac{1}{3}}^k{\frac{2}{3}}^{8-k}+.2737{\frac{1}{2}}^8+.6838{\frac{2}{3}}^k{\frac{1}{3}}^{8-k}\right].$$
For the knowledgeable gambler, the mass function for the next eight gambles is:
$$\Pr(k=K)=\begin{pmatrix}8\\ k\end{pmatrix}\left[.0335{\frac{1}{3}}^k{\frac{2}{3}}^{8-k}+.4298{\frac{1}{2}}^8+.5367{\frac{2}{3}}^k{\frac{1}{3}}^{8-k}\right].$$
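These weights and the predictive mass functions are easy to reproduce numerically; a short Python sketch that recovers the posterior weights quoted above (up to rounding):
import numpy as np
from math import comb

p = np.array([1/3, 1/2, 2/3])            # the three possible head-probabilities
like = p**6 * (1 - p)**2                 # likelihood of six heads in eight flips (constant factor cancels)

for label, prior in [("ignorant", np.array([1/3, 1/3, 1/3])),
                     ("knowledgeable", np.array([0.25, 0.5, 0.25]))]:
    post = prior * like / np.sum(prior * like)
    print(label, "posterior weights:", np.round(post, 4))
    # posterior predictive probability of k heads in the next eight flips
    pred = [comb(8, k) * np.sum(post * p**k * (1 - p)**(8 - k)) for k in range(9)]
    print(label, "predictive pmf:", np.round(pred, 3))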
Even in this trivial case, the predictions do not match, yet both are admissible? Why?
Let's think about the two actors. They both have included all the information they have. There is nothing else. Further, although the knowledgeable actor does know the national distribution, they do not know the distribution to their local bank. It could be that they are all biased toward tails. Still, they both impounded all the information that they believe to be true.
Now let us again imagine that this game is played one more time. The two gamblers happen to be sitting side-by-side, and the ignorant gambler gets to see the odds of the knowledgeable gambler, and vice-versa. The ignorant gambler can recover the knowledgeable gambler's prior information at no cost by inverting their probabilities. Now both can use the extra knowledge.
The referee tosses four heads and four tails. This knowledge is combined to create a new prediction that is now joint among the gamblers. Its image is in the chart below.
A gambler who had only seen four heads and four tails and had not seen the prior tosses may have yet a third prediction. Interestingly, for Frequentist purposes, you cannot carry information over to a second sample, so the prediction is independent of prior knowledge. This is bad. What if it had been eight heads instead, or eight tails? The maximum likelihood estimator and minimum variance unbiased estimator would then be for a double-headed or double-tailed coin, with no variance either.
For this second round prediction, no admissible Frequentist estimator exists. In the presence of prior knowledge, Frequentist statistics cease being admissible. Now an intelligent statistician would just combine the samples, but that does violate the rules unless you are doing a meta-analysis.
Your meta-analysis solution will still be problematic, though. A Frequentist prediction could be constructed from the intervals and the errors, but it would still be centered on 10/16ths, which is not a possible solution. Although it is "unbiased" it is also impossible. Using the errors would improve the case, but this still is not equal to the Bayesian method.
Furthermore, this is not limited to this contrived problem. Imagine a case where the data is approximately normal, but without support for the negative real numbers. I have seen plenty of time series analysis with coefficients that are impossible. They are valid minimum variance unbiased estimators, but they are also impossible solutions as they are excluded by theory and rationality. A Bayesian estimator would have put zero mass on the disallowed region, but the Frequentist cannot.
You are correct in understanding that Bayesian predictions should be biased, and in fact, all estimators made with proper priors are guaranteed to be biased. Further, the bias will be different. Yet there is no less risky solution and when they exist, only equally risky solutions when using Frequentist methods.
The Frequentist predictions do not depend upon the true value of $p$, which is also true for the Bayesian, but they do depend upon the count of observed outcomes. If the Frequentist case is included, then the prediction becomes the following graph.
Because it cannot correct for the fact that some choices cannot happen, nor can it account for prior knowledge, the Frequentist prediction is actually more extreme because it averages over an infinite number of repetitions which have yet to happen. The predictive distribution turns out to be the hypergeometric distribution for the binomial.
Bias is the guaranteed price that you must pay for the generally increased Bayesian accuracy. You lose the guarantee against false positives. You lose unbiasedness. You gain valid gambling odds, which non-Bayesian methods cannot produce. | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities? | You are talking about Bayesian analysis, not Bayes theorem, but we know what you mean.
Let me hit you with an idea that is even more strange than the one you are thinking about. As long as you use yo | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities?
You are talking about Bayesian analysis, not Bayes theorem, but we know what you mean.
Let me hit you with an idea that is even more strange than the one you are thinking about. As long as you use your real prior density in constructing your model, then all Bayesian statistics are admissible, where admissibility is defined as the least risky way to make an estimate. This means that even in the K-T example, you will get an admissible statistic. This does exclude the case of degenerate priors.
K-T do not directly discuss the formation of priors, but the idea you are trying to get across is the idea of predictive accuracy with flawed prior distributions.
Now let's look at Bayesian predictions under two different knowledge sets.
For purposes of exposition, it is to be assumed that the American Congress,
exercising its well-noted wisdom, and due to heavy lobbying by the Society of
American Magicians, has decided to produce magical quarters. They authorize
the production of fair, double-sided and biased coins. The double-headed
and double-tailed coins are easy to evaluate, but the two-sided coins are not
without flipping them.
A decision is made to flip a coin eight times. From those flips, a
gamble will be made on how many heads will appear in the next eight flips. Coins either have a 2/3rds bias for heads, a 2/3rds bias for tails, or it is a perfectly fair coin. The coin that will be tossed is randomly selected from a large urn containing a representative sample of coins from the U.S. Mint.
There are two gamblers. One has no prior knowledge, but the other phoned the US Mint to determine the distribution of the coins that are produced. The first gambler gives 1/3rd probabilities for each case, but the knowledgeable gambler sets a fifty percent probability on a fair coin, and even chances for either two from the remaining probability.
The referee tosses the coin, and six heads are shown. This is not equal to any possible parameter. The maximum likelihood estimator is .75 as is the minimum variance unbiased estimator. Although this is not a possible solution, it does not violate theory.
Now both Bayesian gamblers need to make predictions. For the ignorant gambler, the mass function for the next eight gambles is:
$$\Pr(k=K)=\begin{pmatrix}8\\ k\end{pmatrix}\left[.0427{\frac{1}{3}}^k{\frac{2}{3}}^{8-k}+.2737{\frac{1}{2}}^8+.6838{\frac{2}{3}}^k{\frac{1}{3}}^{8-k}\right].$$
For the knowledgeable gambler, the mass function for the next eight gambles is:
$$\Pr(k=K)=\begin{pmatrix}8\\ k\end{pmatrix}\left[.0335{\frac{1}{3}}^k{\frac{2}{3}}^{8-k}+.4298{\frac{1}{2}}^8+.5367{\frac{2}{3}}^k{\frac{1}{3}}^{8-k}\right].$$
Even in this trivial case, the predictions do not match, yet both are admissible? Why?
Let's think about the two actors. They both have included all the information they have. There is nothing else. Further, although the knowledgeable actor does know the national distribution, they do not know the distribution to their local bank. It could be that they are all biased toward tails. Still, they both impounded all the information that they believe to be true.
Now let us again imagine that this game is played one more time. The two gamblers happen to be sitting side-by-side, and the ignorant gambler gets to see the odds of the knowledgeable gambler, and vice-versa. The ignorant gambler can recover the knowledgeable gambler's prior information at no cost by inverting their probabilities. Now both can use the extra knowledge.
The referee tosses four heads and four tails. This knowledge is combined to create a new prediction that is now joint among the gamblers. Its image is in the chart below.
A gambler who had only seen four heads and four tails and had not seen the prior tosses may have yet a third prediction. Interestingly, for Frequentist purposes, you cannot carry information over to a second sample, so the prediction is independent of prior knowledge. This is bad. What if it has been eight heads instead, or eight tails. The maximum likelihood estimator and minimum variance unbiased estimator would be for a double-headed or double tailed coin with no variance either.
For this second round prediction, no admissible Frequentist estimator exists. In the presence of prior knowledge, Frequentist statistics cease being admissible. Now an intelligent statistician would just combine the samples, but that does violate the rules unless you are doing a meta-analysis.
Your meta-analysis solution will still be problematic, though. A Frequentist prediction could be constructed from the intervals and the errors, but it would still be centered on 10/16ths, which is not a possible solution. Although it is "unbiased" it is also impossible. Using the errors would improve the case, but this still is not equal to the Bayesian method.
Furthermore, this is not limited to this contrived problem. Imagine a case where the data is approximately normal, but without support for the negative real numbers. I have seen plenty of time series analysis with coefficients that are impossible. They are valid minimum variance unbiased estimators, but they are also impossible solutions as they are excluded by theory and rationality. A Bayesian estimator would have put zero mass on the disallowed region, but the Frequentist cannot.
You are correct in understanding that Bayesian predictions should be biased, and in fact, all estimators made with proper priors are guaranteed to be biased. Further, the bias will be different. Yet there is no less risky solution and when they exist, only equally risky solutions when using Frequentist methods.
The Frequentist predictions do not depend upon the true value of $p$, which is also true for the Bayesian, but does depend upon the count of observed outcomes. If the Frequentist case is included, the the prediction becomes the following graph.
Because it cannot correct for the fact that some choices cannot happen, nor can it account for prior knowledge, the Frequentist prediction is actually more extreme because it averages over an infinite number of repetitions which have yet to happen. The predictive distribution turns out to be the hypergeometric distribution for the binomial.
Bias is the guaranteed price that you must pay for the generally increased Bayesian accuracy. You lose the guarantee against false positives. You lose unbiasedness. You gain valid gambling odds, which non-Bayesian methods cannot produce. | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities?
You are talking about Bayesian analysis, not Bayes theorem, but we know what you mean.
Let me hit you with an idea that is even more strange than the one you are thinking about. As long as you use yo |
51,688 | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities? | While a person might be wrong in a particular moment about the likelihood of a particular thing happening, the idea behind Bayes theorem (as applied to updating your understanding in the face of new information) is that the updated probability may not be entirely right, but that it will be more right than you were when you started.
I think of a situation where I'm trying to estimate the number of sheep in a field - let's imagine that there are truly 100, but I'm going to estimate that there are no sheep at all (which is about as wrong as I can get). Then, I see a sheep, and update my estimate - now, I estimate that there's one sheep in the field! I'm still wrong, but I'm slightly less wrong than I was when I started. In this way, if you collect enough information, you can update your estimates to be closer to reality - and, indeed, by collecting enough data, you can get arbitrarily close to reality.
A really good description of this (albeit a pretty technical one) is in Savage's The Foundations of Statistics. It's a great read, and he develops a way of thinking about probability that makes a lot more sense from a Bayesian perspective. | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities? | While a person might be wrong in a particular moment about the likelihood of a particular thing happening, the idea behind Bayes theorem (as applied to updating your understanding in the face of new i | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities?
While a person might be wrong in a particular moment about the likelihood of a particular thing happening, the idea behind Bayes theorem (as applied to updating your understanding in the face of new information) is that the updated probability may not be entirely right, but that it will be more right than you were when you started.
I think of a sitution where I'm trying to estimate the number of sheep in a field - let's imagine that there are truly 100, but I'm going to estimate that there are no sheep at all (which is about as wrong as I can get). Then, I see a sheep, and update my estimate - now, I estimate that there's one sheep in the field! I'm still wrong, but I'm slightly less wrong than I was when I started. In this way, if you collect enough information, you can update your estimates to be closer to reality - and, indeed, by collecting enough data, you can get arbitrarily close to reality.
A really good description of this (albeit a pretty technical one) is in Savage's The Foundations of Statistics. It's a great read, and he develops a way of thinking about probability that makes a lot more sense from a Bayesian perspective. | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities?
While a person might be wrong in a particular moment about the likelihood of a particular thing happening, the idea behind Bayes theorem (as applied to updating your understanding in the face of new i |
51,689 | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities? | When you consider specifying a prior vs using a raw frequentist method, sometimes the prediction will be much better even with an apparently wrong prior because you don't need the prior to be precise at all to improve things. A very rough prior helps to rule out what is unrealistic, and this helps even if specified very imprecisely. It's a probabilistic method of restricting the parameter to a subset of possible values. For the joke: in the case of estimating the number of sheep in a field, ruling out values greater than $10^{52}$ is a safe guess.
A counter-productive prior is of course theoretically possible: believing firmly something that is totally unreal. But it is most often the consequence of a wrong mathematical understanding in a difficult formalization. This is an example of such a mistake: http://www.nowozin.net/sebastian/blog/estimating-discrete-entropy-part-3.html
If the overall formalization is good, wrong numerical information has less consequences. I was convinced by one of the simplest Bayesian methods: $L^2$ regularization in linear regression.
The model is $Y=\beta X+\epsilon$. If you have little data compared to the feature dimension, the frequentist basic estimator (MLE) $\hat\beta$ will most often be extremely over-fitted and yield very poor predictions because you allow it to consider every possible $\beta$. It's not rare that it has higher error than a constant predictor like 0 (in a real situation).
Now some vague intuition, experience, rumour... tells you that actually $\beta$ is unlikely to have a big norm, that estimations of $\beta$ with a big norm is just an effect of over-fitting.
You think the real $\beta$ tends to be reasonably small. You say: my $\beta$ is around 0 with variance... hum... dunno...say 1. Formally, this is a Gaussian prior on $\beta$. 1 is the regularization constant.
But if you choose 2 instead of 1, you'll get roughly the same results. And if you choose 1.2, you can't even see the difference. (This is not a general fact, just the kind of thing we often observe.) Actually there is a very wide range of values that will yield much better results than the non-regularized estimator, and the error curve tends to be pretty flat around the optimal choice.
I did a few simulations with wrong prior specification in this case: you may assume a very false prior, yet the results are still far better than without regularization, because a flat prior is worse than the worst misspecification you can reasonably think of.
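A rough Python simulation in this spirit (all numbers made up): a modest number of observations relative to the number of features, and a grid of ridge penalties; typically the unpenalized fit does worst here and the penalized fits are close to one another:
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 50
beta = rng.normal(0, 0.3, size=p)              # a "reasonably small" true beta
X, X_test = rng.normal(size=(n, p)), rng.normal(size=(2000, p))
y = X @ beta + rng.normal(size=n)
y_test = X_test @ beta + rng.normal(size=2000)

for lam in [0.0, 0.5, 1.0, 1.2, 2.0, 5.0]:     # lam = 0 is the unregularized MLE
    b_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    mse = np.mean((y_test - X_test @ b_hat) ** 2)
    print(f"lambda = {lam:<4} test MSE = {mse:.2f}")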
As an hyper-parameter, the regularization coefficient can be chosen loosely without much consequence on the prediction. It tends to be true in many machine learning situations: the more you go to hyper-paramter, hyper-hpyer-paramters... the least it becomes sensitive to wrong specification. (usually and if the methods are good) | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities? | When you consider specifying a prior vs using a raw frequentist method, sometimes the prediction will be much better even with an apparently wrong prior because you don't need the prior to be precise | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities?
When you consider specifying a prior vs using a raw frequentist method, sometimes the prediction will be much better even with an apparently wrong prior because you don't need the prior to be precise at all to improve things. A very rough prior helps to rule out what is unrealistic, and this helps even if specified very imprecisely. It's a probabilistic method of restricting the parameter to a subset of possible values. For the joke: in the case of estimating the number of sheep in a field, ruling out values greater than $10^{52}$ is a safe guess.
A counter-productive prior is of course theoretically possible: believing firmly something that is totally unreal. But it is most often the consequence of a wrong mathematical understanding in a difficult formalization. This is an example of such a mistake: http://www.nowozin.net/sebastian/blog/estimating-discrete-entropy-part-3.html
If the overall formalization is good, wrong numerical information has less consequences. I was convinced by one of the simplest Bayesian methods: $L^2$ regularization in linear regression.
The model is $Y=\beta X+\epsilon$. If you have small data compared to features dimension, the frequentist basic estimator (MLE) $\hat\beta$ will most be often be extremely over-fitted and yield very poor predictions because you allow it to consider every possible $\beta$.. It's not rare that it has higher error than a constant predictor like 0 (in a real situation).
Now some vague intuition, experience, rumour... tells you that actually $\beta$ is unlikely to have a big norm, that estimations of $\beta$ with a big norm is just an effect of over-fitting.
You think the real $\beta$ tends to be reasonably small. You say: my $\beta$ is around 0 with variance... hum... dunno...say 1. Formally, this is a Gaussian prior on $\beta$. 1 is the regularization constant.
But if you choose 2 instead of 1, you'll get roughly the same results. And if you choose 1.2, you can't even see the difference. (not giving a general fact here, just for that it's the kind of thing we often observe). Actually there is a very wide range of values that will yield much better result than the non regularized estimator, and the error curve tends to be pretty flat around the optimal choice.
I did a few simulations with wrong prior specification in this case: you may assume a badly wrong prior, yet the results are still far better than without regularization, because a flat prior is worse than the worst misspecification you can reasonably think of.
As a hyper-parameter, the regularization coefficient can be chosen loosely without much consequence for the prediction. This tends to be true in many machine learning situations: the further you go towards hyper-parameters, hyper-hyper-parameters..., the less sensitive the result becomes to a wrong specification (usually, and provided the methods are good). | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities?
When you consider specifying a prior vs using a raw frequentist method, sometimes the prediction will be much better even with an apparently wrong prior because you don't need the prior to be precise |
51,690 | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities? | First, Bayes theorem doesn't make predictions. It's a mathematical law. But you have to get the probabilities right for it to work.
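As a small worked illustration of that point (my example, not the answerer's): with prevalence $P(D)=0.01$, sensitivity $P(+\mid D)=0.95$ and false-positive rate $P(+\mid \neg D)=0.05$, Bayes' theorem gives $P(D\mid +)=\frac{0.95\times 0.01}{0.95\times 0.01+0.05\times 0.99}\approx 0.16$. The algebra is mechanical; the answer is only as trustworthy as the probabilities fed into it.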
Second, you may be thinking of a Bayesian approach to data analysis. This does depend on priors but a) Sometimes (often) a uniform prior is chosen b) Other times, the prior is based on actual data.
Third, Kahnemann and Tversky really have nothing to do with this. They talk about how people reason with probability, even if the probabilities are given to them. For instance, a 10% risk of death is not viewed the same way as a 90% chance of survival. K & T do a lot of damage to the notion of a "rational human" but that's more about economics than statistics. | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities? | First, Bayes theorem doesn't make predictions. It's a mathematical law. But you have to get the probabilities right for it to work.
Second, you may be thinking of a Bayesian approach to data analysis | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities?
First, Bayes theorem doesn't make predictions. It's a mathematical law. But you have to get the probabilities right for it to work.
Second, you may be thinking of a Bayesian approach to data analysis. This does depend on priors but a) Sometimes (often) a uniform prior is chosen b) Other times, the prior is based on actual data.
Third, Kahnemann and Tversky really have nothing to do with this. They talk about how people reason with probability, even if the probabilities are given to them. For instance, a 10% risk of death is not viewed the same way as a 90% chance of survival. K & T do a lot of damage to the notion of a "rational human" but that's more about economics than statistics. | How does Bayesian analysis make accurate predictions using subjectively chosen probabilities?
First, Bayes theorem doesn't make predictions. It's a mathematical law. But you have to get the probabilities right for it to work.
Second, you may be thinking of a Bayesian approach to data analysis |
51,691 | How to do an ANOVA when your data are non-normal with possibly differing variances? | The data, as indicated by the variable names in the linked-to spreadsheet, pertains to number of organisms in 13 groups, so are some kind of count data. We could do better here if we knew some more about the data! But I do not agree with the answer by @gung that we should use some count data model, like Poisson regression. I will show graphically why I say that.
The Poisson distribution has variance equal to the mean, so a simple first analysis is to plot (empirical) variances against means. After reading the data into a data.frame (in R) I did:
> summary(dat)
Number Group
Min. :1.000 13 : 90
1st Qu.:3.000 1 : 76
Median :4.000 4 : 70
Mean :3.826 3 : 65
3rd Qu.:5.000 6 : 62
Max. :8.000 12 : 62
(Other):299
> s2<- with(dat, tapply(Number, Group, FUN=var))
> m <- with(dat, tapply(Number, Group, FUN=mean))
> plot(m, s2, ylim=c(1.5, 5.5))
> abline(0, 1, col="red")
The red line shows variance equal to the mean. We see that all the points are below this line, and there is not much evidence that the variance increases with the mean either. So this is not Poisson data. Also, while the variances vary, they do not vary too much (all are within a factor of two), so the usual equal-variance Anova $F$ test can probably be used, as it is reasonably robust against small variations in the variance. The classic book by Miller: "Beyond Anova" gives a factor of four as acceptable. So let us try Anova in R, with and without that assumption:
> oneway.test(Number ~ Group, data=dat, var.equal=FALSE)
One-way analysis of means (not assuming equal variances)
data: Number and Group
F = 10.265, num df = 11.00, denom df = 265.99, p-value = 1.29e-15
> oneway.test(Number ~ Group, data=dat, var.equal=TRUE)
One-way analysis of means
data: Number and Group
F = 9.4014, num df = 11, denom df = 712, p-value = 6.497e-16
Both give basically the same conclusion here: the null hypothesis of equality of means is rejected. Note that the $F$ ratios are very similar.
Let us also show a graphical analysis, a boxplot with "notches" showing confidence intervals for the median:
> boxplot(Number ~ Group, data=dat, notch=TRUE, col="blue",
varwidth=TRUE)
Warning message:
In bxp(list(stats = c(1, 2, 3, 5, 7, 2, 4, 5, 6, 7, 1, 3, 5, 6, :
some notches went outside hinges ('box'): maybe set notch=FALSE
We see that some of the confidence intervals do not overlap at all, again consistent with the Anova results. I would not be overly concerned with problems with the assumptions underlying Anova with this data, so the conclusion is quite robust. If wanted, it could be further analyzed with the bootstrap, but I will leave that to others. If further analysis is wanted, we would need more information about the data. | How to do an ANOVA when your data are non-normal with possibly differing variances? | The data, as indicated by the variable names in the linked-to spreadsheet, pertains to number of organisms in 13 groups, so are some kind of count data. We could do better here if we knew some more ab | How to do an ANOVA when your data are non-normal with possibly differing variances?
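If one did want the bootstrap check mentioned above, a minimal sketch (my addition, assuming the same dat data frame with columns Number and Group) is to impose the null by centring each group at its own mean, resample within groups, and compare the observed Welch $F$ statistic with its bootstrap null distribution:
set.seed(42)
centred <- dat$Number - ave(dat$Number, dat$Group)          # remove group means
Fobs <- oneway.test(Number ~ Group, data=dat, var.equal=FALSE)$statistic
Fnull <- replicate(2000, {
  yb <- ave(centred, dat$Group, FUN=function(x) sample(x, replace=TRUE))
  oneway.test(yb ~ dat$Group, var.equal=FALSE)$statistic
})
mean(Fnull >= Fobs)                                          # bootstrap p-value
A bootstrap p-value near zero would confirm that the conclusion does not hinge on the distributional assumptions.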
The data, as indicated by the variable names in the linked-to spreadsheet, are numbers of organisms in 13 groups, so they are some kind of count data. We could do better here if we knew some more about the data! But I do not agree with the answer by @gung that we should use some count data model, like Poisson regression. I will show graphically why I say that.
The Poisson distribution has variance equal to the mean, so a simple first analysis is to plot (empirical) variances against means. After reading the data into a data.frame (in R) I did:
> summary(dat)
Number Group
Min. :1.000 13 : 90
1st Qu.:3.000 1 : 76
Median :4.000 4 : 70
Mean :3.826 3 : 65
3rd Qu.:5.000 6 : 62
Max. :8.000 12 : 62
(Other):299
> s2<- with(dat, tapply(Number, Group, FUN=var))
> m <- with(dat, tapply(Number, Group, FUN=mean))
> plot(m, s2, ylim=c(1.5, 5.5))
> abline(0, 1, col="red")
The red line shows variance equal to the mean. We see that all the points are below this line, and there is not much evidence that the variance increases with the mean either. So this is not Poisson data. Also, while the variances vary, they do not vary too much (all are within a factor of two), so the usual equal-variance Anova $F$ test can probably be used, as it is reasonably robust against small variations in the variance. The classic book by Miller: "Beyond Anova" gives a factor of four as acceptable. So let us try Anova in R, with and without that assumption:
> oneway.test(Number ~ Group, data=dat, var.equal=FALSE)
One-way analysis of means (not assuming equal variances)
data: Number and Group
F = 10.265, num df = 11.00, denom df = 265.99, p-value = 1.29e-15
> oneway.test(Number ~ Group, data=dat, var.equal=TRUE)
One-way analysis of means
data: Number and Group
F = 9.4014, num df = 11, denom df = 712, p-value = 6.497e-16
Both give basically the same conclusion here: the null hypothesis of equality of means is rejected. Note that the $F$ ratios are very similar.
Let us also show a graphical analysis, a boxplot with "notches" showing confidence intervals for the median:
> boxplot(Number ~ Group, data=dat, notch=TRUE, col="blue",
varwidth=TRUE)
Warning message:
In bxp(list(stats = c(1, 2, 3, 5, 7, 2, 4, 5, 6, 7, 1, 3, 5, 6, :
some notches went outside hinges ('box'): maybe set notch=FALSE
We see that some of the confidence intervals do not overlap at all, again consistent with the Anova results. I would not be overly concerned with problems with the assumptions underlying Anova with this data, so the conclusion is quite robust. If wanted, it could be further analyzed with the bootstrap, but I will leave that to others. If further analysis is wanted, we would need more information about the data. | How to do an ANOVA when your data are non-normal with possibly differing variances?
The data, as indicated by the variable names in the linked-to spreadsheet, pertains to number of organisms in 13 groups, so are some kind of count data. We could do better here if we knew some more ab |
51,692 | How to do an ANOVA when your data are non-normal with possibly differing variances? | I notice that your response is called "Number of Organisms", and that all the values are non-negative integers. I suspect these are count data. They should not be treated as normally distributed and analyzed with a traditional ANOVA. Instead, a count GLM is appropriate. We can try Poisson regression:
anova(glm(N.Organisms~as.factor(Group), data=d, family=poisson), test="LRT")
# Analysis of Deviance Table
# Model: poisson, link: log
# Response: N.Organisms
# Terms added sequentially (first to last)
#
# Df Deviance Resid. Df Resid. Dev Pr(>Chi)
# NULL 723 619.31
# as.factor(Group) 12 78.907 711 540.41 6.669e-12 ***
# ---
# Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
aggregate(N.Organisms~Group, data=d,
FUN=function(x){ c(mean=mean(x),variance=var(x)) })
# Group N.Organisms.mean N.Organisms.variance
# 1 4.476190 2.161905
# 2 4.902439 1.790244
# 3 4.523077 2.690865
# 4 4.585714 2.883851
# 5 4.260000 2.767755
# 6 4.387097 3.060814
# 7 3.610169 2.690240
# 8 3.717391 3.540580
# 9 3.692308 2.334842
# 10 3.290909 3.284175
# 11 3.058824 2.256471
# 12 3.064516 2.520360
# 13 3.022222 2.471411
Oddly, your data seem to be underdispersed. That is rather unusual, and I'm not sure what to make of it. For robustness, we can try a simple chi-squared test, which should be underpowered (because it treats the counts as categories), nonetheless, it is highly significant as well:
with(d, table(Group, N.Organisms))
# N.Organisms
# Group 1 2 3 4 5 6 7 8
# 1 0 2 3 7 3 4 2 0
# 2 0 3 2 8 17 5 6 0
# 3 2 4 14 12 14 10 8 1
# 4 2 5 13 15 14 9 10 2
# 5 2 3 14 13 4 7 7 0
# 6 6 1 13 12 10 13 7 0
# 7 7 5 21 9 9 4 4 0
# 8 8 5 10 5 7 9 2 0
# 9 5 5 16 9 10 6 1 0
# 10 13 5 16 6 6 7 2 0
# 11 10 6 20 5 7 2 1 0
# 12 12 10 21 8 5 4 2 0
# 13 22 8 33 7 14 5 1 0
set.seed(1) # makes the simulated p-value reproducible
chisq.test(with(d, table(Group, N.Organisms)), simulate.p.value=T)
# Pearson's Chi-squared test with simulated p-value
# (based on 2000 replicates)
#
# data: with(d, table(Group, N.Organisms))
# X-squared = 172.02, df = NA, p-value = 0.0004998
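Before moving on, a rough way to quantify the underdispersion noted above (my addition, reusing the Poisson fit) is to compare the residual deviance, or the Pearson statistic, with the residual degrees of freedom; for a well-behaved Poisson model the ratio should be near 1:
fit <- glm(N.Organisms~as.factor(Group), data=d, family=poisson)
deviance(fit)/df.residual(fit)                         # 540.41 / 711, about 0.76
sum(residuals(fit, type="pearson")^2)/df.residual(fit)
Ratios well below 1, as here, point to underdispersion.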
Probably the best test would be the Kruskal-Wallis test, which is analogous to a one-way ANOVA for non-normal data. It won't be affected by the underdispersion, but will take into account the fact that $8$ organisms is $> 7$, and affords pairwise post-hoc comparisons easily via Dunn's test. (An implementation of Dunn's test for R was developed by our own @Alexis; for more on Dunn's test see here and here.) Note that the Kruskal-Wallis test does not assume that the variances are equal, although you won't be able to interpret a significant result as a simple shift in the medians if the shapes of the distributions differ.
kruskal.test(N.Organisms~as.factor(Group), data=d)
# Kruskal-Wallis rank sum test
#
# data: N.Organisms by as.factor(Group)
# Kruskal-Wallis chi-squared = 99.948, df = 12,
# p-value = 5.699e-16
library(dunn.test)
with(d, dunn.test(N.Organisms, as.factor(Group), method="Holm", kw=FALSE))
# Comparison of N.Organisms by group
# (Holm)
# Col Mean-|
# Row Mean | 1 2 3 4 5 6
# ---------+------------------------------------------------------------------
# 2 | -0.955248
# | 1.0000
# |
# 3 | -0.012069 1.270114
# | 0.4952 1.0000
# |
# 4 | -0.112368 1.161274 -0.144722
# | 1.0000 1.0000 1.0000
# |
# 5 | 0.586382 1.940375 0.826708 0.974481
# | 1.0000 0.9420 1.0000 1.0000
# |
# 6 | 0.216494 1.544994 0.324981 0.473740 -0.514632
# | 1.0000 1.0000 1.0000 1.0000 1.0000
# |
# 7 | 2.045019 3.816537 2.906720 3.098461 1.910109 2.556624
# | 0.8171 0.0043* 0.0950 0.0515 0.9260 0.2537
# |
# 8 | 1.678281 3.251401 2.309693 2.475995 1.417073 1.990415
# | 1.0000 0.0316 0.4704 0.3122 1.0000 0.8844
# |
# 9 | 1.755800 3.400923 2.456288 2.632395 1.522142 2.123496
# | 1.0000 0.0191* 0.3229 0.2077 1.0000 0.7080
# |
# 10 | 2.692297 4.589523 3.786058 3.987944 2.754012 3.433308
# | 0.1774 0.0002* 0.0047* 0.0022* 0.1501 0.0176*
# |
# 11 | 3.223104 5.206152 4.483634 4.691145 3.432921 4.131515
# | 0.0342 0.0000* 0.0002* 0.0001* 0.0173* 0.0012*
# |
# 12 | 3.321963 5.440195 4.741819 4.969661 3.610449 4.365576
# | 0.0250 0.0000* 0.0001* 0.0000* 0.0093* 0.0004*
# |
# 13 | 3.487721 5.846371 5.211207 5.479181 3.927490 4.789963
# | 0.0146* 0.0000* 0.0000* 0.0000* 0.0027* 0.0001*
#
# Col Mean-|
# Row Mean | 7 8 9 10 11 12
# ---------+------------------------------------------------------------------
# 8 | -0.394796
# | 1.0000
# |
# 9 | -0.345287 0.059170
# | 1.0000 1.0000
# |
# 10 | 0.912191 1.244372 1.223490
# | 1.0000 1.0000 1.0000
# |
# 11 | 1.652971 1.936170 1.936942 0.746270
# | 1.0000 0.8984 0.9232 1.0000
# |
# 12 | 1.754493 2.038829 2.046213 0.799659 0.016138
# | 1.0000 0.8086 0.8351 1.0000 0.9871
# |
# 13 | 1.943622 2.224783 2.246161 0.903324 0.054395 0.039280
# | 0.9609 0.5611 0.5433 1.0000 1.0000 1.0000 | How to do an ANOVA when your data are non-normal with possibly differing variances? | I notice that your response is called "Number of Organisms", and that all the values are non-negative integers. I suspect these are count data. They should not be treated as normally distributed and | How to do an ANOVA when your data are non-normal with possibly differing variances?
I notice that your response is called "Number of Organisms", and that all the values are non-negative integers. I suspect these are count data. They should not be treated as normally distributed and analyzed with a traditional ANOVA. Instead, a count GLM is appropriate. We can try Poisson regression:
anova(glm(N.Organisms~as.factor(Group), data=d, family=poisson), test="LRT")
# Analysis of Deviance Table
# Model: poisson, link: log
# Response: N.Organisms
# Terms added sequentially (first to last)
#
# Df Deviance Resid. Df Resid. Dev Pr(>Chi)
# NULL 723 619.31
# as.factor(Group) 12 78.907 711 540.41 6.669e-12 ***
# ---
# Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
aggregate(N.Organisms~Group, data=d,
FUN=function(x){ c(mean=mean(x),variance=var(x)) })
# Group N.Organisms.mean N.Organisms.variance
# 1 4.476190 2.161905
# 2 4.902439 1.790244
# 3 4.523077 2.690865
# 4 4.585714 2.883851
# 5 4.260000 2.767755
# 6 4.387097 3.060814
# 7 3.610169 2.690240
# 8 3.717391 3.540580
# 9 3.692308 2.334842
# 10 3.290909 3.284175
# 11 3.058824 2.256471
# 12 3.064516 2.520360
# 13 3.022222 2.471411
Oddly, your data seem to be underdispersed. That is rather unusual, and I'm not sure what to make of it. For robustness, we can try a simple chi-squared test, which should be underpowered (because it treats the counts as categories), nonetheless, it is highly significant as well:
with(d, table(Group, N.Organisms))
# N.Organisms
# Group 1 2 3 4 5 6 7 8
# 1 0 2 3 7 3 4 2 0
# 2 0 3 2 8 17 5 6 0
# 3 2 4 14 12 14 10 8 1
# 4 2 5 13 15 14 9 10 2
# 5 2 3 14 13 4 7 7 0
# 6 6 1 13 12 10 13 7 0
# 7 7 5 21 9 9 4 4 0
# 8 8 5 10 5 7 9 2 0
# 9 5 5 16 9 10 6 1 0
# 10 13 5 16 6 6 7 2 0
# 11 10 6 20 5 7 2 1 0
# 12 12 10 21 8 5 4 2 0
# 13 22 8 33 7 14 5 1 0
set.seed(1) # makes the simulated p-value reproducible
chisq.test(with(d, table(Group, N.Organisms)), simulate.p.value=T)
# Pearson's Chi-squared test with simulated p-value
# (based on 2000 replicates)
#
# data: with(d, table(Group, N.Organisms))
# X-squared = 172.02, df = NA, p-value = 0.0004998
Probably the best test would be the Kruskal-Wallis test, which is analogous to a one-way ANOVA for non-normal data. It won't be affected by the underdispersion, but will take into account the fact that $8$ organisms is $> 7$, and affords pairwise post-hoc comparisons easily via Dunn's test. (An implementation of Dunn's test for R was developed by our own @Alexis; for more on Dunn's test see here and here.) Note that the Kruskal-Wallis test does not assume that the variances are equal, although you won't be able to interpret a significant result as a simple shift in the medians if the shapes of the distributions differ.
kruskal.test(N.Organisms~as.factor(Group), data=d)
# Kruskal-Wallis rank sum test
#
# data: N.Organisms by as.factor(Group)
# Kruskal-Wallis chi-squared = 99.948, df = 12,
# p-value = 5.699e-16
library(dunn.test)
with(d, dunn.test(N.Organisms, as.factor(Group), method="Holm", kw=FALSE))
# Comparison of N.Organisms by group
# (Holm)
# Col Mean-|
# Row Mean | 1 2 3 4 5 6
# ---------+------------------------------------------------------------------
# 2 | -0.955248
# | 1.0000
# |
# 3 | -0.012069 1.270114
# | 0.4952 1.0000
# |
# 4 | -0.112368 1.161274 -0.144722
# | 1.0000 1.0000 1.0000
# |
# 5 | 0.586382 1.940375 0.826708 0.974481
# | 1.0000 0.9420 1.0000 1.0000
# |
# 6 | 0.216494 1.544994 0.324981 0.473740 -0.514632
# | 1.0000 1.0000 1.0000 1.0000 1.0000
# |
# 7 | 2.045019 3.816537 2.906720 3.098461 1.910109 2.556624
# | 0.8171 0.0043* 0.0950 0.0515 0.9260 0.2537
# |
# 8 | 1.678281 3.251401 2.309693 2.475995 1.417073 1.990415
# | 1.0000 0.0316 0.4704 0.3122 1.0000 0.8844
# |
# 9 | 1.755800 3.400923 2.456288 2.632395 1.522142 2.123496
# | 1.0000 0.0191* 0.3229 0.2077 1.0000 0.7080
# |
# 10 | 2.692297 4.589523 3.786058 3.987944 2.754012 3.433308
# | 0.1774 0.0002* 0.0047* 0.0022* 0.1501 0.0176*
# |
# 11 | 3.223104 5.206152 4.483634 4.691145 3.432921 4.131515
# | 0.0342 0.0000* 0.0002* 0.0001* 0.0173* 0.0012*
# |
# 12 | 3.321963 5.440195 4.741819 4.969661 3.610449 4.365576
# | 0.0250 0.0000* 0.0001* 0.0000* 0.0093* 0.0004*
# |
# 13 | 3.487721 5.846371 5.211207 5.479181 3.927490 4.789963
# | 0.0146* 0.0000* 0.0000* 0.0000* 0.0027* 0.0001*
#
# Col Mean-|
# Row Mean | 7 8 9 10 11 12
# ---------+------------------------------------------------------------------
# 8 | -0.394796
# | 1.0000
# |
# 9 | -0.345287 0.059170
# | 1.0000 1.0000
# |
# 10 | 0.912191 1.244372 1.223490
# | 1.0000 1.0000 1.0000
# |
# 11 | 1.652971 1.936170 1.936942 0.746270
# | 1.0000 0.8984 0.9232 1.0000
# |
# 12 | 1.754493 2.038829 2.046213 0.799659 0.016138
# | 1.0000 0.8086 0.8351 1.0000 0.9871
# |
# 13 | 1.943622 2.224783 2.246161 0.903324 0.054395 0.039280
# | 0.9609 0.5611 0.5433 1.0000 1.0000 1.0000 | How to do an ANOVA when your data are non-normal with possibly differing variances?
I notice that your response is called "Number of Organisms", and that all the values are non-negative integers. I suspect these are count data. They should not be treated as normally distributed and |
51,693 | How to do an ANOVA when your data are non-normal with possibly differing variances? | This is largely a footnote to @kjetil's answer, with which I tend to agree. But I have an extra graph that says a little more and that won't fit into a comment.
This is a combination plot. For each group, there is a quantile plot showing the detail for each group, a superimposed box showing median and quartiles in standard form and finally a diamond showing the mean, shown large for clarity. The picture is essentially one of fairly constant scatter, near symmetry and a lack of outliers or other pathological or unusual structure. I would be happy to see an ANOVA in these cases.
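For readers working in R, a rough approximation of such a display (my sketch, assuming the same dat data frame with columns Number and Group used in the earlier answer, not the original graph) is:
boxplot(Number ~ Group, data=dat, col="lightblue", outline=FALSE)
stripchart(Number ~ Group, data=dat, vertical=TRUE, method="jitter",
           pch=16, cex=0.4, add=TRUE)                 # raw values per group
points(1:13, tapply(dat$Number, dat$Group, mean),
       pch=23, bg="orange", cex=1.8)                  # group means as diamonds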
That said, I wouldn't expect that Poisson regression would perform badly or misleadingly for such data, provided that there was some careful handling of the variance-mean structure (e.g. in the standard error calculation).
Presumably there are (or should be) other biological and/or environmental predictors that would appear in a complete analysis. | How to do an ANOVA when your data are non-normal with possibly differing variances? | This is largely a footnote to @kjetil's answer, with which I tend to agree. But I have an extra graph that says a little more and that won't fit into a comment.
This is a combination plot. For each gr | How to do an ANOVA when your data are non-normal with possibly differing variances?
This is largely a footnote to @kjetil's answer, with which I tend to agree. But I have an extra graph that says a little more and that won't fit into a comment.
This is a combination plot. For each group, there is a quantile plot showing the detail for each group, a superimposed box showing median and quartiles in standard form and finally a diamond showing the mean, shown large for clarity. The picture is essentially one of fairly constant scatter, near symmetry and a lack of outliers or other pathological or unusual structure. I would be happy to see an ANOVA in these cases.
That said, I wouldn't expect that Poisson regression would perform badly or misleadingly for such data, provided that there was some careful handling of the variance-mean structure (e.g. in the standard error calculation).
Presumably there are (or should be) other biological and/or environmental predictors that would appear in a complete analysis. | How to do an ANOVA when your data are non-normal with possibly differing variances?
This is largely a footnote to @kjetil's answer, with which I tend to agree. But I have an extra graph that says a little more and that won't fit into a comment.
This is a combination plot. For each gr |
51,694 | Extract BIC and AICc from arima() object | For the BIC and AIC, you can simply use AIC function as follow:
> model <- arima(x=sunspots, order=c(2,0,2), method="ML")
> AIC(model)
[1] 23563.39
> bic=AIC(model,k = log(length(sunspots)))
> bic
[1] 23599.05
The function AIC can provide both AIC and BIC. Look at ?AIC. | Extract BIC and AICc from arima() object | For the BIC and AIC, you can simply use AIC function as follow:
> model <- arima(x=sunspots, order=c(2,0,2), method="ML")
> AIC(model)
[1] 23563.39
> bic=AIC(model,k = log(length(sunspots)))
> bic | Extract BIC and AICc from arima() object
For the BIC and AIC, you can simply use the AIC() function as follows:
> model <- arima(x=sunspots, order=c(2,0,2), method="ML")
> AIC(model)
[1] 23563.39
> bic=AIC(model,k = log(length(sunspots)))
> bic
[1] 23599.05
The function AIC can provide both AIC and BIC. Look at ?AIC. | Extract BIC and AICc from arima() object
For the BIC and AIC, you can simply use the AIC() function as follows:
> model <- arima(x=sunspots, order=c(2,0,2), method="ML")
> AIC(model)
[1] 23563.39
> bic=AIC(model,k = log(length(sunspots)))
> bic |
51,695 | Extract BIC and AICc from arima() object | Answer:
One possible solution, although no claim to be the best, is as follows; it's a hack that I've come up with after looking at some source code.
npar <- length(model$coef) + 1
nstar <- length(model$residuals) - model$arma[6] - model$arma[7] * model$arma[5]
bic <- model$aic + npar * (log(nstar) - 2)
aicc <- model$aic + 2 * npar * (nstar/(nstar - npar - 1) - 1)
Now that the bic and aicc have been stored as objects - using, solely, output from the arima() function - we can now set them as attributes to the model object.
# Give model attributes for bic and aicc
attr(model,"bic") <- bic
attr(model,"aicc") <- aicc
> attributes(model)
$names
[1] "coef" "sigma2" "var.coef" "mask" "loglik"
[6] "aic" "arma" "residuals" "call" "series"
[11] "code" "n.cond" "model"
$class
[1] "Arima"
$bic
[1] 23599.05
$aicc
[1] 23563.42
Pass on these attributes to a new object (we don't want to overwrite model).
# Create new object with these attributes
model_2 <- attributes(model)
We can now access the BIC and AICc in a similar manner as to how we accessed the AIC value. The following code should make this clear:
> model$aic
[1] 23563.39
> model_2$bic
[1] 23599.05
> model_2$aicc
[1] 23563.42
Edit:
Based on the very useful information provided by @Stat about the AIC() function, the following code may be useful as alternative ways of getting the AIC, BIC, AICc, and HQC. Attach them as attributes to the model object and work away.
# AIC
AIC(arima(x=sunspots, order=c(2,0,2), method="ML"))
# BIC
AIC(arima(x=sunspots, order=c(2,0,2), method="ML"),k=log(length(sunspots)))
# AICc
AIC(arima(x=sunspots, order=c(2,0,2), method="ML")) + 2 * npar * (nstar/(nstar - npar - 1) - 1)
# HQC
AIC(arima(x=sunspots, order=c(2,0,2), method="ML"), k=2*log(log(length(sunspots)))) | Extract BIC and AICc from arima() object | Answer:
One possible solution, although no claim to be the best, is as follows; it's a hack that I've come up with after looking at some source code.
npar <- length(model$coef) + 1
nstar <- length(mod | Extract BIC and AICc from arima() object
Answer:
One possible solution, although no claim to be the best, is as follows; it's a hack that I've come up with after looking at some source code.
npar <- length(model$coef) + 1
nstar <- length(model$residuals) - model$arma[6] - model$arma[7] * model$arma[5]
bic <- model$aic + npar * (log(nstar) - 2)
aicc <- model$aic + 2 * npar * (nstar/(nstar - npar - 1) - 1)
Now that the bic and aicc have been stored as objects - using, solely, output from the arima() function - we can now set them as attributes to the model object.
# Give model attributes for bic and aicc
attr(model,"bic") <- bic
attr(model,"aicc") <- aicc
> attributes(model)
$names
[1] "coef" "sigma2" "var.coef" "mask" "loglik"
[6] "aic" "arma" "residuals" "call" "series"
[11] "code" "n.cond" "model"
$class
[1] "Arima"
$bic
[1] 23599.05
$aicc
[1] 23563.42
Pass on these attributes to a new object (we don't want to overwrite model).
# Create new object with these attributes
model_2 <- attributes(model)
We can now access the BIC and AICc in a similar manner as to how we accessed the AIC value. The following code should make this clear:
> model$aic
[1] 23563.39
> model_2$bic
[1] 23599.05
> model_2$aicc
[1] 23563.42
Edit:
Based on the very useful information provided by @Stat about the AIC() function, the following code may be useful as alternative ways of getting the AIC, BIC, AICc, and HQC. Attach them as attributes to the model object and work away.
# AIC
AIC(arima(x=sunspots, order=c(2,0,2), method="ML"))
# BIC
AIC(arima(x=sunspots, order=c(2,0,2), method="ML"),k=log(length(sunspots)))
# AICc
AIC(arima(x=sunspots, order=c(2,0,2), method="ML")) + 2 * npar * (nstar/(nstar - npar - 1) - 1)
# HQC
AIC(arima(x=sunspots, order=c(2,0,2), method="ML"), k=2*log(log(length(sunspots)))) | Extract BIC and AICc from arima() object
Answer:
One possible solution, although no claim to be the best, is as follows; it's a hack that I've come up with after looking at some source code.
npar <- length(model$coef) + 1
nstar <- length(mod |
51,696 | Extract BIC and AICc from arima() object | Here is a function my TA in my time series analysis course at UC Davis wrote to extract the AICc
Function aicc() computes the AICc of a given ARIMA model.
INPUT: an ARIMA model object produced by arima()
OUTPUT: AICc value for the given model object
aicc = function(model){
n = model$nobs
p = length(model$coef)
aicc = model$aic + 2*p*(p+1)/(n-p-1)
return(aicc)
}
Example:
x = arima.sim(model=list(ar=0.3), n=100)
mod = arima(x,order=c(1,0,0))
aicc(mod) | Extract BIC and AICc from arima() object | Here is a function my TA in my time series analysis course at UC Davis wrote to extract the AICc
Function aicc() computes the AICc of a given ARIMA model.
INPUT: an ARIMA model object produced by arim | Extract BIC and AICc from arima() object
Here is a function my TA in my time series analysis course at UC Davis wrote to extract the AICc
Function aicc() computes the AICc of a given ARIMA model.
INPUT: an ARIMA model object produced by arima()
OUTPUT: AICc value for the given model object
aicc = function(model){
n = model$nobs
p = length(model$coef)
aicc = model$aic + 2*p*(p+1)/(n-p-1)
return(aicc)
}
Example:
x = arima.sim(model=list(ar=0.3), n=100)
mod = arima(x,order=c(1,0,0))
aicc(mod) | Extract BIC and AICc from arima() object
Here is a function my TA in my time series analysis course at UC Davis wrote to extract the AICc
Function aicc() computes the AICc of a given ARIMA model.
INPUT: an ARIMA model object produced by arim |
51,697 | Extract BIC and AICc from arima() object | Once you have loaded forecast package, you must use Arima() function for AIC, AICc and BIC. Notice upper "A" in Arima() function. If you use arima() function with lower "a", then R will use the function that comes with base R. | Extract BIC and AICc from arima() object | Once you have loaded forecast package, you must use Arima() function for AIC, AICc and BIC. Notice upper "A" in Arima() function. If you use arima() function with lower "a", then R will use the functi | Extract BIC and AICc from arima() object
Once you have loaded forecast package, you must use Arima() function for AIC, AICc and BIC. Notice upper "A" in Arima() function. If you use arima() function with lower "a", then R will use the function that comes with base R. | Extract BIC and AICc from arima() object
Once you have loaded forecast package, you must use Arima() function for AIC, AICc and BIC. Notice upper "A" in Arima() function. If you use arima() function with lower "a", then R will use the functi |
51,698 | Extract BIC and AICc from arima() object | what it seems like is you are using incorrect arima function to get the values. Note that, arima() is not part of forecast library, you will have to use Arima() instead.
Once library(forecast) is loaded, use the code below to extract the values:
model=Arima(grow, order=c(2,0,0))
attributes(model)
$names
[1] "coef" "sigma2" "var.coef" "mask" "loglik" "aic"
[7] "arma" "residuals" "call" "series" "code" "n.cond"
[13] "nobs" "model" "aicc" "bic" "x"
$class
[1] "ARIMA" "Arima"
model$bic
[1] 1069.786
model=arima(grow, order=c(2,0,0))
attributes(model)
$names
[1] "coef" "sigma2" "var.coef" "mask" "loglik" "aic"
[7] "arma" "residuals" "call" "series" "code" "n.cond"
[13] "nobs" "model"
$class
[1] "Arima"
Hope this helps.. :) | Extract BIC and AICc from arima() object | what it seems like is you are using incorrect arima function to get the values. Note that, arima() is not part of forecast library, you will have to use Arima() instead.
once the library(forecast) is | Extract BIC and AICc from arima() object
It seems like you are using the incorrect arima function to get the values. Note that arima() is not part of the forecast library; you will have to use Arima() instead.
Once library(forecast) is loaded, use the code below to extract the values:
model=Arima(grow, order=c(2,0,0))
attributes(model)
$names
[1] "coef" "sigma2" "var.coef" "mask" "loglik" "aic"
[7] "arma" "residuals" "call" "series" "code" "n.cond"
[13] "nobs" "model" "aicc" "bic" "x"
$class
[1] "ARIMA" "Arima"
model$bic
[1] 1069.786
model=arima(grow, order=c(2,0,0))
attributes(model)
$names
[1] "coef" "sigma2" "var.coef" "mask" "loglik" "aic"
[7] "arma" "residuals" "call" "series" "code" "n.cond"
[13] "nobs" "model"
$class
[1] "Arima"
Hope this helps.. :) | Extract BIC and AICc from arima() object
It seems like you are using the incorrect arima function to get the values. Note that arima() is not part of the forecast library; you will have to use Arima() instead.
once the library(forecast) is |
51,699 | Why is a deterministic trend process not stationary? | I think a nice way to get the intuition is to simulate 3 series for $t=0,...,500$ and plot them:
Autoregressive Stationary Series: $A_{t}=0.05+0.95A_{t-1}+u_{t}$
Random Walk with Drift: $R_{t}=0.05+1R_{t-1}+u_{t}$
Explosive Series: $E_{t}=0.05+1.05E_{t-1}+u_{t}$
where $u_{t}$ is just some white noise, like iid $N(0,1)$.
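A minimal R sketch of that simulation (my addition, not from the original post):
set.seed(123)
n <- 500
u <- rnorm(n)
A <- R <- E <- numeric(n)
for (t in 2:n) {
  A[t] <- 0.05 + 0.95 * A[t - 1] + u[t]
  R[t] <- 0.05 + 1.00 * R[t - 1] + u[t]
  E[t] <- 0.05 + 1.05 * E[t - 1] + u[t]
}
matplot(cbind(A, R), type="l", lty=1, ylab="value")    # A and R on one scale
matplot(cbind(A, R, E), type="l", lty=1, ylab="value") # E dwarfs the other two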
Look at $A$ and $R$:
The theoretical mean of $A$ is $1$ (red horizontal line) and its standard deviation is $3.2$. The graph will deviate from that mean over time, but not too far. $R$ will look qualitatively similar to $A$ early on; it begins to drift apart in the middle, but converges again towards the end. In theory, the unconditional mean and variance of $R$ do not exist, and you can see that in the graph.
Now plot all 3 series on a graph with the same scale.
Can you see how $E$ just makes the other two look like a straight line? The slope parameter in $E$ exceeds $1$ by 0.05, the same amount that it falls short for $A$, but what a difference it makes! Here, the average makes no sense at all.
The other point is that $A$ and $R$ look like the sorts of things we see every day, but we are bad at guessing which ones are stationary, especially with fewer data.
This is shamelessly plagiarized from Econometric Methods by Jack Johnston and John DiNardo, which is sadly out of print. | Why is a deterministic trend process not stationary? | I think I nice way to get the intuition is to simulate 3 series for $t=0,...,500$ and plot them:
Autoregressive Stationary Series: $A_{t}=0.05+0.95A_{t-1}+u_{t}$
Random Walk with Drift: $R_{t}=0.05+1 | Why is a deterministic trend process not stationary?
I think a nice way to get the intuition is to simulate 3 series for $t=0,...,500$ and plot them:
Autoregressive Stationary Series: $A_{t}=0.05+0.95A_{t-1}+u_{t}$
Random Walk with Drift: $R_{t}=0.05+1R_{t-1}+u_{t}$
Explosive Series: $E_{t}=0.05+1.05E_{t-1}+u_{t}$
where $u_{t}$ is just some white noise, like iid $N(0,1)$.
Look at $A$ and $R$:
The theoretical mean of $A$ is $1$ (red horizontal line) and its standard deviation is $3.2$. The graph will deviate from that mean over time, but not too far. $R$ will look qualitatively similar to $A$ early on; it begins to drift apart in the middle, but converges again towards the end. In theory, the unconditional mean and variance of $R$ do not exist, and you can see that in the graph.
Now plot all 3 series on a graph with the same scale.
Can you see how $E$ just makes the other two look like a straight line? The slope parameter in $E$ exceeds $1$ by 0.05, the same amount that it falls short for $A$, but what a difference it makes! Here, the average makes no sense at all.
The other point is that $A$ and $R$ look like the sorts of things we see every day, but we are bad at guessing which ones are stationary, especially with fewer data.
This is shamelessly plagiarized from Econometric Methods by Jack Johnston and John DiNardo, which is sadly out of print. | Why is a deterministic trend process not stationary?
I think a nice way to get the intuition is to simulate 3 series for $t=0,...,500$ and plot them:
Autoregressive Stationary Series: $A_{t}=0.05+0.95A_{t-1}+u_{t}$
Random Walk with Drift: $R_{t}=0.05+1 |
51,700 | Why is a deterministic trend process not stationary? | You answer your question yourself: Because stationarity implies both a constant variance, and a constant mean.
If either term is dependent on time, the process is not stationary. In your first example, the mean is dependent on time, and in the second, variance is. | Why is a deterministic trend process not stationary? | You answer your question yourself: Because stationarity implies both a constant variance, and a constant mean.
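To make that concrete for a deterministic trend (my addition): if $Y_t = a + bt + \varepsilon_t$ with white noise $\varepsilon_t$, then $E[Y_t] = a + bt$ changes with $t$ whenever $b \neq 0$, so the mean is not constant and the process cannot be (weakly) stationary, even though $\operatorname{Var}(Y_t)=\sigma^2_\varepsilon$ is constant over time.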
If either term is dependent on time, the process is not stationary. In your first example | Why is a deterministic trend process not stationary?
You answer your question yourself: Because stationarity implies both a constant variance, and a constant mean.
If either term is dependent on time, the process is not stationary. In your first example, the mean is dependent on time, and in the second, variance is. | Why is a deterministic trend process not stationary?
You answer your question yourself: Because stationarity implies both a constant variance, and a constant mean.
If either term is dependent on time, the process is not stationary. In your first example |