id | category | text | title | published | author | link | primary_category
---|---|---|---|---|---|---|---
28,867 | em | The meaning of public messages such as "One in x people gets cancer" or "One
in y people gets cancer by age z" can be improved. One assumption commonly
invoked, that there is no other cause of death, is confusing. We develop a
light bulb model to clarify cumulative risk, and we use Markov chain
modelling, incorporating this widely used assumption, to evaluate transition
probabilities. Age progression in cancer risk is then reported using
Australian data. Future modelling can elicit more realistic assumptions. | Cancer Risk Messages: A Light Bulb Model | 2018-07-09 13:58:20 | Ka C. Chan, Ruth F. G. Williams, Christopher T. Lenard, Terence M. Mills | http://arxiv.org/abs/1807.03040v2, http://arxiv.org/pdf/1807.03040v2 | econ.EM |
28,868 | em | Statements for public health purposes such as "1 in 2 will get cancer by age
85" have appeared in public spaces. The meaning drawn from such statements
affects economic welfare, not just public health. Both markets and government
use risk information on all kinds of risks; useful information can, in turn,
improve economic welfare, but inaccuracy can lower it. We adapt the
contingency table approach so that a quoted risk is cross-classified with the
states of nature. We show that bureaucratic objective functions regarding the
accuracy of a reported cancer risk can then be stated. | Cancer Risk Messages: Public Health and Economic Welfare | 2018-07-09 14:18:01 | Ruth F. G. Williams, Ka C. Chan, Christopher T. Lenard, Terence M. Mills | http://arxiv.org/abs/1807.03045v2, http://arxiv.org/pdf/1807.03045v2 | econ.EM |
28,869 | em | This paper applies economic concepts from measuring income inequality to an
exercise in assessing spatial inequality in cancer service access in regional
areas. We propose a mathematical model of access to chemotherapy across local
government areas (LGAs). Our model incorporates a distance factor. With a
simulation we report results for a single inequality measure: the Lorenz curve
is depicted for our illustrative data. We develop this approach in order to
move incrementally towards its application to actual data and real-world health
service regions. We seek to develop exercises that can give policy makers
relevant information on the most useful data collections to assemble and on
modelling cancer service access in regional areas. | Simulation Modelling of Inequality in Cancer Service Access | 2018-07-09 14:25:38 | Ka C. Chan, Ruth F. G. Williams, Christopher T. Lenard, Terence M. Mills | http://dx.doi.org/10.1080/27707571.2022.2127188, http://arxiv.org/abs/1807.03048v1, http://arxiv.org/pdf/1807.03048v1 | econ.EM |
28,870 | em | The data mining technique of time series clustering is well established in
many fields. However, as an unsupervised learning method, it requires making
choices that are nontrivially influenced by the nature of the data involved.
The aim of this paper is to verify the usefulness of the time series
clustering method for macroeconomic research, and to develop the most suitable
methodology.
By extensively testing various possibilities, we arrive at a choice of a
dissimilarity measure (compression-based dissimilarity measure, or CDM) which
is particularly suitable for clustering macroeconomic variables. We check that
the results are stable in time and reflect large-scale phenomena such as
crises. We also successfully apply our findings to analysis of national
economies, specifically to identifying their structural relations. | Clustering Macroeconomic Time Series | 2018-07-11 11:51:41 | Iwo Augustyński, Paweł Laskoś-Grabowski | http://dx.doi.org/10.15611/eada.2018.2.06, http://arxiv.org/abs/1807.04004v2, http://arxiv.org/pdf/1807.04004v2 | econ.EM |
28,871 | em | This paper re-examines the problem of estimating risk premia in linear factor
pricing models. Typically, the data used in the empirical literature are
characterized by weakness of some pricing factors, strong cross-sectional
dependence in the errors, and (moderately) high cross-sectional dimensionality.
Using an asymptotic framework where the number of assets/portfolios grows with
the time span of the data while the risk exposures of weak factors are
local-to-zero, we show that the conventional two-pass estimation procedure
delivers inconsistent estimates of the risk premia. We propose a new estimation
procedure based on sample-splitting instrumental variables regression. The
proposed estimator of risk premia is robust to weak included factors and to the
presence of strong unaccounted cross-sectional error dependence. We derive the
many-asset weak factor asymptotic distribution of the proposed estimator, show
how to construct its standard errors, verify its performance in simulations,
and revisit some empirical studies. | Factor models with many assets: strong factors, weak factors, and the two-pass procedure | 2018-07-11 14:53:19 | Stanislav Anatolyev, Anna Mikusheva | http://arxiv.org/abs/1807.04094v2, http://arxiv.org/pdf/1807.04094v2 | econ.EM |
28,872 | em | This paper analyzes the bank lending channel and the heterogeneous effects on
the euro area, providing evidence that the channel is indeed working. The
analysis of the transmission mechanism is based on structural impulse responses
to an unconventional monetary policy shock on bank loans. The Bank Lending
Survey (BLS) is exploited to gain insights into developments in loan demand
and supply. The contribution of this paper is to use country-specific
data to analyze the consequences of unconventional monetary policy, instead of
taking an aggregate stance by using euro area data. This approach provides a
deeper understanding of the bank lending channel and its effects. That is, an
expansionary monetary policy shock leads to an increase in loan demand, supply
and output growth. A small north-south disparity between the countries can be
observed. | Heterogeneous Effects of Unconventional Monetary Policy on Loan Demand and Supply. Insights from the Bank Lending Survey | 2018-07-11 17:36:21 | Martin Guth | http://arxiv.org/abs/1807.04161v1, http://arxiv.org/pdf/1807.04161v1 | econ.EM |
28,873 | em | I present a dynamic, voluntary contribution mechanism, public good game and
derive its potential outcomes. In each period, players endogenously determine
contribution productivity by engaging in costly investment. The level of
contribution productivity carries from period to period, creating a dynamic
link between periods. The investment mimics investing in the stock of
technology for producing public goods such as national defense or a clean
environment. After investing, players decide how much of their remaining money
to contribute to provision of the public good, as in traditional public good
games. I analyze three kinds of outcomes of the game: the lowest payoff
outcome, the Nash Equilibria, and socially optimal behavior. In the lowest
payoff outcome, all players receive payoffs of zero. Nash Equilibrium occurs
when players invest any amount and contribute all or nothing depending on the
contribution productivity. Therefore, there are infinitely many Nash
Equilibrium strategies. Finally, the socially optimal result occurs when players invest
everything in early periods, then at some point switch to contributing
everything. My goal is to discover and explain this point. I use mathematical
analysis and computer simulation to derive the results. | Analysis of a Dynamic Voluntary Contribution Mechanism Public Good Game | 2018-07-12 17:13:41 | Dmytro Bogatov | http://arxiv.org/abs/1807.04621v2, http://arxiv.org/pdf/1807.04621v2 | econ.EM |
28,874 | em | Wang and Tchetgen Tchetgen (2017) studied identification and estimation of
the average treatment effect when some confounders are unmeasured. Under their
identification condition, they showed that the semiparametric efficient
influence function depends on five unknown functionals. They proposed to
parameterize all functionals and estimate the average treatment effect from the
efficient influence function by replacing the unknown functionals with
estimated functionals. They established that their estimator is consistent when
certain functionals are correctly specified and attains the semiparametric
efficiency bound when all functionals are correctly specified. In applications,
it is likely that those functionals could all be misspecified. Consequently
their estimator could be inconsistent or consistent but not efficient. This
paper presents an alternative estimator that does not require parameterization
of any of the functionals. We establish that the proposed estimator is always
consistent and always attains the semiparametric efficiency bound. A simple and
intuitive estimator of the asymptotic variance is presented, and a small scale
simulation study reveals that the proposed estimator outperforms the existing
alternatives in finite samples. | A Simple and Efficient Estimation of the Average Treatment Effect in the Presence of Unmeasured Confounders | 2018-07-16 07:42:01 | Chunrong Ai, Lukang Huang, Zheng Zhang | http://arxiv.org/abs/1807.05678v1, http://arxiv.org/pdf/1807.05678v1 | econ.EM |
28,875 | em | This paper analyzes how the legalization of same-sex marriage in the U.S.
affected gay and lesbian couples in the labor market. Results from a
difference-in-difference model show that both partners in same-sex couples were
more likely to be employed, to have a full-time contract, and to work longer
hours in states that legalized same-sex marriage. In line with a theoretical
search model of discrimination, suggestive empirical evidence supports the
hypothesis that marriage equality led to an improvement in employment outcomes
among gays and lesbians and lower occupational segregation thanks to a decrease
in discrimination towards sexual minorities. | Pink Work: Same-Sex Marriage, Employment and Discrimination | 2018-07-18 01:57:39 | Dario Sansone | http://arxiv.org/abs/1807.06698v1, http://arxiv.org/pdf/1807.06698v1 | econ.EM |
28,876 | em | Regression quantiles have asymptotic variances that depend on the conditional
densities of the response variable given regressors. This paper develops a new
estimate of the asymptotic variance of regression quantiles that leads any
resulting Wald-type test or confidence region to behave as well in large
samples as its infeasible counterpart in which the true conditional response
densities are embedded. We give explicit guidance on implementing the new
variance estimator to control adaptively the size of any resulting Wald-type
test. Monte Carlo evidence indicates the potential of our approach to deliver
powerful tests of heterogeneity of quantile treatment effects in covariates
with good size performance over different quantile levels, data-generating
processes and sample sizes. We also include an empirical example. Supplementary
material is available online. | Quantile-Regression Inference With Adaptive Control of Size | 2018-07-18 17:40:36 | Juan Carlos Escanciano, Chuan Goh | http://dx.doi.org/10.1080/01621459.2018.1505624, http://arxiv.org/abs/1807.06977v2, http://arxiv.org/pdf/1807.06977v2 | econ.EM |
28,877 | em | The accumulation of knowledge required to produce economic value is a process
that often relates to nations' economic growth. Such a relationship, however,
is misleading when the proxy for such accumulation is the average years of
education. In this paper, we show that the predictive power of this proxy
started to dwindle in 1990 when nations' schooling began to homogenize. We
propose a metric of human capital that is less sensitive than average years of
education and remains a significant predictor of economic growth when tested
with both cross-section data and panel data. We argue that future research on
economic growth will discard quantity-based educational variables as
predictors, given the thresholds that these variables are reaching. | A New Index of Human Capital to Predict Economic Growth | 2018-07-18 20:34:27 | Henry Laverde, Juan C. Correa, Klaus Jaffe | http://arxiv.org/abs/1807.07051v1, http://arxiv.org/pdf/1807.07051v1 | econ.EM |
28,878 | em | The public debt and deficit ceilings of the Maastricht Treaty are the subject
of recurring controversy. First, there is debate about the role and impact of
these criteria in the initial phase of the introduction of the single currency.
Second, it must be specified how these criteria will then be applied, in a
permanent regime, once the single currency is well established. | Stability in EMU | 2018-07-20 10:53:14 | Theo Peeters | http://arxiv.org/abs/1807.07730v1, http://arxiv.org/pdf/1807.07730v1 | econ.EM |
28,879 | em | Although multiway cluster-robust standard errors are used routinely in applied
economics, surprisingly few theoretical results justify this practice. This
paper aims to fill this gap. We first prove, under nearly the same conditions
as with i.i.d. data, the weak convergence of empirical processes under multiway
clustering. This result implies central limit theorems for sample averages but
is also key for showing the asymptotic normality of nonlinear estimators such
as GMM estimators. We then establish consistency of various asymptotic variance
estimators, including that of Cameron et al. (2011) but also a new estimator
that is positive by construction. Next, we show the general consistency, for
linear and nonlinear estimators, of the pigeonhole bootstrap, a resampling
scheme adapted to multiway clustering. Monte Carlo simulations suggest that
inference based on our two preferred methods may be accurate even with very few
clusters, and significantly improve upon inference based on Cameron et al.
(2011). | Asymptotic results under multiway clustering | 2018-07-20 19:33:13 | Laurent Davezies, Xavier D'Haultfoeuille, Yannick Guyonvarch | http://arxiv.org/abs/1807.07925v2, http://arxiv.org/pdf/1807.07925v2 | econ.EM |
28,880 | em | In a dynamic framework, we consider the conflict between the government and
the central bank over the exchange rate and the EMU convergence criteria,
such as the public debt/GDP ratio. The method consists of calculating public
debt management in a system in which there is no mechanism that naturally
allows for this adjustment. | EMU and ECB Conflicts | 2018-07-21 09:57:15 | William Mackenzie | http://arxiv.org/abs/1807.08097v1, http://arxiv.org/pdf/1807.08097v1 | econ.EM |
28,881 | em | Dynamic discrete choice models often discretize the state vector and restrict
its dimension in order to achieve valid inference. I propose a novel two-stage
estimator for the set-identified structural parameter that incorporates a
high-dimensional state space into the dynamic model of imperfect competition.
In the first stage, I estimate the state variable's law of motion and the
equilibrium policy function using machine learning tools. In the second stage,
I plug the first-stage estimates into a moment inequality and solve for the
structural parameter. The moment function is presented as the sum of two
components, where the first one expresses the equilibrium assumption and the
second one is a bias correction term that makes the sum insensitive (i.e.,
orthogonal) to first-stage bias. The proposed estimator uniformly converges at
the root-N rate and I use it to construct confidence regions. The results
developed here can be used to incorporate high-dimensional state space into
classic dynamic discrete choice models, for example, those considered in Rust
(1987), Bajari et al. (2007), and Scott (2013). | Machine Learning for Dynamic Discrete Choice | 2018-08-08 01:23:50 | Vira Semenova | http://arxiv.org/abs/1808.02569v2, http://arxiv.org/pdf/1808.02569v2 | econ.EM |
28,882 | em | This paper presents a weighted optimization framework that unifies the
binary, multi-valued, continuous, as well as mixtures of discrete and continuous
treatment, under the unconfounded treatment assignment. With a general loss
function, the framework includes the average, quantile and asymmetric least
squares causal effect of treatment as special cases. For this general
framework, we first derive the semiparametric efficiency bound for the causal
effect of treatment, extending the existing bound results to a wider class of
models. We then propose a generalized optimization estimation for the causal
effect with weights estimated by solving an expanding set of equations. Under
some sufficient conditions, we establish consistency and asymptotic normality
of the proposed estimator of the causal effect and show that the estimator
attains our semiparametric efficiency bound, thereby extending the existing
literature on efficient estimation of causal effect to a wider class of
applications. Finally, we discuss estimation of some causal effect functionals
such as the treatment effect curve and the average outcome. To evaluate the
finite sample performance of the proposed procedure, we conduct a small scale
simulation study and find that the proposed estimator has practical value. To
illustrate the applicability of the procedure, we revisit the literature on
campaign advertising and campaign contributions. Unlike the existing
procedures, which produce mixed results, we find no evidence of an effect of
campaign advertising on campaign contributions. | A Unified Framework for Efficient Estimation of General Treatment Models | 2018-08-15 04:32:29 | Chunrong Ai, Oliver Linton, Kaiji Motegi, Zheng Zhang | http://arxiv.org/abs/1808.04936v2, http://arxiv.org/pdf/1808.04936v2 | econ.EM |
28,883 | em | Recent years have seen many attempts to combine expenditure-side estimates of
U.S. real output (GDE) growth with income-side estimates (GDI) to improve
estimates of real GDP growth. We show how to incorporate information from
multiple releases of noisy data to provide more precise estimates while
avoiding some of the identifying assumptions required in earlier work. This
relies on a new insight: using multiple data releases allows us to distinguish
news and noise measurement errors in situations where a single vintage does
not.
Our new measure, GDP++, fits the data better than GDP+, the GDP growth
measure of Aruoba et al. (2016) published by the Federal Reserve Bank of
Philadelphia. Historical decompositions show that GDE releases are more
informative than GDI, while the use of multiple data releases is particularly
important in the quarters leading up to the Great Recession. | Can GDP measurement be further improved? Data revision and reconciliation | 2018-08-15 07:48:26 | Jan P. A. M. Jacobs, Samad Sarferaz, Jan-Egbert Sturm, Simon van Norden | http://arxiv.org/abs/1808.04970v1, http://arxiv.org/pdf/1808.04970v1 | econ.EM |
28,884 | em | While investments in renewable energy sources (RES) are incentivized around
the world, the policy tools that do so are still poorly understood, leading to
costly misadjustments in many cases. As a case study, the deployment dynamics
of residential solar photovoltaics (PV) invoked by the German feed-in tariff
legislation are investigated. Here we report a model showing that when people
invest in residential PV systems is determined not only by profitability, but
also by the change in profitability relative to the status quo. This finding
is interpreted in the light of loss aversion, a
concept developed in Kahneman and Tversky's Prospect Theory. The model is able
to reproduce most of the dynamics of the uptake with only a few financial and
behavioral assumptions. | When Do Households Invest in Solar Photovoltaics? An Application of Prospect Theory | 2018-08-16 19:29:55 | Martin Klein, Marc Deissenroth | http://dx.doi.org/10.1016/j.enpol.2017.06.067, http://arxiv.org/abs/1808.05572v1, http://arxiv.org/pdf/1808.05572v1 | econ.EM |
28,910 | em | Kitamura and Stoye (2014) develop a nonparametric test for linear inequality
constraints, when these are represented as vertices of a polyhedron instead
of its faces. They implement this test for an application to nonparametric
tests of Random Utility Models. As they note in their paper, testing such
models is computationally challenging. In this paper, we develop and implement
more efficient algorithms, based on column generation, to carry out the test.
These improved algorithms allow us to tackle larger datasets. | Column Generation Algorithms for Nonparametric Analysis of Random Utility Models | 2018-12-04 16:28:33 | Bart Smeulders | http://arxiv.org/abs/1812.01400v1, http://arxiv.org/pdf/1812.01400v1 | econ.EM |
28,885 | em | The purpose of this paper is to provide guidelines for empirical researchers
who use a class of bivariate threshold crossing models with dummy endogenous
variables. A common practice employed by the researchers is the specification
of the joint distribution of the unobservables as a bivariate normal
distribution, which results in a bivariate probit model. To address the problem
of misspecification in this practice, we propose an easy-to-implement
semiparametric estimation framework with parametric copula and nonparametric
marginal distributions. We establish asymptotic theory, including root-n
normality, for the sieve maximum likelihood estimators that can be used to
conduct inference on the individual structural parameters and the average
treatment effect (ATE). In order to show the practical relevance of the
proposed framework, we conduct a sensitivity analysis via extensive Monte Carlo
simulation exercises. The results suggest that the estimates of the parameters,
especially the ATE, are sensitive to parametric specification, while
semiparametric estimation exhibits robustness to underlying data generating
processes. We then provide an empirical illustration where we estimate the
effect of health insurance on doctor visits. In this paper, we also show that
the absence of excluded instruments may result in identification failure, in
contrast to what some practitioners believe. | Estimation in a Generalization of Bivariate Probit Models with Dummy Endogenous Regressors | 2018-08-17 11:34:04 | Sukjin Han, Sungwon Lee | http://arxiv.org/abs/1808.05792v2, http://arxiv.org/pdf/1808.05792v2 | econ.EM |
28,886 | em | Under suitable conditions, one-step generalized method of moments (GMM) based
on the first-difference (FD) transformation is numerically equal to one-step
GMM based on the forward orthogonal deviations (FOD) transformation. However,
when the number of time periods ($T$) is not small, the FOD transformation
requires less computational work. This paper shows that the computational
complexity of the FD and FOD transformations increases with the number of
individuals ($N$) linearly, but the computational complexity of the FOD
transformation increases with $T$ at the rate at which $T^{4}$ increases,
while the computational complexity of the FD transformation increases at the
rate at which $T^{6}$ increases. Simulations illustrate that calculations exploiting the FOD
transformation are performed orders of magnitude faster than those using the FD
transformation. The results in the paper indicate that, when one-step GMM based
on the FD and FOD transformations are the same, Monte Carlo experiments can be
conducted much faster if the FOD version of the estimator is used. | Quantifying the Computational Advantage of Forward Orthogonal Deviations | 2018-08-17 23:57:31 | Robert F. Phillips | http://arxiv.org/abs/1808.05995v1, http://arxiv.org/pdf/1808.05995v1 | econ.EM |
28,887 | em | There is generally a need to deal with quality change and new goods in the
consumer price index due to the underlying dynamic item universe.
Traditionally, axiomatic tests are defined for a fixed universe. We propose
five tests explicitly formulated for a dynamic item universe, and motivate
them from the perspectives of both a cost-of-goods index and a cost-of-living
index. None of the indices currently available for making use of scanner
data, which comprises the whole item universe, satisfies all the tests at the
same time. The set of tests provides a rigorous diagnostic for whether an index
is completely appropriate in a dynamic item universe, as well as pointing
towards the directions of possible remedies. We thus outline a large index
family that potentially can satisfy all the tests. | Tests for price indices in a dynamic item universe | 2018-08-27 22:01:08 | Li-Chun Zhang, Ingvild Johansen, Ragnhild Nygaard | http://arxiv.org/abs/1808.08995v2, http://arxiv.org/pdf/1808.08995v2 | econ.EM |
28,888 | em | A fixed-design residual bootstrap method is proposed for the two-step
estimator of Francq and Zako\"ian (2015) associated with the conditional
Value-at-Risk. The bootstrap's consistency is proven for a general class of
volatility models and intervals are constructed for the conditional
Value-at-Risk. A simulation study reveals that the equal-tailed percentile
bootstrap interval tends to fall short of its nominal value. In contrast, the
reversed-tails bootstrap interval yields accurate coverage. We also compare the
theoretically analyzed fixed-design bootstrap with the recursive-design
bootstrap. It turns out that the fixed-design bootstrap performs equally well
in terms of average coverage, yet leads on average to shorter intervals in
smaller samples. An empirical application illustrates the interval estimation. | A Residual Bootstrap for Conditional Value-at-Risk | 2018-08-28 08:34:36 | Eric Beutner, Alexander Heinemann, Stephan Smeekes | http://arxiv.org/abs/1808.09125v4, http://arxiv.org/pdf/1808.09125v4 | econ.EM |
28,889 | em | Kotlarski's identity has been widely used in applied economic research.
However, how to conduct inference based on this popular identification approach
has been an open question for two decades. This paper addresses this open
problem by constructing a novel confidence band for the density function of a
latent variable in a repeated measurement error model. The confidence band builds
on our finding that we can rewrite Kotlarski's identity as a system of linear
moment restrictions. The confidence band controls the asymptotic size uniformly
over a class of data generating processes, and it is consistent against all
fixed alternatives. Simulation studies support our theoretical results. | Inference based on Kotlarski's Identity | 2018-08-28 18:54:59 | Kengo Kato, Yuya Sasaki, Takuya Ura | http://arxiv.org/abs/1808.09375v3, http://arxiv.org/pdf/1808.09375v3 | econ.EM |
28,890 | em | This study considers various semiparametric difference-in-differences models
under different assumptions on the relation between the treatment group
identifier, time and covariates for cross-sectional and panel data. The
variance lower bound is shown to be sensitive to the model assumptions imposed
implying a robustness-efficiency trade-off. The obtained efficient influence
functions lead to estimators that are rate double robust and have desirable
asymptotic properties under weak first-stage convergence conditions. This
makes it possible to use sophisticated machine-learning algorithms that can cope with
settings where common trend confounding is high-dimensional. The usefulness of
the proposed estimators is assessed in an empirical example. It is shown that
the efficiency-robustness trade-offs and the choice of first stage predictors
can lead to divergent empirical results in practice. | Efficient Difference-in-Differences Estimation with High-Dimensional Common Trend Confounding | 2018-09-05 20:41:34 | Michael Zimmert | http://arxiv.org/abs/1809.01643v5, http://arxiv.org/pdf/1809.01643v5 | econ.EM |
28,918 | em | We study estimation, pointwise and simultaneous inference, and confidence
intervals for many average partial effects of lasso Logit. Focusing on
high-dimensional, cluster-sampling environments, we propose a new average
partial effect estimator and explore its asymptotic properties. Practical
penalty choices compatible with our asymptotic theory are also provided. The
proposed estimator allows for valid inference without requiring the oracle property.
We provide easy-to-implement algorithms for cluster-robust high-dimensional
hypothesis testing and construction of simultaneously valid confidence
intervals using a multiplier cluster bootstrap. We apply the proposed
algorithms to the text regression model of Wu (2018) to examine the presence of
gendered language on the internet. | Many Average Partial Effects: with An Application to Text Regression | 2018-12-22 01:35:51 | Harold D. Chiang | http://arxiv.org/abs/1812.09397v5, http://arxiv.org/pdf/1812.09397v5 | econ.EM |
28,891 | em | The bootstrap is a method for estimating the distribution of an estimator or
test statistic by re-sampling the data or a model estimated from the data.
Under conditions that hold in a wide variety of econometric applications, the
bootstrap provides approximations to distributions of statistics, coverage
probabilities of confidence intervals, and rejection probabilities of
hypothesis tests that are more accurate than the approximations of first-order
asymptotic distribution theory. The reductions in the differences between true
and nominal coverage or rejection probabilities can be very large. In addition,
the bootstrap provides a way to carry out inference in certain settings where
obtaining analytic distributional approximations is difficult or impossible.
This article explains the usefulness and limitations of the bootstrap in
contexts of interest in econometrics. The presentation is informal and
expository. It provides an intuitive understanding of how the bootstrap works.
Mathematical details are available in references that are cited. | Bootstrap Methods in Econometrics | 2018-09-11 19:39:03 | Joel L. Horowitz | http://arxiv.org/abs/1809.04016v1, http://arxiv.org/pdf/1809.04016v1 | econ.EM |
28,892 | em | A method for implicit variable selection in mixture of experts frameworks is
proposed. We introduce a prior structure where information is taken from a set
of independent covariates. Robust class membership predictors are identified
using a normal gamma prior. The resulting model setup is used in a finite
mixture of Bernoulli distributions to find homogeneous clusters of women in
Mozambique based on their information sources on HIV. Fully Bayesian inference
is carried out via the implementation of a Gibbs sampler. | Bayesian shrinkage in mixture of experts models: Identifying robust determinants of class membership | 2018-09-13 12:30:21 | Gregor Zens | http://arxiv.org/abs/1809.04853v2, http://arxiv.org/pdf/1809.04853v2 | econ.EM |
28,893 | em | Time averaging has been the traditional approach to handle mixed sampling
frequencies. However, it ignores information possibly embedded in the
high-frequency data. Mixed data sampling (MIDAS) regression models provide a concise way
to utilize the additional information in high-frequency variables. In this
paper, we propose a specification test to choose between time averaging and
MIDAS models, based on a Durbin-Wu-Hausman test. In particular, a set of
instrumental variables is proposed and theoretically validated when the
frequency ratio is large. As a result, our method tends to be more powerful
than existing methods, as reconfirmed through the simulations. | On the Choice of Instruments in Mixed Frequency Specification Tests | 2018-09-14 19:59:44 | Yun Liu, Yeonwoo Rho | http://arxiv.org/abs/1809.05503v1, http://arxiv.org/pdf/1809.05503v1 | econ.EM |
28,894 | em | This article deals with a simple issue: if we have grouped data with a binary
dependent variable and want to include fixed effects (group specific
intercepts) in the specification, is Ordinary Least Squares (OLS) in any way
superior to a (conditional) logit form? In particular, what are the
consequences of using OLS instead of a fixed effects logit model with respect
to the latter dropping all units which show no variability in the dependent
variable while the former allows for estimation using all units. First, we show
that the discussion of the incidental parameters problem is based on an
assumption about the kinds of data being studied; for what appears to be the
common use of fixed effect models in political science the incidental
parameters issue is illusory. Turning to linear models, we see that OLS yields
a linear combination of the estimates for the units with and without variation
in the dependent variable, and so the coefficient estimates must be carefully
interpreted. The article then compares two methods of estimating logit models
with fixed effects, and shows that the Chamberlain conditional logit is as good
as or better than a logit analysis which simply includes group specific
intercepts (even though the conditional logit technique was designed to deal
with the incidental parameters problem!). Related to this, the article
discusses the estimation of marginal effects using both OLS and logit. While it
appears that a form of logit with fixed effects can be used to estimate
marginal effects, this method can be improved by starting with conditional
logit and then using those parameter estimates to constrain the logit with
fixed effects model. This method produces estimates of sample average marginal
effects that are at least as good as OLS, and much better when group size is
small or the number of groups is large. | Estimating grouped data models with a binary dependent variable and fixed effects: What are the issues | 2018-09-18 05:25:25 | Nathaniel Beck | http://arxiv.org/abs/1809.06505v1, http://arxiv.org/pdf/1809.06505v1 | econ.EM |
28,895 | em | We provide new results for nonparametric identification, estimation, and
inference of causal effects using `proxy controls': observables that are noisy
but informative proxies for unobserved confounding factors. Our analysis
applies to cross-sectional settings but is particularly well-suited to panel
models. Our identification results motivate a simple and `well-posed'
nonparametric estimator. We derive convergence rates for the estimator and
construct uniform confidence bands with asymptotically correct size. In panel
settings, our methods provide a novel approach to the difficult problem of
identification with non-separable, general heterogeneity and fixed $T$. In
panels, observations from different periods serve as proxies for unobserved
heterogeneity and our key identifying assumptions follow from restrictions on
the serial dependence structure. We apply our methods to two empirical
settings. We estimate consumer demand counterfactuals using panel data and we
estimate causal effects of grade retention on cognitive performance. | Proxy Controls and Panel Data | 2018-09-30 03:38:11 | Ben Deaner | http://arxiv.org/abs/1810.00283v8, http://arxiv.org/pdf/1810.00283v8 | econ.EM |
28,896 | em | We consider the problem of regression with selectively observed covariates in
a nonparametric framework. Our approach relies on instrumental variables that
explain variation in the latent covariates but have no direct effect on
selection. The regression function of interest is shown to be a weighted
version of observed conditional expectation where the weighting function is a
fraction of selection probabilities. Nonparametric identification of the
fractional probability weight (FPW) function is achieved via a partial
completeness assumption. We provide primitive functional form assumptions for
partial completeness to hold. The identification result is constructive for the
FPW series estimator. We derive the rate of convergence and also the pointwise
asymptotic distribution. In both cases, the asymptotic performance of the FPW
series estimator does not suffer from the inverse problem which derives from
the nonparametric instrumental variable approach. In a Monte Carlo study, we
analyze the finite sample properties of our estimator and we compare our
approach to inverse probability weighting, which can be used alternatively for
unconditional moment estimation. In the empirical part, we consider two
different applications. We estimate the association between income and health
using linked data from the SHARE survey and administrative pension information
and use pension entitlements as an instrument. In the second application we
revisit the question how income affects the demand for housing based on data
from the German Socio-Economic Panel Study (SOEP). In this application we use
regional income information on the residential block level as an instrument. In
both applications we show that income is selectively missing and we demonstrate
that standard methods that do not account for the nonrandom selection process
lead to significantly biased estimates for individuals with low income. | Nonparametric Regression with Selectively Missing Covariates | 2018-09-30 18:52:54 | Christoph Breunig, Peter Haan | http://arxiv.org/abs/1810.00411v4, http://arxiv.org/pdf/1810.00411v4 | econ.EM |
28,897 | em | The intention of this paper is to discuss the mathematical model of causality
introduced by C.W.J. Granger in 1969. Granger's model of causality has become
well known and is often used in various econometric models describing
causal systems, e.g., between commodity prices and exchange rates.
Our paper presents a new mathematical model of causality between two measured
objects. We have slightly modified the well-known Kolmogorovian probability
model. In particular, we use the horizontal sum of set $\sigma$-algebras
instead of their direct product. | Granger causality on horizontal sum of Boolean algebras | 2018-10-03 12:27:43 | M. Bohdalová, M. Kalina, O. Nánásiová | http://arxiv.org/abs/1810.01654v1, http://arxiv.org/pdf/1810.01654v1 | econ.EM |
28,898 | em | Explanatory variables in a predictive regression typically exhibit low signal
strength and various degrees of persistence. Variable selection in such a
context is of great importance. In this paper, we explore the pitfalls and
possibilities of the LASSO methods in this predictive regression framework. In
the presence of stationary, local unit root, and cointegrated predictors, we
show that the adaptive LASSO cannot asymptotically eliminate all cointegrating
variables with zero regression coefficients. This new finding motivates a novel
post-selection adaptive LASSO, which we call the twin adaptive LASSO (TAlasso),
to restore variable selection consistency. Accommodating the system of
heterogeneous regressors, TAlasso achieves the well-known oracle property. In
contrast, conventional LASSO fails to attain coefficient estimation consistency
and variable screening in all components simultaneously. We apply these LASSO
methods to evaluate the short- and long-horizon predictability of S\&P 500
excess returns. | On LASSO for Predictive Regression | 2018-10-07 16:19:07 | Ji Hyung Lee, Zhentao Shi, Zhan Gao | http://arxiv.org/abs/1810.03140v4, http://arxiv.org/pdf/1810.03140v4 | econ.EM |
28,899 | em | This paper proposes a new approach to obtain uniformly valid inference for
linear functionals or scalar subvectors of a partially identified parameter
defined by linear moment inequalities. The procedure amounts to bootstrapping
the value functions of randomly perturbed linear programming problems, and does
not require the researcher to grid over the parameter space. The low-level
conditions for uniform validity rely on genericity results for linear programs.
The unconventional perturbation approach produces a confidence set with a
coverage probability of 1 over the identified set, but obtains exact coverage
on an outer set, is valid under weak assumptions, and is computationally simple
to implement. | Simple Inference on Functionals of Set-Identified Parameters Defined by Linear Moments | 2018-10-07 20:03:14 | JoonHwan Cho, Thomas M. Russell | http://arxiv.org/abs/1810.03180v10, http://arxiv.org/pdf/1810.03180v10 | econ.EM |
28,900 | em | In this paper we consider the properties of the Pesaran (2004, 2015a) CD test
for cross-section correlation when applied to residuals obtained from panel
data models with many estimated parameters. We show that the presence of
period-specific parameters leads the CD test statistic to diverge as the
length of the time dimension of the sample grows. This result holds even if cross-section
dependence is correctly accounted for and hence constitutes an example of the
Incidental Parameters Problem. The relevance of this problem is investigated
both for the classical Time Fixed Effects estimator as well as the Common
Correlated Effects estimator of Pesaran (2006). We suggest a weighted CD test
statistic which re-establishes standard normal inference under the null
hypothesis. Given the widespread use of the CD test statistic to test for
remaining cross-section correlation, our results have far reaching implications
for empirical researchers. | The Incidental Parameters Problem in Testing for Remaining Cross-section Correlation | 2018-10-09 00:48:52 | Arturas Juodis, Simon Reese | http://arxiv.org/abs/1810.03715v4, http://arxiv.org/pdf/1810.03715v4 | econ.EM |
28,901 | em | This paper studies nonparametric identification and counterfactual bounds for
heterogeneous firms that can be ranked in terms of productivity. Our approach
works when quantities and prices are latent, rendering standard approaches
inapplicable. Instead, we require observation of profits or other
optimizing-values such as costs or revenues, and either prices or price proxies
of flexibly chosen variables. We extend classical duality results for
price-taking firms to a setup with discrete heterogeneity, endogeneity, and
limited variation in possibly latent prices. Finally, we show that convergence
results for nonparametric estimators may be directly converted to convergence
results for production sets. | Prices, Profits, Proxies, and Production | 2018-10-10 21:15:29 | Victor H. Aguiar, Nail Kashaev, Roy Allen | http://arxiv.org/abs/1810.04697v4, http://arxiv.org/pdf/1810.04697v4 | econ.EM |
28,902 | em | A long-standing question about consumer behavior is whether individuals'
observed purchase decisions satisfy the revealed preference (RP) axioms of the
utility maximization theory (UMT). Researchers using survey or experimental
panel data sets on prices and consumption to answer this question face the
well-known problem of measurement error. We show that ignoring measurement
error in the RP approach may lead to overrejection of the UMT. To solve this
problem, we propose a new statistical RP framework for consumption panel data
sets that allows for testing the UMT in the presence of measurement error. Our
test is applicable to all consumer models that can be characterized by their
first-order conditions. Our approach is nonparametric, allows for unrestricted
heterogeneity in preferences, and requires only a centering condition on
measurement error. We develop two applications that provide new evidence about
the UMT. First, we find support in a survey data set for the dynamic and
time-consistent UMT in single-individual households, in the presence of
\emph{nonclassical} measurement error in consumption. In the second
application, we cannot reject the static UMT in a widely used experimental data
set in which measurement error in prices is assumed to be the result of price
misperception due to the experimental design. The first finding stands in
contrast to the conclusions drawn from the deterministic RP test of Browning
(1989). The second finding reverses the conclusions drawn from the
deterministic RP test of Afriat (1967) and Varian (1982). | Stochastic Revealed Preferences with Measurement Error | 2018-10-12 02:25:24 | Victor H. Aguiar, Nail Kashaev | http://arxiv.org/abs/1810.05287v2, http://arxiv.org/pdf/1810.05287v2 | econ.EM |
28,903 | em | In this paper, we study estimation of nonlinear models with cross sectional
data using two-step generalized estimating equations (GEE) in the quasi-maximum
likelihood estimation (QMLE) framework. In the interest of improving
efficiency, we propose a grouping estimator to account for the potential
spatial correlation in the underlying innovations. We use a Poisson model and a
Negative Binomial II model for count data and a Probit model for binary
response data to demonstrate the GEE procedure. Under mild weak dependency
assumptions, results on estimation consistency and asymptotic normality are
provided. Monte Carlo simulations show the efficiency gains of our approach in
comparison with other estimation methods for count data and binary response
data. Finally, we apply the GEE approach to study the determinants of the
inflow of foreign direct investment (FDI) to China. | Using generalized estimating equations to estimate nonlinear models with spatial data | 2018-10-13 15:58:41 | Cuicui Lu, Weining Wang, Jeffrey M. Wooldridge | http://arxiv.org/abs/1810.05855v1, http://arxiv.org/pdf/1810.05855v1 | econ.EM |
28,925 | em | Nonparametric Instrumental Variables (NPIV) analysis is based on a
conditional moment restriction. We show that if this moment condition is even
slightly misspecified, say because instruments are not quite valid, then NPIV
estimates can be subject to substantial asymptotic error and the identified set
under a relaxed moment condition may be large. Imposing strong a priori
smoothness restrictions mitigates the problem but induces bias if the
restrictions are too strong. In order to manage this trade-off we develop
methods for empirical sensitivity analysis and apply them to the consumer
demand data previously analyzed in Blundell (2007) and Horowitz (2011). | Nonparametric Instrumental Variables Estimation Under Misspecification | 2019-01-04 21:52:59 | Ben Deaner | http://arxiv.org/abs/1901.01241v7, http://arxiv.org/pdf/1901.01241v7 | econ.EM |
28,904 | em | This paper develops a consistent heteroskedasticity robust Lagrange
Multiplier (LM) type specification test for semiparametric conditional mean
models. Consistency is achieved by turning a conditional moment restriction
into a growing number of unconditional moment restrictions using series
methods. The proposed test statistic is straightforward to compute and is
asymptotically standard normal under the null. Compared with the earlier
literature on series-based specification tests in parametric models, I rely on
the projection property of series estimators and derive a different
normalization of the test statistic. Compared with the recent test in Gupta
(2018), I use a different way of accounting for heteroskedasticity. I
demonstrate using Monte Carlo studies that my test has superior finite sample
performance compared with the existing tests. I apply the test to one of the
semiparametric gasoline demand specifications from Yatchew and No (2001) and
find no evidence against it. | A Consistent Heteroskedasticity Robust LM Type Specification Test for Semiparametric Models | 2018-10-17 18:37:02 | Ivan Korolev | http://arxiv.org/abs/1810.07620v3, http://arxiv.org/pdf/1810.07620v3 | econ.EM |
28,905 | em | This study considers treatment effect models in which others' treatment
decisions can affect both one's own treatment and outcome. Focusing on the case
of two-player interactions, we formulate treatment decision behavior as a
complete information game with multiple equilibria. Using a latent index
framework and assuming a stochastic equilibrium selection, we prove that the
marginal treatment effect from one's own treatment and that from the partner
are identifiable on the conditional supports of certain threshold variables
determined through the game model. Based on our constructive identification
results, we propose a two-step semiparametric procedure for estimating the
marginal treatment effects using series approximation. We show that the
proposed estimator is uniformly consistent and asymptotically normally
distributed. As an empirical illustration, we investigate the impacts of risky
behaviors on adolescents' academic performance. | Treatment Effect Models with Strategic Interaction in Treatment Decisions | 2018-10-19 06:51:42 | Tadao Hoshino, Takahide Yanagi | http://arxiv.org/abs/1810.08350v11, http://arxiv.org/pdf/1810.08350v11 | econ.EM |
28,906 | em | In this paper we include dependency structures for electricity price
forecasting and forecasting evaluation. We work with off-peak and peak time
series from the German-Austrian day-ahead price, hence we analyze bivariate
data. We first estimate the mean of the two time series, and then in a second
step we estimate the residuals. The mean equation is estimated by OLS and
elastic net and the residuals are estimated by maximum likelihood. Our
contribution is to include a bivariate jump component in a mean-reverting jump
diffusion model for the residuals. The models' forecasts are evaluated using
four different criteria, including the energy score to measure whether the
correlation structure between the time series is properly included or not. In
the results it is observed that the models with bivariate jumps provide better
results with the energy score, which means that it is important to consider
this structure in order to properly forecast correlated time series. | Probabilistic Forecasting in Day-Ahead Electricity Markets: Simulating Peak and Off-Peak Prices | 2018-10-19 12:27:16 | Peru Muniain, Florian Ziel | http://dx.doi.org/10.1016/j.ijforecast.2019.11.006, http://arxiv.org/abs/1810.08418v2, http://arxiv.org/pdf/1810.08418v2 | econ.EM |
28,907 | em | We propose a novel two-regime regression model where regime switching is
driven by a vector of possibly unobservable factors. When the factors are
latent, we estimate them by the principal component analysis of a panel data
set. We show that the optimization problem can be reformulated as mixed integer
optimization, and we present two alternative computational algorithms. We
derive the asymptotic distribution of the resulting estimator under the scheme
that the threshold effect shrinks to zero. In particular, we establish a phase
transition that describes the effect of first-stage factor estimation as the
cross-sectional dimension of panel data increases relative to the time-series
dimension. Moreover, we develop bootstrap inference and illustrate our methods
via numerical studies. | Factor-Driven Two-Regime Regression | 2018-10-26 00:12:52 | Sokbae Lee, Yuan Liao, Myung Hwan Seo, Youngki Shin | http://dx.doi.org/10.1214/20-AOS2017, http://arxiv.org/abs/1810.11109v4, http://arxiv.org/pdf/1810.11109v4 | econ.EM |
28,908 | em | Let Y be an outcome of interest, X a vector of treatment measures, and W a
vector of pre-treatment control variables. Here X may include (combinations of)
continuous, discrete, and/or non-mutually exclusive "treatments". Consider the
linear regression of Y onto X in a subpopulation homogenous in W = w (formally
a conditional linear predictor). Let $b_0(w)$ be the coefficient vector on X in
this regression. We introduce a semiparametrically efficient estimate of the
average $\beta_0 = E[b_0(W)]$. When X is binary-valued (multi-valued) our procedure
recovers the (a vector of) average treatment effect(s). When X is
continuously-valued, or consists of multiple non-exclusive treatments, our
estimand coincides with the average partial effect (APE) of X on Y when the
underlying potential response function is linear in X, but otherwise
heterogenous across agents. When the potential response function takes a
general nonlinear/heterogenous form, and X is continuously-valued, our
procedure recovers a weighted average of the gradient of this response across
individuals and values of X. We provide a simple, and semiparametrically
efficient, method of covariate adjustment for settings with complicated
treatment regimes. Our method generalizes familiar methods of covariate
adjustment used for program evaluation as well as methods of semiparametric
regression (e.g., the partially linear regression model). | Semiparametrically efficient estimation of the average linear regression function | 2018-10-30 06:26:33 | Bryan S. Graham, Cristine Campos de Xavier Pinto | http://arxiv.org/abs/1810.12511v1, http://arxiv.org/pdf/1810.12511v1 | econ.EM |
28,909 | em | We investigate the finite sample performance of causal machine learning
estimators for heterogeneous causal effects at different aggregation levels. We
employ an Empirical Monte Carlo Study that relies on arguably realistic data
generation processes (DGPs) based on actual data. We consider 24 different
DGPs, eleven different causal machine learning estimators, and three
aggregation levels of the estimated effects. In the main DGPs, we allow for
selection into treatment based on a rich set of observable covariates. We
provide evidence that the estimators can be categorized into three groups. The
first group performs consistently well across all DGPs and aggregation levels.
These estimators have multiple steps to account for the selection into the
treatment and the outcome process. The second group shows competitive
performance only for particular DGPs. The third group is clearly outperformed
by the other estimators. | Machine Learning Estimation of Heterogeneous Causal Effects: Empirical Monte Carlo Evidence | 2018-10-31 15:10:25 | Michael C. Knaus, Michael Lechner, Anthony Strittmatter | http://dx.doi.org/10.1093/ectj/utaa014, http://arxiv.org/abs/1810.13237v2, http://arxiv.org/pdf/1810.13237v2 | econ.EM |
28,911 | em | This article proposes doubly robust estimators for the average treatment
effect on the treated (ATT) in difference-in-differences (DID) research
designs. In contrast to alternative DID estimators, the proposed estimators are
consistent if either (but not necessarily both) a propensity score or an
outcome regression working model is correctly specified. We also derive the
semiparametric efficiency bound for the ATT in DID designs when either panel or
repeated cross-section data are available, and show that our proposed
estimators attain the semiparametric efficiency bound when the working models
are correctly specified. Furthermore, we quantify the potential efficiency
gains of having access to panel data instead of repeated cross-section data.
Finally, by paying particular attention to the estimation method used to
estimate the nuisance parameters, we show that one can sometimes construct
doubly robust DID estimators for the ATT that are also doubly robust for
inference. Simulation studies and an empirical application illustrate the
desirable finite-sample performance of the proposed estimators. Open-source
software for implementing the proposed policy evaluation tools is available. | Doubly Robust Difference-in-Differences Estimators | 2018-11-30 00:18:26 | Pedro H. C. Sant'Anna, Jun B. Zhao | http://arxiv.org/abs/1812.01723v3, http://arxiv.org/pdf/1812.01723v3 | econ.EM |
28,912 | em | This paper examines a commonly used measure of persuasion whose precise
interpretation has been obscure in the literature. By using the potential
outcome framework, we define the causal persuasion rate by a proper conditional
probability of taking the action of interest with a persuasive message
conditional on not taking the action without the message. We then formally
study identification under empirically relevant data scenarios and show that
the commonly adopted measure generally does not estimate, but often overstates,
the causal rate of persuasion. We discuss several new parameters of interest
and provide practical methods for causal inference. | Identifying the Effect of Persuasion | 2018-12-06 03:20:35 | Sung Jae Jun, Sokbae Lee | http://arxiv.org/abs/1812.02276v6, http://arxiv.org/pdf/1812.02276v6 | econ.EM |
28,913 | em | We develop a uniform test for detecting and dating explosive behavior of a
strictly stationary GARCH$(r,s)$ (generalized autoregressive conditional
heteroskedasticity) process. Namely, we test the null hypothesis of a globally
stable GARCH process with constant parameters against an alternative where
there is an 'abnormal' period with changed parameter values. During this
period, the change may lead to an explosive behavior of the volatility process.
It is assumed that both the magnitude and the timing of the breaks are unknown.
We develop a double supreme test for the existence of a break, and then provide
an algorithm to identify the period of change. Our theoretical results hold
under mild moment assumptions on the innovations of the GARCH process.
Technically, the existing properties for the QMLE in the GARCH model need to be
reinvestigated to hold uniformly over all possible periods of change. The key
results involve a uniform weak Bahadur representation for the estimated
parameters, which leads to weak convergence of the test statistic to the
supremum of a Gaussian process. In simulations we show that the test has good
size and power for reasonably large time series lengths. We apply the test to
Apple asset returns and Bitcoin returns. | A supreme test for periodic explosive GARCH | 2018-12-09 15:51:14 | Stefan Richter, Weining Wang, Wei Biao Wu | http://arxiv.org/abs/1812.03475v1, http://arxiv.org/pdf/1812.03475v1 | econ.EM |
28,914 | em | Recent studies have proposed causal machine learning (CML) methods to
estimate conditional average treatment effects (CATEs). In this study, I
investigate whether CML methods add value compared to conventional CATE
estimators by re-evaluating Connecticut's Jobs First welfare experiment. This
experiment entails a mix of positive and negative work incentives. Previous
studies show that it is hard to tackle the effect heterogeneity of Jobs First
by means of CATEs. I report evidence that CML methods can provide support for
the theoretical labor supply predictions. Furthermore, I document reasons why
some conventional CATE estimators fail and discuss the limitations of CML
methods. | What Is the Value Added by Using Causal Machine Learning Methods in a Welfare Experiment Evaluation? | 2018-12-16 23:24:02 | Anthony Strittmatter | http://arxiv.org/abs/1812.06533v3, http://arxiv.org/pdf/1812.06533v3 | econ.EM |
28,915 | em | This paper explores the use of a fuzzy regression discontinuity design where
multiple treatments are applied at the threshold. The identification results
show that, under the very strong assumption that the change in the probability
of treatment at the cutoff is equal across treatments, a
difference-in-discontinuities estimator identifies the treatment effect of
interest. The point estimates of the treatment effect using a simple fuzzy
difference-in-discontinuities design are biased if the change in the
probability of a treatment applying at the cutoff differs across treatments.
Modifications of the fuzzy difference-in-discontinuities approach that rely on
milder assumptions are also proposed. Our results suggest caution is needed
when applying before-and-after methods in the presence of fuzzy
discontinuities. Using data from the National Health Interview Survey, we apply
this new identification strategy to evaluate the causal effect of the
Affordable Care Act (ACA) on older Americans' health care access and
utilization. | Fuzzy Difference-in-Discontinuities: Identification Theory and Application to the Affordable Care Act | 2018-12-17 00:27:54 | Hector Galindo-Silva, Nibene Habib Some, Guy Tchuente | http://arxiv.org/abs/1812.06537v3, http://arxiv.org/pdf/1812.06537v3 | econ.EM |
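The numerical logic of a difference-in-discontinuities estimate can be sketched with naive local means around the cutoff: the before/after difference of the outcome jump is scaled by the before/after difference of the jump in treatment take-up. The data, bandwidth, and function names below are illustrative assumptions; the paper's estimators rely on proper local polynomial methods.

```python
# Toy fuzzy difference-in-discontinuities estimate with local means.
import numpy as np

def jump(x, v, cutoff=0.0, h=0.5):
    """Discontinuity in the mean of v at the cutoff (naive local means)."""
    right = v[(x >= cutoff) & (x < cutoff + h)].mean()
    left = v[(x < cutoff) & (x >= cutoff - h)].mean()
    return right - left

def fuzzy_diff_in_disc(y, d, x, post):
    num = jump(x[post], y[post]) - jump(x[~post], y[~post])
    den = jump(x[post], d[post]) - jump(x[~post], d[~post])
    return num / den

rng = np.random.default_rng(1)
n = 20000
x = rng.uniform(-1, 1, n)                    # running variable (e.g., age)
post = rng.random(n) < 0.5                   # period after the policy applies
d = (rng.random(n) < 0.2 + 0.5 * ((x >= 0) & post)).astype(float)
y = 1.5 * d + x + rng.normal(size=n)         # true effect = 1.5
# Differencing the pre-period jump out removes the common local trend bias.
print(fuzzy_diff_in_disc(y, d, x, post))     # roughly 1.5
```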
28,916 | em | We propose convenient inferential methods for potentially nonstationary
multivariate unobserved components models with fractional integration and
cointegration. Based on finite-order ARMA approximations in the state space
representation, maximum likelihood estimation can make use of the EM algorithm
and related techniques. The approximation outperforms the frequently used
autoregressive or moving average truncation, both in terms of computational
costs and with respect to approximation quality. Monte Carlo simulations reveal
good estimation properties of the proposed methods for processes of different
complexity and dimension. | Approximate State Space Modelling of Unobserved Fractional Components | 2018-12-21 17:25:45 | Tobias Hartl, Roland Weigand | http://dx.doi.org/10.1080/07474938.2020.1841444, http://arxiv.org/abs/1812.09142v3, http://arxiv.org/pdf/1812.09142v3 | econ.EM |
28,917 | em | We propose a setup for fractionally cointegrated time series which is
formulated in terms of latent integrated and short-memory components. It
accommodates nonstationary processes with different fractional orders and
cointegration of different strengths and is applicable in high-dimensional
settings. In an application to realized covariance matrices, we find that
orthogonal short- and long-memory components provide a reasonable fit and
competitive out-of-sample performance compared to several competing methods. | Multivariate Fractional Components Analysis | 2018-12-21 17:33:27 | Tobias Hartl, Roland Weigand | http://arxiv.org/abs/1812.09149v2, http://arxiv.org/pdf/1812.09149v2 | econ.EM |
28,919 | em | Consider a setting in which a policy maker assigns subjects to treatments,
observing each outcome before the next subject arrives. Initially, it is
unknown which treatment is best, but the sequential nature of the problem
permits learning about the effectiveness of the treatments. While the
multi-armed-bandit literature has shed much light on the situation when the
policy maker compares the effectiveness of the treatments through their mean,
much less is known about other targets. This is restrictive, because a cautious
decision maker may prefer to target a robust location measure such as a
quantile or a trimmed mean. Furthermore, socio-economic decision making often
requires targeting purpose-specific characteristics of the outcome
distribution, such as its inherent degree of inequality, welfare or poverty. In
the present paper we introduce and study sequential learning algorithms when
the distributional characteristic of interest is a general functional of the
outcome distribution. Minimax expected regret optimality results are obtained
within the subclass of explore-then-commit policies, and for the unrestricted
class of all policies. | Functional Sequential Treatment Allocation | 2018-12-22 02:18:13 | Anders Bredahl Kock, David Preinerstorfer, Bezirgen Veliyev | http://arxiv.org/abs/1812.09408v8, http://arxiv.org/pdf/1812.09408v8 | econ.EM |
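A minimal example of the paper's setting is an explore-then-commit policy whose target is a quantile rather than the mean; the two "treatments" below are illustrative. A cautious decision maker targeting the lower quartile prefers the safer arm even though the other arm has the higher mean.

```python
# Explore-then-commit when the target is a quantile of the outcome.
import numpy as np

rng = np.random.default_rng(0)
arms = [lambda s: rng.normal(0.0, 1.0, s),   # treatment 1: safe
        lambda s: rng.normal(0.1, 3.0, s)]   # treatment 2: higher mean, riskier
tau, n_explore = 0.25, 500                   # target the lower quartile

# Explore: assign n_explore subjects to each treatment and record outcomes.
samples = [draw(n_explore) for draw in arms]
estimates = [np.quantile(s, tau) for s in samples]

# Commit: assign all remaining subjects to the empirically best treatment.
best = int(np.argmax(estimates))
print(f"commit to arm {best}; estimated {tau}-quantiles: {np.round(estimates, 3)}")
```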
28,920 | em | In many applications of convergence testing, the number of cross-sectional units is large and the number of time periods is small. In these
situations asymptotic tests based on an omnibus null hypothesis are
characterised by a number of problems. In this paper we propose a multiple
pairwise comparisons method based on a recursive bootstrap to test for
convergence with no prior information on the composition of convergence clubs.
Monte Carlo simulations suggest that our bootstrap-based test performs well to
correctly identify convergence clubs when compared with other similar tests
that rely on asymptotic arguments. Across a potentially large number of
regions, using both cross-country and regional data for the European Union, we
find that the size distortion, which afflicts standard tests and results in a bias towards finding less convergence, is ameliorated when we utilise our
bootstrap test. | Robust Tests for Convergence Clubs | 2018-12-22 15:11:04 | Luisa Corrado, Melvyn Weeks, Thanasis Stengos, M. Ege Yazgan | http://arxiv.org/abs/1812.09518v1, http://arxiv.org/pdf/1812.09518v1 | econ.EM |
28,921 | em | We propose a practical and robust method for making inferences on average
treatment effects estimated by synthetic controls. We develop a $K$-fold
cross-fitting procedure for bias-correction. To avoid the difficult estimation
of the long-run variance, inference is based on a self-normalized
$t$-statistic, which has an asymptotically pivotal $t$-distribution. Our
$t$-test is easy to implement, provably robust against misspecification, valid
with non-stationary data, and has excellent small-sample
performance. Compared to difference-in-differences, our method often yields
more than 50% shorter confidence intervals and is robust to violations of
parallel trends assumptions. An $\texttt{R}$-package for implementing our
methods is available. | A $t$-test for synthetic controls | 2018-12-27 23:40:13 | Victor Chernozhukov, Kaspar Wuthrich, Yinchu Zhu | http://arxiv.org/abs/1812.10820v7, http://arxiv.org/pdf/1812.10820v7 | econ.EM |
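The two main ingredients, simplex-constrained synthetic control weights and a self-normalized t-statistic over post-treatment gaps, can be sketched on simulated factor-model data. This sketch omits the paper's K-fold cross-fitting bias correction and uses generic optimization; names and the DGP are illustrative.

```python
# Synthetic control weights plus a self-normalized t-statistic (sketch).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T0, T1, J = 80, 20, 30                      # pre/post periods, control units
f = rng.normal(size=T0 + T1)                # common factor
lam = rng.uniform(0.5, 1.5, J)              # control units' factor loadings
Y0 = np.outer(f, lam) + rng.normal(scale=0.2, size=(T0 + T1, J))
y1 = f + rng.normal(scale=0.2, size=T0 + T1)
y1[T0:] += 1.0                              # true effect = 1 in post period

def sc_loss(w):                             # pre-period fit of the SC unit
    return np.sum((y1[:T0] - Y0[:T0] @ w) ** 2)

res = minimize(sc_loss, np.full(J, 1 / J), bounds=[(0, 1)] * J,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
gaps = y1[T0:] - Y0[T0:] @ res.x            # post-treatment gaps
t_stat = np.sqrt(T1) * gaps.mean() / gaps.std(ddof=1)
print(f"ATT ~ {gaps.mean():.2f}, self-normalized t ~ {t_stat:.2f}")
```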
28,922 | em | The instrumental variable quantile regression (IVQR) model (Chernozhukov and
Hansen, 2005) is a popular tool for estimating causal quantile effects with
endogenous covariates. However, estimation is complicated by the non-smoothness
and non-convexity of the IVQR GMM objective function. This paper shows that the
IVQR estimation problem can be decomposed into a set of conventional quantile
regression sub-problems which are convex and can be solved efficiently. This
reformulation leads to new identification results and to fast, easy to
implement, and tuning-free estimators that do not require the availability of
high-level "black box" optimization routines. | Decentralization Estimators for Instrumental Variable Quantile Regression Models | 2018-12-28 11:50:33 | Hiroaki Kaido, Kaspar Wuthrich | http://arxiv.org/abs/1812.10925v4, http://arxiv.org/pdf/1812.10925v4 | econ.EM |
28,923 | em | Predicting future successful designs and corresponding market opportunity is
a fundamental goal of product design firms. There is accordingly a long history
of quantitative approaches that aim to capture diverse consumer preferences,
and then translate those preferences to corresponding "design gaps" in the
market. We extend this work by developing a deep learning approach to predict
design gaps in the market. These design gaps represent clusters of designs that
do not yet exist, but are predicted to be both (1) highly preferred by
consumers, and (2) feasible to build under engineering and manufacturing
constraints. This approach is tested on the entire U.S. automotive market using millions of real purchase records. We retroactively predict design gaps in the
market, and compare predicted design gaps with actual known successful designs.
Our preliminary results give evidence that it may be possible to predict design
gaps, suggesting this approach has promise for early identification of market
opportunity. | Predicting "Design Gaps" in the Market: Deep Consumer Choice Models under Probabilistic Design Constraints | 2018-12-28 18:56:46 | Alex Burnap, John Hauser | http://arxiv.org/abs/1812.11067v1, http://arxiv.org/pdf/1812.11067v1 | econ.EM |
28,924 | em | This paper studies identification and estimation of a class of dynamic models
in which the decision maker (DM) is uncertain about the data-generating
process. The DM surrounds a benchmark model that he or she fears is
misspecified by a set of models. Decisions are evaluated under a worst-case
model delivering the lowest utility among all models in this set. The DM's
benchmark model and preference parameters are jointly underidentified. With the
benchmark model held fixed, primitive conditions are established for
identification of the DM's worst-case model and preference parameters. The key
step in the identification analysis is to establish existence and uniqueness of
the DM's continuation value function allowing for unbounded state space and
unbounded utilities. To do so, fixed-point results are derived for monotone,
convex operators that act on a Banach space of thin-tailed functions arising
naturally from the structure of the continuation value recursion. The
fixed-point results are quite general; applications to models with learning and
Rust-type dynamic discrete choice models are also discussed. For estimation, a
perturbation result is derived which provides a necessary and sufficient
condition for consistent estimation of continuation values and the worst-case
model. The result also allows convergence rates of estimators to be
characterized. An empirical application studies an endowment economy where the
DM's benchmark model may be interpreted as an aggregate of experts' forecasting
models. The application reveals time-variation in the way the DM
pessimistically distorts benchmark probabilities. Consequences for asset
pricing are explored and connections are drawn with the literature on
macroeconomic uncertainty. | Dynamic Models with Robust Decision Makers: Identification and Estimation | 2018-12-29 02:36:41 | Timothy M. Christensen | http://arxiv.org/abs/1812.11246v3, http://arxiv.org/pdf/1812.11246v3 | econ.EM |
28,926 | em | This paper introduces a flexible regularization approach that reduces point
estimation risk of group means stemming from e.g. categorical regressors,
(quasi-)experimental data or panel data models. The loss function is penalized
by adding weighted squared l2-norm differences between group location
parameters and informative first-stage estimates. Under quadratic loss, the
penalized estimation problem has a simple interpretable closed-form solution
that nests methods established in the literature on ridge regression,
discretized support smoothing kernels and model averaging methods. We derive
risk-optimal penalty parameters and propose a plug-in approach for estimation.
The large sample properties are analyzed in an asymptotic local to zero
framework by introducing a class of sequences for close and distant systems of
locations that is sufficient for describing a large range of data generating
processes. We provide the asymptotic distributions of the shrinkage estimators
under different penalization schemes. The proposed plug-in estimator uniformly
dominates the ordinary least squares in terms of asymptotic risk if the number
of groups is larger than three. Monte Carlo simulations reveal robust
improvements over standard methods in finite samples. Real data examples of
estimating time trends in a panel and a difference-in-differences study
illustrate potential applications. | Shrinkage for Categorical Regressors | 2019-01-07 19:17:23 | Phillip Heiler, Jana Mareckova | http://arxiv.org/abs/1901.01898v1, http://arxiv.org/pdf/1901.01898v1 | econ.EM |
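Because the penalized problem has a closed form under quadratic loss, the estimator takes only a few lines: each group mean is shrunk toward a first-stage estimate with strength governed by the penalty weight. The fixed penalty and the grand-mean target below are illustrative assumptions; the paper derives risk-optimal, plug-in choices.

```python
# Closed-form shrinkage of group means toward a first-stage estimate.
import numpy as np

rng = np.random.default_rng(0)
groups = rng.integers(0, 10, size=300)
y = rng.normal(loc=groups * 0.1, scale=2.0)

lam = 5.0                                     # penalty weight (illustrative)
target = y.mean()                             # first-stage estimate: grand mean
n_g = np.bincount(groups)
ybar_g = np.bincount(groups, weights=y) / n_g

theta_hat = (n_g * ybar_g + lam * target) / (n_g + lam)
print(np.round(theta_hat, 2))                 # group means shrunk toward target
```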
28,927 | em | This article introduces lassopack, a suite of programs for regularized
regression in Stata. lassopack implements lasso, square-root lasso, elastic
net, ridge regression, adaptive lasso and post-estimation OLS. The methods are
suitable for the high-dimensional setting where the number of predictors $p$
may be large and possibly greater than the number of observations, $n$. We
offer three different approaches for selecting the penalization (`tuning')
parameters: information criteria (implemented in lasso2), $K$-fold
cross-validation and $h$-step ahead rolling cross-validation for cross-section,
panel and time-series data (cvlasso), and theory-driven (`rigorous')
penalization for the lasso and square-root lasso for cross-section and panel
data (rlasso). We discuss the theoretical framework and practical
considerations for each approach. We also present Monte Carlo results to
compare the performance of the penalization approaches. | lassopack: Model selection and prediction with regularized regression in Stata | 2019-01-16 20:30:27 | Achim Ahrens, Christian B. Hansen, Mark E. Schaffer | http://arxiv.org/abs/1901.05397v1, http://arxiv.org/pdf/1901.05397v1 | econ.EM |
28,928 | em | The maximum utility estimation proposed by Elliott and Lieli (2013) can be
viewed as cost-sensitive binary classification; thus, its in-sample overfitting
issue is similar to that of perceptron learning. A utility-maximizing
prediction rule (UMPR) is constructed to alleviate the in-sample overfitting of
the maximum utility estimation. We establish non-asymptotic upper bounds on the
difference between the maximal expected utility and the generalized expected
utility of the UMPR. Simulation results show that the UMPR with an appropriate
data-dependent penalty achieves larger generalized expected utility than common
estimators in the binary classification if the conditional probability of the
binary outcome is misspecified. | Model Selection in Utility-Maximizing Binary Prediction | 2019-03-02 18:02:50 | Jiun-Hua Su | http://dx.doi.org/10.1016/j.jeconom.2020.07.052, http://arxiv.org/abs/1903.00716v3, http://arxiv.org/pdf/1903.00716v3 | econ.EM |
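The cost-sensitive-classification view can be made concrete: given a payoff for every (action, outcome) pair, the utility-maximizing rule thresholds the conditional probability at a cutoff determined by the payoffs. The payoff matrix below is made up, and the sketch omits the data-dependent penalty that the UMPR adds to fight in-sample overfitting.

```python
# Optimal probability cutoff implied by a utility (payoff) matrix.
import numpy as np

# payoff[a, y]: utility of taking action a when the outcome is y
payoff = np.array([[0.0, -1.0],    # act 0: missing a true positive costs 1
                   [-0.2, 1.0]])   # act 1: false alarm costs 0.2, hit pays 1

# Act 1 is optimal iff expected utility of acting exceeds that of not acting,
# which reduces to P(y=1|x) > cutoff:
num = payoff[0, 0] - payoff[1, 0]
den = num + payoff[1, 1] - payoff[0, 1]
cutoff = num / den
print(f"optimal cutoff on P(y=1|x): {cutoff:.3f}")
```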
28,929 | em | We provide a finite sample inference method for the structural parameters of
a semiparametric binary response model under a conditional median restriction
originally studied by Manski (1975, 1985). Our inference method is valid for
any sample size and irrespective of whether the structural parameters are point
identified or partially identified, for example due to the lack of a
continuously distributed covariate with large support. Our inference approach
exploits distributional properties of observable outcomes conditional on the
observed sequence of exogenous variables. Moment inequalities conditional on
this size n sequence of exogenous covariates are constructed, and the test
statistic is a monotone function of violations of sample moment inequalities.
The critical value used for inference is provided by the appropriate quantile
of a known function of n independent Rademacher random variables. We
investigate power properties of the underlying test and provide simulation
studies to support the theoretical findings. | Finite Sample Inference for the Maximum Score Estimand | 2019-03-04 22:53:00 | Adam M. Rosen, Takuya Ura | http://arxiv.org/abs/1903.01511v2, http://arxiv.org/pdf/1903.01511v2 | econ.EM |
28,930 | em | A fundamental problem with nonlinear models is that maximum likelihood
estimates are not guaranteed to exist. Though nonexistence is a well known
problem in the binary choice literature, it presents significant challenges for
other models as well and is not as well understood in more general settings.
These challenges are only magnified for models that feature many fixed effects
and other high-dimensional parameters. We address the current ambiguity
surrounding this topic by studying the conditions that govern the existence of
estimates for (pseudo-)maximum likelihood estimators used to estimate a wide
class of generalized linear models (GLMs). We show that some, but not all, of
these GLM estimators can still deliver consistent estimates of at least some of
the linear parameters when these conditions fail to hold. We also demonstrate
how to verify these conditions in models with high-dimensional parameters, such
as panel data models with multiple levels of fixed effects. | Verifying the existence of maximum likelihood estimates for generalized linear models | 2019-03-05 05:18:49 | Sergio Correia, Paulo Guimarães, Thomas Zylkin | http://arxiv.org/abs/1903.01633v6, http://arxiv.org/pdf/1903.01633v6 | econ.EM |
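For the canonical logit case, nonexistence of the MLE is tied to (quasi-)complete separation, which can be diagnosed with a small linear program: the MLE fails to exist when a nonzero separating direction is feasible. The sketch below is that standard diagnostic, not the authors' more general GLM conditions.

```python
# LP-based separation check for a logit (heuristic sketch).
import numpy as np
from scipy.optimize import linprog

def separated(X, y):
    """True if some direction b satisfies s_i * x_i'b >= 0 for all i with
    at least one strict inequality, where s_i = 2*y_i - 1."""
    S = (2 * y - 1)[:, None] * X
    # Feasibility LP: S b >= 0 componentwise, sum(S b) = 1 rules out b = 0.
    res = linprog(c=np.zeros(X.shape[1]), A_ub=-S, b_ub=np.zeros(len(y)),
                  A_eq=S.sum(axis=0, keepdims=True), b_eq=[1.0],
                  bounds=[(None, None)] * X.shape[1])
    return res.status == 0

X = np.array([[1, -2.0], [1, -1.0], [1, 1.0], [1, 2.0]])
y = np.array([0, 0, 1, 1])                 # perfectly separated on x2
print(separated(X, y))                     # True -> the logit MLE does not exist
```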
28,931 | em | Bojinov & Shephard (2019) defined potential outcome time series to
nonparametrically measure dynamic causal effects in time series experiments.
Four innovations are developed in this paper: "instrumental paths," treatments
which are "shocks," "linear potential outcomes" and the "causal response
function." Potential outcome time series are then used to provide a
nonparametric causal interpretation of impulse response functions, generalized
impulse response functions, local projections and LP-IV. | Econometric analysis of potential outcomes time series: instruments, shocks, linearity and the causal response function | 2019-03-05 05:53:08 | Ashesh Rambachan, Neil Shephard | http://arxiv.org/abs/1903.01637v3, http://arxiv.org/pdf/1903.01637v3 | econ.EM |
28,932 | em | In this paper we present ppmlhdfe, a new Stata command for estimation of
(pseudo) Poisson regression models with multiple high-dimensional fixed effects
(HDFE). Estimation is implemented using a modified version of the iteratively
reweighted least-squares (IRLS) algorithm that allows for fast estimation in
the presence of HDFE. Because the code is built around the reghdfe package, it
has similar syntax, supports many of the same functionalities, and benefits
from reghdfe's fast convergence properties for computing high-dimensional least
squares problems.
Performance is further enhanced by some new techniques we introduce for
accelerating HDFE-IRLS estimation specifically. ppmlhdfe also implements a
novel and more robust approach to check for the existence of (pseudo) maximum
likelihood estimates. | ppmlhdfe: Fast Poisson Estimation with High-Dimensional Fixed Effects | 2019-03-05 09:11:26 | Sergio Correia, Paulo Guimarães, Thomas Zylkin | http://dx.doi.org/10.1177/1536867X20909691, http://arxiv.org/abs/1903.01690v3, http://arxiv.org/pdf/1903.01690v3 | econ.EM |
28,933 | em | A fixed effects regression estimator is introduced that can directly identify
and estimate the Africa-Dummy in one regression step so that its correct
standard errors as well as correlations with other coefficients can easily be estimated. We estimate the Nickell bias and find it to be negligible.
Semiparametric extensions check whether the Africa-Dummy is simply a result of
misspecification of the functional form. In particular, we show that the
returns to growth factors are different for Sub-Saharan African countries
compared to the rest of the world. For example, returns to population growth
are positive and beta-convergence is faster. When extending the model to
identify the development of the Africa-Dummy over time we see that it has been
changing dramatically over time and that the growth penalty for Sub-Saharan African countries has decreased incrementally, reaching insignificance around the
turn of the millennium. | The Africa-Dummy: Gone with the Millennium? | 2019-03-06 16:18:13 | Max Köhler, Stefan Sperlich | http://arxiv.org/abs/1903.02357v1, http://arxiv.org/pdf/1903.02357v1 | econ.EM |
28,934 | em | Various papers demonstrate the importance of inequality, poverty and the size
of the middle class for economic growth. When explaining why these measures of
the income distribution are added to the growth regression, it is often
mentioned that poor people behave different which may translate to the economy
as a whole. However, simply adding explanatory variables does not reflect this
behavior. By a varying coefficient model we show that the returns to growth
differ a lot depending on poverty and inequality. Furthermore, we investigate
how these returns differ for the poorer and for the richer part of the
societies. We argue that these differences in the coefficients mean, on the one hand, that mean coefficients are uninformative and, on the other hand, that their economic interpretation lacks credibility. In short, we show
that, when estimating mean coefficients without accounting for poverty and
inequality, the estimation is likely to suffer from a serious endogeneity bias. | A Varying Coefficient Model for Assessing the Returns to Growth to Account for Poverty and Inequality | 2019-03-06 17:07:05 | Max Köhler, Stefan Sperlich, Jisu Yoon | http://arxiv.org/abs/1903.02390v1, http://arxiv.org/pdf/1903.02390v1 | econ.EM |
28,935 | em | We consider inference on the probability density of valuations in the
first-price sealed-bid auctions model within the independent private value
paradigm. We show the asymptotic normality of the two-step nonparametric
estimator of Guerre, Perrigne, and Vuong (2000) (GPV), and propose an easily
implementable and consistent estimator of the asymptotic variance. We prove the
validity of the pointwise percentile bootstrap confidence intervals based on
the GPV estimator. Lastly, we use the intermediate Gaussian approximation
approach to construct bootstrap-based asymptotically valid uniform confidence
bands for the density of the valuations. | Inference for First-Price Auctions with Guerre, Perrigne, and Vuong's Estimator | 2019-03-15 11:09:33 | Jun Ma, Vadim Marmer, Artyom Shneyerov | http://dx.doi.org/10.1016/j.jeconom.2019.02.006, http://arxiv.org/abs/1903.06401v1, http://arxiv.org/pdf/1903.06401v1 | econ.EM |
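A compact version of the GPV two-step estimator shows the object the inference theory is built on: estimate the bid distribution and density, invert the bidders' first-order condition into pseudo-values, and re-estimate a density. The uniform valuations, kernel choices, and omission of boundary trimming below are simplifying assumptions.

```python
# GPV two-step estimator on simulated first-price auction data (sketch).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
I, n_auctions = 3, 2000                         # bidders per auction
v = rng.uniform(0, 1, size=(n_auctions, I))     # private values
bids = ((I - 1) / I * v).ravel()                # equilibrium bids under U[0,1]

sorted_bids = np.sort(bids)
G = np.searchsorted(sorted_bids, bids, side="right") / bids.size  # empirical CDF
g = gaussian_kde(bids)                          # first-step bid density
pseudo_v = bids + G / ((I - 1) * g(bids))       # inverted first-order condition

f_v = gaussian_kde(pseudo_v)                    # second-step valuation density
print(f"estimated density of valuations at 0.5: {f_v(0.5)[0]:.2f}")  # ~1
```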
28,936 | em | Empirical growth analysis has three major problems: variable selection, parameter heterogeneity and cross-sectional dependence. These are addressed independently from each other in most studies. The purpose of this study is to
propose an integrated framework that extends the conventional linear growth
regression model to allow for parameter heterogeneity and cross-sectional error
dependence, while simultaneously performing variable selection. We also derive
the asymptotic properties of the estimator under both low and high dimensions,
and further investigate the finite sample performance of the estimator through
Monte Carlo simulations. We apply the framework to a dataset of 89 countries
over the period from 1960 to 2014. Our results reveal some cross-country
patterns not found in previous studies (e.g., "middle income trap hypothesis",
"natural resources curse hypothesis", "religion works via belief, not
practice", etc.). | An Integrated Panel Data Approach to Modelling Economic Growth | 2019-03-19 14:38:09 | Guohua Feng, Jiti Gao, Bin Peng | http://arxiv.org/abs/1903.07948v1, http://arxiv.org/pdf/1903.07948v1 | econ.EM |
28,937 | em | We propose a new approach to mixed-frequency regressions in a
high-dimensional environment that resorts to Group Lasso penalization and
Bayesian techniques for estimation and inference. In particular, to improve the
prediction properties of the model and its sparse recovery ability, we consider
a Group Lasso with a spike-and-slab prior. Penalty hyper-parameters governing
the model shrinkage are automatically tuned via an adaptive MCMC algorithm. We
establish good frequentist asymptotic properties of the posterior of the
in-sample and out-of-sample prediction error, we recover the optimal posterior
contraction rate, and we show optimality of the posterior predictive density.
Simulations show that the proposed models have good selection and forecasting
performance in small samples, even when the design matrix presents
cross-correlation. When applied to forecasting U.S. GDP, our penalized
regressions can outperform many strong competitors. Results suggest that
financial variables may have some, although very limited, short-term predictive
content. | Bayesian MIDAS Penalized Regressions: Estimation, Selection, and Prediction | 2019-03-19 17:42:37 | Matteo Mogliani, Anna Simoni | http://arxiv.org/abs/1903.08025v3, http://arxiv.org/pdf/1903.08025v3 | econ.EM |
28,938 | em | I study a regression model in which one covariate is an unknown function of a
latent driver of link formation in a network. Rather than specify and fit a
parametric network formation model, I introduce a new method based on matching
pairs of agents with similar columns of the squared adjacency matrix, the ijth
entry of which contains the number of other agents linked to both agents i and
j. The intuition behind this approach is that for a large class of network
formation models the columns of the squared adjacency matrix characterize all
of the identifiable information about individual linking behavior. In this
paper, I describe the model, formalize this intuition, and provide consistent
estimators for the parameters of the regression model. Auerbach (2021)
considers inference and an application to network peer effects. | Identification and Estimation of a Partially Linear Regression Model using Network Data | 2019-03-22 21:59:22 | Eric Auerbach | http://arxiv.org/abs/1903.09679v3, http://arxiv.org/pdf/1903.09679v3 | econ.EM |
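The matching idea can be demonstrated directly: square the adjacency matrix so that entry (i, j) counts the common neighbors of agents i and j, then pair each agent with the agent whose column is closest. The toy network below, with link probabilities min(a_i, a_j), is an illustrative assumption.

```python
# Matching agents on columns of the squared adjacency matrix (toy network).
import numpy as np

rng = np.random.default_rng(0)
n = 200
a = rng.uniform(size=n)                        # latent driver of link formation
P = np.minimum(a[:, None], a[None, :])         # link probabilities f(a_i, a_j)
A = (rng.random((n, n)) < P).astype(int)
A = np.triu(A, 1); A = A + A.T                 # undirected, no self-links

A2 = A @ A                                     # A2[i, j] = # common neighbors
dist = np.abs(A2[:, None, :] - A2[None, :, :]).sum(axis=2) / n
np.fill_diagonal(dist, np.inf)
match = dist.argmin(axis=1)                    # nearest agent in column distance
print("agent 0 latent type:", a[0].round(2), "match's type:", a[match[0]].round(2))
```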
28,939 | em | This paper studies a panel data setting where the goal is to estimate causal
effects of an intervention by predicting the counterfactual values of outcomes
for treated units, had they not received the treatment. Several approaches have
been proposed for this problem, including regression methods, synthetic control
methods and matrix completion methods. This paper considers an ensemble
approach, and shows that it performs better than any of the individual methods
in several economic datasets. Matrix completion methods are often given the
most weight by the ensemble, but this clearly depends on the setting. We argue
that ensemble methods present a fruitful direction for further research in the
causal panel data setting. | Ensemble Methods for Causal Effects in Panel Data Settings | 2019-03-25 02:21:52 | Susan Athey, Mohsen Bayati, Guido Imbens, Zhaonan Qu | http://arxiv.org/abs/1903.10079v1, http://arxiv.org/pdf/1903.10079v1 | econ.EM |
28,941 | em | How can one determine whether a community-level treatment, such as the
introduction of a social program or trade shock, alters agents' incentives to
form links in a network? This paper proposes analogues of a two-sample
Kolmogorov-Smirnov test, widely used in the literature to test the null
hypothesis of "no treatment effects", for network data. It first specifies a
testing problem in which the null hypothesis is that two networks are drawn
from the same random graph model. It then describes two randomization tests
based on the magnitude of the difference between the networks' adjacency
matrices as measured by the $2\to2$ and $\infty\to1$ operator norms. Power
properties of the tests are examined analytically, in simulation, and through
two real-world applications. A key finding is that the test based on the
$\infty\to1$ norm can be substantially more powerful than that based on the
$2\to2$ norm for the kinds of sparse and degree-heterogeneous networks common
in economics. | Testing for Differences in Stochastic Network Structure | 2019-03-26 22:00:45 | Eric Auerbach | http://arxiv.org/abs/1903.11117v5, http://arxiv.org/pdf/1903.11117v5 | econ.EM |
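A stripped-down version of the randomization test follows: under the null that both networks come from the same model, corresponding adjacency entries are exchangeable, so the sign of each entry of A - B can be flipped at random. For computability this sketch uses the 2->2 (spectral) norm; the infinity->1 norm, which the paper finds more powerful for sparse networks, requires a separate optimization step.

```python
# Randomization test for equality of two network models (sketch, 2->2 norm).
import numpy as np

rng = np.random.default_rng(0)

def spec_norm(M):
    return np.linalg.norm(M, 2)                 # largest singular value

def swap_test(A, B, reps=200):
    stat = spec_norm(A - B)
    n = A.shape[0]
    null = []
    for _ in range(reps):
        S = np.triu(rng.random((n, n)) < 0.5, 1)
        S = S | S.T                             # symmetric swap mask
        D = np.where(S, B - A, A - B)           # flip signs where S is True
        null.append(spec_norm(D))
    return np.mean(np.array(null) >= stat)      # randomization p-value

n = 100
A = np.triu(rng.random((n, n)) < 0.10, 1); A = (A | A.T).astype(float)
B = np.triu(rng.random((n, n)) < 0.15, 1); B = (B | B.T).astype(float)
print("p-value:", swap_test(A, B))              # small: models differ
```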
28,942 | em | This paper studies a regularized support function estimator for bounds on
components of the parameter vector in the case in which the identified set is a
polygon. The proposed regularized estimator has three important properties: (i)
it has a uniform asymptotic Gaussian limit in the presence of flat faces in the
absence of redundant (or overidentifying) constraints (or vice versa); (ii) the bias from regularization does not enter the first-order limiting distribution; (iii) the estimator remains consistent for the sharp identified set for the individual components even in the non-regular case. These properties
are used to construct uniformly valid confidence sets for an element
$\theta_{1}$ of a parameter vector $\theta\in\mathbb{R}^{d}$ that is partially
identified by affine moment equality and inequality conditions. The proposed
confidence sets can be computed as a solution to a small number of linear and
convex quadratic programs, which leads to a substantial decrease in computation
time and guarantees a global optimum. As a result, the method provides
uniformly valid inference in applications in which the dimension of the
parameter space, $d$, and the number of inequalities, $k$, were previously
computationally infeasible ($d,k=100$). The proposed approach can be extended
to construct confidence sets for intersection bounds, to construct joint
polygon-shaped confidence sets for multiple components of $\theta$, and to find
the set of solutions to a linear program. Inference for coefficients in the
linear IV regression model with an interval outcome is used as an illustrative
example. | Simple subvector inference on sharp identified set in affine models | 2019-03-30 01:49:40 | Bulat Gafarov | http://arxiv.org/abs/1904.00111v2, http://arxiv.org/pdf/1904.00111v2 | econ.EM |
28,943 | em | Three-dimensional panel models are widely used in empirical analysis.
Researchers use various combinations of fixed effects for three-dimensional
panels. When one imposes a parsimonious model and the true model is rich, then
it incurs mis-specification biases. When one employs a rich model and the true
model is parsimonious, then it incurs larger standard errors than necessary. It
is therefore useful for researchers to know correct models. In this light, Lu,
Miao, and Su (2018) propose methods of model selection. We advance this
literature by proposing a method of post-selection inference for regression
parameters. Despite our use of the lasso technique as means of model selection,
our assumptions allow for many and even all fixed effects to be nonzero.
Simulation studies demonstrate that the proposed method is more precise than
under-fitting fixed effect estimators, is more efficient than over-fitting
fixed effect estimators, and allows for as accurate inference as the oracle
estimator. | Post-Selection Inference in Three-Dimensional Panel Data | 2019-03-30 15:51:35 | Harold D. Chiang, Joel Rodrigue, Yuya Sasaki | http://arxiv.org/abs/1904.00211v2, http://arxiv.org/pdf/1904.00211v2 | econ.EM |
28,944 | em | We propose a framework for analyzing the sensitivity of counterfactuals to
parametric assumptions about the distribution of latent variables in structural
models. In particular, we derive bounds on counterfactuals as the distribution
of latent variables spans nonparametric neighborhoods of a given parametric
specification while other "structural" features of the model are maintained.
Our approach recasts the infinite-dimensional problem of optimizing the
counterfactual with respect to the distribution of latent variables (subject to
model constraints) as a finite-dimensional convex program. We also develop an
MPEC version of our method to further simplify computation in models with
endogenous parameters (e.g., value functions) defined by equilibrium
constraints. We propose plug-in estimators of the bounds and two methods for
inference. We also show that our bounds converge to the sharp nonparametric
bounds on counterfactuals as the neighborhood size becomes large. To illustrate
the broad applicability of our procedure, we present empirical applications to
matching models with transferable utility and dynamic discrete choice models. | Counterfactual Sensitivity and Robustness | 2019-04-01 20:53:20 | Timothy Christensen, Benjamin Connault | http://arxiv.org/abs/1904.00989v4, http://arxiv.org/pdf/1904.00989v4 | econ.EM |
28,945 | em | Models with a discrete endogenous variable are typically underidentified when
the instrument takes on too few values. This paper presents a new method that
matches pairs of covariates and instruments to restore point identification in
this scenario in a triangular model. The model consists of a structural
function for a continuous outcome and a selection model for the discrete
endogenous variable. The structural outcome function must be continuous and
monotonic in a scalar disturbance, but it can be nonseparable. The selection
model allows for unrestricted heterogeneity. Global identification is obtained
under weak conditions. The paper also provides estimators of the structural
outcome function. Two empirical examples of the return to education and
selection into Head Start illustrate the value and limitations of the method. | Matching Points: Supplementing Instruments with Covariates in Triangular Models | 2019-04-02 04:12:10 | Junlong Feng | http://arxiv.org/abs/1904.01159v3, http://arxiv.org/pdf/1904.01159v3 | econ.EM |
28,946 | em | Empirical economists are often deterred from the application of fixed effects
binary choice models mainly for two reasons: the incidental parameter problem
and the computational challenge even in moderately large panels. Using the
example of binary choice models with individual and time fixed effects, we show
how both issues can be alleviated by combining asymptotic bias corrections with
computational advances. Because unbalancedness is often encountered in applied
work, we investigate its consequences for the finite sample properties of
various (bias corrected) estimators. In simulation experiments we find that
analytical bias corrections perform particularly well, whereas split-panel
jackknife estimators can be severely biased in unbalanced panels. | Fixed Effects Binary Choice Models: Estimation and Inference with Long Panels | 2019-04-08 20:38:31 | Daniel Czarnowske, Amrei Stammann | http://arxiv.org/abs/1904.04217v3, http://arxiv.org/pdf/1904.04217v3 | econ.EM |
28,948 | em | This article proposes inference procedures for distribution regression models
in duration analysis using randomly right-censored data. This generalizes
classical duration models by allowing situations where explanatory variables'
marginal effects freely vary with duration time. The article discusses
applications to testing uniform restrictions on the varying coefficients,
inferences on average marginal effects, and others involving conditional
distribution estimates. Finite sample properties of the proposed method are
studied by means of Monte Carlo experiments. Finally, we apply our proposal to
study the effects of unemployment benefits on unemployment duration. | Distribution Regression in Duration Analysis: an Application to Unemployment Spells | 2019-04-12 15:22:27 | Miguel A. Delgado, Andrés García-Suaza, Pedro H. C. Sant'Anna | http://arxiv.org/abs/1904.06185v2, http://arxiv.org/pdf/1904.06185v2 | econ.EM |
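The core of distribution regression is easy to exhibit: fit a separate binary regression of 1{Y <= y} on the covariates at each threshold y, so that marginal effects can vary freely over duration time. The simulated spells below ignore censoring, which the paper's procedures handle.

```python
# Distribution regression: one binary fit per threshold (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 3000
x = rng.normal(size=(n, 1))
dur = rng.exponential(scale=np.exp(0.5 * x[:, 0]))   # simulated spell lengths

thresholds = np.quantile(dur, [0.25, 0.5, 0.75])
coefs = [LogisticRegression().fit(x, (dur <= y).astype(int)).coef_[0, 0]
         for y in thresholds]
print(np.round(coefs, 2))   # the effect of x is free to vary across thresholds
```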
28,949 | em | Internet finance is a new financial model that applies Internet technology to
payment, capital borrowing and lending, and transaction processing. In order to
study the internal risks, this paper uses the Internet financial risk elements
as the network node to construct the complex network of Internet financial risk
system. Different from the study of macroeconomic shocks and financial
institution data, this paper mainly adopts the perspective of complex system to
analyze the systematic risk of Internet finance. By dividing the entire
financial system into Internet financial subnet, regulatory subnet and
traditional financial subnet, the paper discusses the channels of mutual contagion among different risk factors, and concludes that risks
are transmitted externally through the internal circulation of Internet
finance, thus discovering potential hidden dangers of systemic risks. The
results show that the nodes around the center of the whole system are the main
objects of financial risk contagion in the Internet financial network. In
addition, macro-prudential regulation plays a decisive role in the control of
the Internet financial system, and points out the reasons why the current
regulatory measures are still limited. This paper summarizes a research model
which is still in its infancy, hoping to open up new prospects and directions
for us to understand the cascading behaviors of Internet financial risks. | Complex Network Construction of Internet Financial risk | 2019-04-14 09:55:11 | Runjie Xu, Chuanmin Mi, Rafal Mierzwiak, Runyu Meng | http://dx.doi.org/10.1016/j.physa.2019.122930, http://arxiv.org/abs/1904.06640v3, http://arxiv.org/pdf/1904.06640v3 | econ.EM |
28,950 | em | We develop a dynamic model of discrete choice that incorporates peer effects
into random consideration sets. We characterize the equilibrium behavior and
study the empirical content of the model. In our setup, changes in the choices
of friends affect the distribution of the consideration sets. We exploit this
variation to recover the ranking of preferences, attention mechanisms, and
network connections. These nonparametric identification results allow
unrestricted heterogeneity across people and do not rely on the variation of
either covariates or the set of available options. Our methodology leads to a
maximum-likelihood estimator that performs well in simulations. We apply our
results to an experimental dataset that has been designed to study the visual
focus of attention. | Peer Effects in Random Consideration Sets | 2019-04-14 22:15:07 | Nail Kashaev, Natalia Lazzati | http://arxiv.org/abs/1904.06742v3, http://arxiv.org/pdf/1904.06742v3 | econ.EM |
28,951 | em | In multinomial response models, idiosyncratic variations in the indirect
utility are generally modeled using Gumbel or normal distributions. This study
makes a strong case to substitute these thin-tailed distributions with a
t-distribution. First, we demonstrate that a model with a t-distributed error
kernel better estimates and predicts preferences, especially in
class-imbalanced datasets. Our proposed specification also implicitly accounts
for decision-uncertainty behavior, i.e. the degree of certainty that
decision-makers hold in their choices relative to the variation in the indirect
utility of any alternative. Second, after applying a t-distributed error kernel
in a multinomial response model for the first time, we extend this
specification to a generalized continuous-multinomial (GCM) model and derive
its full-information maximum likelihood estimator. The likelihood involves an
open-form expression of the cumulative density function of the multivariate
t-distribution, which we propose to compute using a combination of the
composite marginal likelihood method and the separation-of-variables approach.
Third, we establish finite sample properties of the GCM model with a
t-distributed error kernel (GCM-t) and highlight its superiority over the GCM
model with a normally-distributed error kernel (GCM-N) in a Monte Carlo study.
Finally, we compare GCM-t and GCM-N in an empirical setting related to
preferences for electric vehicles (EVs). We observe that accounting for
decision-uncertainty behavior in GCM-t results in lower elasticity estimates
and a higher willingness to pay for improving the EV attributes than those of
the GCM-N model. These differences are relevant in making policies to expedite
the adoption of EVs. | A Generalized Continuous-Multinomial Response Model with a t-distributed Error Kernel | 2019-04-17 18:54:04 | Subodh Dubey, Prateek Bansal, Ricardo A. Daziano, Erick Guerra | http://arxiv.org/abs/1904.08332v3, http://arxiv.org/pdf/1904.08332v3 | econ.EM |
28,952 | em | Currently all countries including developing countries are expected to
utilize their own tax revenues and carry out their own development for solving
poverty in their countries. However, developing countries cannot earn tax
revenues like developed countries partly because they do not have effective
countermeasures against international tax avoidance. Our analysis focuses on
treaty shopping among various ways to conduct international tax avoidance
because tax revenues of developing countries have been heavily damaged through
treaty shopping. To analyze the location and sector of conduit firms likely to
be used for treaty shopping, we constructed a multilayer ownership-tax network
and proposed multilayer centrality. Because multilayer centrality can consider
not only the value flowing in the ownership network but also the withholding tax rate, it is expected to identify precisely the locations and sectors of conduit
firms established for the purpose of treaty shopping. Our analysis shows that
firms in the sectors of Finance & Insurance and Wholesale & Retail trade etc.
are involved with treaty shopping. We suggest that developing countries make a
clause focusing on these sectors in the tax treaties they conclude. | Location-Sector Analysis of International Profit Shifting on a Multilayer Ownership-Tax Network | 2019-04-19 15:30:34 | Tembo Nakamoto, Odile Rouhban, Yuichi Ikeda | http://arxiv.org/abs/1904.09165v1, http://arxiv.org/pdf/1904.09165v1 | econ.EM |
29,010 | em | This paper considers generalized least squares (GLS) estimation for linear
panel data models. By estimating the large error covariance matrix
consistently, the proposed feasible GLS (FGLS) estimator is more efficient than
the ordinary least squares (OLS) in the presence of heteroskedasticity, serial,
and cross-sectional correlations. To take into account the serial correlations,
we employ the banding method. To take into account the cross-sectional
correlations, we suggest to use the thresholding method. We establish the
limiting distribution of the proposed estimator. A Monte Carlo study is
considered. The proposed method is applied to an empirical application. | Feasible Generalized Least Squares for Panel Data with Cross-sectional and Serial Correlations | 2019-10-20 18:37:51 | Jushan Bai, Sung Hoon Choi, Yuan Liao | http://arxiv.org/abs/1910.09004v3, http://arxiv.org/pdf/1910.09004v3 | econ.EM |
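The banding step for serial correlation is simple to illustrate: estimate the T x T error covariance from residuals and zero out entries beyond a chosen band. The AR(1) residuals and band width below are illustrative assumptions; thresholding the cross-sectional covariance works analogously.

```python
# Banding a sample serial covariance matrix (sketch).
import numpy as np

rng = np.random.default_rng(0)
N, T, L = 400, 20, 3                       # units, periods, band width
e = np.empty((N, T))
e[:, 0] = rng.normal(size=N)
for t in range(1, T):                      # AR(1) errors with rho = 0.5
    e[:, t] = 0.5 * e[:, t - 1] + rng.normal(size=N)

S = e.T @ e / N                            # sample T x T serial covariance
band = np.abs(np.subtract.outer(np.arange(T), np.arange(T))) <= L
S_banded = np.where(band, S, 0.0)          # zero out covariances beyond the band
print(np.round(S_banded[:5, :5], 2))
```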
28,953 | em | We study identification in nonparametric regression models with a
misclassified and endogenous binary regressor when an instrument is correlated
with misclassification error. We show that the regression function is
nonparametrically identified if one binary instrument variable and one binary
covariate satisfy the following conditions. The instrumental variable corrects
endogeneity; the instrumental variable must be correlated with the unobserved
true underlying binary variable, must be uncorrelated with the error term in
the outcome equation, but is allowed to be correlated with the
misclassification error. The covariate corrects misclassification; this
variable can be one of the regressors in the outcome equation, must be
correlated with the unobserved true underlying binary variable, and must be
uncorrelated with the misclassification error. We also propose a mixture-based
framework for modeling unobserved heterogeneous treatment effects with a
misclassified and endogenous binary regressor and show that treatment effects
can be identified if the true treatment effect is related to an observed
regressor and another observable variable. | Identification of Regression Models with a Misclassified and Endogenous Binary Regressor | 2019-04-25 06:41:37 | Hiroyuki Kasahara, Katsumi Shimotsu | http://arxiv.org/abs/1904.11143v3, http://arxiv.org/pdf/1904.11143v3 | econ.EM |
28,954 | em | In matched-pairs experiments in which one cluster per pair of clusters is
assigned to treatment, to estimate treatment effects, researchers often regress
their outcome on a treatment indicator and pair fixed effects, clustering
standard errors at the unit-of-randomization level. We show that even if the
treatment has no effect, a 5%-level t-test based on this regression will
wrongly conclude that the treatment has an effect up to 16.5% of the time. To
fix this problem, researchers should instead cluster standard errors at the
pair level. Using simulations, we show that similar results apply to clustered
experiments with small strata. | At What Level Should One Cluster Standard Errors in Paired and Small-Strata Experiments? | 2019-06-01 23:47:18 | Clément de Chaisemartin, Jaime Ramirez-Cuellar | http://arxiv.org/abs/1906.00288v10, http://arxiv.org/pdf/1906.00288v10 | econ.EM |
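The prescription is easy to implement: keep the regression of the outcome on treatment and pair fixed effects, but pass pair identifiers rather than unit-of-randomization identifiers to the clustered variance estimator. The simulated experiment below (no true effect, illustrative sizes) shows the pair-level standard error is the larger, valid one.

```python
# Pair-level vs unit-level clustering in a matched-pairs experiment (sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_pairs, m = 50, 20                        # pairs of clusters, units per cluster
pair = np.repeat(np.arange(n_pairs), 2 * m)
cluster = np.repeat(np.arange(2 * n_pairs), m)
treat = np.repeat(np.tile([0, 1], n_pairs), m)
y = rng.normal(size=2 * n_pairs * m) + rng.normal(size=2 * n_pairs)[cluster]

X = np.column_stack([treat, np.eye(n_pairs)[pair]])   # treatment + pair FEs
fit_unit = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": cluster})
fit_pair = sm.OLS(y, X).fit(cov_type="cluster", cov_kwds={"groups": pair})
print(f"SE clustered at unit level: {fit_unit.bse[0]:.3f}")
print(f"SE clustered at pair level: {fit_pair.bse[0]:.3f}")   # larger, valid
```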
28,955 | em | We propose the use of indirect inference estimation to conduct inference in
complex locally stationary models. We develop a local indirect inference
algorithm and establish the asymptotic properties of the proposed estimator.
Due to the nonparametric nature of locally stationary models, the resulting
indirect inference estimator exhibits nonparametric rates of convergence. We
validate our methodology with simulation studies in the confines of a locally
stationary moving average model and a new locally stationary multiplicative
stochastic volatility model. Using this indirect inference methodology and the
new locally stationary volatility model, we obtain evidence of non-linear,
time-varying volatility trends for monthly returns on several Fama-French
portfolios. | Indirect Inference for Locally Stationary Models | 2019-06-05 03:41:13 | David Frazier, Bonsoo Koo | http://dx.doi.org/10.1016/S0304-4076/20/30303-1, http://arxiv.org/abs/1906.01768v2, http://arxiv.org/pdf/1906.01768v2 | econ.EM |
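The mechanics of indirect inference are conveyed by a textbook global example: estimate an MA(1) coefficient by matching AR(3) auxiliary-model coefficients between observed and simulated data, using common random numbers. The paper's local version would localize this matching in time; the model and grid below are illustrative.

```python
# Indirect inference for an MA(1) via an AR(3) auxiliary model (sketch).
import numpy as np

def ma1(theta, n, seed):
    eps = np.random.default_rng(seed).normal(size=n + 1)
    return eps[1:] + theta * eps[:-1]

def ar_fit(x, p=3):                          # auxiliary model: AR(p) via OLS
    Y = x[p:]
    X = np.column_stack([x[p - k:-k] for k in range(1, p + 1)])
    return np.linalg.lstsq(X, Y, rcond=None)[0]

y = ma1(0.6, 5000, seed=1)                   # "observed" data, true theta = 0.6
beta_obs = ar_fit(y)

grid = np.linspace(0, 0.95, 96)              # match auxiliary coefficients
losses = [np.sum((ar_fit(ma1(th, 50000, seed=2)) - beta_obs) ** 2) for th in grid]
print(f"indirect inference estimate: {grid[int(np.argmin(losses))]:.2f}")
```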
28,956 | em | In a nonparametric instrumental regression model, we strengthen the
conventional moment independence assumption towards full statistical
independence between instrument and error term. This allows us to prove
identification results and develop estimators for a structural function of
interest when the instrument is discrete, and in particular binary. When the
regressor of interest is also discrete with more mass points than the
instrument, we state straightforward conditions under which the structural
function is partially identified, and give modified assumptions which imply
point identification. These stronger assumptions are shown to hold outside of a
small set of conditional moments of the error term. Estimators for the
identified set are given when the structural function is either partially or
point identified. When the regressor is continuously distributed, we prove that
if the instrument induces a sufficiently rich variation in the joint
distribution of the regressor and error term then point identification of the
structural function is still possible. This approach is relatively tractable,
and under some standard conditions we demonstrate that our point identifying
assumption holds on a topologically generic set of density functions for the
joint distribution of regressor, error, and instrument. Our method also applies
to a well-known nonparametric quantile regression framework, and we are able to
state analogous point identification results in that context. | Nonparametric Identification and Estimation with Independent, Discrete Instruments | 2019-06-12 19:05:52 | Isaac Loh | http://arxiv.org/abs/1906.05231v1, http://arxiv.org/pdf/1906.05231v1 | econ.EM |
28,957 | em | We consider the asymptotic properties of the Synthetic Control (SC) estimator
when both the number of pre-treatment periods and control units are large. If
potential outcomes follow a linear factor model, we provide conditions under
which the factor loadings of the SC unit converge in probability to the factor
loadings of the treated unit. This happens when there are weights diluted among
an increasing number of control units such that a weighted average of the
factor loadings of the control units asymptotically reconstructs the factor
loadings of the treated unit. In this case, the SC estimator is asymptotically
unbiased even when treatment assignment is correlated with time-varying
unobservables. This result can be valid even when the number of control units
is larger than the number of pre-treatment periods. | On the Properties of the Synthetic Control Estimator with Many Periods and Many Controls | 2019-06-16 15:26:28 | Bruno Ferman | http://arxiv.org/abs/1906.06665v5, http://arxiv.org/pdf/1906.06665v5 | econ.EM |
28,958 | em | We study the association between physical appearance and family income using
a novel dataset containing 3-dimensional body scans to mitigate the reporting and measurement errors observed in most previous studies. We apply machine learning to obtain intrinsic features of the human body and take into account the possible issue of endogenous body shapes. The
estimation results show that there is a significant relationship between
physical appearance and family income and that the associations differ across genders. This supports the hypothesis of a physical attractiveness premium and its heterogeneity across genders. | Shape Matters: Evidence from Machine Learning on Body Shape-Income Relationship | 2019-06-16 21:42:22 | Suyong Song, Stephen S. Baek | http://dx.doi.org/10.1371/journal.pone.0254785, http://arxiv.org/abs/1906.06747v1, http://arxiv.org/pdf/1906.06747v1 | econ.EM |
29,011 | em | This paper considers estimation of large dynamic factor models with common
and idiosyncratic trends by means of the Expectation Maximization algorithm,
implemented jointly with the Kalman smoother. We show that, as the
cross-sectional dimension $n$ and the sample size $T$ diverge to infinity, the
common component for a given unit estimated at a given point in time is
$\min(\sqrt n,\sqrt T)$-consistent. The case of local levels and/or local
linear trends is also considered. By means of a Monte Carlo simulation
exercise, we compare our approach with estimators based on principal component
analysis. | Quasi Maximum Likelihood Estimation of Non-Stationary Large Approximate Dynamic Factor Models | 2019-10-22 12:00:06 | Matteo Barigozzi, Matteo Luciani | http://arxiv.org/abs/1910.09841v1, http://arxiv.org/pdf/1910.09841v1 | econ.EM |
28,959 | em | This paper examines the use of sparse methods to forecast the real (chain-linked volume) expenditure components of US and EU GDP in the short run, before national statistical institutions officially release the data. We estimate current quarter nowcasts along with 1- and
2-quarter forecasts by bridging quarterly data with available monthly
information announced with a much smaller delay. We solve the
high-dimensionality problem of the monthly dataset by assuming sparse
structures of leading indicators, capable of adequately explaining the dynamics
of the analyzed data. For variable selection and estimation of the forecasts, we
use sparse methods, namely LASSO together with its recent modifications. We propose an adjustment that combines LASSO with principal component analysis, aimed at improving the forecasting performance. We evaluate
forecasting performance conducting pseudo-real-time experiments for gross fixed
capital formation, private consumption, imports and exports over the sample of
2005-2019, compared with benchmark ARMA and factor models. The main results
suggest that sparse methods can outperform the benchmarks and identify reasonable subsets of explanatory variables. The proposed LASSO-PC modification shows further improvement in forecast accuracy. | Sparse structures with LASSO through Principal Components: forecasting GDP components in the short-run | 2019-06-19 12:30:36 | Saulius Jokubaitis, Dmitrij Celov, Remigijus Leipus | http://dx.doi.org/10.1016/j.ijforecast.2020.09.005, http://arxiv.org/abs/1906.07992v2, http://arxiv.org/pdf/1906.07992v2 | econ.EM |
28,960 | em | In 2018, allowance prices in the EU Emission Trading Scheme (EU ETS)
experienced a run-up from persistently low levels in previous years. Regulators
attribute this to a comprehensive reform in the same year, and are confident
the new price level reflects an anticipated tighter supply of allowances. We
ask if this is indeed the case, or if it is an overreaction of the market
driven by speculation. We combine several econometric methods - time-varying
coefficient regression, formal bubble detection as well as time stamping and
crash odds prediction - to juxtapose the regulators' claim with the competing explanation. We find evidence of a long period of explosive
behaviour in allowance prices, starting in March 2018 when the reform was
adopted. Our results suggest that the reform triggered market participants into
speculation, and question regulators' confidence in its long-term outcome. This
has implications for both the further development of the EU ETS, and the long
lasting debate about taxes versus emission trading schemes. | Understanding the explosive trend in EU ETS prices -- fundamentals or speculation? | 2019-06-25 17:43:50 | Marina Friedrich, Sébastien Fries, Michael Pahle, Ottmar Edenhofer | http://arxiv.org/abs/1906.10572v5, http://arxiv.org/pdf/1906.10572v5 | econ.EM |
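The bubble-detection ingredient can be sketched with a recursive sup-ADF-type statistic: compute ADF t-statistics over expanding windows and take the supremum, with large values signaling explosive episodes. The simulated series, window size, and absence of proper critical values below are simplifications of the procedures the paper combines.

```python
# Recursive sup-ADF statistic for explosiveness (sketch).
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
T = 300
x = np.empty(T)
x[0] = 0.0
for t in range(1, T):                          # random walk, then explosive regime
    rho = 1.03 if t >= 200 else 1.0
    x[t] = rho * x[t - 1] + rng.normal()

window0 = 60
stats = [adfuller(x[:t], maxlag=1, regression="c", autolag=None)[0]
         for t in range(window0, T + 1)]
print(f"sup-ADF statistic: {max(stats):.2f}")  # large values signal explosiveness
```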
28,961 | em | Many economic studies use shift-share instruments to estimate causal effects.
Often, all shares need to fulfil an exclusion restriction, making the
identifying assumption strict. This paper proposes to use methods that relax
the exclusion restriction by selecting invalid shares. I apply the methods in
two empirical examples: the effect of immigration on wages and of Chinese
import exposure on employment. In the first application, the coefficient
becomes lower and often changes sign, but this is reconcilable with arguments
made in the literature. In the second application, the findings are mostly
robust to the use of the new methods. | Relaxing the Exclusion Restriction in Shift-Share Instrumental Variable Estimation | 2019-06-29 18:27:49 | Nicolas Apfel | http://arxiv.org/abs/1907.00222v4, http://arxiv.org/pdf/1907.00222v4 | econ.EM |
28,962 | em | There is currently an increasing interest in large vector autoregressive
(VAR) models. VARs are popular tools for macroeconomic forecasting and use of
larger models has been demonstrated to often improve the forecasting ability
compared to more traditional small-scale models. Mixed-frequency VARs deal with
data sampled at different frequencies while remaining within the realms of
VARs. Estimation of mixed-frequency VARs makes use of simulation smoothing, but
using the standard procedure these models quickly become prohibitive in
nowcasting situations as the size of the model grows. We propose two algorithms
that improve the computational efficiency of the simulation smoothing
algorithm. Our preferred choice is an adaptive algorithm, which augments the
state vector as necessary to sample also monthly variables that are missing at
the end of the sample. For large VARs, we find considerable improvements in
speed using our adaptive algorithm. The algorithm therefore provides a crucial
building block for bringing the mixed-frequency VARs to the high-dimensional
regime. | Simulation smoothing for nowcasting with large mixed-frequency VARs | 2019-07-02 00:08:21 | Sebastian Ankargren, Paulina Jonéus | http://arxiv.org/abs/1907.01075v1, http://arxiv.org/pdf/1907.01075v1 | econ.EM |
28,963 | em | We propose a robust method of discrete choice analysis when agents' choice
sets are unobserved. Our core model assumes nothing about agents' choice sets
apart from their minimum size. Importantly, it leaves unrestricted the
dependence, conditional on observables, between choice sets and preferences. We
first characterize the sharp identification region of the model's parameters by
a finite set of conditional moment inequalities. We then apply our theoretical
findings to learn about households' risk preferences and choice sets from data
on their deductible choices in auto collision insurance. We find that the data
can be explained by expected utility theory with low levels of risk aversion
and heterogeneous non-singleton choice sets, and that more than three in four
households require limited choice sets to explain their deductible choices. We
also provide simulation evidence on the computational tractability of our
method in applications with larger feasible sets or higher-dimensional
unobserved heterogeneity. | Heterogeneous Choice Sets and Preferences | 2019-07-04 14:47:26 | Levon Barseghyan, Maura Coughlin, Francesca Molinari, Joshua C. Teitelbaum | http://arxiv.org/abs/1907.02337v2, http://arxiv.org/pdf/1907.02337v2 | econ.EM |
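A small numerical illustration of the paper's point: under expected utility with low risk aversion, a deductible choice that looks dominated under the full menu can be optimal under a restricted, unobserved choice set. All numbers below (premiums, claim probability, risk aversion) are hypothetical.

```python
# Hypothetical illustration: with CARA utility, a household facing claim
# probability p ranks deductibles by certainty-equivalent outlay; a choice
# dominated under the full menu can be optimal under a limited choice set.
import numpy as np

def ce_outlay(deductible, premium, p, r):
    """Certainty-equivalent total outlay under CARA disutility u(x) = -exp(r*x)."""
    eu = (1 - p) * -np.exp(r * premium) + p * -np.exp(r * (premium + deductible))
    return np.log(-eu) / r  # lower is better

menu = {200: 520.0, 500: 460.0, 1000: 420.0}    # deductible -> premium (hypothetical)
p, r = 0.05, 0.002                               # claim prob., absolute risk aversion

full = min(menu, key=lambda d: ce_outlay(d, menu[d], p, r))
restricted = {d: menu[d] for d in (200, 1000)}   # suppose 500 was never offered
limited = min(restricted, key=lambda d: ce_outlay(d, restricted[d], p, r))
print(full, limited)  # 500 under the full menu, 200 under the limited set
```

Here an observed choice of the 200 deductible is inconsistent with the full menu at this low risk aversion, but is rationalized by the restricted set, which is the kind of heterogeneity the paper's moment inequalities allow for.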
28,964 | em | In this paper we develop a new machine learning estimator for ordered choice
models based on the random forest. The proposed Ordered Forest flexibly
estimates the conditional choice probabilities while taking the ordering
information explicitly into account. Going beyond common machine learning
estimators, it additionally enables the estimation of marginal effects as well
as inference, and thus provides the same output as classical econometric
estimators. An extensive simulation study reveals a good predictive
performance, particularly in settings with non-linearities and
near-multicollinearity. An empirical application contrasts the estimation of
marginal effects and their standard errors with an ordered logit model. A
software implementation of the Ordered Forest is provided both in R and Python
in the package orf available on CRAN and PyPI, respectively. | Random Forest Estimation of the Ordered Choice Model | 2019-07-04 17:54:58 | Michael Lechner, Gabriel Okasa | http://arxiv.org/abs/1907.02436v3, http://arxiv.org/pdf/1907.02436v3 | econ.EM |
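The core construction behind the Ordered Forest can be sketched with off-the-shelf tools: estimate the cumulative probabilities P(Y <= m | X) with binary regression forests and difference them to recover the ordered choice probabilities. The sketch below does that with scikit-learn; it omits the honesty, weight-based inference, and marginal-effect machinery of the actual orf package.

```python
# Sketch of the Ordered Forest idea: binary regression forests on the
# cumulative indicators 1{Y <= m}, differenced to give choice probabilities.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def ordered_forest_probs(X, y, X_new, n_estimators=500, seed=0):
    classes = np.sort(np.unique(y))
    # one forest per cumulative indicator, for all but the highest class
    cum = np.column_stack([
        RandomForestRegressor(n_estimators=n_estimators, random_state=seed)
        .fit(X, (y <= m).astype(float)).predict(X_new)
        for m in classes[:-1]
    ])
    cum = np.column_stack([np.zeros(len(X_new)), cum, np.ones(len(X_new))])
    probs = np.maximum(np.diff(cum, axis=1), 0)       # enforce non-negativity
    return probs / probs.sum(axis=1, keepdims=True)   # renormalize to sum to 1

# toy ordered outcome with three categories
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = np.digitize(X[:, 0] + 0.5 * rng.normal(size=400), bins=[-0.5, 0.5])
print(ordered_forest_probs(X, y, X[:5]).round(2))
```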
28,966 | em | This paper provides tests for detecting sample selection in nonparametric
conditional quantile functions. The first test is an omitted predictor test
with the propensity score as the omitted variable. As with any omnibus test, in
the case of rejection we cannot distinguish between rejection due to genuine
selection or to misspecification. Thus, we suggest a second test to provide
supporting evidence whether the cause for rejection at the first stage was
solely due to selection or not. Using only individuals with propensity score
close to one, this second test relies on an 'identification at infinity'
argument, but accommodates cases of irregular identification. Importantly,
neither of the two tests requires parametric assumptions on the selection
equation nor a continuous exclusion restriction. Data-driven bandwidth
procedures are proposed, and Monte Carlo evidence suggests a good finite sample
performance in particular of the first test. Finally, we also derive an
extension of the first test to nonparametric conditional mean functions, and
apply our procedure to test for selection in log hourly wages using UK Family
Expenditure Survey data, as in \citet{AB2017}. | Testing for Quantile Sample Selection | 2019-07-17 12:39:39 | Valentina Corradi, Daniel Gutknecht | http://arxiv.org/abs/1907.07412v5, http://arxiv.org/pdf/1907.07412v5 | econ.EM
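A simplified, fully parametric analogue of the first test conveys the idea: estimate the propensity score of being observed, then check whether it enters the outcome quantile regression on the selected sample. The paper's test is nonparametric and does not require these functional forms; the data-generating process and variable names below are illustrative assumptions.

```python
# Parametric sketch of an omitted-predictor selection test at the median:
# a significant propensity-score coefficient signals sample selection.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
educ = rng.normal(size=n)
u = rng.normal(size=n)                                    # common shock -> selection
works = (0.5 + educ + rng.normal(size=n) + u > 0)         # selection equation
wage = 1.0 + 0.5 * educ + u                               # log wage, seen only if works

df = pd.DataFrame({'educ': educ, 'works': works.astype(int), 'wage': wage})
# step 1: propensity score P(works = 1 | educ)
df['pscore'] = sm.Logit(df['works'], sm.add_constant(df['educ'])).fit(disp=0).predict()

# step 2: does the propensity score predict wages in the selected sample?
sel = df[df['works'] == 1]
fit = smf.quantreg('wage ~ educ + pscore', sel).fit(q=0.5)
print(fit.summary())
```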
28,967 | em | Clustering methods such as k-means have found widespread use in a variety of
applications. This paper proposes a formal testing procedure to determine
whether a null hypothesis of a single cluster, indicating homogeneity of the
data, can be rejected in favor of multiple clusters. The test is simple to
implement, valid under relatively mild conditions (including non-normality, and
heterogeneity of the data in aspects beyond those in the clustering analysis),
and applicable in a range of contexts (including clustering when the time
series dimension is small, or clustering on parameters other than the mean). We
verify that the test has good size control in finite samples, and we illustrate
the test in applications to clustering vehicle manufacturers and U.S. mutual
funds. | Testing for Unobserved Heterogeneity via k-means Clustering | 2019-07-17 18:28:24 | Andrew J. Patton, Brian M. Weller | http://arxiv.org/abs/1907.07582v1, http://arxiv.org/pdf/1907.07582v1 | econ.EM |
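As a rough, simulation-based stand-in for the paper's asymptotic test, one can compare the reduction in within-cluster sum of squares achieved by k-means to its distribution under a homogeneous null, here a moment-matched Gaussian via parametric bootstrap. This only illustrates the testing logic; it is not the authors' procedure, and the Gaussian null is an assumption of the sketch.

```python
# Parametric-bootstrap analogue of a single-cluster test: is the SSE drop from
# k-means larger than expected under a homogeneous (moment-matched) Gaussian?
import numpy as np
from sklearn.cluster import KMeans

def sse_drop(X, k=2):
    sse1 = float(((X - X.mean(axis=0)) ** 2).sum())
    ssek = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
    return (sse1 - ssek) / sse1

def single_cluster_pvalue(X, k=2, B=199, seed=0):
    rng = np.random.default_rng(seed)
    stat = sse_drop(X, k)
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    null = [sse_drop(rng.multivariate_normal(mu, cov, size=len(X)), k)
            for _ in range(B)]
    return (1 + sum(s >= stat for s in null)) / (B + 1)

# two well-separated groups should reject homogeneity
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
print(single_cluster_pvalue(X))  # small p-value -> reject a single cluster
```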