Dataset columns (each record below lists these fields in order):
  id                int64         values range from 28.8k to 36k
  category          string class  3 distinct values
  text              string        lengths 44 to 3.03k characters
  title             string        lengths 10 to 236 characters
  published         string        length 19 characters
  author            string        lengths 6 to 943 characters
  link              string        lengths 66 to 127 characters
  primary_category  string class  62 distinct values
29,425
em
This paper provides an introduction to structural estimation methods for matching markets with transferable utility.
Structural Estimation of Matching Markets with Transferable Utility
2021-09-16 15:29:14
Alfred Galichon, Bernard Salanié
http://arxiv.org/abs/2109.07932v1, http://arxiv.org/pdf/2109.07932v1
econ.EM
29,133
em
In this paper, we propose a model which simulates the odds distributions of a pari-mutuel betting system under two hypotheses on the behavior of bettors: 1. The amount of bets increases very rapidly as the betting deadline approaches. 2. Each bettor bets on the horse that gives the largest expected benefit. The results can be interpreted as showing that such efficient behaviors do not serve to extinguish the favorite-longshot (FL) bias but instead produce an even stronger FL bias.
Efficiency in Micro-Behaviors and FL Bias
2018-05-11 05:17:42
Kurihara Kazutaka, Yohei Tutiya
http://arxiv.org/abs/1805.04225v1, http://arxiv.org/pdf/1805.04225v1
econ.EM
29,931
em
This paper derives the efficiency bound for estimating the parameters of dynamic panel data models in the presence of an increasing number of incidental parameters. We study the efficiency problem by formulating the dynamic panel as a simultaneous equations system, and show that the quasi-maximum likelihood estimator (QMLE) applied to the system achieves the efficiency bound. Comparison of QMLE with fixed effects estimators is made.
Efficiency of QMLE for dynamic panel data models with interactive effects
2023-12-13 06:56:34
Jushan Bai
http://arxiv.org/abs/2312.07881v1, http://arxiv.org/pdf/2312.07881v1
econ.EM
28,940
em
Endogeneity and missing data are common issues in empirical research. We investigate how both jointly affect inference on causal parameters. Conventional methods to estimate the variance, which treat the imputed data as if it was observed in the first place, are not reliable. We derive the asymptotic variance and propose a heteroskedasticity robust variance estimator for two-stage least squares which accounts for the imputation. Monte Carlo simulations support our theoretical findings.
On the Effect of Imputation on the 2SLS Variance
2019-03-26 19:42:59
Helmut Farbmacher, Alexander Kann
http://arxiv.org/abs/1903.11004v1, http://arxiv.org/pdf/1903.11004v1
econ.EM
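As a rough illustration of the setting in the preceding abstract (arXiv:1903.11004), the sketch below runs two-stage least squares with a mean-imputed covariate. The data-generating process, the imputation rule, and all variable names are invented here, and the "naive" robust variance it reports treats the imputed values as observed data, which is exactly the practice the paper cautions against rather than the authors' corrected estimator.

```python
# Illustrative sketch (not the authors' code): 2SLS with a mean-imputed covariate.
# All names (z, x, w, y) and the DGP are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)                      # instrument
u = rng.normal(size=n)                      # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)  # endogenous regressor
w = rng.normal(size=n)                      # exogenous covariate, partly missing
y = 1.0 + 2.0 * x + 1.5 * w + u             # outcome

miss = rng.random(n) < 0.3                  # 30% of w missing at random
w_imp = w.copy()
w_imp[miss] = w[~miss].mean()               # mean imputation

# First stage: project x on (1, z, w_imp); second stage: regress y on (1, x_hat, w_imp).
Z = np.column_stack([np.ones(n), z, w_imp])
X = np.column_stack([np.ones(n), x, w_imp])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
X_hat = np.column_stack([np.ones(n), x_hat, w_imp])
beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]

# Naive heteroskedasticity-robust (HC0) variance that ignores the imputation step.
e = y - X @ beta
bread = np.linalg.inv(X_hat.T @ X_hat)
meat = (X_hat * (e ** 2)[:, None]).T @ X_hat
V_naive = bread @ meat @ bread
print("2SLS coefficients:", beta)
print("naive robust std. errors:", np.sqrt(np.diag(V_naive)))
```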
28,947
em
We propose a model selection criterion to detect purely causal from purely noncausal models in the framework of quantile autoregressions (QAR). We also present asymptotics for the i.i.d. case with regularly varying distributed innovations in QAR. This new modelling perspective is appealing for investigating the presence of bubbles in economic and financial time series, and is an alternative to approximate maximum likelihood methods. We illustrate our analysis using hyperinflation episodes in Latin American countries.
Identification of Noncausal Models by Quantile Autoregressions
2019-04-11 23:49:57
Alain Hecq, Li Sun
http://arxiv.org/abs/1904.05952v1, http://arxiv.org/pdf/1904.05952v1
econ.EM
28,965
em
Complex functions have many uses in various fields of study, so analyzing their characteristics is of broad interest to other sciences. This work begins with a particular class of rational functions of a complex variable; for this class, two elementary properties concerning the residues are deduced, and a result establishing a lower bound for the p-norm of the residues vector is proposed. Applications to autoregressive processes are presented, with illustrations on historical data of electricity generation and econometric series.
On the residues vectors of a rational class of complex functions. Application to autoregressive processes
2019-07-12 23:46:56
Guillermo Daniel Scheidereiter, Omar Roberto Faure
http://arxiv.org/abs/1907.05949v1, http://arxiv.org/pdf/1907.05949v1
econ.EM
29,080
em
Many econometric models can be analyzed as finite mixtures. We focus on two-component mixtures and we show that they are nonparametrically point identified by a combination of an exclusion restriction and tail restrictions. Our identification analysis suggests simple closed-form estimators of the component distributions and mixing proportions, as well as a specification test. We derive their asymptotic properties using results on tail empirical processes and we present a simulation study that documents their finite-sample performance.
Inference on two component mixtures under tail restrictions
2021-02-11 22:27:47
Marc Henry, Koen Jochmans, Bernard Salanié
http://arxiv.org/abs/2102.06232v1, http://arxiv.org/pdf/2102.06232v1
econ.EM
29,093
em
This paper establishes an extended representation theorem for unit-root VARs. A specific algebraic technique is devised to recover stationarity from the solution of the model in the form of a cointegrating transformation. Closed forms of the results of interest are derived for integrated processes up to the fourth order. An extension to higher-order processes turns out to be within the reach of an induction argument.
Cointegrated Solutions of Unit-Root VARs: An Extended Representation Theorem
2021-02-21 18:28:20
Mario Faliva, Maria Grazia Zoia
http://arxiv.org/abs/2102.10626v1, http://arxiv.org/pdf/2102.10626v1
econ.EM
29,015
em
The randomization inference literature studying randomized controlled trials (RCTs) assumes that units' potential outcomes are deterministic. This assumption is unlikely to hold, as stochastic shocks may take place during the experiment. In this paper, we consider the case of an RCT with individual-level treatment assignment, and we allow for individual-level and cluster-level (e.g. village-level) shocks. We show that one can draw inference on the ATE conditional on the realizations of the cluster-level shocks, using heteroskedasticity-robust standard errors, or on the ATE netted out of those shocks, using cluster-robust standard errors.
Clustering and External Validity in Randomized Controlled Trials
2019-12-02 22:30:25
Antoine Deeb, Clément de Chaisemartin
http://arxiv.org/abs/1912.01052v7, http://arxiv.org/pdf/1912.01052v7
econ.EM
28,777
em
This article reviews recent advances in fixed effect estimation of panel data models for long panels, where the number of time periods is relatively large. We focus on semiparametric models with unobserved individual and time effects, where the distribution of the outcome variable conditional on covariates and unobserved effects is specified parametrically, while the distribution of the unobserved effects is left unrestricted. Compared to existing reviews on long panels (Arellano and Hahn 2007; a section in Arellano and Bonhomme 2011) we discuss models with both individual and time effects, split-panel Jackknife bias corrections, unbalanced panels, distribution and quantile effects, and other extensions. Understanding and correcting the incidental parameter bias caused by the estimation of many fixed effects is our main focus, and the unifying theme is that the order of this bias is given by the simple formula p/n for all models discussed, with p the number of estimated parameters and n the total sample size.
Fixed Effect Estimation of Large T Panel Data Models
2017-09-26 15:46:13
Iván Fernández-Val, Martin Weidner
http://arxiv.org/abs/1709.08980v2, http://arxiv.org/pdf/1709.08980v2
econ.EM
28,778
em
This paper considers the identification of treatment effects on conditional transition probabilities. We show that even under random assignment only the instantaneous average treatment effect is point identified. Since treated and control units drop out at different rates, randomization only ensures the comparability of treatment and controls at the time of randomization, so that long-run average treatment effects are not point identified. Instead we derive informative bounds on these average treatment effects. Our bounds do not impose (semi)parametric restrictions, for example, proportional hazards. We also explore various assumptions such as monotone treatment response, common shocks and positively correlated outcomes that tighten the bounds.
Bounds On Treatment Effects On Transitions
2017-09-26 15:46:40
Johan Vikström, Geert Ridder, Martin Weidner
http://arxiv.org/abs/1709.08981v1, http://arxiv.org/pdf/1709.08981v1
econ.EM
28,779
em
We propose an inference procedure for estimators defined by mathematical programming problems, focusing on the important special cases of linear programming (LP) and quadratic programming (QP). In these settings, the coefficients in both the objective function and the constraints of the mathematical programming problem may be estimated from data and hence involve sampling error. Our inference approach exploits the characterization of the solutions to these programming problems by complementarity conditions; by doing so, we can transform the problem of doing inference on the solution of a constrained optimization problem (a non-standard inference problem) into one involving inference based on a set of inequalities with pre-estimated coefficients, which is much better understood. We evaluate the performance of our procedure in several Monte Carlo simulations and an empirical application to the classic portfolio selection problem in finance.
Inference on Estimators defined by Mathematical Programming
2017-09-26 19:24:52
Yu-Wei Hsieh, Xiaoxia Shi, Matthew Shum
http://arxiv.org/abs/1709.09115v1, http://arxiv.org/pdf/1709.09115v1
econ.EM
28,780
em
We analyze the empirical content of the Roy model, stripped down to its essential features, namely sector specific unobserved heterogeneity and self-selection on the basis of potential outcomes. We characterize sharp bounds on the joint distribution of potential outcomes and testable implications of the Roy self-selection model under an instrumental constraint on the joint distribution of potential outcomes we call stochastically monotone instrumental variable (SMIV). We show that testing the Roy model selection is equivalent to testing stochastic monotonicity of observed outcomes relative to the instrument. We apply our sharp bounds to the derivation of a measure of departure from Roy self-selection to identify values of observable characteristics that induce the most costly misallocation of talent and sector and are therefore prime targets for intervention. Special emphasis is put on the case of binary outcomes, which has received little attention in the literature to date. For richer sets of outcomes, we emphasize the distinction between pointwise sharp bounds and functional sharp bounds, and its importance, when constructing sharp bounds on functional features, such as inequality measures. We analyze a Roy model of college major choice in Canada and Germany within this framework, and we take a new look at the under-representation of women in~STEM.
Sharp bounds and testability of a Roy model of STEM major choices
2017-09-27 02:25:35
Ismael Mourifie, Marc Henry, Romuald Meango
http://arxiv.org/abs/1709.09284v2, http://arxiv.org/pdf/1709.09284v2
econ.EM
28,781
em
The ongoing net neutrality debate has generated a lot of heated discussion on whether or not monetary interactions between content and access providers should be regulated. Among the several topics discussed, `differential pricing' has recently received attention due to `zero-rating' platforms proposed by some service providers. Under differential pricing, Internet Service Providers (ISPs) can exempt content from certain Content Providers (CPs) from data access charges (zero-rating), while content from other CPs receives no exemption. This opens the possibility for CPs to make `sponsorship' agreements to zero-rate their content and attract more user traffic. In this paper, we study the effect of differential pricing on the various players in the Internet. We first consider a model with a monopolistic ISP and multiple CPs where users select CPs based on quality of service (QoS) and data access charges. We show that in a differential pricing regime 1) a CP offering low QoS can attain higher surplus than a CP offering better QoS through sponsorships, and 2) overall QoS (mean delay) for end users can degrade. In an oligopolistic market with multiple ISPs, users tend to select the ISP with the lowest price, leading to the same type of conclusions as in the monopolistic market. We then study how differential pricing affects the revenue of ISPs.
Zero-rating of Content and its Effect on the Quality of Service in the Internet
2017-09-27 07:51:32
Manjesh K. Hanawal, Fehmina Malik, Yezekael Hayel
http://arxiv.org/abs/1709.09334v2, http://arxiv.org/pdf/1709.09334v2
econ.EM
28,782
em
The uncertainty and robustness of Computable General Equilibrium models can be assessed by conducting a Systematic Sensitivity Analysis. Different methods have been used in the literature for SSA of CGE models such as Gaussian Quadrature and Monte Carlo methods. This paper explores the use of Quasi-random Monte Carlo methods based on the Halton and Sobol' sequences as means to improve the efficiency over regular Monte Carlo SSA, thus reducing the computational requirements of the SSA. The findings suggest that by using low-discrepancy sequences, the number of simulations required by the regular MC SSA methods can be notably reduced, hence lowering the computational time required for SSA of CGE models.
Quasi-random Monte Carlo application in CGE systematic sensitivity analysis
2017-09-28 01:54:30
Theodoros Chatzivasileiadis
http://arxiv.org/abs/1709.09755v1, http://arxiv.org/pdf/1709.09755v1
econ.EM
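A minimal sketch of the sampling idea in the preceding abstract (arXiv:1709.09755), assuming SciPy's quasi-Monte Carlo generators. The toy "model" being evaluated and the parameter ranges are placeholders, not the CGE model or the experiments from the paper.

```python
# Illustrative sketch: drawing low-discrepancy (Halton / Sobol') parameter points
# for a systematic sensitivity analysis, compared with plain Monte Carlo draws.
# The two-parameter toy_model below is a stand-in for an expensive CGE solve.
import numpy as np
from scipy.stats import qmc

def toy_model(elasticity, tariff):
    # placeholder outcome of interest
    return elasticity * np.log1p(tariff) + 0.1 * elasticity ** 2

lower, upper = np.array([0.5, 0.0]), np.array([3.0, 0.4])
n_draws = 256  # power of two keeps the Sobol' sequence balanced

halton = qmc.scale(qmc.Halton(d=2, seed=0).random(n_draws), lower, upper)
sobol = qmc.scale(qmc.Sobol(d=2, seed=0).random(n_draws), lower, upper)
plain = lower + (upper - lower) * np.random.default_rng(0).random((n_draws, 2))

for name, draws in [("Halton", halton), ("Sobol'", sobol), ("Monte Carlo", plain)]:
    outcomes = toy_model(draws[:, 0], draws[:, 1])
    print(f"{name:12s} mean={outcomes.mean():.4f} std={outcomes.std():.4f}")
```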
28,783
em
We propose a method of estimating the linear-in-means model of peer effects in which the peer group, defined by a social network, is endogenous in the outcome equation for peer effects. Endogeneity is due to unobservable individual characteristics that influence both link formation in the network and the outcome of interest. We propose two estimators of the peer effect equation that control for the endogeneity of the social connections using a control function approach. We leave the functional form of the control function unspecified and treat it as unknown. To estimate the model, we use a sieve semiparametric approach, and we establish asymptotics of the semiparametric estimator.
Estimation of Peer Effects in Endogenous Social Networks: Control Function Approach
2017-09-28 18:41:48
Ida Johnsson, Hyungsik Roger Moon
http://arxiv.org/abs/1709.10024v3, http://arxiv.org/pdf/1709.10024v3
econ.EM
28,784
em
This paper considers the problem of forecasting a collection of short time series using cross sectional information in panel data. We construct point predictors using Tweedie's formula for the posterior mean of heterogeneous coefficients under a correlated random effects distribution. This formula utilizes cross-sectional information to transform the unit-specific (quasi) maximum likelihood estimator into an approximation of the posterior mean under a prior distribution that equals the population distribution of the random coefficients. We show that the risk of a predictor based on a non-parametric estimate of the Tweedie correction is asymptotically equivalent to the risk of a predictor that treats the correlated-random-effects distribution as known (ratio-optimality). Our empirical Bayes predictor performs well compared to various competitors in a Monte Carlo study. In an empirical application we use the predictor to forecast revenues for a large panel of bank holding companies and compare forecasts that condition on actual and severely adverse macroeconomic conditions.
Forecasting with Dynamic Panel Data Models
2017-09-29 01:46:48
Laura Liu, Hyungsik Roger Moon, Frank Schorfheide
http://arxiv.org/abs/1709.10193v1, http://arxiv.org/pdf/1709.10193v1
econ.EM
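To make the Tweedie correction mentioned in the preceding abstract (arXiv:1709.10193) concrete, here is a toy numerical sketch for a normal-normal setting, where the posterior mean of a unit-specific coefficient given its noisy estimate is the estimate plus sigma^2 times the score of the estimated marginal density. The data-generating process, the Gaussian kernel density estimator, and the finite-difference derivative are illustrative choices, not the paper's implementation.

```python
# Illustrative sketch of a Tweedie-type correction: shrink noisy unit-level
# estimates using the score of the cross-sectional density of those estimates.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
n_units, sigma = 1000, 0.5
lam = rng.normal(1.0, 1.0, size=n_units)          # true heterogeneous coefficients
lam_hat = lam + sigma * rng.normal(size=n_units)  # noisy unit-level estimates

kde = gaussian_kde(lam_hat)                        # estimate of the marginal density
eps = 1e-4
score = (np.log(kde(lam_hat + eps)) - np.log(kde(lam_hat - eps))) / (2 * eps)
lam_tweedie = lam_hat + sigma ** 2 * score         # Tweedie posterior-mean approximation

mse_raw = np.mean((lam_hat - lam) ** 2)
mse_tw = np.mean((lam_tweedie - lam) ** 2)
print(f"MSE raw estimates: {mse_raw:.4f}, MSE Tweedie-corrected: {mse_tw:.4f}")
```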
28,785
em
There is a fast growing literature that set-identifies structural vector autoregressions (SVARs) by imposing sign restrictions on the responses of a subset of the endogenous variables to a particular structural shock (sign-restricted SVARs). Most methods that have been used to construct pointwise coverage bands for impulse responses of sign-restricted SVARs are justified only from a Bayesian perspective. This paper demonstrates how to formulate the inference problem for sign-restricted SVARs within a moment-inequality framework. In particular, it develops methods of constructing confidence bands for impulse response functions of sign-restricted SVARs that are valid from a frequentist perspective. The paper also provides a comparison of frequentist and Bayesian coverage bands in the context of an empirical application - the former can be substantially wider than the latter.
Inference for VARs Identified with Sign Restrictions
2017-09-29 02:25:13
Eleonora Granziera, Hyungsik Roger Moon, Frank Schorfheide
http://arxiv.org/abs/1709.10196v2, http://arxiv.org/pdf/1709.10196v2
econ.EM
28,786
em
We systematically investigate the effect heterogeneity of job search programmes for unemployed workers. To investigate possibly heterogeneous employment effects, we combine non-experimental causal empirical models with Lasso-type estimators. The empirical analyses are based on rich administrative data from Swiss social security records. We find considerable heterogeneities only during the first six months after the start of training. Consistent with previous results of the literature, unemployed persons with fewer employment opportunities profit more from participating in these programmes. Furthermore, we also document heterogeneous employment effects by residence status. Finally, we show the potential of easy-to-implement programme participation rules for improving average employment effects of these active labour market programmes.
Heterogeneous Employment Effects of Job Search Programmes: A Machine Learning Approach
2017-09-29 11:21:08
Michael Knaus, Michael Lechner, Anthony Strittmatter
http://dx.doi.org/10.3368/jhr.57.2.0718-9615R1, http://arxiv.org/abs/1709.10279v2, http://arxiv.org/pdf/1709.10279v2
econ.EM
28,787
em
Dynamic contracts with multiple agents is a classical decentralized decision-making problem with asymmetric information. In this paper, we extend the single-agent dynamic incentive contract model in continuous-time to a multi-agent scheme in finite horizon and allow the terminal reward to be dependent on the history of actions and incentives. We first derive a set of sufficient conditions for the existence of optimal contracts in the most general setting and conditions under which they form a Nash equilibrium. Then we show that the principal's problem can be converted to solving Hamilton-Jacobi-Bellman (HJB) equation requiring a static Nash equilibrium. Finally, we provide a framework to solve this problem by solving partial differential equations (PDE) derived from backward stochastic differential equations (BSDE).
A Note on the Multi-Agent Contracts in Continuous Time
2017-10-01 20:07:08
Qi Luo, Romesh Saigal
http://arxiv.org/abs/1710.00377v2, http://arxiv.org/pdf/1710.00377v2
econ.EM
28,788
em
This paper presents a new estimator of the intercept of a linear regression model in cases where the outcome variable is observed subject to a selection rule. The intercept is often of inherent interest in this context; for example, in a program evaluation setting, the difference between the intercepts in the outcome equations for participants and non-participants can be interpreted as the difference between the average outcomes of participants and their counterfactual average outcomes had they chosen not to participate. The new estimator can under mild conditions exhibit a rate of convergence in probability equal to $n^{-p/(2p+1)}$, where $p\ge 2$ is an integer that indexes the strength of certain smoothness assumptions. This rate of convergence is shown to be the optimal rate of convergence for estimation of the intercept parameter in terms of a minimax criterion. The new estimator, unlike other proposals in the literature, is under mild conditions consistent and asymptotically normal with a rate of convergence that is the same regardless of the degree to which selection depends on unobservables in the outcome equation. Simulation evidence and an empirical example are included.
Rate-Optimal Estimation of the Intercept in a Semiparametric Sample-Selection Model
2017-10-04 03:02:22
Chuan Goh
http://arxiv.org/abs/1710.01423v3, http://arxiv.org/pdf/1710.01423v3
econ.EM
28,789
em
Gale, Kuhn and Tucker (1950) introduced two ways to reduce a zero-sum game by packaging some strategies with respect to a probability distribution on them. In terms of value, they gave conditions for a desirable reduction. We show that a probability distribution for a desirable reduction relies on optimal strategies in the original game. Also, we correct an improper example given by them to show that the reverse of a theorem does not hold.
A Note on Gale, Kuhn, and Tucker's Reductions of Zero-Sum Games
2017-10-06 12:45:42
Shuige Liu
http://arxiv.org/abs/1710.02326v1, http://arxiv.org/pdf/1710.02326v1
econ.EM
28,790
em
This study proposes a simple technique for propensity score matching for multiple treatment levels under the strong unconfoundedness assumption with the help of the Aitchison distance proposed in the field of compositional data analysis (CODA).
Propensity score matching for multiple treatment levels: A CODA-based contribution
2017-10-24 03:27:47
Hajime Seya, Takahiro Yoshida
http://arxiv.org/abs/1710.08558v1, http://arxiv.org/pdf/1710.08558v1
econ.EM
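For readers unfamiliar with the Aitchison distance referenced in the preceding abstract (arXiv:1710.08558), the sketch below computes it between generalized propensity score vectors (which are compositional, since the scores across treatment levels sum to one) and performs a simple one-nearest-neighbor match. The score vectors and the matching rule are made up for illustration and do not reproduce the authors' procedure.

```python
# Illustrative sketch: Aitchison distance between generalized propensity score
# vectors and a one-nearest-neighbor match across treatment groups.
# In practice the scores would come from, e.g., a multinomial logit of
# treatment level on covariates; here they are simulated.
import numpy as np

def clr(p):
    """Centered log-ratio transform of a compositional vector (or rows of a matrix)."""
    logp = np.log(p)
    return logp - logp.mean(axis=-1, keepdims=True)

def aitchison_distance(p, q):
    return np.linalg.norm(clr(p) - clr(q), axis=-1)

rng = np.random.default_rng(2)
scores_treated = rng.dirichlet([2.0, 1.0, 1.0], size=5)   # P(level 0,1,2 | X), treated units
scores_control = rng.dirichlet([1.5, 1.5, 1.0], size=50)  # candidate matches

for i, p in enumerate(scores_treated):
    d = aitchison_distance(p, scores_control)
    j = int(np.argmin(d))
    print(f"treated unit {i}: matched control {j}, Aitchison distance {d[j]:.3f}")
```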
28,791
em
We consider an index model of dyadic link formation with a homophily effect index and a degree heterogeneity index. We provide nonparametric identification results in a single large network setting for the potentially nonparametric homophily effect function, the realizations of unobserved individual fixed effects and the unknown distribution of idiosyncratic pairwise shocks, up to normalization, for each possible true value of the unknown parameters. We propose a novel form of scale normalization on an arbitrary interquantile range, which is not only theoretically robust but also proves particularly convenient for the identification analysis, as quantiles provide direct linkages between the observable conditional probabilities and the unknown index values. We then use an inductive "in-fill and out-expansion" algorithm to establish our main results, and consider extensions to more general settings that allow nonseparable dependence between homophily and degree heterogeneity, as well as certain extents of network sparsity and weaker assumptions on the support of unobserved heterogeneity. As a byproduct, we also propose a concept called "modeling equivalence" as a refinement of "observational equivalence", and use it to provide a formal discussion about normalization, identification and their interplay with counterfactuals.
Nonparametric Identification in Index Models of Link Formation
2017-10-30 23:32:12
Wayne Yuan Gao
http://arxiv.org/abs/1710.11230v5, http://arxiv.org/pdf/1710.11230v5
econ.EM
28,792
em
Web search data are a valuable source of business and economic information. Previous studies have utilized Google Trends web search data for economic forecasting. We expand this work by providing algorithms to combine and aggregate search volume data, so that the resulting data is both consistent over time and consistent between data series. We give a brand equity example, where Google Trends is used to analyze shopping data for 100 top ranked brands and these data are used to nowcast economic variables. We describe the importance of out of sample prediction and show how principal component analysis (PCA) can be used to improve the signal to noise ratio and prevent overfitting in nowcasting models. We give a finance example, where exploratory data analysis and classification is used to analyze the relationship between Google Trends searches and stock prices.
Aggregating Google Trends: Multivariate Testing and Analysis
2017-12-08 19:18:10
Stephen L. France, Yuying Shi
http://arxiv.org/abs/1712.03152v2, http://arxiv.org/pdf/1712.03152v2
econ.EM
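As a loose illustration of the PCA step described in the preceding abstract (arXiv:1712.03152), this sketch reduces a panel of simulated search-volume series to a few principal components and uses them as nowcasting regressors. The simulated series, the component count, and the downstream linear regression are placeholders rather than the authors' pipeline.

```python
# Illustrative sketch: compress many (simulated) search-volume series into a few
# principal components and use them in a simple out-of-sample nowcasting model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_months, n_series = 120, 100
common = rng.normal(size=(n_months, 3))                    # latent demand factors
loadings = rng.normal(size=(3, n_series))
searches = common @ loadings + rng.normal(scale=2.0, size=(n_months, n_series))
target = common @ np.array([1.0, -0.5, 0.3]) + rng.normal(scale=0.5, size=n_months)

pca = PCA(n_components=3)
factors = pca.fit_transform(searches)                      # noise-reduced signal

train, test = slice(0, 100), slice(100, None)              # hold out the last 20 months
model = LinearRegression().fit(factors[train], target[train])
r2_out = model.score(factors[test], target[test])
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
print(f"out-of-sample R^2 of the nowcast: {r2_out:.3f}")
```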
28,793
em
We propose a new inferential methodology for dynamic economies that is robust to misspecification of the mechanism generating frictions. Economies with frictions are treated as perturbations of a frictionless economy that are consistent with a variety of mechanisms. We derive a representation for the law of motion for such economies and we characterize parameter set identification. We derive a link from model aggregate predictions to distributional information contained in qualitative survey data and specify conditions under which the identified set is refined. The latter is used to semi-parametrically estimate distortions due to frictions in macroeconomic variables. Based on these estimates, we propose a novel test for complete models. Using consumer and business survey data collected by the European Commission, we apply our method to estimate distortions due to financial frictions in the Spanish economy. We investigate the implications of these estimates for the adequacy of the standard model of financial frictions SW-BGG (Smets and Wouters (2007), Bernanke, Gertler, and Gilchrist (1999)).
Set Identified Dynamic Economies and Robustness to Misspecification
2017-12-11 11:41:11
Andreas Tryphonides
http://arxiv.org/abs/1712.03675v2, http://arxiv.org/pdf/1712.03675v2
econ.EM
28,794
em
This paper defines the class of $\mathcal{H}$-valued autoregressive (AR) processes with a unit root of finite type, where $\mathcal{H}$ is an infinite dimensional separable Hilbert space, and derives a generalization of the Granger-Johansen Representation Theorem valid for any integration order $d=1,2,\dots$. An existence theorem shows that the solution of an AR with a unit root of finite type is necessarily integrated of some finite integer $d$ and displays a common trends representation with a finite number of common stochastic trends of the type of (cumulated) bilateral random walks and an infinite dimensional cointegrating space. A characterization theorem clarifies the connections between the structure of the AR operators and $(i)$ the order of integration, $(ii)$ the structure of the attractor space and the cointegrating space, $(iii)$ the expression of the cointegrating relations, and $(iv)$ the Triangular representation of the process. Except for the fact that the number of cointegrating relations that are integrated of order 0 is infinite, the representation of $\mathcal{H}$-valued ARs with a unit root of finite type coincides with that of usual finite dimensional VARs, which corresponds to the special case $\mathcal{H}=\mathbb{R}^p$.
Cointegration in functional autoregressive processes
2017-12-20 18:23:20
Massimo Franchi, Paolo Paruolo
http://dx.doi.org/10.1017/S0266466619000306, http://arxiv.org/abs/1712.07522v2, http://arxiv.org/pdf/1712.07522v2
econ.EM
28,795
em
High-dimensional linear models with endogenous variables play an increasingly important role in the recent econometric literature. In this work we allow for models with many endogenous variables and many instrumental variables to achieve identification. Because of the high dimensionality in the second stage, constructing honest confidence regions with asymptotically correct coverage is non-trivial. Our main contribution is to propose estimators and confidence regions that achieve this. The approach relies on moment conditions that have an additional orthogonality property with respect to nuisance parameters. Moreover, estimation of the high-dimensional nuisance parameters is carried out via new pivotal procedures. In order to achieve simultaneously valid confidence regions we use a multiplier bootstrap procedure to compute critical values and establish its validity.
Simultaneous Confidence Intervals for High-dimensional Linear Models with Many Endogenous Variables
2017-12-21 20:33:40
Alexandre Belloni, Christian Hansen, Whitney Newey
http://arxiv.org/abs/1712.08102v4, http://arxiv.org/pdf/1712.08102v4
econ.EM
28,796
em
This paper investigates the impacts of major natural resource discoveries since 1960 on life expectancy in nations that were resource poor prior to the discoveries. Previous literature explains the relation between national wealth and life expectancy, but it has been silent about the impacts of resource discoveries on life expectancy. We attempt to fill this gap. An important advantage of this study is that, as previous researchers have argued, resource discovery can be treated as an exogenous variable. We use longitudinal data from 1960 to 2014 and apply three modern empirical methods, namely Difference-in-Differences, event studies, and the Synthetic Control approach, to investigate the main question of the research: how do resource discoveries affect life expectancy? The findings show that resource discoveries in Ecuador, Yemen, Oman, and Equatorial Guinea have positive and significant impacts on life expectancy, but the effects for the European countries are mostly negative.
Resource Abundance and Life Expectancy
2018-01-01 01:43:39
Bahram Sanginabadi
http://arxiv.org/abs/1801.00369v1, http://arxiv.org/pdf/1801.00369v1
econ.EM
28,797
em
In this paper we estimate a Bayesian vector autoregressive model with factor stochastic volatility in the error term to assess the effects of an uncertainty shock in the Euro area. This allows us to treat macroeconomic uncertainty as a latent quantity during estimation. Only a limited number of contributions to the literature estimate uncertainty and its macroeconomic consequences jointly, and most are based on single country models. We analyze the special case of a shock restricted to the Euro area, where member states are highly related by construction. We find significant results of a decrease in real activity for all countries over a period of roughly a year following an uncertainty shock. Moreover, equity prices, short-term interest rates and exports tend to decline, while unemployment levels increase. Dynamic responses across countries differ slightly in magnitude and duration, with Ireland, Slovakia and Greece exhibiting different reactions for some macroeconomic fundamentals.
Implications of macroeconomic volatility in the Euro area
2018-01-09 16:20:42
Niko Hauzenberger, Maximilian Böck, Michael Pfarrhofer, Anna Stelzer, Gregor Zens
http://arxiv.org/abs/1801.02925v2, http://arxiv.org/pdf/1801.02925v2
econ.EM
28,798
em
We report a new result on lotteries --- that a well-funded syndicate has a purely mechanical strategy to achieve expected returns of 10\% to 25\% in an equiprobable lottery with no take and no carryover pool. We prove that an optimal strategy (Nash equilibrium) in a game between the syndicate and other players consists of betting one of each ticket (the "trump ticket"), and extend that result to proportional ticket selection in non-equiprobable lotteries. The strategy can be adjusted to accommodate lottery taxes and carryover pools. No "irrationality" need be involved for the strategy to succeed --- it requires only that a large group of non-syndicate bettors each choose a few tickets independently.
A Method for Winning at Lotteries
2018-01-05 22:35:17
Steven D. Moffitt, William T. Ziemba
http://arxiv.org/abs/1801.02958v1, http://arxiv.org/pdf/1801.02958v1
econ.EM
28,799
em
Despite its unusual payout structure, the Canadian 6/49 Lotto is one of the few government sponsored lotteries that has the potential for a favorable strategy we call "buying the pot." By buying the pot we mean that a syndicate buys each ticket in the lottery, ensuring that it holds a jackpot winner. We assume that the other bettors independently buy small numbers of tickets. This paper presents (1) a formula for the syndicate's expected return, (2) conditions under which buying the pot produces a significant positive expected return, and (3) the implications of these findings for lottery design.
Does it Pay to Buy the Pot in the Canadian 6/49 Lotto? Implications for Lottery Design
2018-01-06 00:58:18
Steven D. Moffitt, William T. Ziemba
http://arxiv.org/abs/1801.02959v1, http://arxiv.org/pdf/1801.02959v1
econ.EM
28,800
em
Dynamic Discrete Choice Models (DDCMs) are important in the structural estimation literature. Since the structural errors are practically always continuous and unbounded in nature, researchers often use the expected value function. The idea to solve for the expected value function made solution more practical and estimation feasible. However, as we show in this paper, the expected value function is impractical compared to an alternative: the integrated (ex ante) value function. We provide brief descriptions of the inefficacy of the former, and benchmarks on actual problems with varying cardinality of the state space and number of decisions. Though the two approaches solve the same problem in theory, the benchmarks support the claim that the integrated value function is preferred in practice.
Solving Dynamic Discrete Choice Models: Integrated or Expected Value Function?
2018-01-11 23:26:00
Patrick Kofod Mogensen
http://arxiv.org/abs/1801.03978v1, http://arxiv.org/pdf/1801.03978v1
econ.EM
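To make the "integrated (ex ante) value function" terminology in the preceding abstract (arXiv:1801.03978) concrete, here is a toy value-function iteration under i.i.d. additive type-I extreme value taste shocks, where the integrated value function takes the familiar log-sum-exp form (up to an additive Euler constant that is omitted here). The two-state, two-action model and its payoffs are invented for illustration and are not from the paper's benchmarks.

```python
# Illustrative sketch: solve a tiny dynamic discrete choice model by iterating on
# the integrated (ex ante) value function
#   V(x) = log sum_a exp( u(a,x) + beta * E[V(x') | x, a] ),
# the closed form under additive type-I extreme value shocks (Euler constant dropped).
import numpy as np

beta = 0.95
u = np.array([[0.0, 1.0],    # u[x, a]: flow utility of action a in state x
              [0.5, -0.2]])
# P[a] is the state transition matrix when choosing action a.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.6, 0.4], [0.3, 0.7]]])

V = np.zeros(2)
for _ in range(1000):
    # choice-specific values v[a, x] = u[x, a] + beta * sum_x' P[a][x, x'] * V[x']
    v = u.T + beta * np.array([P[a] @ V for a in range(2)])
    V_new = np.log(np.exp(v).sum(axis=0))   # integrated value function (log-sum-exp)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

choice_probs = np.exp(v) / np.exp(v).sum(axis=0)   # conditional choice probabilities
print("integrated value function:", np.round(V, 4))
print("choice probabilities (rows: actions, cols: states):")
print(np.round(choice_probs, 4))
```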
28,801
em
This paper develops a new model and estimation procedure for panel data that allows us to identify heterogeneous structural breaks. We model individual heterogeneity using a grouped pattern. For each group, we allow common structural breaks in the coefficients. However, the number, timing, and size of these breaks can differ across groups. We develop a hybrid estimation procedure of the grouped fixed effects approach and adaptive group fused Lasso. We show that our method can consistently identify the latent group structure, detect structural breaks, and estimate the regression parameters. Monte Carlo results demonstrate the good performance of the proposed method in finite samples. An empirical application to the relationship between income and democracy illustrates the importance of considering heterogeneous structural breaks.
Heterogeneous structural breaks in panel data models
2018-01-15 09:19:28
Ryo Okui, Wendun Wang
http://arxiv.org/abs/1801.04672v2, http://arxiv.org/pdf/1801.04672v2
econ.EM
28,802
em
We characterize common assumption of rationality in 2-person games within an incomplete information framework. We use the lexicographic model with incomplete information and show that a belief hierarchy expresses common assumption of rationality within a complete information framework if and only if there is a belief hierarchy within the corresponding incomplete information framework that expresses common full belief in caution, rationality, every good choice being supported, and prior belief in the original utility functions.
Characterizing Assumption of Rationality by Incomplete Information
2018-01-15 12:48:20
Shuige Liu
http://arxiv.org/abs/1801.04714v1, http://arxiv.org/pdf/1801.04714v1
econ.EM
28,803
em
We first show (1) the importance of investigating the health expenditure process using an order-two Markov chain model, rather than the standard order-one model that is widely used in the literature. A Markov chain of order two is the minimal framework capable of distinguishing those who experience a certain health expenditure level for the first time from those who have been experiencing that or other levels for some time. In addition, using the model we show (2) that the probability of encountering a health shock first decreases until around age 10 and then increases with age, particularly after age 40, (3) that health shock distributions among different age groups do not differ until their percentiles reach the median range, but that above the median the health shock distributions of older age groups gradually start to first-order dominate those of younger groups, and (4) that the persistency of health shocks also shows a U-shape in relation to age.
Quantifying Health Shocks Over the Life Cycle
2018-01-26 13:35:38
Taiyo Fukai, Hidehiko Ichimura, Kyogo Kanazawa
http://arxiv.org/abs/1801.08746v1, http://arxiv.org/pdf/1801.08746v1
econ.EM
28,804
em
We define a modification of the standard Kripke model, called the ordered Kripke model, by introducing a linear order on the set of accessible states of each state. We first show this model can be used to describe the lexicographic belief hierarchy in epistemic game theory, and perfect rationalizability can be characterized within this model. Then we show that each ordered Kripke model is the limit of a sequence of standard probabilistic Kripke models with a modified (common) belief operator, in the senses of structure and the (epsilon-)permissibilities characterized within them.
Ordered Kripke Model, Permissibility, and Convergence of Probabilistic Kripke Model
2018-01-26 14:46:28
Shuige Liu
http://arxiv.org/abs/1801.08767v1, http://arxiv.org/pdf/1801.08767v1
econ.EM
28,805
em
Why do women avoid participating in competitions, and how can we encourage them to participate? In this paper, we investigate how social image concerns affect women's decisions to compete. We first construct a theoretical model and show that participating in a competition, even under affirmative action policies favoring women, is costly for women under public observability since it deviates from traditional female gender norms, resulting in women's low presence in competitive environments. We propose and theoretically show that introducing prosocial incentives in the competitive environment is effective and robust to public observability since (i) it draws women who are intrinsically motivated by prosocial incentives into the competitive environment and (ii) it makes participating in a competition not costly for women from a social image point of view. We conduct a laboratory experiment in which we randomly manipulate the public observability of decisions to compete and test our theoretical predictions. The results of the experiment are fairly consistent with our theoretical predictions. We suggest that when designing policies to promote gender equality in competitive environments, using prosocial incentives through company philanthropy or other social responsibility policies, either as substitutes or as complements to traditional affirmative action policies, could be promising.
How Can We Induce More Women to Competitions?
2018-01-27 11:51:44
Masayuki Yagasaki, Mitsunosuke Morishita
http://arxiv.org/abs/1801.10518v1, http://arxiv.org/pdf/1801.10518v1
econ.EM
28,806
em
Rational choice theory is based on the idea that people rationally pursue goals to advance their personal interests. In most situations, however, the behavior of an actor is not independent of the behavior of others. Here, we present a new concept of rational choice, hyper-rational choice, in which the actor considers the profit or loss of other actors in addition to his own and then chooses the action that is desirable to him. We use hyper-rational choice to generalize and extend game theory. The results of this study help to model people's behavior while accounting for environmental conditions, the kind of interactive behavior, the valuation system of oneself and others, and the system of beliefs and internal values of societies. Hyper-rationality helps us understand how human decision makers behave in interactive decisions.
Hyper-rational choice theory
2018-01-12 02:16:09
Madjid Eshaghi Gordji, Gholamreza Askari
http://arxiv.org/abs/1801.10520v2, http://arxiv.org/pdf/1801.10520v2
econ.EM
28,807
em
We develop a new VAR model for structural analysis with mixed-frequency data. The MIDAS-SVAR model makes it possible to identify structural dynamic links by exploiting the information contained in variables sampled at different frequencies. It also provides a general framework to test homogeneous frequency-based representations against mixed-frequency data models. A set of Monte Carlo experiments suggests that the test performs well both in terms of size and power. The MIDAS-SVAR is then used to study how monetary policy and financial market volatility impact the dynamics of gross capital inflows to the US. While no relation is found when using standard quarterly data, exploiting the variability present in the series within the quarter shows that the effect of an interest rate shock is greater the longer the time lag between the month of the shock and the end of the quarter.
Structural analysis with mixed-frequency data: A MIDAS-SVAR model of US capital flows
2018-02-02 21:12:12
Emanuele Bacchiocchi, Andrea Bastianin, Alessandro Missale, Eduardo Rossi
http://arxiv.org/abs/1802.00793v1, http://arxiv.org/pdf/1802.00793v1
econ.EM
28,808
em
The development and deployment of matching procedures that incentivize truthful preference reporting is considered one of the major successes of market design research. In this study, we test the degree to which these procedures succeed in eliminating preference misrepresentation. We administered an online experiment to 1,714 medical students immediately after their participation in the medical residency match--a leading field application of strategy-proof market design. When placed in an analogous, incentivized matching task, we find that 23% of participants misrepresent their preferences. We explore the factors that predict preference misrepresentation, including cognitive ability, strategic positioning, overconfidence, expectations, advice, and trust. We discuss the implications of this behavior for the design of allocation mechanisms and the social welfare in markets that use them.
An Experimental Investigation of Preference Misrepresentation in the Residency Match
2018-02-05 20:51:55
Alex Rees-Jones, Samuel Skowronek
http://dx.doi.org/10.1073/pnas.1803212115, http://arxiv.org/abs/1802.01990v2, http://arxiv.org/pdf/1802.01990v2
econ.EM
28,809
em
Consumers are creatures of habit, often periodic, tied to work, shopping and other schedules. We analyzed one month of data from the world's largest bike-sharing company to elicit demand behavioral cycles, initially using models from animal tracking that showed large customers fit an Ornstein-Uhlenbeck model with demand peaks at periodicities of 7, 12, 24 hour and 7-days. Lorenz curves of bicycle demand showed that the majority of customer usage was infrequent, and demand cycles from time-series models would strongly overfit the data yielding unreliable models. Analysis of thresholded wavelets for the space-time tensor of bike-sharing contracts was able to compress the data into a 56-coefficient model with little loss of information, suggesting that bike-sharing demand behavior is exceptionally strong and regular. Improvements to predicted demand could be made by adjusting for 'noise' filtered by our model from air quality and weather information and demand from infrequent riders.
Prediction of Shared Bicycle Demand with Wavelet Thresholding
2018-02-08 04:17:27
J. Christopher Westland, Jian Mou, Dafei Yin
http://arxiv.org/abs/1802.02683v1, http://arxiv.org/pdf/1802.02683v1
econ.EM
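As a rough sketch of wavelet thresholding in the spirit of the preceding abstract (arXiv:1802.02683), the snippet below decomposes a simulated periodic demand series with PyWavelets, soft-thresholds small coefficients, and reconstructs a compressed signal. The wavelet family, the threshold rule, and the simulated series are assumptions made here and are unrelated to the paper's bike-sharing data or its 56-coefficient model.

```python
# Illustrative sketch: compress a noisy periodic demand series by soft-thresholding
# its wavelet coefficients, keeping only the largest ones (assumes PyWavelets).
import numpy as np
import pywt

rng = np.random.default_rng(5)
hours = np.arange(24 * 28)                                  # four weeks of hourly data
demand = (10 + 4 * np.sin(2 * np.pi * hours / 24)           # daily cycle
          + 2 * np.sin(2 * np.pi * hours / (24 * 7))        # weekly cycle
          + rng.normal(scale=1.5, size=hours.size))         # noise

coeffs = pywt.wavedec(demand, "db4", level=5)
flat = np.concatenate([np.abs(c) for c in coeffs])
thresh = np.quantile(flat, 0.90)                            # keep roughly the top 10%
coeffs_thr = [pywt.threshold(c, thresh, mode="soft") for c in coeffs]
smoothed = pywt.waverec(coeffs_thr, "db4")[: demand.size]

kept = int(sum((np.abs(c) > 0).sum() for c in coeffs_thr))
rmse = np.sqrt(np.mean((smoothed - demand) ** 2))
print(f"nonzero coefficients kept: {kept}, reconstruction RMSE vs raw series: {rmse:.3f}")
```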
28,810
em
This paper describes a numerical method to solve for the mean product qualities that equate the real market shares to the market shares predicted by a discrete choice model. The method covers a general class of discrete choice models, including the pure characteristics model in Berry and Pakes (2007) and the random coefficient logit model in Berry et al. (1995) (hereafter BLP). The method transforms the original market share inversion problem into an unconstrained convex minimization problem, so that any convex programming algorithm can be used to solve the inversion. Moreover, this result also implies that the computational complexity of inverting a demand model should be no more than that of a convex programming problem. In simulation examples, I show the method outperforms the contraction mapping algorithm in BLP. I also find the method remains robust in pure characteristics models with near-zero market shares.
A General Method for Demand Inversion
2018-02-13 05:50:46
Lixiong Li
http://arxiv.org/abs/1802.04444v3, http://arxiv.org/pdf/1802.04444v3
econ.EM
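The preceding abstract (arXiv:1802.04444) describes recasting market share inversion as an unconstrained convex program. The sketch below does this for the plain multinomial logit case, where the objective log(1 + sum_j exp(delta_j)) - sum_j s_j * delta_j has predicted-minus-observed shares as its gradient, so its minimizer matches the observed shares. The logit specialization, the made-up shares, and the use of SciPy's BFGS routine are illustrative assumptions, not the paper's general method.

```python
# Illustrative sketch: invert logit market shares by minimizing a convex objective
# whose gradient is (predicted share - observed share). delta_j are mean qualities.
import numpy as np
from scipy.optimize import minimize

s_obs = np.array([0.20, 0.15, 0.05, 0.30])      # inside-good shares (outside share = 0.30)

def objective(delta):
    return np.log1p(np.exp(delta).sum()) - s_obs @ delta

def gradient(delta):
    pred = np.exp(delta) / (1.0 + np.exp(delta).sum())
    return pred - s_obs

res = minimize(objective, x0=np.zeros(len(s_obs)), jac=gradient, method="BFGS")
delta_hat = res.x
delta_closed_form = np.log(s_obs) - np.log(1.0 - s_obs.sum())  # logit benchmark
print("recovered deltas:      ", np.round(delta_hat, 6))
print("closed-form logit deltas:", np.round(delta_closed_form, 6))
```

For the pure logit the closed form makes the solver unnecessary; the point of the convex formulation is that the same strategy extends to models without a closed-form inverse.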
29,016
em
This paper develops a set of test statistics based on bilinear forms in the context of the extremum estimation framework with particular interest in nonlinear hypothesis. We show that the proposed statistic converges to a conventional chi-square limit. A Monte Carlo experiment suggests that the test statistic works well in finite samples.
Bilinear form test statistics for extremum estimation
2019-12-03 17:32:49
Federico Crudu, Felipe Osorio
http://dx.doi.org/10.1016/j.econlet.2019.108885, http://arxiv.org/abs/1912.01410v1, http://arxiv.org/pdf/1912.01410v1
econ.EM
28,811
em
We provide an epistemic foundation for cooperative games by proof theory, via studying the knowledge required for players to unanimously accept only core payoffs. We first transform each cooperative game into a decision problem where a player can accept or reject any payoff vector offered to her based on her knowledge about available cooperation. Then we use a modified KD-system in epistemic logic, which can be regarded as a counterpart of the model for non-cooperative games in Bonanno (2008, 2015), to describe a player's knowledge, decision-making criterion, and reasoning process; in particular, a formula called C-acceptability is defined to capture the criterion for accepting a core payoff vector. Within this syntactical framework, we characterize the core of a cooperative game in terms of players' knowledge. Based on that result, we discuss an epistemic inconsistency behind the Debreu-Scarf Theorem: increasing the number of replicas leaves the requirement on each participant's knowledge unchanged from the perspective of the competitive market, while it requires unbounded epistemic ability of players from the perspective of the cooperative game.
Knowledge and Unanimous Acceptance of Core Payoffs: An Epistemic Foundation for Cooperative Game Theory
2018-02-13 15:49:12
Shuige Liu
http://arxiv.org/abs/1802.04595v4, http://arxiv.org/pdf/1802.04595v4
econ.EM
28,812
em
In this study interest centers on regional differences in the response of housing prices to monetary policy shocks in the US. We address this issue by analyzing monthly home price data for metropolitan regions using a factor-augmented vector autoregression (FAVAR) model. Bayesian model estimation is based on Gibbs sampling with Normal-Gamma shrinkage priors for the autoregressive coefficients and factor loadings, while monetary policy shocks are identified using high-frequency surprises around policy announcements as external instruments. The empirical results indicate that monetary policy actions typically have sizeable and significant positive effects on regional housing prices, revealing differences in magnitude and duration. The largest effects are observed in regions located in states on both the East and West Coasts, notably California, Arizona and Florida.
The dynamic impact of monetary policy on regional housing prices in the US: Evidence based on factor-augmented vector autoregressions
2018-02-16 12:08:34
Manfred M. Fischer, Florian Huber, Michael Pfarrhofer, Petra Staufer-Steinnocher
http://arxiv.org/abs/1802.05870v1, http://arxiv.org/pdf/1802.05870v1
econ.EM
28,813
em
We study the asymptotic properties of a class of estimators of the structural parameters in dynamic discrete choice games. We consider K-stage policy iteration (PI) estimators, where K denotes the number of policy iterations employed in the estimation. This class nests several estimators proposed in the literature such as those in Aguirregabiria and Mira (2002, 2007), Pesendorfer and Schmidt-Dengler (2008), and Pakes et al. (2007). First, we establish that the K-PML estimator is consistent and asymptotically normal for all K. This complements findings in Aguirregabiria and Mira (2007), who focus on K=1 and K large enough to induce convergence of the estimator. Furthermore, we show under certain conditions that the asymptotic variance of the K-PML estimator can exhibit arbitrary patterns as a function of K. Second, we establish that the K-MD estimator is consistent and asymptotically normal for all K. For a specific weight matrix, the K-MD estimator has the same asymptotic distribution as the K-PML estimator. Our main result provides an optimal sequence of weight matrices for the K-MD estimator and shows that the optimally weighted K-MD estimator has an asymptotic distribution that is invariant to K. The invariance result is especially unexpected given the findings in Aguirregabiria and Mira (2007) for K-PML estimators. Our main result implies two new corollaries about the optimal 1-MD estimator (derived by Pesendorfer and Schmidt-Dengler (2008)). First, the optimal 1-MD estimator is optimal in the class of K-MD estimators. In other words, additional policy iterations do not provide asymptotic efficiency gains relative to the optimal 1-MD estimator. Second, the optimal 1-MD estimator is more or equally asymptotically efficient than any K-PML estimator for all K. Finally, the appendix provides appropriate conditions under which the optimal 1-MD estimator is asymptotically efficient.
On the iterated estimation of dynamic discrete choice games
2018-02-19 18:19:35
Federico A. Bugni, Jackson Bunting
http://arxiv.org/abs/1802.06665v4, http://arxiv.org/pdf/1802.06665v4
econ.EM
28,814
em
This paper proposes nonparametric kernel-smoothing estimation for panel data to examine the degree of heterogeneity across cross-sectional units. We first estimate the sample mean, autocovariances, and autocorrelations for each unit and then apply kernel smoothing to compute their density functions. The dependence of the kernel estimator on bandwidth makes asymptotic bias of very high order affect the required condition on the relative magnitudes of the cross-sectional sample size (N) and the time-series length (T). In particular, it makes the condition on N and T stronger and more complicated than those typically observed in the long-panel literature without kernel smoothing. We also consider a split-panel jackknife method to correct bias and construction of confidence intervals. An empirical application and Monte Carlo simulations illustrate our procedure in finite samples.
Kernel Estimation for Panel Data with Heterogeneous Dynamics
2018-02-24 12:45:50
Ryo Okui, Takahide Yanagi
http://arxiv.org/abs/1802.08825v4, http://arxiv.org/pdf/1802.08825v4
econ.EM
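As a small illustration of the first step described in the preceding abstract (arXiv:1802.08825), the following sketch computes unit-wise means and lag-1 autocorrelations from a simulated panel and kernel-smooths their cross-sectional densities. The heterogeneous AR(1) data-generating process and the default bandwidth are choices made here; the sketch does not include the paper's bias correction or jackknife inference.

```python
# Illustrative sketch: unit-level means and lag-1 autocorrelations from a panel,
# with Gaussian-kernel density estimates of their cross-sectional distributions.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
N, T = 500, 200
rho = rng.uniform(0.1, 0.9, size=N)           # heterogeneous persistence
mu = rng.normal(0.0, 1.0, size=N)             # heterogeneous means
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = mu * (1 - rho) + rho * y[:, t - 1] + rng.normal(size=N)

unit_mean = y.mean(axis=1)
yc = y - unit_mean[:, None]
unit_acf1 = (yc[:, 1:] * yc[:, :-1]).mean(axis=1) / (yc ** 2).mean(axis=1)

mean_density = gaussian_kde(unit_mean)        # smoothed cross-sectional densities
acf_density = gaussian_kde(unit_acf1)
grid = np.linspace(-1, 1, 5)
print("density of unit means at", grid, ":", np.round(mean_density(grid), 3))
print("density of lag-1 autocorrelations at", grid, ":", np.round(acf_density(grid), 3))
```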
28,815
em
People reason heuristically in situations resembling inferential puzzles such as Bertrand's box paradox and the Monty Hall problem. The practical significance of that fact for economic decision making is uncertain, because a departure from sound reasoning may, but does not necessarily, result in a "cognitively biased" outcome different from what sound reasoning would have produced. Criteria are derived here, applicable to both experimental and non-experimental situations, for heuristic reasoning in inferential-puzzle situations to result, or not to result, in cognitive bias. In some situations, neither of these criteria is satisfied, and whether or not agents' posterior probability assessments or choices are cognitively biased cannot be determined.
Identifying the occurrence or non occurrence of cognitive bias in situations resembling the Monty Hall problem
2018-02-25 03:28:11
Fatemeh Borhani, Edward J. Green
http://arxiv.org/abs/1802.08935v1, http://arxiv.org/pdf/1802.08935v1
econ.EM
28,816
em
I analyse the solution method for the variational optimisation problem in the rational inattention framework proposed by Christopher A. Sims. The solution, in general, does not exist, although it may exist in exceptional cases. I show that the solution does not exist for the quadratic and the logarithmic objective functions analysed by Sims (2003, 2006). For a linear-quadratic objective function a solution can be constructed under restrictions on all but one of its parameters. This approach is, therefore, unlikely to be applicable to a wider set of economic models.
On the solution of the variational optimisation in the rational inattention framework
2018-02-27 16:21:46
Nigar Hashimzade
http://arxiv.org/abs/1802.09869v2, http://arxiv.org/pdf/1802.09869v2
econ.EM
28,830
em
This paper proposes a model-free approach to analyze panel data with heterogeneous dynamic structures across observational units. We first compute the sample mean, autocovariances, and autocorrelations for each unit, and then estimate the parameters of interest based on their empirical distributions. We then investigate the asymptotic properties of our estimators using double asymptotics and propose split-panel jackknife bias correction and inference based on the cross-sectional bootstrap. We illustrate the usefulness of our procedures by studying the deviation dynamics of the law of one price. Monte Carlo simulations confirm that the proposed bias correction is effective and yields valid inference in small samples.
Panel Data Analysis with Heterogeneous Dynamics
2018-03-26 10:53:47
Ryo Okui, Takahide Yanagi
http://arxiv.org/abs/1803.09452v2, http://arxiv.org/pdf/1803.09452v2
econ.EM
28,817
em
Many macroeconomic policy questions may be assessed in a case study framework, where the time series of a treated unit is compared to a counterfactual constructed from a large pool of control units. I provide a general framework for this setting, tailored to predict the counterfactual by minimizing a tradeoff between underfitting (bias) and overfitting (variance). The framework nests recently proposed structural and reduced form machine learning approaches as special cases. Furthermore, difference-in-differences with matching and the original synthetic control are restrictive cases of the framework, in general not minimizing the bias-variance objective. Using simulation studies I find that machine learning methods outperform traditional methods when the number of potential controls is large or the treated unit is substantially different from the controls. Equipped with a toolbox of approaches, I revisit a study on the effect of economic liberalisation on economic growth. I find effects for several countries where no effect was found in the original study. Furthermore, I inspect how a systemically important bank responds to increasing capital requirements by using a large pool of banks to estimate the counterfactual. Finally, I assess the effect of a changing product price on product sales using a novel scanner dataset.
Synthetic Control Methods and Big Data
2018-03-01 00:32:09
Daniel Kinn
http://arxiv.org/abs/1803.00096v1, http://arxiv.org/pdf/1803.00096v1
econ.EM
28,818
em
It is widely known that geographically weighted regression (GWR) is essentially the same as a varying-coefficient model. In previous research on varying-coefficient models, scholars tend to use multidimensional-kernel-based locally weighted estimation (MLWE) so that information on both distance and direction is considered. However, when the local weight matrix of geographically weighted estimation is constructed, the distance among neighboring locations is the only factor controlling the entries of the weight matrix; in other words, GWR estimation is distance-kernel-based. Thus, in this paper, for stationary and limited dependent data with multidimensional subscripts, we analyze the local mean squared properties of geographically weighted locally linear estimation (GWLE) without any assumption on the form of the coefficient functions and compare it with MLWE. According to the theoretical and simulation results, GWLE is asymptotically more efficient than MLWE. Furthermore, a relationship between optimal bandwidth selection and the design of scale parameters is also obtained.
An Note on Why Geographically Weighted Regression Overcomes Multidimensional-Kernel-Based Varying-Coefficient Model
2018-03-04 21:50:17
Zihao Yuan
http://arxiv.org/abs/1803.01402v2, http://arxiv.org/pdf/1803.01402v2
econ.EM
28,819
em
We study three pricing mechanisms' performance and their effects on the participants in the data industry from the data supply chain perspective. A win-win pricing strategy for the players in the data supply chain is proposed. We obtain analytical solutions in each pricing mechanism, including the decentralized and centralized pricing, Nash Bargaining pricing, and revenue sharing mechanism.
Pricing Mechanism in Information Goods
2018-03-05 10:37:06
Xinming Li, Huaqing Wang
http://arxiv.org/abs/1803.01530v1, http://arxiv.org/pdf/1803.01530v1
econ.EM
28,820
em
Spatial association and heterogeneity are two critical topics in research on spatial analysis, geography, statistics, and related fields. Though a large number of excellent methods have been proposed and studied, few of them address spatial association in heterogeneous environments. Additionally, most of the traditional methods are based on distance statistics and a spatial weight matrix. However, in some abstract spatial situations, distance statistics cannot be applied since we cannot even observe the geographical locations directly. Meanwhile, under these circumstances, because spatial positions are unobservable, the design of the weight matrix cannot entirely avoid subjectivity. In this paper, a new entropy-based method, which is data-driven and distribution-free, is proposed to help investigate spatial association while fully taking into account the fact that heterogeneity is widespread. Specifically, this method is not tied to distance statistics or a weight matrix. Asymmetric dependence is adopted to reflect the heterogeneity in spatial association for each individual, and the whole discussion in this paper is carried out on spatio-temporal data assuming only stationarity and m-dependence over time.
A Nonparametric Approach to Measure the Heterogeneous Spatial Association: Under Spatial Temporal Data
2018-03-06 21:46:49
Zihao Yuan
http://arxiv.org/abs/1803.02334v2, http://arxiv.org/pdf/1803.02334v2
econ.EM
28,821
em
The last remaining barriers to trade between countries are Non-Tariff Barriers (NTBs), that is, all possible trade barriers other than tariffs. The most typical examples are Technical Barriers to Trade (TBT), which refer to measures such as technical regulations, standards, conformity assessment procedures, and testing and certification. Therefore, in order to eliminate TBT, the WTO has all member countries automatically enter into the Agreement on TBT.
A study of strategies to remove and ease TBT for increasing exports in GCC6 countries
2018-03-09 09:39:31
YongJae Kim
http://arxiv.org/abs/1803.03394v3, http://arxiv.org/pdf/1803.03394v3
econ.EM
28,822
em
Understanding the effectiveness of alternative approaches to water conservation is crucially important for ensuring the security and reliability of water services for urban residents. We analyze data from one of the longest-running "cash for grass" policies - the Southern Nevada Water Authority's Water Smart Landscapes program, where homeowners are paid to replace grass with xeric landscaping. We use a twelve year long panel dataset of monthly water consumption records for 300,000 households in Las Vegas, Nevada. Utilizing a panel difference-in-differences approach, we estimate the average water savings per square meter of turf removed. We find that participation in this program reduced the average treated household's consumption by 18 percent. We find no evidence that water savings degrade as the landscape ages, or that water savings per unit area are influenced by the value of the rebate. Depending on the assumed time horizon of benefits from turf removal, we find that the WSL program cost the water authority about $1.62 per thousand gallons of water saved, which compares favorably to alternative means of water conservation or supply augmentation.
How Smart Are `Water Smart Landscapes'?
2018-03-13 05:00:07
Christa Brelsford, Joshua K. Abbott
http://arxiv.org/abs/1803.04593v1, http://arxiv.org/pdf/1803.04593v1
econ.EM
28,823
em
Business cycles are generated by oscillating macro-/micro-/nano-economic output variables in the economy of scale and scope, in the amplitude/frequency/phase/time domains of economics. Accurate forward-looking assumptions on business cycle oscillation dynamics can optimize financial capital investing and/or borrowing by economic agents in the capital markets. The book's main objective is to study the business cycles in the economy of scale and scope, formulating the Ledenyov unified business cycles theory within the Ledenyov classic and quantum econodynamics.
Business Cycles in Economics
2018-03-16 11:24:05
Viktor O. Ledenyov, Dimitri O. Ledenyov
http://dx.doi.org/10.2139/ssrn.3134655, http://arxiv.org/abs/1803.06108v1, http://arxiv.org/pdf/1803.06108v1
econ.EM
28,824
em
Unobserved heterogeneous treatment effects have been emphasized in the recent policy evaluation literature (see e.g., Heckman and Vytlacil, 2005). This paper proposes a nonparametric test for unobserved heterogeneous treatment effects in a treatment effect model with a binary treatment assignment, allowing for individuals' self-selection to the treatment. Under the standard local average treatment effects assumptions, i.e., the no defiers condition, we derive testable model restrictions for the hypothesis of unobserved heterogeneous treatment effects. Also, we show that if the treatment outcomes satisfy a monotonicity assumption, these model restrictions are also sufficient. Then, we propose a modified Kolmogorov-Smirnov-type test which is consistent and simple to implement. Monte Carlo simulations show that our test performs well in finite samples. For illustration, we apply our test to study heterogeneous treatment effects of the Job Training Partnership Act on earnings and the impacts of fertility on family income, where the null hypothesis of homogeneous treatment effects gets rejected in the second case but fails to be rejected in the first application.
Testing for Unobserved Heterogeneous Treatment Effects with Observational Data
2018-03-20 19:30:07
Yu-Chin Hsu, Ta-Cheng Huang, Haiqing Xu
http://arxiv.org/abs/1803.07514v2, http://arxiv.org/pdf/1803.07514v2
econ.EM
28,825
em
In the regression discontinuity design (RDD), it is common practice to assess the credibility of the design by testing the continuity of the density of the running variable at the cut-off, e.g., McCrary (2008). In this paper we propose an approximate sign test for continuity of a density at a point based on the so-called g-order statistics, and study its properties under two complementary asymptotic frameworks. In the first asymptotic framework, the number q of observations local to the cut-off is fixed as the sample size n diverges to infinity, while in the second framework q diverges to infinity slowly as n diverges to infinity. Under both of these frameworks, we show that the test we propose is asymptotically valid in the sense that it has limiting rejection probability under the null hypothesis not exceeding the nominal level. More importantly, the test is easy to implement, asymptotically valid under weaker conditions than those used by competing methods, and exhibits finite sample validity under stronger conditions than those needed for its asymptotic validity. In a simulation study, we find that the approximate sign test provides good control of the rejection probability under the null hypothesis while remaining competitive under the alternative hypothesis. We finally apply our test to the design in Lee (2008), a well-known application of the RDD to study incumbency advantage.
Testing Continuity of a Density via g-order statistics in the Regression Discontinuity Design
2018-03-21 17:52:59
Federico A. Bugni, Ivan A. Canay
http://arxiv.org/abs/1803.07951v6, http://arxiv.org/pdf/1803.07951v6
econ.EM
28,826
em
In this paper, we propose the use of causal inference techniques for survival function estimation and prediction for subgroups of the data, up to individual units. Tree ensemble methods, specifically random forests, were modified for this purpose. A real-world healthcare dataset of about 1800 breast cancer patients was used, which includes multiple patient covariates as well as disease-free survival days (DFS) and a binary death event indicator (y). We use the type of cancer curative intervention as the treatment variable (T=0 or 1, the binary treatment case in our example). The algorithm is a 2-step approach. In step 1, we estimate heterogeneous treatment effects using a causalTree with the DFS as the dependent variable. Next, in step 2, for each selected leaf of the causalTree with a distinctly different average treatment effect (with respect to survival), we fit a survival forest to all the patients in that leaf, one forest each for treatment T=0 and T=1, to get estimated patient-level survival curves for each treatment (more generally, any model can be used at this step). Then, we subtract the patient-level survival curves to get the differential survival curve for a given patient, to compare the survival functions resulting from the 2 treatments. The path to a selected leaf also gives us the combination of patient features and their values which are causally important for the treatment effect difference at the leaf.
Causal Inference for Survival Analysis
2018-03-22 06:22:19
Vikas Ramachandra
http://arxiv.org/abs/1803.08218v1, http://arxiv.org/pdf/1803.08218v1
econ.EM
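A rough sketch of the two-step logic described in the abstract above, on simulated data. A plain regression tree stands in for the causalTree in step 1, and hand-rolled Kaplan-Meier curves stand in for the survival forests in step 2; the data-generating process, variable names, and tree settings are all illustrative assumptions rather than the paper's implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))             # patient covariates
T = rng.integers(0, 2, size=n)          # binary treatment indicator
# disease-free survival days, with a treatment effect that varies with X[:, 1]
dfs = rng.exponential(scale=np.exp(0.5 * X[:, 0] + 0.7 * T * (X[:, 1] > 0)))
event = rng.integers(0, 2, size=n)      # death event indicator (toy censoring)

# Step 1 (stand-in for causalTree): partition patients with a shallow tree on X
leaf_model = DecisionTreeRegressor(max_depth=2, min_samples_leaf=200).fit(X, dfs)
leaves = leaf_model.apply(X)

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve: (sorted times, survival probabilities)."""
    order = np.argsort(times)
    t, d = times[order], events[order]
    at_risk = np.arange(len(t), 0, -1)
    return t, np.cumprod(1.0 - d / at_risk)

# Step 2: within each leaf, estimate a survival curve per arm and difference them
for leaf in np.unique(leaves):
    t_ref = np.median(dfs[leaves == leaf])      # evaluation time for this leaf
    surv = {}
    for arm in (0, 1):
        idx = (leaves == leaf) & (T == arm)
        t, s = kaplan_meier(dfs[idx], event[idx])
        surv[arm] = np.interp(t_ref, t, s)
    print(f"leaf {leaf}: S1 - S0 at t={t_ref:.2f} is {surv[1] - surv[0]:+.3f}")
```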
28,827
em
Linear regressions with period and group fixed effects are widely used to estimate treatment effects. We show that they estimate weighted sums of the average treatment effects (ATE) in each group and period, with weights that may be negative. Due to the negative weights, the linear regression coefficient may for instance be negative while all the ATEs are positive. We propose another estimator that solves this issue. In the two applications we revisit, it is significantly different from the linear regression estimator.
Two-way fixed effects estimators with heterogeneous treatment effects
2018-03-22 01:56:07
Clément de Chaisemartin, Xavier D'Haultfœuille
http://dx.doi.org/10.1257/aer.20181169, http://arxiv.org/abs/1803.08807v7, http://arxiv.org/pdf/1803.08807v7
econ.EM
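A toy simulation in the spirit of the result described in the abstract above: with staggered adoption and treatment effects that grow with exposure time, the two-way fixed effects coefficient can sit far from the average effect across treated (group, period) cells. The adoption dates, effect sizes, and noise level are invented purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for g, start in [("early", 2), ("late", 7)]:      # staggered adoption dates
    for t in range(10):
        d = int(t >= start)
        true_effect = d * 0.5 * (t - start + 1)   # effect grows with exposure
        rows.append({"g": g, "t": t, "d": d, "true_effect": true_effect,
                     "y": true_effect + rng.normal(scale=0.1)})
df = pd.DataFrame(rows)

twfe = smf.ols("y ~ d + C(g) + C(t)", data=df).fit()   # two-way fixed effects
print("TWFE coefficient on d:        ", round(twfe.params["d"], 3))
print("average effect, treated cells:",
      round(df.loc[df.d == 1, "true_effect"].mean(), 3))
```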
28,828
em
We examine the effects of monetary policy on income inequality in Japan using a novel econometric approach that jointly estimates the Gini coefficient based on micro-level grouped data of households and the dynamics of macroeconomic quantities. Our results indicate different effects on income inequality for different types of households: A monetary tightening increases inequality when income data is based on households whose head is employed (workers' households), while the effect reverses over the medium term when considering a broader definition of households. Differences in the relative strength of the transmission channels can account for this finding. Finally we demonstrate that the proposed joint estimation strategy leads to more informative inference while results based on the frequently used two-step estimation approach yields inconclusive results.
How does monetary policy affect income inequality in Japan? Evidence from grouped data
2018-03-23 19:28:23
Martin Feldkircher, Kazuhiko Kakamu
http://dx.doi.org/10.1007/s00181-021-02102-7, http://arxiv.org/abs/1803.08868v2, http://arxiv.org/pdf/1803.08868v2
econ.EM
28,829
em
We develop inference for a two-sided matching model where the characteristics of agents on one side of the market are endogenous due to pre-matching investments. The model can be used to measure the impact of frictions in labour markets using a single cross-section of matched employer-employee data. The observed matching of workers to firms is the outcome of a discrete, two-sided matching process where firms with heterogeneous preferences over education sequentially choose workers according to an index correlated with worker preferences over firms. The distribution of education arises in equilibrium from a Bayesian game: workers, knowing the distribution of worker and firm types, invest in education prior to the matching process. Although the observed matching exhibits strong cross-sectional dependence due to the matching process, we propose an asymptotically valid inference procedure that combines discrete choice methods with simulation.
Schooling Choice, Labour Market Matching, and Wages
2018-03-24 03:41:09
Jacob Schwartz
http://arxiv.org/abs/1803.09020v6, http://arxiv.org/pdf/1803.09020v6
econ.EM
28,831
em
In this paper, we assess the impact of climate shocks on futures markets for agricultural commodities and a set of macroeconomic quantities for multiple high-income economies. To capture relations among countries, markets, and climate shocks, this paper proposes parsimonious methods to estimate high-dimensional panel VARs. We assume that coefficients associated with domestic lagged endogenous variables arise from a Gaussian mixture model while further parsimony is achieved using suitable global-local shrinkage priors on several regions of the parameter space. Our results point towards pronounced global reactions of key macroeconomic quantities to climate shocks. Moreover, the empirical findings highlight substantial linkages between regionally located climate shifts and global commodity markets.
A Bayesian panel VAR model to analyze the impact of climate change on high-income economies
2018-04-04 21:23:10
Florian Huber, Tamás Krisztin, Michael Pfarrhofer
http://arxiv.org/abs/1804.01554v3, http://arxiv.org/pdf/1804.01554v3
econ.EM
28,832
em
This paper provides a new methodology to analyze unobserved heterogeneity when observed characteristics are modeled nonlinearly. The proposed model builds on varying random coefficients (VRC) that are determined by nonlinear functions of observed regressors and additively separable unobservables. This paper proposes a novel estimator of the VRC density based on weighted sieve minimum distance. The main example of sieve bases is Hermite functions, which yield a numerically stable estimation procedure. This paper shows inference results that go beyond what has been shown in ordinary RC models. We provide in each case rates of convergence and also establish pointwise limit theory of linear functionals, where a prominent example is the density of potential outcomes. In addition, a multiplier bootstrap procedure is proposed to construct uniform confidence bands. A Monte Carlo study examines finite sample properties of the estimator and shows that it performs well even when the regressors associated with the RC are far from being heavy tailed. Finally, the methodology is applied to analyze heterogeneity in the income elasticity of demand for housing.
Varying Random Coefficient Models
2018-04-09 20:16:52
Christoph Breunig
http://arxiv.org/abs/1804.03110v4, http://arxiv.org/pdf/1804.03110v4
econ.EM
28,833
em
We develop point-identification for the local average treatment effect when the binary treatment contains a measurement error. The standard instrumental variable estimator is inconsistent for the parameter since the measurement error is non-classical by construction. We correct the problem by identifying the distribution of the measurement error based on the use of an exogenous variable that can even be a binary covariate. The moment conditions derived from the identification lead to generalized method of moments estimation with asymptotically valid inferences. Monte Carlo simulations and an empirical illustration demonstrate the usefulness of the proposed procedure.
Inference on Local Average Treatment Effects for Misclassified Treatment
2018-04-10 08:57:30
Takahide Yanagi
http://arxiv.org/abs/1804.03349v1, http://arxiv.org/pdf/1804.03349v1
econ.EM
28,834
em
This paper re-examines the Shapley value methods for attribution analysis in the area of online advertising. As a credit allocation solution in cooperative game theory, the Shapley value method directly quantifies the contribution of online advertising inputs to the advertising key performance indicator (KPI) across multiple channels. We simplify its calculation by developing an alternative mathematical formulation. The new formula significantly improves the computational efficiency and therefore extends the scope of applicability. Based on the simplified formula, we further develop the ordered Shapley value method. The proposed method is able to take into account the order of channels visited by users. We claim that it provides a more comprehensive insight by evaluating the attribution of channels at different stages of user conversion journeys. The proposed approaches are illustrated using a real-world online advertising campaign dataset.
Shapley Value Methods for Attribution Modeling in Online Advertising
2018-04-15 12:19:25
Kaifeng Zhao, Seyed Hanif Mahboobi, Saeed R. Bagheri
http://arxiv.org/abs/1804.05327v1, http://arxiv.org/pdf/1804.05327v1
econ.EM
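An exact Shapley-value attribution over three advertising channels, computed with the standard average-marginal-contribution (permutation) formula the abstract above refers to. The channel names and the worth function v(S), the conversions credited to each channel subset, are made-up illustrative inputs; this brute-force version shows the baseline computation, not the paper's simplified formula or its ordered variant.

```python
from itertools import permutations

channels = ["search", "display", "social"]
# worth function v(S): conversions credited to each subset of channels (toy numbers)
v = {frozenset(): 0, frozenset({"search"}): 10, frozenset({"display"}): 4,
     frozenset({"social"}): 6, frozenset({"search", "display"}): 16,
     frozenset({"search", "social"}): 15, frozenset({"display", "social"}): 9,
     frozenset(channels): 20}

def shapley(channels, v):
    """Average marginal contribution of each channel over all orderings."""
    credit = {c: 0.0 for c in channels}
    orders = list(permutations(channels))
    for order in orders:
        coalition = set()
        for c in order:
            before = v[frozenset(coalition)]
            coalition.add(c)
            credit[c] += v[frozenset(coalition)] - before
    return {c: x / len(orders) for c, x in credit.items()}

print(shapley(channels, v))   # per-channel credit, summing to v(all channels) = 20
```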
28,835
em
To estimate the dynamic effects of an absorbing treatment, researchers often use two-way fixed effects regressions that include leads and lags of the treatment. We show that in settings with variation in treatment timing across units, the coefficient on a given lead or lag can be contaminated by effects from other periods, and apparent pretrends can arise solely from treatment effects heterogeneity. We propose an alternative estimator that is free of contamination, and illustrate the relative shortcomings of two-way fixed effects regressions with leads and lags through an empirical application.
Estimating Dynamic Treatment Effects in Event Studies with Heterogeneous Treatment Effects
2018-04-16 19:54:46
Liyang Sun, Sarah Abraham
http://arxiv.org/abs/1804.05785v2, http://arxiv.org/pdf/1804.05785v2
econ.EM
28,836
em
This paper offers a two-pronged critique of the empirical investigation of the income distribution performed by physicists over the past decade. Their findings rely on the graphical analysis of the observed distribution of normalized incomes. Two central observations lead to the conclusion that the majority of incomes are exponentially distributed, but neither each individual piece of evidence nor their concurrent observation robustly proves that the thermal and superthermal mixture fits the observed distribution of incomes better than reasonable alternatives. A formal analysis using popular measures of fit shows that while an exponential distribution with a power-law tail provides a better fit of the IRS income data than the log-normal distribution (often assumed by economists), the thermal and superthermal mixture's fit can be improved upon further by adding a log-normal component. The economic implications of the thermal and superthermal distribution of incomes, and the expanded mixture are explored in the paper.
Revisiting the thermal and superthermal two-class distribution of incomes: A critical perspective
2018-04-17 19:09:59
Markus P. A. Schneider
http://dx.doi.org/10.1140/epjb/e2014-50501-x, http://arxiv.org/abs/1804.06341v1, http://arxiv.org/pdf/1804.06341v1
econ.EM
28,837
em
Researchers increasingly leverage movement across multiple treatments to estimate causal effects. While these "mover regressions" are often motivated by a linear constant-effects model, it is not clear what they capture under weaker quasi-experimental assumptions. I show that binary treatment mover regressions recover a convex average of four difference-in-difference comparisons and are thus causally interpretable under a standard parallel trends assumption. Estimates from multiple-treatment models, however, need not be causal without stronger restrictions on the heterogeneity of treatment effects and time-varying shocks. I propose a class of two-step estimators to isolate and combine the large set of difference-in-difference quasi-experiments generated by a mover design, identifying mover average treatment effects under conditional-on-covariate parallel trends and effect homogeneity restrictions. I characterize the efficient estimators in this class and derive specification tests based on the model's overidentifying restrictions. Future drafts will apply the theory to the Finkelstein et al. (2016) movers design, analyzing the causal effects of geography on healthcare utilization.
Estimating Treatment Effects in Mover Designs
2018-04-18 16:42:55
Peter Hull
http://arxiv.org/abs/1804.06721v1, http://arxiv.org/pdf/1804.06721v1
econ.EM
28,838
em
The study aims to identify the institutional flaws of the current EU waste management model by analysing the economic model of extended producer responsibility and collective waste management systems and to create a model for measuring the transaction costs borne by waste recovery organizations. The model was approbated by analysing the Bulgarian collective waste management systems that have been complying with the EU legislation for the last 10 years. The analysis focuses on waste oils because of their economic importance and the limited number of studies and analyses in this field as the predominant body of research to date has mainly addressed packaging waste, mixed household waste or discarded electrical and electronic equipment. The study aims to support the process of establishing a circular economy in the EU, which was initiated in 2015.
Transaction Costs in Collective Waste Recovery Systems in the EU
2018-04-18 18:40:15
Shteryo Nozharov
http://arxiv.org/abs/1804.06792v1, http://arxiv.org/pdf/1804.06792v1
econ.EM
28,839
em
We study the foundations of empirical equilibrium, a refinement of Nash equilibrium that is based on a non-parametric characterization of empirical distributions of behavior in games (Velez and Brown,2020b arXiv:1907.12408). The refinement can be alternatively defined as those Nash equilibria that do not refute the regular QRE theory of Goeree, Holt, and Palfrey (2005). By contrast, some empirical equilibria may refute monotone additive randomly disturbed payoff models. As a by product, we show that empirical equilibrium does not coincide with refinements based on approximation by monotone additive randomly disturbed payoff models, and further our understanding of the empirical content of these models.
Empirical Equilibrium
2018-04-21 18:38:24
Rodrigo A. Velez, Alexander L. Brown
http://arxiv.org/abs/1804.07986v3, http://arxiv.org/pdf/1804.07986v3
econ.EM
28,840
em
We analyze an operational policy for a multinational manufacturer to hedge against exchange rate uncertainties and competition. We consider a single product and a single period. Because of long lead times, the capacity investment must be made before the selling season begins, when the exchange rate between the two countries is uncertain. We consider duopoly competition in the foreign country. We model the exchange rate as a random variable. We investigate the impact of competition and the exchange rate on optimal capacities and optimal prices. We show how competition can impact the decision of the home manufacturer to enter the foreign market.
Price Competition with Geometric Brownian motion in Exchange Rate Uncertainty
2018-04-22 21:33:53
Murat Erkoc, Huaqing Wang, Anas Ahmed
http://arxiv.org/abs/1804.08153v1, http://arxiv.org/pdf/1804.08153v1
econ.EM
28,841
em
Call centers' managers are interested in obtaining accurate point and distributional forecasts of call arrivals in order to achieve an optimal balance between service quality and operating costs. We present a strategy for selecting forecast models of call arrivals which is based on three pillars: (i) flexibility of the loss function; (ii) statistical evaluation of forecast accuracy; (iii) economic evaluation of forecast performance using money metrics. We implement fourteen time series models and seven forecast combination schemes on three series of daily call arrivals. Although we focus mainly on point forecasts, we also analyze density forecast evaluation. We show that second moments modeling is important both for point and density forecasting and that the simple Seasonal Random Walk model is always outperformed by more general specifications. Our results suggest that call center managers should invest in the use of forecast models which describe both first and second moments of call arrivals.
Statistical and Economic Evaluation of Time Series Models for Forecasting Arrivals at Call Centers
2018-04-23 12:57:42
Andrea Bastianin, Marzio Galeotti, Matteo Manera
http://dx.doi.org/10.1007/s00181-018-1475-y, http://arxiv.org/abs/1804.08315v1, http://arxiv.org/pdf/1804.08315v1
econ.EM
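For reference, the Seasonal Random Walk benchmark mentioned in the abstract above simply repeats the value observed one seasonal cycle earlier. The weekly cycle and the daily arrival counts below are invented purely to show the mechanics.

```python
import numpy as np

y = np.array([320, 305, 298, 310, 330, 150, 120,    # week 1 of daily call arrivals
              335, 312, 301, 318, 342, 160, 118])   # week 2

s = 7                               # seasonal period (one week of daily data)
forecast = y[-s:]                   # y_hat(T + k) = y(T + k - s), k = 1..s
print(forecast)
```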
28,842
em
Economic inequality is one of the pivotal issues for most economic and social policy makers across the world in ensuring sustainable economic growth and justice. In the mainstream school of economics, namely neoclassical theories, economic issues are dealt with in a mechanistic manner. Such a mainstream framework is mainly focused on investigating a socio-economic system based on an axiomatic scheme where a reductionist approach plays a vital role. The major limitations of such theories include the unbounded rationality of economic agents, reducing the economic aggregates to a set of predictable factors, and a lack of attention to the adaptability and evolutionary nature of economic agents. To tackle the deficiencies of conventional economic models, some new approaches have been adopted in the past two decades. One of those novel approaches is the complex adaptive systems (CAS) framework, which has shown very promising performance in action. In contrast to the mainstream school, under this framework economic phenomena are studied in an organic manner where the economic agents are supposed to be both boundedly rational and adaptive. According to it, economic aggregates emerge out of the ways agents of a system decide and interact. As a powerful way of modeling CASs, agent-based models (ABMs) have found growing application among academics and practitioners. ABMs show how simple behavioral rules of agents and local interactions among them at the micro scale can generate surprisingly complex patterns at the macro scale. In this paper, ABMs are used to show (1) how economic inequality emerges in a system and to explain (2) how sadaqah, as an Islamic charity rule, can substantially help alleviate this inequality and how resource allocation strategies adopted by charity entities can accelerate this alleviation.
Economic inequality and Islamic Charity: An exploratory agent-based modeling approach
2018-04-25 01:43:11
Hossein Sabzian, Alireza Aliahmadi, Adel Azar, Madjid Mirzaee
http://arxiv.org/abs/1804.09284v1, http://arxiv.org/pdf/1804.09284v1
econ.EM
28,843
em
This paper is concerned with inference about low-dimensional components of a high-dimensional parameter vector $\beta^0$ which is identified through instrumental variables. We allow for eigenvalues of the expected outer product of included and excluded covariates, denoted by $M$, to shrink to zero as the sample size increases. We propose a novel estimator based on desparsification of an instrumental variable Lasso estimator, which is a regularized version of 2SLS with an additional correction term. This estimator converges to $\beta^0$ at a rate depending on the mapping properties of $M$ captured by a sparse link condition. Linear combinations of our estimator of $\beta^0$ are shown to be asymptotically normally distributed. Based on consistent covariance estimation, our method allows for constructing confidence intervals and statistical tests for single or low-dimensional components of $\beta^0$. In Monte-Carlo simulations we analyze the finite sample behavior of our estimator.
Ill-posed Estimation in High-Dimensional Models with Instrumental Variables
2018-06-02 19:41:24
Christoph Breunig, Enno Mammen, Anna Simoni
http://arxiv.org/abs/1806.00666v2, http://arxiv.org/pdf/1806.00666v2
econ.EM
28,863
em
By recasting indirect inference estimation as a prediction rather than a minimization, and by using regularized regressions, we can bypass the three major problems of estimation: selecting the summary statistics, defining the distance function, and minimizing it numerically. By replacing regression with classification we can extend this approach to model selection as well. We present three examples: a statistical fit, the parametrization of a simple real business cycle model, and heuristics selection in a fishery agent-based model. The outcome is a method that automatically chooses summary statistics, weighs them, and uses them to parametrize models without running any direct minimization.
Indirect inference through prediction
2018-07-04 16:52:24
Ernesto Carrella, Richard M. Bailey, Jens Koed Madsen
http://dx.doi.org/10.18564/jasss.4150, http://arxiv.org/abs/1807.01579v1, http://arxiv.org/pdf/1807.01579v1
econ.EM
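A minimal sketch of the "estimation as prediction" idea described in the abstract above, on a toy AR(1) model: simulate at many parameter draws, fit a regularized (ridge) regression of the drawn parameters on summary statistics, then predict the parameter for the observed series. The AR(1) model, the ridge penalty, and the particular statistics are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)

def simulate_ar1(phi, n=200):
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.normal()
    return y

def summaries(y):
    # a deliberately redundant set of statistics; the regression weighs them
    return np.array([np.corrcoef(y[:-1], y[1:])[0, 1],
                     np.corrcoef(y[:-2], y[2:])[0, 1],
                     y.var(), np.abs(y).mean()])

phis = rng.uniform(-0.9, 0.9, size=500)                 # training parameter draws
S = np.array([summaries(simulate_ar1(p)) for p in phis])
model = RidgeCV().fit(S, phis)                          # statistics -> parameter

y_obs = simulate_ar1(0.6)                               # "observed" data
print("estimated phi:", round(model.predict(summaries(y_obs)[None, :])[0], 3))
```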
28,844
em
I propose a nonparametric iid bootstrap procedure for the empirical likelihood, the exponential tilting, and the exponentially tilted empirical likelihood estimators that achieves asymptotic refinements for t tests and confidence intervals, and Wald tests and confidence regions based on such estimators. Furthermore, the proposed bootstrap is robust to model misspecification, i.e., it achieves asymptotic refinements regardless of whether the assumed moment condition model is correctly specified or not. This result is new, because asymptotic refinements of the bootstrap based on these estimators have not been established in the literature even under correct model specification. Monte Carlo experiments are conducted in dynamic panel data setting to support the theoretical finding. As an application, bootstrap confidence intervals for the returns to schooling of Hellerstein and Imbens (1999) are calculated. The result suggests that the returns to schooling may be higher.
Asymptotic Refinements of a Misspecification-Robust Bootstrap for Generalized Empirical Likelihood Estimators
2018-06-04 07:54:48
Seojeong Lee
http://dx.doi.org/10.1016/j.jeconom.2015.11.003, http://arxiv.org/abs/1806.00953v2, http://arxiv.org/pdf/1806.00953v2
econ.EM
28,845
em
Many studies use shift-share (or "Bartik") instruments, which average a set of shocks with exposure share weights. We provide a new econometric framework for shift-share instrumental variable (SSIV) regressions in which identification follows from the quasi-random assignment of shocks, while exposure shares are allowed to be endogenous. The framework is motivated by an equivalence result: the orthogonality between a shift-share instrument and an unobserved residual can be represented as the orthogonality between the underlying shocks and a shock-level unobservable. SSIV regression coefficients can similarly be obtained from an equivalent shock-level regression, motivating shock-level conditions for their consistency. We discuss and illustrate several practical insights of this framework in the setting of Autor et al. (2013), estimating the effect of Chinese import competition on manufacturing employment across U.S. commuting zones.
Quasi-Experimental Shift-Share Research Designs
2018-06-04 20:03:07
Kirill Borusyak, Peter Hull, Xavier Jaravel
http://arxiv.org/abs/1806.01221v9, http://arxiv.org/pdf/1806.01221v9
econ.EM
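The construction the abstract above starts from: a shift-share instrument averages industry-level shocks with regional exposure shares, z_l = sum_n s_ln g_n. The shares and shocks below are made-up numbers purely to show the bookkeeping.

```python
import numpy as np

shares = np.array([[0.5, 0.3, 0.2],     # region 1 exposure shares over 3 industries
                   [0.1, 0.6, 0.3],     # region 2
                   [0.4, 0.4, 0.2]])    # region 3
shocks = np.array([-2.0, 0.5, 1.0])     # industry-level shocks g_n

z = shares @ shocks                      # shift-share instrument, one value per region
print(z)
```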
28,846
em
The implementation of a supervision and incentive process for identical workers may lead to wage variance that stems from employer and employee optimization. The harder it is to assess the nature of the labor output, the more important such a process becomes, and the greater its influence on the development of wage growth. The dynamic model presented in this paper shows that an employer will choose to pay a worker a starting wage that is less than what he deserves, resulting in a wage profile that fits the classic profile in the human-capital literature. The wage profile and wage variance rise at times of technological advancements, which leads to increased turnover as older workers are replaced by younger workers due to a rise in the relative marginal cost of the former.
The Impact of Supervision and Incentive Process in Explaining Wage Profile and Variance
2018-06-04 22:05:37
Nitsa Kasir, Idit Sohlberg
http://arxiv.org/abs/1806.01332v1, http://arxiv.org/pdf/1806.01332v1
econ.EM
28,847
em
I propose a nonparametric iid bootstrap that achieves asymptotic refinements for t tests and confidence intervals based on GMM estimators even when the model is misspecified. In addition, my bootstrap does not require recentering the moment function, which has been considered as critical for GMM. Regardless of model misspecification, the proposed bootstrap achieves the same sharp magnitude of refinements as the conventional bootstrap methods which establish asymptotic refinements by recentering in the absence of misspecification. The key idea is to link the misspecified bootstrap moment condition to the large sample theory of GMM under misspecification of Hall and Inoue (2003). Two examples are provided: Combining data sets and invalid instrumental variables.
Asymptotic Refinements of a Misspecification-Robust Bootstrap for Generalized Method of Moments Estimators
2018-06-05 04:13:06
Seojeong Lee
http://dx.doi.org/10.1016/j.jeconom.2013.05.008, http://arxiv.org/abs/1806.01450v1, http://arxiv.org/pdf/1806.01450v1
econ.EM
28,848
em
Under treatment effect heterogeneity, an instrument identifies the instrument-specific local average treatment effect (LATE). With multiple instruments, the two-stage least squares (2SLS) estimand is a weighted average of different LATEs. What is often overlooked in the literature is that the postulated moment condition evaluated at the 2SLS estimand does not hold unless those LATEs are the same. If so, the conventional heteroskedasticity-robust variance estimator would be inconsistent, and 2SLS standard errors based on such estimators would be incorrect. I derive the correct asymptotic distribution, and propose a consistent asymptotic variance estimator by using the result of Hall and Inoue (2003, Journal of Econometrics) on misspecified moment condition models. This can be used to correctly calculate the standard errors regardless of whether there is more than one LATE or not.
A Consistent Variance Estimator for 2SLS When Instruments Identify Different LATEs
2018-06-05 04:36:49
Seojeong Lee
http://dx.doi.org/10.1080/07350015.2016.1186555, http://arxiv.org/abs/1806.01457v1, http://arxiv.org/pdf/1806.01457v1
econ.EM
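A sketch of the conventional calculation the abstract above is about: the two-stage least squares point estimate and the usual heteroskedasticity-robust variance, which the paper argues is inconsistent when different instruments identify different LATEs. The corrected variance estimator itself is not reproduced here, and the data-generating process is a made-up constant-effect example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z1, z2 = rng.integers(0, 2, n), rng.integers(0, 2, n)    # two binary instruments
u = rng.normal(size=n)
d = ((0.8 * z1 + 0.3 * z2 + 0.5 * u + rng.normal(size=n)) > 0.8).astype(float)
y = 1.0 * d + u + rng.normal(size=n)

X = np.column_stack([np.ones(n), d])
Z = np.column_stack([np.ones(n), z1, z2])
Xhat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)              # first-stage fitted values
beta = np.linalg.solve(Xhat.T @ X, Xhat.T @ y)            # 2SLS estimate
e = y - X @ beta                                          # structural residuals

bread = np.linalg.inv(Xhat.T @ Xhat)
meat = Xhat.T @ (Xhat * (e ** 2)[:, None])                # sum_i xhat_i xhat_i' e_i^2
V = bread @ meat @ bread                                  # conventional robust variance
print("2SLS coefficient on d:", round(beta[1], 3),
      " conventional robust s.e.:", round(np.sqrt(V[1, 1]), 4))
```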
28,849
em
We propose leave-out estimators of quadratic forms designed for the study of linear models with unrestricted heteroscedasticity. Applications include analysis of variance and tests of linear restrictions in models with many regressors. An approximation algorithm is provided that enables accurate computation of the estimator in very large datasets. We study the large sample properties of our estimator allowing the number of regressors to grow in proportion to the number of observations. Consistency is established in a variety of settings where plug-in methods and estimators predicated on homoscedasticity exhibit first-order biases. For quadratic forms of increasing rank, the limiting distribution can be represented by a linear combination of normal and non-central $\chi^2$ random variables, with normality ensuing under strong identification. Standard error estimators are proposed that enable tests of linear restrictions and the construction of uniformly valid confidence intervals for quadratic forms of interest. We find in Italian social security records that leave-out estimates of a variance decomposition in a two-way fixed effects model of wage determination yield substantially different conclusions regarding the relative contribution of workers, firms, and worker-firm sorting to wage inequality than conventional methods. Monte Carlo exercises corroborate the accuracy of our asymptotic approximations, with clear evidence of non-normality emerging when worker mobility between blocks of firms is limited.
Leave-out estimation of variance components
2018-06-05 07:59:27
Patrick Kline, Raffaele Saggio, Mikkel Sølvsten
http://arxiv.org/abs/1806.01494v2, http://arxiv.org/pdf/1806.01494v2
econ.EM
28,850
em
Autonomous ships (AS) used for cargo transport have gained a considerable amount of attention in recent years. They promise benefits such as reduced crew costs, increased safety and increased flexibility. This paper explores the effects of a faster increase in technological performance in maritime shipping achieved by leveraging fast-improving technological domains such as computer processors, and advanced energy storage. Based on historical improvement rates of several modes of transport (Cargo Ships, Air, Rail, Trucking) a simplified Markov-chain Monte-Carlo (MCMC) simulation of an intermodal transport model (IMTM) is used to explore the effects of differing technological improvement rates for AS. The results show that the annual improvement rates of traditional shipping (Ocean Cargo Ships = 2.6%, Air Cargo = 5.5%, Trucking = 0.6%, Rail = 1.9%, Inland Water Transport = 0.4%) improve at lower rates than technologies associated with automation such as Computer Processors (35.6%), Fuel Cells (14.7%) and Automotive Autonomous Hardware (27.9%). The IMTM simulations up to the year 2050 show that the introduction of any mode of autonomous transport will increase competition in lower cost shipping options, but is unlikely to significantly alter the overall distribution of transport mode costs. Secondly, if all forms of transport end up converting to autonomous systems, then the uncertainty surrounding the improvement rates yields a complex intermodal transport solution involving several options, all at a much lower cost over time. Ultimately, the research shows a need for more accurate measurement of current autonomous transport costs and how they are changing over time.
A Quantitative Analysis of Possible Futures of Autonomous Transport
2018-06-05 17:00:58
Christopher L. Benson, Pranav D Sumanth, Alina P Colling
http://arxiv.org/abs/1806.01696v1, http://arxiv.org/pdf/1806.01696v1
econ.EM
28,851
em
A standard growth model is modified in a straightforward way to incorporate what Keynes (1936) suggests in the "essence" of his general theory. The theoretical essence is the idea that exogenous changes in investment cause changes in employment and unemployment. We implement this idea by assuming the path for capital growth rate is exogenous in the growth model. The result is a growth model that can explain both long term trends and fluctuations around the trend. The modified growth model was tested using the U.S. economic data from 1947 to 2014. The hypothesized inverse relationship between the capital growth and changes in unemployment was confirmed, and the structurally estimated model fits fluctuations in unemployment reasonably well.
A Growth Model with Unemployment
2018-06-11 23:29:04
Mina Mahmoudi, Mark Pingle
http://arxiv.org/abs/1806.04228v1, http://arxiv.org/pdf/1806.04228v1
econ.EM
28,852
em
This study provides the theoretical framework and empirical model for evaluating productivity growth in the agricultural sector, one of the most important sectors in Iran's economic development plan. We use the Solow residual model to measure the share of productivity growth in the value-added growth of the agricultural sector. Our time series data include value-added per worker, employment, and capital in this sector. The results show that the average total factor productivity growth rate in the agricultural sector is -0.72% during 1991-2010. Also, during this period, the share of total factor productivity growth in value-added growth is -19.6%, while it was forecast to be 33.8% in the fourth development plan. Considering the important role of capital in the sector's low productivity, we suggest applying productivity management plans (especially with regard to capital productivity) to achieve future growth goals.
The Role of Agricultural Sector Productivity in Economic Growth: The Case of Iran's Economic Development Plan
2018-06-11 23:43:32
Morteza Tahamipour, Mina Mahmoudi
http://arxiv.org/abs/1806.04235v1, http://arxiv.org/pdf/1806.04235v1
econ.EM
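The growth-accounting identity behind the Solow residual used in the study above: TFP growth is value-added growth net of factor-share-weighted input growth, g_A = g_Y - alpha*g_K - (1 - alpha)*g_L. The factor share and growth rates below are invented numbers, not the paper's estimates.

```python
g_Y, g_K, g_L = 0.030, 0.045, 0.010     # value-added, capital, labour growth rates
alpha = 0.40                            # capital share (illustrative)

g_A = g_Y - alpha * g_K - (1 - alpha) * g_L          # Solow residual (TFP growth)
print(f"TFP growth: {g_A:.3%}, share of value-added growth: {g_A / g_Y:.1%}")
```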
28,853
em
Tariff liberalization and its impact on tax revenue is an important consideration for developing countries, because they are increasingly facing the difficult task of implementing and harmonizing regional and international trade commitments. The tariff reform and its costs for Iranian government is one of the issues that are examined in this study. Another goal of this paper is, estimating the cost of trade liberalization. On this regard, imports value of agricultural sector in Iran in 2010 was analyzed according to two scenarios. For reforming nuisance tariff, a VAT policy is used in both scenarios. In this study, TRIST method is used. In the first scenario, imports' value decreased to a level equal to the second scenario and higher tariff revenue will be created. The results show that reducing the average tariff rate does not always result in the loss of tariff revenue. This paper is a witness that different forms of tariff can generate different amount of income when they have same level of liberalization and equal effect on producers. Therefore, using a good tariff regime can help a government to generate income when increases social welfare by liberalization.
Estimating Trade-Related Adjustment Costs in the Agricultural Sector in Iran
2018-06-11 23:44:02
Omid Karami, Mina Mahmoudi
http://arxiv.org/abs/1806.04238v1, http://arxiv.org/pdf/1806.04238v1
econ.EM
28,854
em
We consider the relation between Sion's minimax theorem for a continuous function and a Nash equilibrium in an asymmetric multi-players zero-sum game in which only one player is different from other players, and the game is symmetric for the other players. Then, 1. The existence of a Nash equilibrium, which is symmetric for players other than one player, implies Sion's minimax theorem for pairs of this player and one of other players with symmetry for the other players. 2. Sion's minimax theorem for pairs of one player and one of other players with symmetry for the other players implies the existence of a Nash equilibrium which is symmetric for the other players. Thus, they are equivalent.
On the relation between Sion's minimax theorem and existence of Nash equilibrium in asymmetric multi-players zero-sum game with only one alien
2018-06-17 04:11:55
Atsuhiro Satoh, Yasuhito Tanaka
http://arxiv.org/abs/1806.07253v1, http://arxiv.org/pdf/1806.07253v1
econ.EM
28,855
em
It is common practice in empirical work to employ cluster-robust standard errors when using the linear regression model to estimate some structural/causal effect of interest. Researchers also often include a large set of regressors in their model specification in order to control for observed and unobserved confounders. In this paper we develop inference methods for linear regression models with many controls and clustering. We show that inference based on the usual cluster-robust standard errors by Liang and Zeger (1986) is invalid in general when the number of controls is a non-vanishing fraction of the sample size. We then propose a new clustered standard errors formula that is robust to the inclusion of many controls and allows to carry out valid inference in a variety of high-dimensional linear regression models, including fixed effects panel data models and the semiparametric partially linear model. Monte Carlo evidence supports our theoretical results and shows that our proposed variance estimator performs well in finite samples. The proposed method is also illustrated with an empirical application that re-visits Donohue III and Levitt's (2001) study of the impact of abortion on crime.
Cluster-Robust Standard Errors for Linear Regression Models with Many Controls
2018-06-19 18:48:50
Riccardo D'Adamo
http://arxiv.org/abs/1806.07314v3, http://arxiv.org/pdf/1806.07314v3
econ.EM
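For concreteness, the standard Liang-Zeger cluster-robust variance that the abstract above shows can be invalid with many controls, computed by hand on simulated clustered data. The paper's corrected formula is not reproduced, finite-sample corrections are omitted, and all dimensions and coefficients are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
G, n_g, k = 50, 20, 5                      # clusters, cluster size, controls
cluster = np.repeat(np.arange(G), n_g)
X = np.column_stack([np.ones(G * n_g), rng.normal(size=(G * n_g, k))])
u = rng.normal(size=G * n_g) + np.repeat(rng.normal(size=G), n_g)  # cluster shock
y = X @ np.ones(k + 1) + u

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
e = y - X @ beta

meat = np.zeros((k + 1, k + 1))
for g in range(G):                         # sum_g X_g' u_g u_g' X_g
    Xg, eg = X[cluster == g], e[cluster == g]
    s = Xg.T @ eg
    meat += np.outer(s, s)
V = XtX_inv @ meat @ XtX_inv               # Liang-Zeger cluster-robust variance
print("cluster-robust s.e. of first slope:", round(np.sqrt(V[1, 1]), 4))
```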
28,856
em
We study inference in shift-share regression designs, such as when a regional outcome is regressed on a weighted average of sectoral shocks, using regional sector shares as weights. We conduct a placebo exercise in which we estimate the effect of a shift-share regressor constructed with randomly generated sectoral shocks on actual labor market outcomes across U.S. Commuting Zones. Tests based on commonly used standard errors with 5% nominal significance level reject the null of no effect in up to 55% of the placebo samples. We use a stylized economic model to show that this overrejection problem arises because regression residuals are correlated across regions with similar sectoral shares, independently of their geographic location. We derive novel inference methods that are valid under arbitrary cross-regional correlation in the regression residuals. We show using popular applications of shift-share designs that our methods may lead to substantially wider confidence intervals in practice.
Shift-Share Designs: Theory and Inference
2018-06-20 21:57:10
Rodrigo Adão, Michal Kolesár, Eduardo Morales
http://dx.doi.org/10.1093/qje/qjz025, http://arxiv.org/abs/1806.07928v5, http://arxiv.org/pdf/1806.07928v5
econ.EM
28,857
em
In this paper, we explore the relationship between state-level household income inequality and macroeconomic uncertainty in the United States. Using a novel large-scale macroeconometric model, we shed light on regional disparities of inequality responses to a national uncertainty shock. The results suggest that income inequality decreases in most states, with a pronounced degree of heterogeneity in terms of shapes and magnitudes of the dynamic responses. By contrast, some few states, mostly located in the West and South census region, display increasing levels of income inequality over time. We find that this directional pattern in responses is mainly driven by the income composition and labor market fundamentals. In addition, forecast error variance decompositions allow for a quantitative assessment of the importance of uncertainty shocks in explaining income inequality. The findings highlight that volatility shocks account for a considerable fraction of forecast error variance for most states considered. Finally, a regression-based analysis sheds light on the driving forces behind differences in state-specific inequality responses.
The transmission of uncertainty shocks on income inequality: State-level evidence from the United States
2018-06-21 17:57:45
Manfred M. Fischer, Florian Huber, Michael Pfarrhofer
http://arxiv.org/abs/1806.08278v1, http://arxiv.org/pdf/1806.08278v1
econ.EM
28,858
em
We propose a new class of unit root tests that exploits invariance properties in the Locally Asymptotically Brownian Functional limit experiment associated to the unit root model. The invariance structures naturally suggest tests that are based on the ranks of the increments of the observations, their average, and an assumed reference density for the innovations. The tests are semiparametric in the sense that they are valid, i.e., have the correct (asymptotic) size, irrespective of the true innovation density. For a correctly specified reference density, our test is point-optimal and nearly efficient. For arbitrary reference densities, we establish a Chernoff-Savage type result, i.e., our test performs as well as commonly used tests under Gaussian innovations but has improved power under other, e.g., fat-tailed or skewed, innovation distributions. To avoid nonparametric estimation, we propose a simplified version of our test that exhibits the same asymptotic properties, except for the Chernoff-Savage result that we are only able to demonstrate by means of simulations.
Semiparametrically Point-Optimal Hybrid Rank Tests for Unit Roots
2018-06-25 10:03:48
Bo Zhou, Ramon van den Akker, Bas J. M. Werker
http://dx.doi.org/10.1214/18-AOS1758, http://arxiv.org/abs/1806.09304v1, http://arxiv.org/pdf/1806.09304v1
econ.EM
28,859
em
In this article we introduce a general nonparametric point-identification result for nonseparable triangular models with a multivariate first- and second stage. Based on this we prove point-identification of Hedonic models with multivariate heterogeneity and endogenous observable characteristics, extending and complementing identification results from the literature which all require exogeneity. As an additional application of our theoretical result, we show that the BLP model (Berry et al. 1995) can also be identified without index restrictions.
Point-identification in multivariate nonseparable triangular models
2018-06-25 22:36:39
Florian Gunsilius
http://arxiv.org/abs/1806.09680v1, http://arxiv.org/pdf/1806.09680v1
econ.EM
28,860
em
Historical examination of the Bretton Woods system allows comparisons to be made with the current evolution of the EMS.
The Bretton Woods Experience and ERM
2018-07-02 03:00:20
Chris Kirrane
http://arxiv.org/abs/1807.00418v1, http://arxiv.org/pdf/1807.00418v1
econ.EM
28,861
em
This paper describes the opportunities and also the difficulties of EMU with regard to international monetary cooperation. Even though the institutional and intellectual support for the coordination of monetary policy in the EU will probably be strengthened with the EMU, one of the shortcomings of the Maastricht Treaty concerns the relationship between the founding members and those countries who wish to remain outside monetary union.
Maastricht and Monetary Cooperation
2018-07-02 03:01:08
Chris Kirrane
http://arxiv.org/abs/1807.00419v1, http://arxiv.org/pdf/1807.00419v1
econ.EM
28,862
em
This paper proposes a hierarchical modeling approach to perform stochastic model specification in Markov switching vector error correction models. We assume that a common distribution gives rise to the regime-specific regression coefficients. The mean as well as the variances of this distribution are treated as fully stochastic and suitable shrinkage priors are used. These shrinkage priors enable us to assess which coefficients differ across regimes in a flexible manner. In the case of similar coefficients, our model pushes the respective regions of the parameter space towards the common distribution. This allows for selecting a parsimonious model while still maintaining sufficient flexibility to control for sudden shifts in the parameters, if necessary. We apply our modeling approach to real-time Euro area data and assume transition probabilities between expansionary and recessionary regimes to be driven by the cointegration errors. The results suggest that the regime allocation is governed by a subset of short-run adjustment coefficients and regime-specific variance-covariance matrices. These findings are complemented by an out-of-sample forecast exercise, illustrating the advantages of the model for predicting Euro area inflation in real time.
Stochastic model specification in Markov switching vector error correction models
2018-07-02 11:36:11
Niko Hauzenberger, Florian Huber, Michael Pfarrhofer, Thomas O. Zörner
http://arxiv.org/abs/1807.00529v2, http://arxiv.org/pdf/1807.00529v2
econ.EM
28,864
em
This paper studies the identifying content of the instrument monotonicity assumption of Imbens and Angrist (1994) on the distribution of potential outcomes in a model with a binary outcome, a binary treatment and an exogenous binary instrument. Specifically, I derive necessary and sufficient conditions on the distribution of the data under which the identified set for the distribution of potential outcomes when the instrument monotonicity assumption is imposed can be a strict subset of that when it is not imposed.
On the Identifying Content of Instrument Monotonicity
2018-07-04 19:25:35
Vishal Kamat
http://arxiv.org/abs/1807.01661v2, http://arxiv.org/pdf/1807.01661v2
econ.EM
28,865
em
This paper develops an inferential theory for state-varying factor models of large dimensions. Unlike constant factor models, loadings are general functions of some recurrent state process. We develop an estimator for the latent factors and state-varying loadings under a large cross-section and time dimension. Our estimator combines nonparametric methods with principal component analysis. We derive the rate of convergence and limiting normal distribution for the factors, loadings and common components. In addition, we develop a statistical test for a change in the factor structure in different states. We apply the estimator to U.S. Treasury yields and S&P500 stock returns. The systematic factor structure in treasury yields differs in times of booms and recessions as well as in periods of high market volatility. State-varying factors based on the VIX capture significantly more variation and pricing information in individual stocks than constant factor models.
State-Varying Factor Models of Large Dimensions
2018-07-06 07:05:40
Markus Pelger, Ruoxuan Xiong
http://arxiv.org/abs/1807.02248v4, http://arxiv.org/pdf/1807.02248v4
econ.EM
28,866
em
The methods of new institutional economics for identifying the transaction costs of trade litigations in Bulgaria are used in the current paper. For the needs of the research, an indicative model measuring this type of cost at the microeconomic level is applied in the study. The main purpose of the model is to forecast the rational behavior of trade litigation parties in accordance with the transaction costs in the process of enforcing the execution of a signed commercial contract. The application of the model is related to a more accurate measurement of transaction costs at the microeconomic level, which could lead to better prediction and management of these costs so that market efficiency and economic growth are achieved. In addition, an attempt is made to analyse the efficiency of the institutional change of the commercial justice system and the impact of the reform of the judicial system on economic turnover. An increase in, or a lack of reduction of, the transaction costs in trade litigations would indicate inefficiency of the reform of the judicial system. JEL Codes: O43, P48, D23, K12
Transaction costs and institutional change of trade litigations in Bulgaria
2018-07-09 13:34:56
Shteryo Nozharov, Petya Koralova-Nozharova
http://arxiv.org/abs/1807.03034v1, http://arxiv.org/pdf/1807.03034v1
econ.EM

Dataset Card for Economics Paper Dataset

Dataset Summary

The Economics Research Paper Dataset was designed to support the development of the LLaMA-2-Econ models, with a focus on Title Generation, Abstract Classification, and Question & Answer (Q&A) tasks. It comprises abstracts and titles of economics research papers, along with synthetic Q&A pairs derived from the abstracts, to facilitate training of large language models for economics-specific applications.

Dataset Description

Content: The dataset includes:

  • Economics paper abstracts and titles.

Source: The data was collected using the arXiv API, with papers selected from the categories Econometrics (econ.EM), General Economics (econ.GN), and Theoretical Economics (econ.TH).

Volume:

  • Total abstracts and titles: 6362

Intended Uses

This dataset is intended for training and evaluating language models specialized in:

  • Generating titles for economics research papers.
  • Classifying abstracts into sub-fields of economics.
  • Answering questions based on economics paper abstracts.
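A minimal loading sketch for these uses, assuming the dataset is published on the Hugging Face Hub under the id onurkeles/econ_paper_abstracts and exposes text (abstract) and title columns as in the records above; the id, split name, and column names should be adjusted if they differ.

```python
from datasets import load_dataset

# Assumed dataset id and column names; adjust if the published schema differs.
ds = load_dataset("onurkeles/econ_paper_abstracts", split="train")
print(ds[0]["title"])
print(ds[0]["text"][:200])   # first 200 characters of the abstract
```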

Dataset Creation

Curation Rationale

The dataset was curated to address the lack of specialized tools and datasets for enhancing research within the economics domain, leveraging the potential of language models like LLaMA-2.

Source Data

Initial Data Collection and Normalization

Data was collected through the arXiv API, targeting papers within specified categories of economics. Titles and abstracts were extracted, and synthetic Q&A pairs were generated using a process that involved the GPT-3.5 Turbo model for contextual dialogue creation.
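A rough sketch of the kind of arXiv API query such a collection step could use, via the public Atom endpoint and standard-library parsing only. The category, result count, and field handling here are illustrative assumptions rather than the authors' actual collection script.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Query the arXiv Atom API for a handful of econ.EM papers (illustrative query).
url = ("http://export.arxiv.org/api/query"
       "?search_query=cat:econ.EM&start=0&max_results=5")
with urllib.request.urlopen(url) as resp:
    feed = resp.read()

ns = {"atom": "http://www.w3.org/2005/Atom"}
root = ET.fromstring(feed)
for entry in root.findall("atom:entry", ns):
    title = entry.findtext("atom:title", default="", namespaces=ns).strip()
    abstract = entry.findtext("atom:summary", default="", namespaces=ns).strip()
    print(title, "-", abstract[:80], "...")
```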

Licensing Information

The dataset is derived from arXiv papers. Users are advised to adhere to arXiv's terms of use.
