text_with_holes | text_candidates | A | B | C | D | label |
---|---|---|---|---|---|---|
The primary aim of our paper is to provide a method for constructing uniformly valid inference and confidence bands in sparse high-dimensional models in the sieve framework. <|MaskedSetence|> The double machine learning approach (Belloni et al., 2014b; Chernozhukov et al., 2018) offers a general framework for uniformly valid inference in high-dimensional settings. Similar methods, such as those proposed by van de Geer et al. <|MaskedSetence|> These studies are based on the so-called debiasing approach, which provides an alternative framework for valid inference. The framework entails a one-step correction of the lasso estimator, resulting in an asymptotically normally distributed estimator of the low-dimensional target parameter. <|MaskedSetence|> (2015b).
In research closely related to ours, Kozbur (2021) proposes a post-nonparametric double selection approach for a scalar functional of a single component. | **A**: In doing so, we contribute to the growing literature on high-dimensional inference in additive models, especially that on debiased/double machine learning.
**B**: For a survey on post-selection inference in high-dimensional settings and its generalizations, we refer to Chernozhukov et al.
**C**: (2014) and Zhang and Zhang (2014), have also produced valid confidence intervals for low-dimensional parameters in high-dimensional linear models.
| ACB | ACB | ACB | ACB | Selection 3 |
4 Application: Functional Global Sensitivity Analysis of an ensemble of Climate Economy Models
For this paper we focus on CO2 emissions as the main output of an ensemble of coupled climate-economy-energy models. Each model-scenario produces a vector of CO2 emissions defined from the year 2010 to 2090 at 10-year time intervals. <|MaskedSetence|> A thorough description of the dataset used as a testbed for the application of the methods described before can be found in [17]. <|MaskedSetence|> <|MaskedSetence|> | **A**: We use the scenarios developed in [17], which involve five models (IMAGE, IMACLIM, MESSAGE-GLOBIOM, TIAM-UCL and WITCH-GLOBIOM) that provide output data until the end of the interval T.
.
**B**: This discretization of the output space is in any case arbitrary, since CO2 emissions exist at every time instant in the interval T = [2010, 2090].
**C**: This was one of the first papers to apply global sensitivity techniques to an ensemble of climate-economy models, thus addressing both parametric and model uncertainty.
| BCA | BCA | BAC | BCA | Selection 1 |
<|MaskedSetence|> But they only allow for binary states and binary actions. They introduce the condition of expanding observations, explaining that this property of the network is necessary for learning. They establish that it is also sufficient for learning with unbounded beliefs. <|MaskedSetence|> <|MaskedSetence|> | **A**: Lobel and
Sadler (2015) introduce a notion of “information diffusion” and use the improvement principle to establish information diffusion even when learning fails.
.
**B**: Building on Banerjee and
Fudenberg (2004), a key contribution of Acemoglu, Dahleh, Lobel, and
Ozdaglar (2011) is to use a welfare improvement principle to deduce learning; this approach works even though martingale arguments fail.
**C**: Ozdaglar (2011) provide a general treatment of observational networks in an otherwise classical setting.
| CBA | CBA | ABC | CBA | Selection 4 |
<|MaskedSetence|> The data cover (essentially) all funding proposals for such evaluations submitted to the Abdul Latif Jameel Poverty Action Lab (J-PAL) from 2009 to 2021. J-PAL is the leading funder and facilitator of experimental economic research in low-income countries, and funds projects that are typically designed to inform policy in those countries. <|MaskedSetence|> But they are also not invariant to scale: projects with more arms cost significantly more, with a 100 log point increase in the number of arms raising costs by approximately 20 log points on average. Interpreted through the lens of our model, these patterns provide both a prima facie justification for applying MHT adjustment to studies of this sort, and also imply that simply controlling the average size of tests in these studies (e.g. via a Bonferroni correction) would be too conservative. <|MaskedSetence|> | **A**:
In Appendix A, we study one case in which it is arguably feasible, using unique data on the costs of conducting experimental program evaluations that we obtained for this purpose.
**B**: As we discuss in more detail in Appendix A, the characteristics of these projects thus align fairly closely with the assumptions in our framework.
We find that research costs in this setting are significantly less than proportional to the number of treatments tested.
**C**: Overall, the exercise demonstrates that it may be possible in some cases to measure research costs and interpret them through the lens of a framework like ours to obtain quantitative guidelines for MHT adjustment in economic research.
| ABC | ABC | ABC | ABC | Selection 3 |
This paper is the first to define truncation-invariance and truncation-proofness, but not the first to define truncation strategies or to weaken strategy-proofness. Truncation strategies were first defined in Chen (2017), where the author used them to define rank monotonicity. Originating from Mongell and Roth (1991), there is another, larger strand of literature studying a different concept of truncation strategies. Typical papers include Roth and Vate (1991), Roth and Rothblum (1999), and so on. The same term notwithstanding, the truncation strategies defined in these papers are truncations obtained by moving the outside option upwards if it exists, while the truncations defined in the current paper are based on the original allocation of an agent. Moreover, truncation strategies defined in these papers cannot be applied to the housing market model.
We motivate restricting each agent’s strategy domain to truncations at truthful matching from two perspectives. First, the necessity for restricting manipulations arises from certain inherent limitations in the concept of strategy-proofness.
Recently, a strand of literature has argued that strategy-proofness lacks empirical support and has proposed weakenings of this property. Papers such as Charness and Levin (2009) and Esponda and Vespa (2014) showed that people have difficulties with hypothetical reasoning even in single-agent decision problems. In response, Troyan and Morrill (2020) proposed the concepts of obvious manipulability and non-obvious manipulability, which were used to classify non-strategy-proof rules. <|MaskedSetence|> Unlike Troyan and Morrill (2020), the current paper weakens strategy-proofness by shrinking the strategy set of agents. Second, as noted by Altuntaş et al. (2023), the manipulation based on truncation strategies takes into account evidence from the behavioral economics literature. The literature in this direction has provided strong evidence that in practice an agent often uses heuristics to make decisions (Tversky and Kahneman (1974), Tversky et al. <|MaskedSetence|> <|MaskedSetence|> Mennle et al. (2015) showed empirical evidence that people may prioritize simple manipulations that are close to their true preferences when they are lying. | **A**: (2002)).
**B**: (1982), Gilovich et al.
**C**: Non-obvious manipulability is a weaker requirement than strategy-proofness in the sense of payoff comparison.
| BAC | CBA | CBA | CBA | Selection 4 |
For other types of outcome variables (continuous outcomes in linear models, binary and multinomial outcomes), results for regression models with fixed effects and lagged dependent variables are already available. Such results are of great importance for applied practice, as they allow researchers to distinguish unobserved heterogeneity from state dependence, and to control for both when estimating the effect of regressors. The demand for such methods is evidenced by the popularity of existing approaches for the linear model, such as those proposed by Arellano and Bond (1991) and Blundell and Bond (1998). In contrast, for ordinal outcomes, almost no results are available.
The challenge of accommodating unobserved heterogeneity in nonlinear models is well understood, especially when the researcher also wants to allow for lagged dependent variables. For example, early work on the dynamic binary logit model with fixed effects either assumed no regressors or restricted their joint distribution (cf. Chamberlain 1985 and Honoré and Kyriazidou 2000), although recent developments (Kitazawa 2021 and Honoré and Weidner 2020) relax these requirements. The challenge of accommodating unobserved heterogeneity in the ordered logit model seems even greater than in the binary model. The reason is that even the static version of the model is not in the exponential family (Hahn 1997). <|MaskedSetence|> An alternative approach in the static ordered logit model is to reduce it to a set of binary choice models (cf. Das and van Soest 1999, Johnson 2004b, Baetschmann, Staub, and Winkelmann 2015, Muris 2017, and Botosaru, Muris, and Pendakur 2023). Unfortunately, the dynamic ordered logit model cannot be similarly reduced to a dynamic binary choice model (see Muris, Raposo, and Vandoros 2023). <|MaskedSetence|> The contribution of this paper is to develop such an approach.
To do this, we follow the functional differencing approach in Bonhomme (2012) to obtain moment conditions for the finite-dimensional parameters in this model, namely the autoregressive parameters (one for each level of the lagged dependent variable), the threshold parameters in the underlying latent variable formulation, and the regression coefficients. <|MaskedSetence|> | **A**: Therefore, a new approach is needed.
**B**: As a result, one cannot directly appeal to a sufficient statistic approach.
**C**: Our approach is closely related to Honoré and Weidner (2020), and can be seen as the extension of their method to the case of an ordered response variable.
| BAC | BAC | BAC | CBA | Selection 1 |
5 Conclusion
Endogeneity is a common threat to causal identification in econometric models. Reverse causality is one source of such endogeneity. We build on work by \textcite{hoyer09anm,mooijetal16}, who have shown that the causal direction between two variables X and Y is identifiable in models with additively separable error terms and nonlinear functional forms. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We extend known results on causal identification and causal discovery to settings with heteroskedasticity with respect to additional control covariates.
. | **A**: them and, thus, provide a heteroskedasticity-robust method to test for reverse causality.
**B**: We extend their results by allowing for additional control covariates W and heteroskedasticity w.r.t.
**C**: In addition, we show how this test can be extended to a bivariate causal discovery algorithm by comparing the test statistics of residual and purported cause of two candidate models.
| BAC | BAC | CAB | BAC | Selection 2 |
<|MaskedSetence|> This final figure surpasses Britain’s total crop and pasture land combined. <|MaskedSetence|> If we add cotton, sugar, and timber circa 1830, we have somewhere between 25,000,000 and 30,000,000 ghost acres, exceeding even the contribution of coal by a healthy margin. (p. <|MaskedSetence|> | **A**:
…[R]aising enough sheep to replace the yarn made with Britain’s New World cotton imports would have required staggering quantities of land: almost 9,000,000 acres in 1815, using ratios from model farms, and over 23,000,000 acres in 1830.
**B**: It also surpasses Anthony Wrigley’s estimate that matching the annual energy output of Britain’s coal industry circa 1815 would have required that the country magically receive 15,000,000 additional acres of forest.
**C**: 276)
Based on this calculation, I set the land supply Z after the relief of land constraints to
.
| ABC | ABC | ABC | ACB | Selection 2 |
There is a large literature in behavioral and experimental economics that points toward the importance of various behavioral traits and heterogeneous characteristics of trust and reciprocity in sharing behavior. <|MaskedSetence|> (1997); Fehr and Gächter (1998, 2000); Camerer (2003); Cox (2004) have leveraged experimental evidence to highlight the use of trust and reciprocity as devices for contract enforcement and for driving cooperation in markets and sharing games. <|MaskedSetence|> While trust and reciprocity in these settings often drive efficiency gains, we show through counterfactuals that which characteristics matter the most is highly context-dependent; both trust and reciprocity can backfire as tools to promote efficiency, depending on the information structure.
This result agrees with some more recent work examining the role of these characteristics in supporting positive market outcomes. For example, Choi and Storr (2022) finds evidence suggesting that providing reputation systems in experimental markets interacts with preferences primarily by giving participants more information about whom not to trust. Subsequent work by Solimine and Isaac (2023) supports this result and further emphasizes the role of the information in determining the effectiveness of trust in promoting positive market outcomes. <|MaskedSetence|> When information is more limited, however, introducing higher levels of trust increases trustworthy behavior by some but may backfire by allowing others to take advantage of this change.
. | **A**: Our counterfactual findings involving trust agree with these findings; through the way that trust interacts with preferences for reciprocity and altruism, promoting trust in the community dramatically improves outcomes when subjects are provided with detailed information about others’ behavior.
**B**: Throughout these works it is emphasized that reciprocity manifests not only positively, but can also be used to characterize punishment behavior.
**C**: A large series of studies including Fehr et al.
| BCA | CBA | CBA | CBA | Selection 2 |
Non-Business day. <|MaskedSetence|> Again we note that for the TFM-tucker model, one needs to identify a proper representation of the loading space in order to interpret the model. <|MaskedSetence|> <|MaskedSetence|> Interpretation is impossible for the vector factor model in such a high dimensional case.
. | **A**: For TFM-cp, the model is unique hence interpretation can be made directly.
**B**: Values are in percentage.
We remark that this example is just for illustration and showcasing the interpretation of the proposed tensor factor model.
**C**: In Chen et al. (2022), varimax rotation was used to find the most sparse loading matrix representation to aid model interpretation.
| BCA | BCA | BCA | CAB | Selection 3 |
In Section 3, we study how specific properties of choice rules tend to lead to contextual privacy violations. <|MaskedSetence|> These abstract characterizations lead us to a more intuitive insight, Theorem 1, which says that under the restriction to individual elicitation protocols, any time there is some group of agents who are collectively pivotal but there is no agent who is individually pivotal, there must be a contextual privacy violation and the designer must choose whose privacy to protect. One can derive as a special case of Theorem 1 a result from the cryptography literature known as the Corners Lemma (Chor and Kushilevitz, 1989; Chor et al., 1994). <|MaskedSetence|> <|MaskedSetence|> Our discussion yields two central insights. First, maximally contextually private protocols involve a deliberate decision about the protection set, i.e. a set of agents whose privacy ought to be protected if possible. Second, given that protection set, maximally contextually private protocols delay asking questions to protected agents as much as possible.
. | **A**: This result has been used to show that the second-price auction does not permit a decentralized computation protocol (Brandt and Sandholm, 2005) that satisfies unconditional privacy, compare Chor and Kushilevitz (1989) and Milgrom and Segal (2020).
**B**: In our first results, Proposition 1 and Proposition 2, we provide characterizations of choice rules that fully avoid contextual privacy violations under an arbitrary fixed elicitation technology, and under individual elicitation technologies, respectively.
**C**: Propositions 3-7 show how this conflict between collective and individual pivotality arises in common choice rules in environments with and without transfers.
In Section 4, we take the perspective of a privacy-conscious designer who wants to implement a choice rule through a maximally contextually private protocol.
| BAC | BAC | BAC | BAC | Selection 2 |
<|MaskedSetence|> The partial equilibrium theory has a foundation in general equilibrium theory, where there are only two types of goods, numeraire good and traded good, and consumers’ utility functions must be quasi-linear. In the partial equilibrium model, it is easy to show by drawing a diagram that there is only one equilibrium price and that this price is globally stable. <|MaskedSetence|> That is, the aim of this study is to determine whether the above result holds when considering a general equilibrium model in which the utility remains quasi-linear and the dimension of the consumption space may be greater than two. The results are as follows: first, the equilibrium price is unique up to normalization in an economy where all consumers have quasi-linear utility functions. Second, this equilibrium price is locally stable with respect to the tâtonnement process (Theorem 1). As expected from partial equilibrium theory, if the number of commodities is two, the equilibrium price is globally stable (Proposition 3). However, if the number of commodities is greater than two, then the global stability is not derived in this paper. <|MaskedSetence|> | **A**: This is related to the inherent difficulty of quasi-linear economies: see our discussion in subsection 3.1..
**B**: In other words, in a quasi-linear economy with two commodities, it is expected that the equilibrium price is unique, and is globally stable with respect to the tâtonnement process.
The purpose of this paper is to extend this result to a quasi-linear economy with more than two commodities.
**C**:
There is another theory of equilibrium besides the general equilibrium theory, namely, the partial equilibrium theory.
| CBA | CBA | CAB | CBA | Selection 1 |
<|MaskedSetence|> <|MaskedSetence|> For example, in Hosoya (2017), convergence with respect to a uniform topology in the space of utility functions could only be proved if the C^1 topology is equipped on the space of demand functions. In Theorem 3, however, this relationship is reversed.
As a final note, we mention the closed convergence topology of weak orders. <|MaskedSetence|> Hence, for example, it is quite easy to derive the convergence result in the closed convergence topology from Theorems 2-3. In this connection, in econometric studies that use statistical models that require a particular shape for the utility function, we can inversely derive the compact convergence of their utility function from the convergence of corresponding orders in the closed convergence topology. In this sense, the use of a specified shape of the utility function is not a disadvantage for Theorems 2-3.
. | **A**: Previous results in this context have usually required a stronger topology in the space of demand functions to prove convergence in some topology of the space of utility functions.
**B**: If the shapes of utility functions are specified for some set of weak orders, then in most cases, the compact convergence of the utility function is equivalent to the convergence in the closed convergence topology of the weak order.
**C**: This result is unexpected in some ways.
| CAB | CAB | CAB | CAB | Selection 4 |
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> A guide for practical implementation of the computational procedure outlined in section 1.2 is given in appendix A. We refer to inequality displayed by our measure as overall inequality, while specific marginal inequality is described as wealth or income inequality.
3.1. Income-wealth α-Lorenz curves.
**B**: In this section, we apply our methodology to the analysis of income-wealth inequality in the United States between 1989 and 2022, based on the public version of the triennial Survey of Consumer Finances (SCF).
**C**: Wealth refers to all assets, financial and otherwise.
| BCA | BCA | BCA | BCA | Selection 1 |
<|MaskedSetence|> (2009) demonstrates that between 1986-2003, the 75th percentile math SAT score of accepted students at the top 20 public universities, top 20 private colleges, and top 20 liberal arts colleges, steadily trended upward. <|MaskedSetence|> Different selection criteria may induce different equilibrium thresholds. The equilibrium acceptance threshold affects the value of the policy because it determines which students are accepted or rejected. <|MaskedSetence|> | **A**: A selection criterion that induces a high acceptance threshold may not necessarily yield high value for the college.
.
**B**: In our model, the equilibrium acceptance threshold depends on students’ strategic behavior and the decision maker’s capacity constraint and selection criterion.
**C**: This example aims to capture the phenomenon that college admissions has become increasingly competitive since the 1980s; Bound et al.
| CBA | CAB | CBA | CBA | Selection 3 |
<|MaskedSetence|> (2023). These papers are designed as methods for inference for parameters defined via linear models or estimating equations rather than parameters like our equally-weighted or size-weighted cluster-level average treatment effects that are defined explicitly in terms of potential outcomes. Importantly, in almost all of these papers, the sampling framework treats cluster sizes as non-random, though we note that in some cases the results are rich enough to permit the distribution of the data to vary across clusters: further discussion is provided in Remark 2.2. <|MaskedSetence|> Section 2 describes our setup and notation, including a formal description of our sampling framework and two parameters of interest. <|MaskedSetence|> In Section 4, we demonstrate the finite-sample behavior of our proposed estimators in a small simulation study. Finally, in Section 5, we conduct an empirical exercise to demonstrate the practical relevance of our findings. Proofs of all results are included in the Appendix. | **A**: We then propose in Section 3 estimators for each of these two quantities and develop the requisite distributional approximations to use them for inference about each quantity.
**B**: Finally, none of these papers seem to explicitly consider the additional complications stemming from sampling only a subset of the units within each cluster.
The remainder of our paper is organized as follows.
**C**: Miller (2015) and MacKinnon
et al.
| CBA | CBA | CBA | ABC | Selection 3 |
2.2 Challenges for inventory modelling from retailing practice
To account for the characteristics of practical problems, several extensions of basic inventory models have been proposed. One crucial matter is the choice of an appropriate probability distribution used for representing random demand as observed by the retailer. <|MaskedSetence|> While such restrictive assumptions make deriving optimal replenishment order policies easy (Bijvank and Vis, 2011), they are not descriptive in many applications. <|MaskedSetence|> <|MaskedSetence|> | **A**: Ulrich et al. (2022), e.g., based on their real-world e-grocery retailing data, demonstrate the importance of a case-specific estimation of the demand distribution.
**B**: In particular, the combination of high service level requirements and the more complex demand patterns commonly observed in e-grocery retailing favours the use of probability distributions that allow for, e.g., overdispersion or skewness.
.
**C**: Parts of the previous literature rely on simple modelling such as considering a Poisson process for the arrival of customer demand (see e.g. Siawsolit and Gaukler, 2021).
| CBA | CAB | CAB | CAB | Selection 2 |
For analytical tractability, we focus on the case where researchers use one-sided tests in this section, for a limited number of options for p-hacking. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We present these results in Appendix A. Appendix B presents derivations underlying all analytical results. | **A**: In the simulation study in Section 5, we consider generalizations of these analytical examples for two-sided tests, and we also show results for one-sided tests.
In addition to analyzing the effects of p-hacking on the shape of the p-curve, we study its implications for the bias of the estimates and size distortions of the tests reported by researchers engaged in p-hacking.
**B**: The analytical results provide a clear understanding of the opportunities for tests to have power and what types of situations the tests will have power in.
**C**: Appendix C provides analogous numerical results for two-sided tests.
| CBA | CBA | CBA | ACB | Selection 2 |
<|MaskedSetence|> A 1 pp. rise in external debt causes, on average, a 0.5% increase in GHG emissions.
In exploring a possible mechanism of action, we find that external debt is negatively related to an indicator of policies associated with environmental sustainability. <|MaskedSetence|> <|MaskedSetence|> On the contrary, our results are aligned with Beşe et al. (2021b) and Beşe et al. (2021a), who find a significant positive effect of external debt on CO2 emissions in China and India, respectively. | **A**: This may suggest that when external debt increases, governments are less able to enforce environmental regulations, either because their main priority is to increase the tax base or because they are captured by the private sector and prevented from tightening such regulations; this could explain the positive association between external debt and environmental degradation.
Our results point to significant negative environmental effects of external financing in EMDEs with several implications.
**B**: First, this suggests that the endogeneity issue may be behind previous findings of the non-significant effect of external debt on GHG emissions (Katircioglu and Celebi, 2018; Akam et al., 2021).
**C**: We find a positive and statistically significant effect of external debt on GHG emissions when we take into account the potential endogeneity problems.
| BCA | CAB | CAB | CAB | Selection 4 |
In this paper, we draw on new advances in the time series forecasting literature to improve the accuracy of the imputations employed for causal inference in panel data settings. Over the past few years, the forecasting literature has proposed a number of deep neural architectures that have significantly improved predictive capabilities over older models. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> That is, to impute the potential untreated outcome in the treated unit following the treatment, we feed into the model the outcome from the control states during the same time period – effectively casting contemporaneous outcomes of the control states as “leading indicators” for the potential untreated outcome of the treated state.
. | **A**: We overcome this limitation by incorporating the time series of outcomes for control units into the forecasting model for the treated unit as additional features.
**B**: For causal inference with panel data, this is an important limitation because single-unit time series models do not incorporate information from the time series of control unit outcomes to help estimate missing values for the treated unit.
**C**: However, these models are designed to be applied to data for single time series, i.e., predicting future values of a unit based on that same unit’s past values.
| CBA | ABC | CBA | CBA | Selection 4 |
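The imputation idea described in the row above lends itself to a compact illustration. The sketch below is a minimal, hypothetical example: it uses a ridge regression as a stand-in for the deep forecasting architectures the passage refers to, treats contemporaneous control-unit outcomes as features ("leading indicators") for the treated unit, and imputes the untreated counterfactual after treatment. The data, names, and the choice of ridge regression are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: impute the treated unit's untreated potential outcome using
# contemporaneous control-unit outcomes as regressors; a ridge regression stands
# in for the deep forecasting model discussed in the passage.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T0, T1, n_controls = 40, 10, 8                      # pre-treatment periods, post periods, controls
controls = rng.normal(size=(T0 + T1, n_controls)).cumsum(axis=0)   # control-unit outcomes
true_weights = rng.uniform(0.0, 0.3, size=n_controls)
untreated = controls @ true_weights + rng.normal(scale=0.1, size=T0 + T1)
treated = untreated.copy()
treated[T0:] += 2.0                                 # treatment effect after period T0

# Fit on pre-treatment periods only: treated outcome ~ contemporaneous control outcomes.
model = Ridge(alpha=1.0).fit(controls[:T0], treated[:T0])

# Impute the untreated counterfactual after treatment and difference it out.
imputed = model.predict(controls[T0:])
effect = treated[T0:] - imputed
print("estimated average effect:", effect.mean())
```

Any forecasting model that accepts exogenous regressors could play the same role; the essential point is that control-unit outcomes enter as contemporaneous features rather than as separate univariate series.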
<|MaskedSetence|> (2014) and its time series extension in Adamek et al. (2022b).
The latter has recently been empirically investigated in the context of LP estimation with instrumental variables (LP-IV) in contemporaneous work by Karapanagioti (2021). Their approach not only differs in the focus on IV models, but they also apply the method of Adamek et al. (2022b) ‘as is’. <|MaskedSetence|> (2022b) specifically to high-dimensional LPs consisting of a small number of parameters of interest – the dynamic response of a variable to a shock at a given horizon – and many controls. In particular, we modify their approach by leaving the parameter of interest unpenalized during the estimation procedure to ensure it does not suffer from penalization bias. <|MaskedSetence|> We theoretically show that the combination of few unpenalized parameters with many penalized ones does not affect the asymptotic behaviour of the desparsified lasso.
. | **A**: Such a setting is of more general relevance for treatment effect models consisting of a small number of variables whose effects are of interest combined with a large set of controls.
**B**: Instead, we tailor the approach of Adamek et al.
**C**:
We develop HDLP inference based on the desparsified lasso of van de Geer et al.
| CBA | CBA | CBA | BCA | Selection 1 |
1.1 Related work
Anchoring in decision-making was first proposed by Paul Slovic in researching how people evaluate the risk of gambling (Slovic (1967)). <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Similar prior anchoring questions have been found to influence the certainty equivalent for a gamble (Johnson and Schkade (1989)), estimation of tree height (Jacowitz and Kahneman (1995)), and willingness to subject oneself to annoying sounds for money (Ariely et al. (2003)). Anchoring effects, however, are also present for consequential decisions, both economic and non-economic. Judges’ sentencing decisions and bankruptcy rulings are anchored by prosecutors’ demands (Englich and Mussweiler (2001)) and official reviewers’ recommendations (Mugerman et al. (2021)), respectively. Moreover, arbitrarily generated anchors have been found to affect willingness to pay for public goods (Green et al. (1998); Kahneman et al. (1993)), willingness to pay for private goods (Ariely et al. (2003)), willingness to pay to save seabirds (Jacowitz and Kahneman (1995)), and judges’ sentencing (Englich et al. (2006)); indeed, the anchoring effect is present even when participants know that the anchor is generated by their own social security numbers (Ariely et al. (2003)) or dice rolls (Englich et al. (2006)).
. | **A**: Subsequent research has expanded the literature on the anchoring effect, replicating it across a wide variety of contexts.
**B**: In a pioneering study, Tversky and Kahneman spun a wheel of fortune in front of participants and asked them to consider whether the number of African countries in the United Nations was higher or lower than the random outcome (a prior anchoring question) before asking for their final estimation of the number.
**C**: The authors found that the final estimation was significantly affected by the value of the initial anchor (Tversky and Kahneman (1974)).
| ABC | CBA | ABC | ABC | Selection 3 |
We presented a new solution concept for sequential imperfect-information games called observable perfect equilibrium that captures the assumption that all players are playing as rationally as possible given the fact that some players have taken observable suboptimal actions. We believe that this is more compelling than other solution concepts that assume that one or all players make certain types of mistakes for all other actions including those that have not been observed. We showed that every observable perfect equilibrium is a Nash equilibrium, which implies that observable perfect equilibrium is a refinement of Nash equilibrium. <|MaskedSetence|> We showed that an OPE can be computed in polynomial time in two-player zero-sum games based on repeatedly solving a linear program formulation. <|MaskedSetence|> <|MaskedSetence|> So we expect our analysis to extend to significantly more complex settings than the example considered.. | **A**: While we only considered a simplified game called the no-limit clairvoyance game, this game encodes several elements of the complexity of full no-limit Texas hold ’em, and in fact conclusions from this game have been incorporated into some of the strongest agents for no-limit Texas hold ’em.
**B**: We also showed that observable perfect equilibrium is always guaranteed to exist.
**C**: We also argued that computation of OPE is more efficient than computation of the related concept of one-sided quasi-perfect equilibrium, which in turn has been shown to be more efficient than computation of quasi-perfect equilibrium and extensive-form trembling-hand perfect equilibrium.
We demonstrated that observable perfect equilibrium leads to a different solution in no-limit poker than EFTHPE, QPE, and OSQPE.
| BCA | BCA | CAB | BCA | Selection 2 |
In the model, the receiver cannot implement transfers or choose the senders’ payoff structure. (Mechanisms that involve transfers are inefficient because, compared to the outcome under complete information, at least one player incurs a cost when participating in a transfer. Even if the receiver could affect the senders’ payoff structure within the limits prescribed in Section 3, the only way to obtain an efficient and collusion-proof outcome is by using public advocacy.) If the receiver could, then it would set an environment with prohibitively high misreporting costs to spur truthful reporting. However, organizations may be limited when choosing among mechanisms, either because of exogenous constraints or commitment problems. The main result is positive: efficiency can be obtained even when organizations can only decide how to structure communication.
Two additional extensions are worth discussing. First, the model assumes that senders know the state perfectly. A model variant with imperfectly informed senders injects an information aggregation problem over a purely strategic problem of information elicitation. Introducing imperfectly informed senders would certainly enhance the model’s realism, but it may also cloud the circumstances in which the senders’ strategic interaction benefits the organization. <|MaskedSetence|> Second, it is assumed that truthful reporting is costless. However, the players may incur substantial consultation costs. The need to economize on consultations makes protocols with fewer senders more appealing. <|MaskedSetence|> <|MaskedSetence|> Moreover, without a costless report, communication always involves inefficient expenditures. | **A**: The receiver’s need to aggregate information from imperfectly informed agents typically makes protocols with a higher number of senders more appealing.
**B**: The receiver cannot take informed decisions when even senders do not know the best course of action.
**C**: Importantly, these two last model variants prevent efficient outcomes by default, and thus have no relevance here.
| ACB | BAC | ACB | ACB | Selection 3 |
<|MaskedSetence|> First, our approach excludes drugs developed by large pharmaceutical firms with a market valuation above the 95th percentile of the firm size distribution. <|MaskedSetence|> To relax this assumption, we could consider large firms separately and keep track of announcements about acquired drugs.
Second, a competitor’s announcements can affect a firm’s drug valuation. <|MaskedSetence|> We can adapt our approach to include “competitive announcements” in our estimation method and plan to work on this in the future. | **A**:
Our estimates suggest several important areas for future research.
**B**: To the extent that these firms develop different types of drugs, our approach fails to capture those drugs.
**C**: For instance, if two firms are developing competitive drugs, the impact of one firm’s announcement on the market value of the other firms would provide information about the expected effect of competition.
| ABC | ABC | CBA | ABC | Selection 1 |
<|MaskedSetence|> In the present work, production of knowledge affects the firms’ capacity to innovate, which in turn allows the production of higher quality manufactured varieties in a region. The chance of successful innovations depends on the spatial distribution of mobile agents in the economy. Therefore, it is assumed that regional knowledge levels transfer imperfectly between regions, depending on the related variety (Frenken et al., 2007), i.e., the relative importance of interaction between agents within
the same region rather than between different regions – which depends on several factors such as cognitive proximity, cultural factors, diversity of skills and abilities, among others. <|MaskedSetence|> Section 3.3) generated from knowledge spillovers. Our modeling strategy is such that indirect utility differentials, which govern the migration of mobile agents between regions, are determined solely by trade linkages and by the spatial dimension of regional interaction (cf. Section 3.4). <|MaskedSetence|> This allows for great analytical tractability and lets us focus on spatial outcomes as a result of pecuniary factors and the economic geography of knowledge spillovers (Bond-Smith, 2021). | **A**: We thus avoid the explicit use of dynamics for the innovation process.
**B**: We assume further that the increasing complexity of each variety is offset by the available regional quality levels (cf.
**C**: We combine the typical pecuniary externalities in geographical economics (Krugman, 1991b; Fujita et al., 1999; Baldwin et al., 2003)
with the spatial diffusion of knowledge spawned from intra-regional
and inter-regional interactions to infer
about the circular causality between migration and knowledge flows.
| CBA | CBA | CBA | CAB | Selection 2 |
4 Security Design with Limited Liability
In this second class of applications, we show how monotone function intervals pertain to security design with limited liability. <|MaskedSetence|> Monotone function intervals embed two widely adopted economic assumptions in the security design literature. <|MaskedSetence|> The second is that the security’s payoff has to be monotone in the asset’s return. These two assumptions imply that the set of feasible securities can be described by a monotone function interval. Recognizing this, we use the second crucial property of extreme points—namely, for any convex optimization problem, one of the solutions must be an extreme point of the feasible set—to generalize and unify several results in security design under a common framework. <|MaskedSetence|> | **A**: To do so, we revisit the environments of two seminal papers in the literature: Innes (1990), which has moral hazard, and DeMarzo and Duffie (1999), which has adverse selection..
**B**: In security design problems, a security issuer designs a security that specifies how the return of an asset is divided between the issuer and the security holder.
**C**: The first is limited liability, which places natural upper and lower bounds on the security’s payoff—a security cannot pay more than the asset’s return or less than zero.
| CAB | BCA | BCA | BCA | Selection 3 |
We introduce EW-ESRI, a network-based measure, to estimate firms’ systemic economic relevance in terms of employment. <|MaskedSetence|> We apply this measure to every firm in Hungary’s economy using value-added tax (VAT) data to reconstruct the firm-level production network. <|MaskedSetence|> <|MaskedSetence|> They illustrate the range of possible economic costs for the same emission reduction target under a command-and-control policy approach.
. | **A**: We link this measure of systemic relevance to data on CO2 emissions of Hungary’s largest emitting firms and identify firms with high emissions and low systemic relevance as decarbonization leverage points.
This allows us to simulate and compare different decarbonization strategies to understand EW-ESRI’s usefulness in informing the design of decarbonization policies.
**B**: It captures firms’ dependence on each other’s production processes, indicating potential job loss if a firm faces distress or closure.
**C**: The described scenarios demonstrate the importance of the firm-level supply network in decarbonization to limit economic costs.
| BAC | BAC | BAC | BAC | Selection 1 |
To conclude, the numerical results have shown that the “low” value of the carbon tax presented in Table 3 is efficient enough, as further increasing its value does not lead to an improvement in any of the output factors. Nevertheless, in case GenCos possess a “low” GEB, considering “low” values for TEB and an incentive leads to an increase of about 28.0% in the VRE share in the optimal generation mix when compared with the baseline. <|MaskedSetence|> If one aims at maximising VRE share, welfare and total generation values, one should consider simultaneously increasing TEB and incentive values to “high”. <|MaskedSetence|> However, one should bear in mind that the numerical results suggest that a “high” incentive value has a greater influence on VRE share and welfare increase if applied solely, compared to only increasing the TEB value.
In case GenCos possess a “high” GEB, exploring “low” values of all the input parameters leads to an increase of about 53.3%, 71.1% and 147.4% in VRE share, welfare and generation amount, respectively, when compared to the baseline results. Further increasing TEB and incentive to “high” values allows one to obtain the highest values for all the output factors. <|MaskedSetence|> Meanwhile, the results also suggest that if one were to solely increase either the TEB or the incentive value, then the choice should be made in favour of TEB in case the aim is to increase the total generation amount or the VRE share in the total generation mix. Alternatively, only increasing the incentive value would have a greater impact on welfare than solely increasing the TEB value when compared to the optimal welfare value for the case with all (but the GEB) parameters being at a “low” value.
**B**: Such an approach leads to an increase (compared to the baseline) of VRE share, welfare and total generation values, which is higher by about 14.3%, 8.1% and 5.5%, respectively, than in the case if all the input parameters are “low”.
**C**: At the same time, one can notice an increase by about 58.7% in total welfare and an increase by roughly 99.94% in total generation amount.
| CBA | CBA | CBA | CAB | Selection 3 |
A central result in the literature on strategy-proofness is the impossibility of Gibbard (1973) and Satterthwaite (1975), which states that any strategy-proof rule on the universal preference domain with more than two alternatives in its range is dictatorial. <|MaskedSetence|> <|MaskedSetence|> In our general approach, we combine these two preference domains to address the problem of locating a public facility in any subset of the real line. To be more specific, in our model some agents have single-peaked preferences, while others have single-dipped preferences. <|MaskedSetence|> In this setting, the set of admissible preferences for an agent with single-peaked (single-dipped) preferences is equal to the set of all single-peaked (single-dipped) preferences.
. | **A**: Therefore, to construct non-dictatorial social choice rules that induce truth-telling, one has to restrict either the range of the rules to two alternatives or the domain of admissible preferences.
**B**: Furthermore, the type of preference of each agent (single-peaked or single-dipped) is commonly known but the location of the peak/dip together with the rest of the preference is private information.
**C**: Since rules with a range of two alternatives are not Pareto efficient on the universal preference domain, the literature has focused on identifying situations in which the preference domain can be naturally restricted.
As we explain in more detail in our literature review, the domain of single-peaked and the domain of single-dipped preferences have received special attention when decision makers decide where to construct a new public facility.
| ACB | ACB | BCA | ACB | Selection 4 |
<|MaskedSetence|> In such cases, the first-step estimator can only identify and consistently estimate the impact of non-Gaussian shocks. <|MaskedSetence|> <|MaskedSetence|> In this case, the second term of the adaptive weights leads to an increase in the weights of the restrictions corresponding to Gaussian shocks. This ensures that only those restrictions where the data actually provide evidence against them receive low weights.
Specifically, the Gaussianity correction is constructed such that if two or more (or even all) shocks are Gaussian, the Gaussianity measure of restrictions corresponding to elements in the columns of the Gaussian shocks converges to zero and the weights of these restrictions go to infinity.
However, for the restrictions corresponding to non-Gaussian shocks, the non-Gaussianity measure remains finite and scales the weights based on the degree of non-Gaussianity.
**B**: Therefore, if there are more than two Gaussian shocks, the first-step estimator cannot provide evidence against restrictions on the impact of the Gaussian shocks.
**C**: The introduction of the second term, which adjusts the weights based on the Gaussianity of the shocks, is designed to ensure proper weights when multiple shocks are Gaussian.
| BCA | CAB | CAB | CAB | Selection 3 |
5.2 Privacy and Data
We now assess the impact of privacy regulation by considering policies that limit the firms’ access to the consumers’
information. <|MaskedSetence|> Under this policy, the platform in our model informs the firms about the consumer’s ranking of their products, without disclosing the consumer’s exact value for any specific product. See the complete Google proposal at https://privacysandbox.com/. <|MaskedSetence|> Voluntary information disclosure by the consumer is another important, though different, dimension. <|MaskedSetence|> | **A**: In this Section, we focus on exogenous restrictions on information disclosure.
**B**: Specifically, we consider cohort-based privacy, which is a restriction in line with the recent Google Privacy Sandbox proposals to replace
third-party cookies.
**C**: See Ali.
| BAC | BAC | BAC | BCA | Selection 3 |
<|MaskedSetence|> <|MaskedSetence|> Therefore, studies have looked at the technical processes that are involved in fertilizer production. For example, we know that about 90% of processed mined phosphate is used in a chemical wet process and mostly converted to phosphoric acid, out of which about 82% is used to make fertilizer. Considering further that 15% of P fertilizers are not made from phosphoric acid, we can approximate a lower bound of 80% of total P used in fertilizer \citep[see][]{herman_processing}. Other studies estimate a higher fraction of P fertilizer use; at the upper end of the spectrum, \citet{fao04} estimates that 90% of mined PR is used by the fertilizer industry. These differences can partly be explained by slightly varying approximations of the shares of fertilizer production processes, as well as by differences in the accounting for animal feed supplements (∼7%).
In any case, we note that in our analysis we make the simplifying assumption that the entire PR production (and no other source) is used as fertilizer. This means that the flow of P used for other purposes \citep[e.g. <|MaskedSetence|> We also implicitly assume that mining and use take place in the same year, and we neglect the effect of changes in stocks.
. | **A**: Hence, when one compares the figures of phosphate rock mining and P fertilizer use (and averages these over a few years) one can come to the conclusion that about 70% of mined phosphate rock (in the following abbreviated as PR) ends up as fertilizer.
**B**: via phosphorus compounds, see also][]{shinh} is handled as if it flows in the same way.
**C**: Considering losses in the production processes of fertilizer and tendential under-reporting in fertilizer use, we know that the actual share is in fact higher.
| BCA | ACB | ACB | ACB | Selection 2 |
Given a market, the question that arises is what are the rules that induce a matching game that allows us to implement stable matchings in Nash equilibrium. <|MaskedSetence|> We show that any stable rule implements, in Nash equilibrium, the individually rational matchings. Second, to implement stable matchings, we focus on another matching game. <|MaskedSetence|> <|MaskedSetence|> This situation leads us to consider a matching game in which the players are only the workers.
Furthermore, when firms’ choice functions satisfy, in addition to
substitutability, the “law of aggregate demand” (LAD, from now on). This property is first studied by Alkan (2002) under the name of “cardinal monotonicity”. See also Hatfield and
**B**: In these cases, individuals (students or workers) are expected to manipulate their preference lists to their advantage.
**C**: In some markets, such as school choice or labor markets, institutions are legally required to declare their true preferences (priorities or choice functions), i.e., their preferences are public.
| CBA | ACB | ACB | ACB | Selection 2 |
In the analysis, we use the following two definitions of homeownership. <|MaskedSetence|> With this comprehensive homeownership definition, about 70% of individuals in our sample are homeowners (vs 40% if we consider only individuals with a positive open mortgage amount). <|MaskedSetence|> In this second case, we define a mortgage origination as a situation in which either the number of open mortgage trades in year t is bigger than the number of open mortgage trades in year t−1 or the number of months since the most recent mortgage trade has been opened is lower than 12. Clearly, this definition would only capture the flow, and perhaps more importantly would miss cash purchases and wouldn’t distinguish between a new mortgage and a remortgage.
Table 1: Summary statistics of our main variables, 2010, balanced panel. Individuals who experienced a harsh default before or in the same year as a soft default in the sample period (i.e. <|MaskedSetence|> Special codes credit scores lower than 300 have been trimmed. Similarly, the top 1% of total credit limit, total balance on revolving trades and total revolving limit have been trimmed. | **A**: from 2004 onwards) have been dropped.
**B**: First, we consider an individual as a homeowner if either she ever had a mortgage or she is recorded as a homeowner according to Experian’s imputation.
**C**: Second, in an alternative definition of homeownership, we consider the origination of new mortgages.
| ACB | BCA | BCA | BCA | Selection 2 |
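The alternative homeownership definition in the row above is a simple data rule, so a small sketch may help make it concrete. The pandas snippet below is a hypothetical illustration of that rule (column names and values are invented, not taken from the credit bureau files used in the passage): a year counts as a mortgage origination if open mortgage trades increased relative to the previous year, or if the most recent mortgage trade was opened fewer than 12 months ago.

```python
# Illustrative origination flag per person-year, following the rule quoted above.
# Column names are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "person_id":                    [1, 1, 1, 2, 2],
    "year":                         [2009, 2010, 2011, 2010, 2011],
    "open_mortgage_trades":         [0, 1, 1, 2, 2],
    "months_since_recent_mortgage": [None, 3, 15, 40, 52],
})

df = df.sort_values(["person_id", "year"])
prev_open = df.groupby("person_id")["open_mortgage_trades"].shift(1)
df["origination"] = (
    (df["open_mortgage_trades"] > prev_open)            # more open mortgage trades than in t-1
    | (df["months_since_recent_mortgage"] < 12)          # most recent mortgage opened < 12 months ago
)
print(df[["person_id", "year", "origination"]])
```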
<|MaskedSetence|> <|MaskedSetence|> The design of dedicated algorithms for specific problems that exploit the combinatorial structure of the problem at hand is an interesting research direction. Efficient algorithms to compute the maximin distribution have been proposed, for example, by Li et al. (2014) for kidney exchange, or by García-Soriano and Bonchi (2020) when the optimal solutions form a matroid. <|MaskedSetence|> (2023).
Second, we suggest investigating the existence of distributions other than RSD that can be implemented in similar computation times to those of finding a single optimal solution, possibly inspired by the wide range of solution concepts in cooperative bargaining, or by introducing fairness considerations into the literature on symmetry breaking in integer programming.
**B**:
We identify three major directions for future research.
**C**: An interesting analysis of the combinatorial structure of fair distribution rules is the recent work by Hojny et al.
| BAC | BAC | CAB | BAC | Selection 1 |
While the standard search for Dragon Kings involves performing a linear fit of the tails of the distribution [pisarenko2012robust; janczura2012black], here we tried to broaden our analysis by also fitting the entire distribution using mGB (7) and GB2 (11) – the two members of the Generalized Beta family of distributions [liu2023rethinking; mcdonald1995generalization]. As explained in the paragraph that follows (7), the central feature of mGB is that, after exhibiting a long power-law dependence, it eventually terminates at a finite value of the variable. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The distribution of daily realized variance can be modeled using a duo of stochastic differential equations – for stock returns and stochastic volatility – which produces distributions of daily variance such as mGB [liu2023rethinking] and GB2 [dashti2021combined]. Via a simple change of variable, daily RV would then follow the same distributions but with renormalized parameters.
The key to understanding the results of fits in Sec.
**B**: At its core is the average of the consecutive daily realized variances (2).
**C**: 4 is the analysis of the structure of RV used by the markets – a square root of realized variance (1).
| ACB | ACB | ABC | ACB | Selection 1 |
<|MaskedSetence|> for different choices of “true” value distributions and pricing rules. Each table corresponds to a different “true” value distribution. Within each table, each row corresponds to a “true” pricing rule, and each column to the pricing rule used in the inference procedure—henceforth, the “hypothesized” pricing rule.
The main take-away is that the prediction error is smallest when the hypothesized pricing rule is the true one; it is also numerically small (see the diagonal in each table). <|MaskedSetence|> <|MaskedSetence|> | **A**: This is a useful consistency check for our learning approach.
.
**B**: Tables 9, 10 and 11 report the Mean Absolute Error (MAE); we compute this as the arithmetic average of the absolute difference between predicted and actual value for each quantile.
**C**: When there is a pricing mismatch, however, the error is larger.
| BCA | BCA | BCA | BCA | Selection 4 |
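For concreteness, here is a minimal sketch of the error metric described in the row above: the MAE is the arithmetic average of the absolute differences between predicted and actual values across quantiles. The quantile grid and values below are made up purely for illustration.

```python
# Hypothetical example of the quantile-level MAE described in the passage.
import numpy as np

actual_quantiles    = np.array([0.8, 1.1, 1.5, 2.0, 2.9])   # quantiles of the "true" value distribution
predicted_quantiles = np.array([0.7, 1.2, 1.4, 2.3, 2.8])   # quantiles implied by the hypothesized pricing rule

mae = np.mean(np.abs(predicted_quantiles - actual_quantiles))
print(f"MAE = {mae:.3f}")
```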
Some other papers consider specific forms of externalities. Akcigit & Liu (2016) considers a model with two research lines that are monopolizable, but only one line is risky and can bring bad news. <|MaskedSetence|> In contrast, this paper considers just one research line but with arbitrary payoffs (allowing for imperfectly monopolizable research) and focuses on ex ante contracts to share rewards. In another paper, Thomas (2021) studies a problem where the safe options are rival; that is, only one agent can take the safe option. <|MaskedSetence|> (2007), Bonatti & Hörner (2011), Rosenberg et al. (2013)). Bonatti & Hörner (2011) considers an equal payoff sharing environment with unobserved actions; in this paper, I show that the efficient contract that redistributes payoffs between winner and losers still implements efficiency even when the actions are unobserved as in Bonatti & Hörner (2011). There are a number of other papers that focus on correlation of the bandit state (Klein & Rady (2011), Rosenberg et al. <|MaskedSetence|> (2022)). The insights in this paper allow for generalization to asymmetries in the amount of research resource available; this complements other papers that have considered asymmetry in the quality of research between players (Das et al. (2020)) and in the informational content available to players (Dong (2018)).
. | **A**: In contrast, this paper assumes that externalities only arise after a breakthrough, rather than from agents competing on the safe arm.
The results that focus on contractible information relate to strategic experimentation papers that consider the role of the observability of breakthroughs, payoffs, and actions (Rosenberg et al.
**B**: Their paper is focused on welfare implications of hiding bad news.
**C**: (2013)), bad news (Keller & Rady (2015)) and Lévy process bandits (Hörner et al.
| BAC | BAC | BAC | ABC | Selection 1 |
There are in total three treatments. Treatment 1 ($T_1$) was designed to test Hypothesis 1. In that treatment, participants received a sample of past decisions of length equal to two (they would receive information on what the combined contribution of the immediate 2 predecessors was[13] In line with the theoretical model, the subject could only observe whether the combined contribution of the two previous players was equal to 0, 10 or 20. That is, by observing a sample of length 2 with total contribution 10, the subject could infer that one of the previous two players defected, but had no way to tell which one.) and also, participants were not informed of their position in the sequence unless they were Player 1 or Player 2[14] Subjects at positions 1 and 2 could infer their position, given their observed sample was of length 0 and 1 respectively.. To rigorously test the predictions of the model, we need information on what subjects’ choices are, in all potential samples they can observe (i.e. the combined contribution they observe). In particular, the model predicts that if a player receives a full contribution sample (i.e. the combined contribution of the two previous players is equal to 20), she will contribute herself, otherwise, if the contribution is 0 or 10, she will defect. We elicit behaviour using the Selten (1967) strategy response method and collect the participants’ choices for each possible scenario. This means participants were asked to make conditional decisions for each possible information set depending on their randomly assigned position in the sequence. While this can be seen as a potential limitation of our design, this elicitation method was necessary to allow us to elicit the full strategy profile of the participants, and therefore, to be able to test the theoretical predictions of the model.[15] The strategy method has been extensively used in the framework of public goods games (see Bardsley 2000; Fischbacher et al. 2001; Kocher et al. 2008; Herrmann and Thöni 2009; Fischbacher and Gächter 2010; Teyssier 2012; Martinsson et al. <|MaskedSetence|> Figure 2 presents a screenshot from the experimental interface, of what a player in position 3 or 4 could see. In that case, the only information the subject has is that she is positioned in one of the two last positions of the sequence and is asked about her contribution choice, conditional on the potential total contribution of the two immediate players before her (either of players 1 and 2, if she is player 3, or players 2 and 3, if she is player 4) but she cannot distinguish her position. <|MaskedSetence|> The available options for players at positions 1 and 2 were adapted accordingly.
Participants were incentivised to reveal their preferences truthfully, as their payment depended on what they and the other members of the group had chosen in that round. <|MaskedSetence|> For instance, if the player at position 1 chose not to invest in the common project, the experimental software recalled the decision of the player in position 2, for that specific scenario and so forth, returning the total contribution, the total returns, and the payoffs for each participant in the group.
. | **A**: 2013; Katuščák and Miklánek 2023), while previous studies found no statistical differences in subjects’ responses between the strategy and the direct response method (see Brandts and Charness 2000; Brandts and Charness 2011, or Keser and Kliemt 2021 for a discussion).
**B**: In particular, at the end of the experiment (and if this round had been selected for actual play), the computer recalled and matched all participants’ decisions in that round and calculated the payoffs based on the scenario that truly transpired.
**C**: Therefore, the subject does not know whether there is another player following in the sequence.
| ACB | ACB | ACB | ACB | Selection 1 |
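The excerpt above predicts that a player who observes a combined contribution of 20 from her two immediate predecessors contributes, and defects after observing 0 or 10, with payoffs resolved ex post from the conditional decisions collected via the strategy method. A rough sketch of that logic for a four-player sequence; the marginal per-capita return `MPCR` is an assumed parameter, since the excerpt does not state the return rate:

```python
ENDOWMENT = 10   # from the excerpt: each player either contributes 0 or 10
MPCR = 0.4       # assumed marginal per-capita return; not stated in the excerpt

def threshold_strategy(observed_sum: int) -> int:
    """Model's prediction in T1: contribute only after observing a full sample (20)."""
    return ENDOWMENT if observed_sum == 20 else 0

def play_sequence(first_two, strategy):
    """Resolve a 4-player sequence under the strategy method.

    `first_two` fixes the unconditional choices of players 1 and 2; players 3
    and 4 respond to the combined contribution of their two immediate predecessors.
    """
    contributions = list(first_two)
    for _ in range(2):                       # players 3 and 4
        observed = contributions[-2] + contributions[-1]
        contributions.append(strategy(observed))
    total = sum(contributions)
    # Payoff: kept endowment plus an equal per-capita return on the public account.
    return [ENDOWMENT - c + MPCR * total for c in contributions]

print(play_sequence([10, 10], threshold_strategy))  # full cooperation upstream
print(play_sequence([10, 0], threshold_strategy))   # one early defection unravels giving
```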
Our work is closely related to recent work on delegation in financial decision making. Apesteguia et al. <|MaskedSetence|> They show that a substantial fraction of investors does so by either directly copying previously successful investors by the click of a button or manually implementing investment strategies which are similar to those of the most successful peers. Since success is mainly driven by luck and since investors who previously took on a lot of risk appear on top of the earning rankings, Apesteguia et al. (2020) find that copy trading may lead to a substantial increase in risk taking.
The present study extends the design of Apesteguia et al. <|MaskedSetence|> When our investors do not have access to information on experts’ decision quality, we confirm that a substantial fraction of subjects chooses to delegate to experts with previously high earnings.[1] Other studies besides Apesteguia et al. (2020) finding an important role for previous earnings in the choice of “experts” include Huck et al. <|MaskedSetence|> (2002), Apesteguia et al. (2010) and Huber et al. (2010).. | **A**: (2020), like the present paper, report an experiment where investors may decide to delegate financial decisions to their peers.
**B**: (2020) by varying the complexity of the underlying task and the information investors receive about the experts.
**C**: (1999), Offerman et al.
| ACB | ABC | ABC | ABC | Selection 4 |
In our third application, we study treatment effects with an instrument that fails the exclusion restriction. When the instrument is allowed to affect the outcome directly, we consider estimation of a generic weighted average of local average treatment effects (LATEs) across instrument values. In the continuous outcome case, we contribute a sensitivity analysis for a measure of exclusion failure that is unit-free.
The proposed bounds are simple and tractable, but can be wider than the bounds implied by the original model.
We illustrate the value of our approach by simulation. <|MaskedSetence|> $c=0$ corresponds to unconfoundedness; in our example, the bounds discontinuously become unbounded as $c$ crosses $0.1$. <|MaskedSetence|> <|MaskedSetence|> | **A**: We find that a plug-in bootstrap approach.
**B**: We study estimation of APO bounds under Masten and Poirier’s conditional c-dependence model, which restricts the difference between observed and true propensities to be at most $c$.
**C**: We implement a simple plug-in estimator and percentile bootstrap that leverage a fixed grid of quantile estimates across bootstraps and values of $c$.
| CBA | BCA | BCA | BCA | Selection 2 |
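The candidate sentences in the row above mention a simple plug-in estimator and a percentile bootstrap evaluated on a fixed grid of values of the sensitivity parameter c. The sketch below shows only that generic scaffolding; `apo_bounds` is a hypothetical placeholder, not the Masten and Poirier bound formula, which the excerpt does not reproduce:

```python
import numpy as np

def apo_bounds(y, d, c):
    """Placeholder plug-in bounds for the average potential outcome.

    This is NOT the Masten-Poirier formula; it simply widens the naive
    treated-arm mean in c to illustrate the interface: larger c, wider bounds.
    """
    naive = y[d == 1].mean()
    spread = c * (y.max() - y.min())
    return naive - spread, naive + spread

def percentile_bootstrap(y, d, c_grid, n_boot=999, alpha=0.05, seed=0):
    """Percentile bootstrap of lower/upper bounds on a fixed grid of c values."""
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = np.empty((n_boot, len(c_grid), 2))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample with replacement
        for j, c in enumerate(c_grid):
            draws[b, j] = apo_bounds(y[idx], d[idx], c)
    lower_ci = np.quantile(draws[:, :, 0], alpha / 2, axis=0)
    upper_ci = np.quantile(draws[:, :, 1], 1 - alpha / 2, axis=0)
    return lower_ci, upper_ci

# Toy data, illustration only.
rng = np.random.default_rng(1)
d = rng.integers(0, 2, size=500)
y = d * 1.0 + rng.normal(size=500)
print(percentile_bootstrap(y, d, c_grid=np.array([0.0, 0.05, 0.1])))
```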
If agents are cognitively limited, then NOM is sufficient to describe their strategic behavior. Therefore, the question arises to what extent NOM rules enrich the landscape of strategy-proof rules. We will focus on own-peak-only rules. <|MaskedSetence|> <|MaskedSetence|> Furthermore, the own-peak-only property follows from efficiency and strategy-proofness (see Sprumont, 1991; Ching, 1994). <|MaskedSetence|> | **A**: Because of their simplicity, own-peak-only rules are important rules in their own right and are both useful in practice and extensively studied in the literature.
**B**: However, since we do not impose strategy-proofness, we explicitly invoke it here.
.
**C**: This means that the sole information collected by these rules from an agent’s preference to determine his allotment is his peak amount.
| CAB | ACB | CAB | CAB | Selection 3 |
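As background for the own-peak-only property discussed above: the classic example in the division problem of Sprumont (1991), cited in the row, is the uniform rule, which uses nothing but each agent's peak. The sketch below is offered purely as an illustration of that textbook rule, not of anything specific to the excerpt's NOM analysis:

```python
from typing import List

def uniform_rule(peaks: List[float], endowment: float, tol: float = 1e-9) -> List[float]:
    """Sprumont's uniform rule: an own-peak-only rule for dividing `endowment`.

    Only each agent's peak amount is used, which is exactly the own-peak-only
    property discussed above.
    """
    def allot(lam: float) -> List[float]:
        if sum(peaks) >= endowment:              # excess demand: cap agents at lambda
            return [min(p, lam) for p in peaks]
        return [max(p, lam) for p in peaks]      # excess supply: top agents up to lambda

    lo, hi = 0.0, max(max(peaks), endowment)
    while hi - lo > tol:                         # bisection on lambda
        mid = (lo + hi) / 2
        if sum(allot(mid)) < endowment:
            lo = mid
        else:
            hi = mid
    return allot((lo + hi) / 2)

# Excess demand: peaks sum to 16 > 10; agent 1 keeps her peak, the rest are rationed.
print(uniform_rule([2, 6, 8], endowment=10))     # approximately [2.0, 4.0, 4.0]
```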