context (string, 100-2.36k chars) | A (string, 109-3.73k chars) | B (string, 105-2.49k chars) | C (string, 102-3.27k chars) | D (string, 106-1.99k chars) | label (4 classes) |
---|---|---|---|---|---|
It is relatively easy to check that $\mathcal{A}_{\varrho^{K}}$ is a convex set if $\varrho^{K}$ satisfies the convexity property. $\mathcal{A}^{0}_{\varrho^{K}}$ can be considered as the positive polar cone of $\mathcal{A}_{\varrho^{K}}$. | Now, the dual representation of $p(\cdot)$-convex risk measures is provided and which will be used in the proof of $p(\cdot)$-dynamic risk measures in Sect. 5. | A special example of $p(\cdot)$-convex risk measures which is so called OCE, is discussed in the next section. Finally, in Sect. 5, the $p(\cdot)$-convex risk measures are used to study the dual representation of the $p(\cdot)$-dynamic risk measures. | In this section, a special class of $p(\cdot)$-convex risk measures that is the Optimized Certainty Equivalent (OCE) is studied and it will be used as an example of dynamic risk measures in Sect. 5. | Now, with the definition and dual representation, we consider the time consistency of dynamic $p(\cdot)$-convex risk measures. | A |
These asymptotic results were based on the idea that an underlying process $(S_{t})_{t\geq 0}$ under the local volatility model can be approximated by some suitable Gaussian processes in the $L^{p}(\mathbb{Q})$ norm. To implement this main idea in the approximation, we used Malliavin calculus theory to represent the delta of the Asian option. In addition, we used the large deviation principle to investigate an asymptotic for the Asian call and put option. | For comparison with the Asian option, we examined the short-maturity behavior of the European option. In contrast to the Asian volatility, we proved that at short maturity $T$, the European option is expressed by the European volatility. In terms of these volatilities, we observed the resemblance between Asian and European options at short maturity. | This paper described the short-maturity asymptotic analysis of the Asian option having an arbitrary Hölder continuous payoff in the local volatility model. We were mainly interested in the Asian option price and the Asian option delta value. The short-maturity behaviors of the option price and the delta value were both expressed in terms of the Asian volatility, which was defined by | which we refer to as the European volatility, in the formulas for the Asian option prices and delta values. With regard to $\sigma_{A}(T)$ and $\sigma_{E}(T)$, we compare the Asian option with the European option in Section 7. In addition to the European option, the geometric average Asian option having the terminal payoff | In Section 4, short-maturity asymptotic formulas for the delta values will be presented in terms of the Asian volatility and the European volatility. | A |
$Var(\tilde{\epsilon}(t))=Var(2\epsilon(t)).$ | Joint estimation and testing for the model in eq. 30 can be performed by using the techniques presented in [1]. Let us assume we have run a computer experiment composed of $N$ runs, for which we observe $\delta_{n}y(t),t\in T,n=1,\ldots,N$. We also need the additional condition that $X_{f_{x}}\subset\pazocal{C}^{0}[T]$. We assume the error terms $\tilde{\epsilon}_{n}(t),t\in T$ to have finite total variance, | To estimate the FCSI from data, let us define a generic contrast between two model runs as $\delta y(t)$. We can so write, using Eq. 26: | For each $t\in T$, the generic $\Delta f_{t,\ldots}$ is a scalar value. We can so define | Building from [4] and [28], let us define the input-output (I/O) relationship of a given simulation model with a real response varying over an interval as | B |
Under $NRA(\Theta)$, the market is complete if and only if $|\mathcal{Q}|=1$. Furthermore, if $NA(\{\theta\})$ holds for some $\theta\in\Theta$, then $\Theta=\{\theta\}$. | The robust pricing systems are used to compute the robust superhedging price of $\mathbf{f}$ as follows | the set of robust pricing systems for the model $\theta$ and by $\mathcal{Q}=\bigcup_{\theta\in\Theta}\mathcal{Q}^{\theta}$ the set of all robust pricing systems. | Usually, information from the market data, for example available option prices, are used in calibration to select models that fit real data. We show in this section that this fitting procedure helps to reduce the set of robust pricing systems, and hence superhedging prices. | two prices are equal. In [8], the authors explained the two conflicting results by showing the differences in the information used by the hedger and Nature: in [6] the hedger and Nature have the same information while Nature has access to more information in [40]. Another explanation was given in [39], where the authors argued that [6] did not consider enough models. [8] introduced randomized models which are relevant for computing the | C |
Now consider any $\varepsilon$-neighborhood of the set of Bayes-plausible distributions supported on stationary beliefs, call it $(\Phi^{S})^{\varepsilon}$. If an agent’s distribution of beliefs is in $(\Phi^{S})^{\varepsilon}$, then her ex-ante expected utility is at least close to $u_{*}$, as $u(\varphi)\geq u_{*}$ on $\Phi^{S}$ and $u(\varphi)$ is uniformly continuous. On the other hand, if the distribution is not in $(\Phi^{S})^{\varepsilon}$, then there is some strictly positive minimum utility improvement that the agent obtains (as the complement of $(\Phi^{S})^{\varepsilon}$ is closed, hence compact, and $I(\varphi)$ is continuous). | We can then apply an improvement principle. The idea is as follows, where we consider deterministic networks for simplicity. | Sadler (2015) introduce a notion of “information diffusion” and use the improvement principle to establish information diffusion even when learning fails. | Consider deterministic networks and assume that society can be covered by finitely many subsequences such that in each subsequence agent $n_{k}$ observes $n_{k-1}$. Then, denoting agent $n$’s (random) social belief by $\mu_{n}$, it holds that for all $\varepsilon>0$, $\lim_{n\to\infty}\Pr(\mu_{n}\in S^{\varepsilon})=1$, where $S^{\varepsilon}$ denotes the $\varepsilon$-neighborhood of the set of stationary beliefs. See Proposition SA.1 in Supplementary Appendix SA.1. We note that this result applies, in particular, to the immediate-predecessor network and the complete network. The latter is special because the social belief is then a martingale, which is assured to converge almost surely by the martingale convergence theorem. For this case, Arieli and | Sørensen (2020) consider “unordered” random sampling models that also only allow for binary states and actions. We believe ours is the first paper to consider the canonical sequential social learning problem with general observational networks and general state and action spaces.
At a methodological level, we develop a novel analysis based on continuity and compactness—rather than monotonicity or other properties that are specific to binary states or actions—that uncovers the fundamental logic underlying a general improvement principle. | A |
$r_{g}^{t}(X;G)=1\left\{\frac{X_{g}}{\sqrt{\Sigma_{g,g}}}>\Phi^{-1}\left(1-\frac{C(G)}{G}\right)\right\},\quad\forall g\in\{1,\dots,G\},$ | A natural approach for choosing optimal hypothesis testing protocols would be to set $\lambda=0$ and choose maximin protocols that uniformly dominate all other maximin protocols, as Tetenov (2016) does in the $J=1$ case. This corresponds to looking for uniformly most powerful (UMP) tests in the terminology of classical hypothesis testing. Unfortunately, this approach is not applicable in our context. The following proposition shows that no maximin protocol dominates all other maximin protocols when there are multiple hypotheses. | As an alternative to the approach above, one could examine a worst-case approach with respect to the identity of the implementing policymaker. This model leads to very conservative hypothesis testing protocols. When the researcher’s payoff is linear, threshold crossing protocols such as $t$-tests are not maximin optimal for any (finite) critical values. When the researcher’s payoff takes a threshold form, a hypothesis testing protocol is maximin optimal if and only if it controls the probability of at least $\kappa$ discoveries under the null. This criterion is stronger than and implies $\kappa$-FWER control. We refer to Appendix C.6 for a formal analysis of this worst-case approach. | et al. (2002, Table 1) tested for effects in more than one subgroup. In economics, 27 of 124 field experiments published in “top-5” journals between 2007 and 2017 feature factorial designs with more than one treatment (Muralidharan et al., 2020).—these assumptions rationalize separate hypothesis tests based on threshold-crossing protocols. We show that if the planner assigns any positive weight to the production of knowledge per se, the class of optimal testing protocols is the class of unbiased maximin testing protocols. Maximin optimality is closely connected to size control, while unbiasedness requires the power of protocols to exceed their size, where size and power are defined in terms of economic fundamentals in our model. The proposed notion of optimality is a refinement of maximin optimality, which rules out underpowered tests. We then prove that the separate $t$-tests, which are ubiquitous in applied work, are maximin optimal and unbiased for a unique choice of critical values, and we provide an explicit characterization of the optimal critical values in terms of the researcher’s costs and the number of hypotheses. | Which (if any) hypothesis testing protocols are globally (and maximin) optimal? The answer to this question depends on the functional form of the researcher’s payoff function, the functional form of welfare, and the distribution of $X$. In this section, we consider settings with a linear researcher payoff (Assumption 1) and a linear welfare (Assumptions 2). Motivated by asymptotic approximations, we focus on the leading case where $X$ is normally distributed. | B |
Matsuyama (1992) constructs a model of the Industrial Revolution that incorporates the above two characteristics. His model interprets that the Industrial Revolution was caused by an increase in agricultural (labor) productivity, which reallocated labor from agriculture to manufacturing and facilitated learning-by-doing in the manufacturing sector. | However, his model does not explain why the Great Divergence occurred, that is, why agricultural (labor) productivity increased in Britain, but not in China. | To understand why the Industrial Revolution occurred in Britain in the 18th century, researchers need to identify factors that caused what he refers to as the Great Divergence–the divergence in economic growth between Europe and China since the 19th century. These factors should have been present in Britain but absent in China. | Interestingly, even if the agricultural productivity level in China is higher than that in Britain, it does not contribute to escaping the stagnant Malthusian state in our model. | argues that the relief of land constraints is the reason why the Industrial Revolution did not occur in China but in Britain. | A |
Contribution decisions and link decisions in isolation may not capture the true dynamic of behavior. This is because both variables together determine the cost of sharing—the marginal return on a player’s contribution in this purely congestive game decreases as they share with more other players. For each observed action, we compute the direct cost to the sharing individual as $(1-m_{i}(A_{i}))c_{i}$. The dynamics of these individual costs are illustrated in Figure 1(d). Due to the decreasing trend in both contributions and linking, costs follow a downward trend. Treatment sessions see a large fixed jump in individual costs, and like contributions, the differential effect is not significantly different from zero, indicating that costs continue to decrease at the same rate as they did before the intervention, albeit from a significantly higher starting point. | Individual costs of sharing are significantly higher in the treatment condition than in the baseline, although they exhibit a similar downward trend over time in both conditions. | Reciprocity is significantly higher in the treatment condition than in the baseline condition. Moreover, it increases over time in the treatment, while it decreases over time in the baseline. | Contributions and average degree (number of links) are both significantly higher in the treatment condition than in the baseline condition. | Contribution decisions and link decisions in isolation may not capture the true dynamic of behavior. This is because both variables together determine the cost of sharing—the marginal return on a player’s contribution in this purely congestive game decreases as they share with more other players. For each observed action, we compute the direct cost to the sharing individual as $(1-m_{i}(A_{i}))c_{i}$. The dynamics of these individual costs are illustrated in Figure 1(d). Due to the decreasing trend in both contributions and linking, costs follow a downward trend. Treatment sessions see a large fixed jump in individual costs, and like contributions, the differential effect is not significantly different from zero, indicating that costs continue to decrease at the same rate as they did before the intervention, albeit from a significantly higher starting point. | A |
$(J^{\mathfrak{p}}_{t,\infty})_{t\in\mathbb{N}}\in\ell^{\infty}(\mathbb{N};\mathbb{X},\mathcal{B}(\mathbb{X}))$. It follows from (32) that $(J^{\mathfrak{p}}_{t,\infty})_{t\in\mathbb{N}}$ is a fixed point of $\mathfrak{H}^{\mathfrak{p}}$. | Here, we focus on the risk-averse MDPs framework proposed in [55]. While a plethora of studies exist on conditional risk measures (CRMs) and dynamic risk measures (DRMs), the problem of risk-averse MDPs and their corresponding DPP cannot be straightforwardly inferred from the established properties of these risk measures. Broadly speaking, DPPs transform a sequential optimization problem into the task of solving an operator Bellman equation. In the context of risk-averse MDPs, this demands separate research, as in general, a readily solvable DPP in terms of operator equations requires additional conditions beyond properties of CRMs and DRMs. This is particularly true when considering the attainability of optimal actions. Below, we offer a concise review of the established DPPs that align with the general framework initiated by [55]. [55] considers deterministic costs, introduces the notion of risk transition mappings, and uses them to construct, in a recursive manner, a class of (discounted) DRMs. The author proceeds to derive both finite and infinite (with bounded costs) time horizon DPPs for such DRMs. We also refer to [56] for the assumptions needed. [58] extends the infinite horizon DPP to unbounded costs as well as for average DRMs. The risk transition mappings involved are assumed to exhibit an analogue of a strong Feller property. [23] studies a similar infinite horizon DPP with unbounded costs under a different set of assumptions. Recently, [6] considers unbounded latent costs and establishes the corresponding finite and infinite horizon DPPs. They also prove sufficiency of Markovian actions against history dependent actions. They construct DRMs, for finite time horizon problems, from iterations of static risk measures that are Fatou and law invariant. The infinite horizon problems require in addition the coherency property. They also require the underlying MDP to exhibit a certain path-wise continuous/semi-continuous transition mechanism. [24, 25] develops a computational approach for optimization with dynamic convex risk measures using deep learning techniques. Finally, it is noteworthy that the concept of risk form is introduced in [28] and is applied to handle two-stage MDP with partial information and decision-dependent observation distribution. | We are now in position to present the DPP for the infinite horizon optimization in (P). We recall the definition of $S_{t}$ from (27). | To establish the infinite horizon DPP, we define the infinite horizon version of the optimal value function as | To establish an infinite horizon DPP for (P), we first study the value functions associated with a Markovian policy $\mathfrak{p}$.
Recall the definitions of $G_{t},H^{\mathfrak{p}}_{t}$ and $J^{*}_{t,T}$ from (19), (21), and (28), respectively. We justify the definition of | C |
This paper attempts to support inventory management decisions of a real-world e-grocery retailer. While the more general characterisation of e-grocery retailing in Section 2.3 also holds for the company under consideration, we will introduce its specific requirements and characteristics in this section. | On the other hand, while there are opportunities resulting from the control the retailer exerts over the fulfilment process, picking and delivery increase the time between the instance a replenishment order for an SKU is placed and the final availability to the customer. This longer delivery time reduces the forecasting accuracy of crucial variables, such as the demand distribution for the period under consideration. In particular, features used for the forecast on this distribution, such as the known demand are less informative more days in advance. Using data from the e-grocery retailer under consideration in this paper, Figure 1 displays the mean average percentage forecast error as a function of the lead time when applying a linear regression for all SKUs within the categories fruits and vegetables in the demand period January 2019 to December 2019. We observe that the mean average percentage error strongly increases for longer lead times, thus implying a decrease in the forecast precision. | The assortment of the retailer covers several thousand SKUs from different areas such as fruits, vegetables and meat. The outward distribution process covers two phases. First, at each day there is a supply from national distribution warehouses to local fulfilment centres. In the following, we refer to these warehouses as the supplier. If a customer places a purchase, then this order is served by the dedicated fulfilment centre. This requires the retailer to adequately manage, on a daily basis, the inventory process for a variety of SKUs and several fulfilment centres. Our modelling approach takes the perspective of a single fulfilment centre, where we assume an unlimited storage capacity. As the stowing and picking processes are controlled by the retailer, we suppose that units of a SKU are picked and delivered to the customer according to the First In – First Out (FIFO) principle, that is, the oldest SKUs are sold first. | As our demand data is uncensored, it does not depend on the inventory level (see the characteristics on e-grocery retailing in Section 2.3). Therefore, we are able to evaluate the lookahead policy according to the true demand which in particular is not limited by the demand fulfilled under the policy of the retailer. However, as our replenishment order quantity for a given period may deviate from the quantity actually ordered by the retailer for the corresponding day, we make use of the information on the relative amount of incompletely supplied replenishment order quantities in the data of the retailer. We transfer this information to our order quantities, i.e. if there was full supply (or full shortage), we also assume full supply (or full shortage) for a different quantity. The number of units deteriorating depends on the composition of the inventory with corresponding date of supply. Since the inventory in our model again deviates from the inventory given by the retailer’s data, we use simulations to determine which number of units would have been deteriorated if the retailer had followed the policy. 
Specifically, as introduced above, we assume spoilage to follow a binomial distribution with the probability parameter estimated from historical data and the number of units with a given supply date. To determine the amount of spoilage in the evaluation, we make use of the underlying probability distribution which is used as the forecast in the lookahead policy for the following month. (Footnote 8: As an example, we calculate a CDF of shelf life for a SKU based on spoilage between February and July, and use this distribution to (1) calculate spoilage embedded in the lookahead policy of the replenishment order decision for August and (2) evaluate resulting spoilage in July.) For each demand period and supply date we generate a random number from a uniform distribution on (0,1). Applying this value to the inverse CDF of the shelf life gives the number of units deteriorated. Using the same random number in the evaluation of both approaches ensures that a larger inventory on a given day with identical supply date leads to a larger number of units deteriorated and vice versa. | We evaluate four SKUs within six fulfilment centres. Due to missing data for the SKU lettuce in two fulfilment centres, we are able to evaluate 22 SKU/fulfilment centre combinations in total. Table 6 illustrates relative changes in the resulting average costs, i.e. relative savings, when using our lookahead policy instead of the benchmark approach. Overall, we find substantial cost reductions of 6.2% to 23.7% for all four SKUs. As our data set allows us to evaluate only six months of data, results vary considerably across the different SKU/fulfilment centre combinations, and for 4 out of the 24 combinations we do in fact see an increase in costs. In particular, for the combinations where we obtain substantially higher costs under the lookahead policy (grapes and lettuce in fulfilment centre 4), we find that realised demand is considerably lower than the forecast. This underlines the importance of a high quality of probability distribution estimation before the integration in an optimisation framework. It should also be noted that the cost parameters used in the lookahead policy may differ from the cost structure implicitly embedded in the benchmark policy. However, our results in the sensitivity analysis (cf. the Supplementary Material) show that using probabilistic information is superior across different values of the cost parameter for lost sales $b$. | B |
Low (24): Benin, Cameroon, Congo, Dem. Rep., Cote d’Ivoire, Ethiopia, Gambia, Ghana, Kenya, Mozambique, Niger, Nigeria, Senegal, Sudan, Tanzania, Togo, Zambia, Bangladesh, Cambodia, Myanmar, Nepal, Pakistan, Sri Lanka, Tajikistan, Haiti. | Medium (26): Congo, Egypt, Eswatini, Lesotho, Morocco, Yemen, Zimbabwe, Armenia, Fiji, India, Indonesia, Kyrgyzstan, Philippines, Vietnam, Albania, Belize, Bolivia, Brazil, Colombia, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, Paraguay, Peru. | Latin America and the Caribbean (20): Argentina, Belize, Bolivia, Brazil, Colombia, Costa Rica, Dominican Republic, Ecuador, El Salvador, Guatemala, Guyana, Haiti, Honduras, Jamaica, Mexico, Nicaragua, Panama, Paraguay, Peru, Venezuela. | Medium (26): Congo, Egypt, Eswatini, Lesotho, Morocco, Yemen, Zimbabwe, Armenia, Fiji, India, Indonesia, Kyrgyzstan, Philippines, Vietnam, Albania, Belize, Bolivia, Brazil, Colombia, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, Paraguay, Peru. | Latin America and the Caribbean (20): Argentina, Belize, Bolivia, Brazil, Colombia, Costa Rica, Dominican Republic, Ecuador, El Salvador, Guatemala, Guyana, Haiti, Honduras, Jamaica, Mexico, Nicaragua, Panama, Paraguay, Peru, Venezuela. | A |
The numerical computation of the signature is performed using the iisignature Python package (version 0.24) of Reizenstein and Graham (2020). The signatory Python package of Kidger and Lyons (2020) could also be used for faster computations. Because the signature is an infinite object, we compute in practice only the truncated signature up to some specified order $R$. The influence of the truncation order on the statistical power of the test will be discussed in Section 3.2. Note that we will focus on truncation orders below 8 as there is not much information beyond this order given that we work with a limited number of observations of each path which implies that the approximation of high order iterated integrals will rely on very few points. | The present article is organized as follows: as a preliminary, we introduce in Section 2 the Maximum Mean Distance and the signature before describing the statistical test proposed by Chevyrev and Oberhauser (2022). This test is based on these two notions and allows to assess whether two stochastic processes have the same law using finite numbers of their sample paths. Then in Section 3, we study this test from a numerical point of view. We start by studying its power using synthetic data in settings that are realistic in view of insurance applications and then, we apply it to real historical data. We also discuss several challenges related to the numerical implementation of this approach, and highlight its domain of validity in terms of the distance between models and the volume of data at hand. | We propose a new approach for the validation of real-world economic scenarios motivated by insurance applications. This approach relies on the formulation of the problem of validating real-world economic scenarios as a two-sample hypothesis testing problem where the first sample consists of historical paths, the second sample consists of simulated paths of a given real-world stochastic model and the null hypothesis is that the two samples come from the same distribution. For this purpose, we use the statistical test developed by Chevyrev and Oberhauser (2022) which precisely allows to check whether two samples of stochastic processes paths come from the same distribution. It relies on the notions of signature and maximum mean distance which are presented in this article. Our contribution is to study this test from a numerical point of view in settings that are relevant for applications. More specifically, we start by measuring the statistical power of the test on synthetic data under two practical constraints: first, the marginal one-year distributions of the compared samples are equal or very close so that point-in-time validation methods are unable to distinguish the two samples and second, one sample is assumed to be of small size (below 50) while the other is of larger size (1000). To this end, we apply the test to three stochastic processes in continuous time, namely the fractional Brownian motion (fBm), the Black-Scholes dynamics (BSd) and the rough Heston model, and two time series models, namely a regime-switching $AR(1)$ process and a random walk with i.i.d. Gamma increments. The test is also applied to a two-dimensional process combining the price in the rough Heston model and a regime-switching $AR(1)$ process. The numerical experiments have highlighted the need to configure the test specifically for each stochastic process to achieve a good statistical power.
In particular, the path representation (original paths, log-paths, realized volatility or log-returns), the path transformation (lead-lag, time lead-lag or cumulative lead-lag), the truncation order, the signature type (signature or log-signature) and the rescaling are key ingredients to be adjusted for each model. For example, the test achieves statistical powers that are close to one in the following settings which illustrate three different risk factors (stock volatility, stock price and inflation respectively): | Our contribution is to study more deeply this statistical test from a numerical point of view on a variety of stochastic models and to show its practical interest for the validation of real-world stochastic scenarios when this validation is specified as an hypothesis testing problem. First, we present a numerical analysis with synthetic data in order to measure the statistical power of the test and second, we work with historical data to study the ability of the test to discriminate between several models in practice. Moreover, two constraints are considered in the numerical experiments. The first one is to impose that the distributions of the annual increments are the same in the two compared samples, which is natural since, without this constraint, point-in-time validation methods could distinguish the two samples. Secondly, in order to mimic the operational process of real-world scenarios validation in insurance, we consider samples of different sizes: the first sample consisting of synthetic or real historical paths is of small size (typically below 50) while the second sample consisting of the simulated scenarios is of greater size (typically around 1000). Our aim is to demonstrate the high statistical power of the test under these constraints. Numerical results are presented for three financial risk drivers, namely of a stock price, stock volatility as well as inflation. As for the stock price, we first generate two samples of paths under the widespread Black-Scholes dynamics, each sample corresponding to a specific choice of the (deterministic) volatility structure. We also simulate two samples of paths under both the classic and rough Heston models. As for the stock volatility, the two samples are generated using fractional Brownian motions with different Hurst parameters. Note that the exponential of a fractional Brownian motion, as model for volatility, has been proposed by Gatheral et al. (2018) who showed that such a model is consistent with historical data. For the inflation, one sample is generated using a regime-switching $AR(1)$ process and the other sample is generated using a random walk with i.i.d. Gamma noises. Finally, we compare two samples of two-dimensional processes with either independent or correlated coordinates when the first coordinate evolves according to the price in the rough Heston model and the second coordinate evolves according to an $AR(1)$ regime-switching process. Besides these numerical results on simulated paths, we also provide numerical results on real historical data. More specifically, we test historical paths of S&P 500 realized volatility (used as a proxy of spot volatility) against sample paths from a standard Ornstein-Uhlenbeck model on the one hand and against sample paths from a fractional Ornstein-Uhlenbeck model on the other hand. We show that the test allows to reject the former model while the latter is not rejected.
Similarly, it allows to reject a random walk model with i.i.d. Gamma noises when applied to US inflation data while a regime-switching $AR(1)$ process is not rejected. A summary of the studied risk factors and associated models is provided in Table 1. | In this subsection, we apply the signature-based validation on simulated data, i.e. the two samples of stochastic processes are numerically simulated. Keeping in mind insurance applications, the two-sample test is structured as follows: | D |
For the human component of my experiment, I explored using Amazon GroundTruth and Prolific (prolific.co) to recruit participants. For the automated component of my experiment, I explored Meta AI’s BlenderBot and GPT-3, a large language model (LLM) developed by OpenAI that is the largest LLM publicly available through a graphical user interface. In the next two sections, I describe the specifics of each experimental setting. | After investigating GroundTruth and Prolific.co, I opted to use Prolific.co to survey human subjects due to its high quality of responses and ease of use. Prolific is a platform for on-demand data collection focused on survey studies and experiments involving paid human subjects. It has over 130,000 vetted participants who can be selected based on demographics, geography, language, education, and other attributes. | Since determining what wage to offer to employees is a complex judgement, employers may use the minimum wage as a convenient reference point upon which to base their offers. Due to the difficulty of conducting controlled experiments with employers, I seek to answer a related question: does the minimum wage function as an anchor for what people perceive to be a fair wage? Although the average person does not engage in wage determination, public discourse surrounding the fairness of wages can influence the economy through the political process, meaning that effects of minimum wage on perceptions of fairness can have broad implications. Thus, I ask human subjects on the crowdsourcing platform Prolific.co as well as an AI bot, specifically a version of OpenAI’s bot GPT-3 (GPT-3 (2022)), to determine the fair wage for a worker given their job description and a minimum wage. | I test human subjects through the crowdsourcing service Prolific.co as well as AI Bots such as BlenderBot | Prolific automates the selection of subjects based on an initial survey. I limited participation to residents of the United States without restriction to demographic characteristics, and I collected information about age, sex, ethnicity, country of birth, country of residence, nationality, language, student status, and employment status for possible future analysis. In accordance with Prolific.co’s recommendations, I paid human subjects $0.20 per response, which yields a wage of $10-12/hour depending on individual response speed. This turned out to be lower than what GPT-3 deemed fair (Table 5). A sample enrollment form is included in the IRB documentation. The total cost of the experiment was $425, which I funded from my personal savings. Polytechnic School IRB Approval of the protocol was obtained on 9/20/2022. | A |
Unlike Bitcoin, which prefers a peer-to-peer electronic cash system, Ethereum is a platform for decentralized applications and allows anyone to create and execute smart contracts. | Blockchain [1] is a peer-to-peer network system based on technologies such as cryptography [2] and consensus mechanisms [3] to create and store huge transaction information. At present, the biggest application scenario of blockchain is cryptocurrency. For example, the initial “Bitcoin” [4] also represents the birth of blockchain. | The code of a smart contract defines the functionality of the contract account, enabling analysis to determine whether it is a Ponzi account. | As cryptocurrencies continue to evolve, smart contracts [5] bring blockchain 2.0, also known as Ethereum [6]. | The smart contract [7], accompanying Ethereum, is understood as a program on the blockchain that operates when the starting conditions are met. | D |
Three regulatory paradigms govern firms’ announcements about the success and failure of drug candidates throughout the development process. (Footnote 6: Although most announcements in our sample are made in the U.S., firms also market drugs elsewhere, such as in the E.U. and Canada. We include announcements from all “western” countries because they have similar rules governing drug development and announcements (see Ng, 2015, Chapter 7).) First, the Security and Exchange Commission (SEC) requires public companies to disclose all material information to investors via the annual 10-K, quarterly 10-Q, and current 8-K forms, with Regulation Fair Disclosure (2000) mandating timely disclosure of all material information. Second, the FDA controls what firms can announce about their drugs during development, requiring registration of clinical trials within 21 days of enrolling the first subject under the Food and Drug Administration Modernization Act (1997) and disclosure of information about clinical trials and FDA application processes whenever relevant, ensuring announcements are not materially misleading. Third, the Sarbanes-Oxley Act (2002) allows the SEC to monitor firms’ announcements about their FDA review process, with the FDA referring cases of false or misleading statements to the investing public to the SEC since 2004. | The results of this sensitivity analysis are presented in Table 5, under the row labeled “Drugs with Complete Path.” When considering the middle 90% and bottom 95% samples, which exclude outlier firms based on market capitalization, we find that the estimated values for the 29 drugs with complete paths are $1.89 billion and $1.99 billion, respectively. These estimates are similar to the values obtained when analyzing all approved drugs within the corresponding samples, suggesting that our findings are consistent even when focusing on drugs with more comprehensive data. | Given the pronounced right skewness in the firm size distribution, we estimate the value of drugs after removing outlier firms to ensure that extreme cases do not drive our results. This decision is motivated by our discussion of the market capitalization distribution in Section 3.3, which suggests that large and small firms may have fundamentally different characteristics. | Three regulatory paradigms govern firms’ announcements about the success and failure of drug candidates throughout the development process. (Footnote 6: Although most announcements in our sample are made in the U.S., firms also market drugs elsewhere, such as in the E.U. and Canada. We include announcements from all “western” countries because they have similar rules governing drug development and announcements (see Ng, 2015, Chapter 7).) First, the Security and Exchange Commission (SEC) requires public companies to disclose all material information to investors via the annual 10-K, quarterly 10-Q, and current 8-K forms, with Regulation Fair Disclosure (2000) mandating timely disclosure of all material information. Second, the FDA controls what firms can announce about their drugs during development, requiring registration of clinical trials within 21 days of enrolling the first subject under the Food and Drug Administration Modernization Act (1997) and disclosure of information about clinical trials and FDA application processes whenever relevant, ensuring announcements are not materially misleading.
Third, the Sarbanes-Oxley Act (2002) allows the SEC to monitor firms’ announcements about their FDA review process, with the FDA referring cases of false or misleading statements to the investing public to the SEC since 2004. | These regulations incentivize firms to inform the market and the general public correctly and promptly. However, companies retain some discretion in determining what is considered “material” and “not misleading.” This ambiguity is more pronounced when the results from clinical trials are small or when large companies are developing multiple drugs simultaneously. In such cases, firms may delay or bundle negative announcements with positive ones to soften the market reaction. | D |
15 voters in all, with 3 experts: $N=15$, $K=3$. The two treatments | Table 2: $p=0.7$, $F(q)$ Uniform over $[0.5,0.7]$ | Table 1: $p=0.7$, $F(q)$ Uniform over $[0.5,0.7]$ | With $p=0.7$ and $q$ uniform over $[0.5,0.7]$, we have verified | In all experiments, we set $\pi=0.5$, $p=0.7$, and $F(q)$ Uniform | B |
In Section 3 we present our numerical simulations in one and two space dimensions as well as discuss their potential extension to higher dimensions. | In this paper, we propose a quantum Monte Carlo algorithm to solve high-dimensional Black-Scholes PDEs with correlation and general payoff function which is continuous and piece-wise affine (CPWA), enabling to price most relevant payoff functions used in finance (see also Section 2.1.2). Our algorithm follows the idea of the quantum Monte Carlo algorithm proposed in [chakrabarti2021threshold, QC5_Patrick, QC4_optionpricing] which first uploads the multivariate log-normal distribution and the payoff function in rotated form and then applies a QAE algorithm to approximately solve the Black-Scholes PDE to price options. | In Section 4, we introduce and analyze all relevant quantum circuits we need in our quantum Monte Carlo algorithm. | In Section 5, we provide a detailed error analysis of the steps of our algorithm outlined in Section 2.4.1. | In this section, we first present our quantum Monte Carlo algorithm named Algorithm 1 to solve Black-Scholes PDEs (1) with corresponding CPWA payoff function (8). Moreover, we then outline Algorithm 1 and present our main result in Theorem 1, namely a convergence and complexity analysis of our algorithm. | B |
$\mathbb{E}[\int_{0}^{\infty}e^{-\rho t}d(0\vee\sup_{s\leq t}(M_{s}-V_{s}^{\theta^{*},c^{*}}))]$ is finite? We will verify that our problem formulation does not require the injection of infinitely large capital to meet the tracking goal. The present paper contributes positive answers to both questions. | In solving the stochastic control problem (1.6) with a running maximum cost, we introduce two auxiliary state processes with reflections and study an auxiliary stochastic control problem, which gives rise to the HJB equation with two Neumann boundary conditions. By applying the dual transform and stochastic flow analysis, we can conjecture and carefully verify that the classical solution of the dual PDE satisfies a separation form of three terms, all of which admit probabilistic representations involving some dual reflected diffusion processes and/or the local time at the reflection boundary. We stress that the main challenge is to prove the smoothness of the conditional expectation of the integration of an exponential-like functional of the reflected drifted-Brownian motion (RDBM) with respect to the local time of another correlated RDBM. We propose a new method of decomposition-homogenization to the dual PDE, which allows us to show the smoothness of the conditional expectation of the integration of exponential-like functional of the RDBM with respect to the local time of an independent RDBM. | Theorem 3.1 provides a probabilistic presentation of the classical solution to the PDE (2) with Neumann boundary conditions (2.10) and (2.11). Our method in the proof of Theorem 3.1 is completely from a probabilistic perspective. More precisely, we start with the proof of the smoothness of the function $v$ by applying properties of reflected processes $(R^{r},H^{h},N^{z})$, the homogenization technique of the Neumann problem and the stochastic flow analysis. Then, we show that $v$ solves a linear PDE by verifying two related Neumann boundary conditions at $r=0$ and $h=0$ respectively. | The rest of the paper is organized as follows. In Section 2, we introduce the auxiliary state processes with reflections and derive the associated HJB equation with two Neumann boundary conditions for the auxiliary stochastic control problem. In Section 3, we address the solvability of the dual PDE problem by verifying a separation form of the solution and the probabilistic representations, the homogenization of Neumann boundary conditions and the stochastic flow analysis. The verification theorem on the optimal feedback control is presented in Section 4 together with the technical proofs on the strength of stochastic flow analysis and estimations of the optimal control. It is also verified therein that the expected total capital injection is bounded.
Finally, the proof of an auxiliary lemma is reported in Appendix A. | Next, we propose a homogenization method of Neumann boundary conditions to study the smoothness of the last term in the probabilistic representation (3) together with the application of the result obtained in Lemma 3.3. | A |
This results in only a gradual increase of expected job and output loss in the beginning, but fails to anticipate the effects of a systemically very important firm which triggers widespread job and output losses. 102 firms need to be closed in this strategy to reach the benchmark. | (B) In the ‘Remove least-employees firms first’ strategy, firms are closed according to their ascending numbers of employees. | ‘Remove least-risky firms first (employment)’ strategy that tries to lose the least number of jobs per firm closed, including network effects, and the | The ‘Remove least-risky firms first (employment)’ strategy that focuses on minimum expected job loss in the total economy per firm removed, as seen in Fig. 3C, produces a cumulative emission savings curve that approximately increases linearly for the first 100 firms. The expected job loss increases slowly, as expected, since firms are ordered according to their ascending EW-ESRI rank. In order to achieve the goal of 20.21% emission savings 107 firms need to be removed from the production network. As a consequence, 10.93% of output and 7.97% of jobs are lost. Note that this strategy represents a substantial advancement compared to the previous one, mainly because it takes network effects into account. | (C) In the ‘Remove least-risky firms first (employment)’ strategy, firms are removed according to their ascending risk of triggering job loss, i.e., EW-ESRI; firms that are considered least systemically relevant for the production network are removed first. | D |
Table 4 presents the summary regarding the compositions of the input parameter values that have the strongest and the weakest effect on each of the output factors (VRE share in the generation mix, total welfare and generation amount). Additionally, it provides the relative values indicating the improvement of the corresponding factor compared to its optimal value in case all the input parameters, i.e., carbon tax, VRE incentives, TEB, and GEB are zero (baseline case). It is worth highlighting that the “baseline” case does not closely reflect reality, as these input parameters are greater than zero in practice. Figure 12 presents the optimal share of VRE in the generation mix for different combinations of input factors. The bars indicate the share of VRE when the parameter values are “low” or “high” according to Table 3. The “baseline” shows the value of VRE in the optimal generation mix when maximising the total welfare and considering all the input parameters to be zero. Figures 13 and 14 present relative differences in total welfare and generation values, respectively, for different compositions of the input parameters when compared to the case when all of the parameters are set to zero, i.e., no policies are involved. It is worth highlighting that while all the information regarding output results can be obtained from Figures 12, 13 and 14, the primary purpose of these figures is to demonstrate the change in output factors values considering different values of distinct input parameters. In particular, in the case of Figure 12, this parameter is a tax value, while Figures 13 and 14 primarily demonstrate information regarding incentive values. The remaining alternative visualisations of the output data is presented in Appendix B. | Table 4 presents the summary regarding the compositions of the input parameter values that have the strongest and the weakest effect on each of the output factors (VRE share in the generation mix, total welfare and generation amount). Additionally, it provides the relative values indicating the improvement of the corresponding factor compared to its optimal value in case all the input parameters, i.e., carbon tax, VRE incentives, TEB, and GEB are zero (baseline case). It is worth highlighting that the “baseline” case does not closely reflect reality, as these input parameters are greater than zero in practice. Figure 12 presents the optimal share of VRE in the generation mix for different combinations of input factors. The bars indicate the share of VRE when the parameter values are “low” or “high” according to Table 3. The “baseline” shows the value of VRE in the optimal generation mix when maximising the total welfare and considering all the input parameters to be zero. Figures 13 and 14 present relative differences in total welfare and generation values, respectively, for different compositions of the input parameters when compared to the case when all of the parameters are set to zero, i.e., no policies are involved. It is worth highlighting that while all the information regarding output results can be obtained from Figures 12, 13 and 14, the primary purpose of these figures is to demonstrate the change in output factors values considering different values of distinct input parameters. In particular, in the case of Figure 12, this parameter is a tax value, while Figures 13 and 14 primarily demonstrate information regarding incentive values. The remaining alternative visualisations of the output data is presented in Appendix B. 
| Table 4: Nordics case study: summary. Percentage values (%) are calculated relative to the baseline case. | Lastly, we present a sensitivity analysis for the carbon tax. Similarly to the case of the other input parameters, we fix the values of incentives and TEB budget to 0% and €10M, respectively, while assuming the GEB to take values from Table 2. Here, we have also omitted the cases in which GenCos have a €1M capacity expansion budget due to the similarities in optimal decision values with the €10M case. We consider the values for the carbon tax to lie within the interval given in Table 2. | Next, the VRE generation-capacity subsidies are varied. Similarly to the previous subsection, we consider the values of the remaining parameters to be fixed. In particular, we assume the carbon tax to remain at 0 € / MWh, the TEB at €10M while considering the GEB values as defined in Table 2 but excluding the case where both GenCos have €1M GEB, due to the similarities in optimal decisions values with the case in which they have the same budget set to €10M. | B |
Let $f_{\text{VG}}$ denote the density of the log-returns in the | VG model at maturity $T>0$ with parameters $\sigma>0$, $\theta\in\mathbb{R}$ | The FMLS model with parameters $\sigma>0$ and $\alpha\in(1,2)$ describes | $\sigma>0$ and maturity $T$ by the smallest natural number $N_{F}$ | skewness $\beta\in[-1,1]$, scale $c>0$ and location $\theta\in\mathbb{R}$ | A |
$a\mapsto\Big(a-\frac{\theta_i}{n}\sum_{j\neq i}\pi^{j}\Big)\big(\mu+\tilde{g}(a)\big)-\frac{\sigma^{2}}{2\delta_{i}}\Big(a-\frac{\theta_i}{n}\sum_{j\neq i}\pi^{j}\Big)^{2}.$
| Due to our assumption on $g$, a maximum point $a^{*}$ exists and is finite. Then we obtain the following result.
| Then we can prove that there exists an optimal strategy for (4.3). In order to do so, let $a^{*}$ be a maximum point of
| $\mathbb{E}_{\mathbb{Q}}[Y^{\pi^{i}}]\leq Y^{a^{*}}=\mathbb{E}_{\mathbb{Q}^{*}}[Y^{a^{*}}]$. Since the maximizing point is not at the boundary, the assumption of bounded policies is no restriction. Thus, we have solved the problem.
| If $\lim_{x\to\pm\infty}\frac{g(x)}{x}=0$, an optimal strategy for (4.3) is given by $\pi_{t}^{i}\equiv a^{*}$, where $a^{*}$ is the maximum point from (4.6).
| A
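The constant optimal strategy in the row above can be located numerically once a concrete $\tilde{g}$ and parameter values are fixed. The sketch below is purely illustrative and not taken from the paper: it assumes $\tilde{g}(a)=\tanh(a)$ (which satisfies $g(x)/x\to 0$), an arbitrary constant for the competitors' average position, and invented values for $\mu$, $\sigma$ and $\delta_i$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical parameter values; only the functional form comes from the row above.
mu, sigma, delta_i = 0.05, 0.2, 1.0
benchmark = 0.3            # stands in for (theta_i / n) * sum_{j != i} pi^j
g_tilde = np.tanh          # any g with g(x)/x -> 0 as |x| -> infinity

def objective(a):
    d = a - benchmark
    return d * (mu + g_tilde(a)) - sigma**2 / (2.0 * delta_i) * d**2

# Maximize by minimizing the negative objective on a generous bounded interval.
res = minimize_scalar(lambda a: -objective(a), bounds=(-10.0, 10.0), method="bounded")
print(f"a* ~ {res.x:.4f}, objective value ~ {-res.fun:.4f}")
```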
Traditionally, the flow of resources is discussed within the framework of a material flows approach \citep[see][]{flow_ayres,soc_ec_systems}, where the amount of a specific material is measured or approximated when it is transferred between systems. Examples are the transition of phosphates (in the following abbreviated as P) from a fertilizer product into an agricultural product, the transition of P from food into the human body, or the loss that happens from several systems to the hydrosphere. These flows are either analyzed on an averaged global level, or studied on the level of samples of countries \citep[see, e.g.,][]{p_dk,p_eu}. These approaches can, however, typically not cover flows globally.
| or as country-wise exceedance footprints \citep{p_exceed}. With these approaches it is possible to cover most of the countries in the world; however, for the analysis of flows that happen before the production of biomass, the resolution of input-output data cannot deliver satisfactory results, since mineral resources, fertilizers and their intermediary products, as well as manure, cannot be traced in detail. While data based on fertilizer production can remedy this deficit for some regions, it is currently not possible to map P flows globally on this basis.
| Our approach to P flows therefore aims to use much more detailed trade data as the basis of the analysis \citep[see also][]{chen_p_net}. The novelty of our approach is that we transform and connect these data to other sources in such a way that we receive results that can again be interpreted in terms of the material flow of P, and not just as the monetary value of traded amounts. It is therefore for the first time possible to quantify how much P is (in a material sense) transferred between countries as either raw material, preliminary product, or fertilizer with the intended use in agricultural production. This model is meant to show the trade-based first round of global P flows (before biomass production) in greater detail than currently available and can thus serve as the foundation for the analysis of P supply security and resilience.
| Traditionally, the flow of resources is discussed within the framework of a material flows approach \citep[see][]{flow_ayres,soc_ec_systems}, where the amount of a specific material is measured or approximated when it is transferred between systems. Examples are the transition of phosphates (in the following abbreviated as P) from a fertilizer product into an agricultural product, the transition of P from food into the human body, or the loss that happens from several systems to the hydrosphere. These flows are either analyzed on an averaged global level, or studied on the level of samples of countries \citep[see, e.g.,][]{p_dk,p_eu}. These approaches can, however, typically not cover flows globally.
| A related approach is to employ data from sector- or product-based input-output tables and to calculate the flows implied by these relationships, for example with a focus on biomass \citep[in particular agricultural and food products, see][]{fabio},
| D
Table 3: Fair premium dependence on fee rate $f_2$
| When comparing buffer EPSs with floor EPSs, we observe that the hedging costs for buffer EPSs are higher than for floor EPSs. In order to explain this feature, we focus on their respective structures and we note that the fee legs for the buffer EPS and floor EPS shown in Table 1 are identical but their protection legs differ. From the profile of the cash flow function for a buffer EPS (Figure 2) and a floor EPS (Figure 3), we can see that in a floor EPS the protection payment for large losses is capped, while it is unlimited in a buffer EPS. Hence the protection provided by a buffer EPS is more effective and, consequently, the hedging cost (i.e., the fair premium) for a buffer EPS is typically higher than for a floor EPS with an identical fee leg.
| Let us introduce the two most practically relevant forms of an EPS, which are called the buffer EPS and the floor EPS. Notice that the proposed terminology for a generic EPS is referring directly to the protection leg, rather than the fee leg for which the choice of a buffer
| We first define the buffer EPS where, incidentally, the concept of a buffer leg is applied to both the protection and fee legs and thus it could also be called a double-buffer EPS or a buffer/buffer EPS.
| The so-called floor EPS, which can also be called floor/buffer EPS, is obtained by setting $p_2=0$, $p_1\in(0,1]$, $f_1=0$ and $f_2\in(0,1]$, which means that the provider fully covers the buyer's losses above the level $l_1$ if $p_1=1$ but the buyer's losses below $l_1$ are only partially covered since $p_1 l_1$ acts as a floor for the adjusted return. By convention, the buffer specification is applied to the fee leg and thus the fee leg is exactly the same as in the buffer EPS of Definition 2.5. This choice for the fee leg is motivated by our practical arguments that the buffer fee leg is likely to be appreciated by a buyer of an EPS since there will be no fee payment at all, unless the portfolio's gains during the lifetime of an EPS are substantial. The adjusted return function $\psi_F(R_T)$ for the floor EPS is shown in Figure 3.
| A
We find that the defaulters on larger amounts or with a subsequent harsh default have substantially higher penalties in terms of income and location; they move to lower median home value areas and to zip codes of lower economic activity.
| We find that the defaulters on larger amounts or with a subsequent harsh default have substantially higher penalties in terms of income and location; they move to lower median home value areas and to zip codes of lower economic activity.
| What seems to be happening is that there are consumers who are delinquent on smaller amounts, possibly because of uninsurable shocks, who suffer the consequences of such defaults, but substantially less than those who default on larger amounts and seek bankruptcy and other legal reliefs. The latter appear to have overextended their lines of credit, in particular on mortgages (presumably because of location choices), then gone under in their accounts and essentially diverged from their earlier life trajectories. They end up in substantially worse neighborhoods (of different CZs) with median home values that decrease about 4 times as much as those for the lower delinquent amounts/no-harsh default.
| What seems to be happening is that there are individuals who are delinquent on smaller amounts, possibly because of uninsurable shocks, who suffer the consequences of such defaults, but substantially less than those who default on larger amounts and seek bankruptcy and other legal reliefs. The latter appear to have overextended their lines of credit, in particular on mortgages (presumably because of location choices), then gone under in their accounts and essentially diverged from their earlier life trajectories. They end up in substantially worse neighborhoods (of different CZs) with median home values that decrease about 4 times as much as those for the lower delinquent amounts/no-harsh default.
| We show that the recovery is slow, painful, and in many respects only partial. In particular, after several years, up to 10, credit scores are still lower by 16 points, incomes never recover and appear to be substantially lower (by about 7,000 USD or 14% of the 2010 mean), the defaulters live in lower “quality” neighborhoods (as measured by the median house value and other indicators such as proxies for average zip code income), are less likely to own a home, and are more likely to have low credit limits. We find that the negative effects of a soft default are larger for those individuals who are overextended in their credit lines, in particular their mortgage lines. Being indebted in a way that is unsustainable for them in the long run, such individuals also have a higher probability of a subsequent harsh default (i.e. Chapter 7, Chapter 13, foreclosure). In addition, they end up in substantially worse neighborhoods, with lower median home values, and these moves are likely to have a substantial effect also on their labor market outlooks.
| C
$\sum_{j}\Big(q\cdot s_{j}f_{j}+\sum_{k}\operatorname{sign}(s_{j}-s_{k})\,r_{jk}\Big),$
| Minimax: Vote sincerely. (Footnote 22: While a viability-aware strategy was included for Minimax in Wolk et al. (2023),
| to Minimax. As with IRV, each ballot is a ranking of some or all of the candidates. (Footnote 10: While it is often recommended that equal rankings be allowed under
| Block Approval: Voters vote for any number of candidates. (Footnote 27: We use the same sincere strategy as for single-winner Approval Voting.)
| Approval: Vote for all candidates with $u_j \geq EV$.
| A
We did not include weekend effects in our seasonality function, since, especially for the extremely volatile crisis data, these effects are negligible.
| In order to take into account the change of the mean level in the data after the beginning of the crisis, the linear part of the seasonal function was calibrated piecewise for three different time periods. This is important since, for the next step of the calibration, we assume that the deseasonalized data has mean level zero. In the rightmost plot in Figure 2, the dates on which the gradient changes are 29.6.2021 and 29.12.2021.
| This leads to the conclusion that the rapid up- and downward movements in the deseasonalized data in this time period are not Gaussian any more but are rather modeled by a succession of exponentially distributed jumps of large size. The adequacy of these calibration results is confirmed by the relatively high $p$-values for both jump sizes and intensity (cf. Figure 12(C)). The simulation in Figure 12(E) reproduces quite well the characteristic behavior of the historic data (cf. 12(A)) and the confidence intervals correspond to the range of the historic data. The model adequacy is finally also confirmed by a comparison of the autocorrelation function (ACF) of the data with the ACF of our simulations (cf. 12(D)).
| We point out that even in the 4-factor model, the inference performed by the MCMC algorithm is still exact at the level of distributions, since there is no discretization involved in the update of the second Gaussian component. The two models are then calibrated to EEX spot price data within different time periods and are then compared in terms of model adequacy using posterior predictive checks (Section 5). Our study is, to the best of our knowledge, the first attempt to calibrate such models for the extremely volatile data in the period 2021-2023 and can serve as a starting point for future research in this direction.
| In this section, we compare the fit of the 3-factor and 4-factor model for three different time periods, namely the pre-crisis period 2018-2021, the crisis 2021-23 and the whole interval 2018-2023. The EEX data we use for our studies starts at 30.9.2018 and ends at 10.1.2023. When we separate our data to investigate the model adequacy for the respective time intervals, we consider 1.4.2021 as the start of the crisis period (cf. Figure 1). This choice is partly motivated by the aim of not including the strong upwards trend observed from the second quarter of 2021 onward in the calibration of the linear trend in the seasonality function.
| A
In this work, we propose DoubleAdapt, a meta-learning approach to incremental learning for stock trend forecasting. | The upper level includes the data adapter and the model adapter as two meta-learners that are optimized to minimize the forecast error on the adapted test data. | We introduce two meta-learners, namely data adapter and model adapter, which adapt data towards a locally stationary distribution and equip the model with task-specific parameters that have quickly adapted to the incremental data and still generalize well on the test data. | Confronted with this challenge, it is noteworthy that the incremental updates stem from two factors: the incremental data and the initial parameters. Conventional IL blindly inherits the parameters learned in the previous task as initial parameter weights and conducts one-sided model adaptation on raw incremental data. To improve IL against distribution shifts, we propose to strengthen the learning scheme by performing two-fold adaptation, namely data adaptation and model adaptation. The data adaptation aims to close the gap between the distributions of incremental data and test data. For example, biased patterns that only exist in incremental data are equivalent to noise with respect to test data and could be resolved through proper data adaptation. Our model adaptation focuses on learning a good initialization of parameters for each IL task, which can appropriately adapt to incremental data and still retain a degree of robustness to distribution shifts. However, it is intractable to design optimal adaptation for each IL task. A proper choice of adaptation varies by forecast model, dataset, period, degree of distribution shifts, and so on. Hence, we borrow ideas from meta-learning (Finn et al., 2017) to realize the two-fold adaptation, i.e., to automatically find profitable data adaptation without human labor or expertise and to reach a sweet spot between adaptiveness and robustness for model adaptation. | (i) in the lower-level optimization, the parameters of the forecast model are initialized by the model adapter and fine-tuned on the adapted incremental data; | B |
My main contribution is to the literature studying IPO returns. Some studies have examined the influence of institutional investors on IPO returns. For instance, Jiang and Li (2013) employed over-subscription rates for IPOs in Hong Kong to highlight the role that institutional investors play in determining IPO returns. On the other hand, research by Ljungqvist et al. (2006) and Ritter and Welch (2002) argued that the excitement and enthusiasm of retail investors can lead to overvaluation of an IPO stock and its first-day return, which ultimately results in long-term under-performance as the stock reverts to its fundamental value. (Footnote 3: Da et al. (2011) provide an attention-driven explanation. They argue that for an investor to become overly enthusiastic about an upcoming IPO, they must allocate attention to the equity issuance. Without attention from retail investors, these investors are less likely to impact the first-day return and long-term performance of the IPO. For an extensive literature review on IPO underpricing see Ljungqvist (2007).) Cornelli et al. (2006) further substantiated this observation using pre-IPO pricing data from European markets. What sets my paper apart is its methodology: while these studies predominantly rely on indirect proxies to gauge investor sentiment, my research adopts a more direct approach. Additionally, a unique facet of my work is the examination of the evolution of sentiment at the investor level.
| Refining my focus on the qualitative nuances of investor communication, I observe that messages rich in financial insights and those that reinforce existing information have a marked influence on stock returns. Further, I explore the continuity of pre-IPO enthusiasm at the individual investor level. The data indicates a consistent bullish sentiment among investors engaging with multiple IPOs. However, for users interacting with a broader spectrum of IPOs, there is a clear temporal change, hinting at a move towards caution, perhaps steered by the trends seen in IPO outcomes. Subsequently, I assess the post-IPO sentiment trajectory at the firm's level, illustrating that companies with initial high enthusiasm consistently garner more optimism. This trend stands out, especially when juxtaposed with the long-term under-performance of these entities. This amplified enthusiasm is mirrored in post-IPO conversations, highlighting that firms with strong pre-IPO buzz are more likely to be the topic of discussion.
| Closest to my setting, Tsukioka et al. (2018) uses investor sentiment extracted from Yahoo! Japan Finance message boards on 654 Japanese IPOs from 2001 to 2010, and shows that increased investor attention and optimism lead to higher IPO offer prices and initial returns, offering insight into the typical initial high returns followed by the underperformance seen in IPOs. Yet, the study is limited to pre-IPO sentiment in the Japanese context and employs standard machine learning classification models like SVM. According to Vamossy and Skog (2023), these classifiers fall short in emotion detection compared with modern language models. In stark contrast, my research harnesses a state-of-the-art language model adept at discerning a comprehensive emotional spectrum, including happiness, sadness, disgust, anger, fear, and surprise. While past research predominantly concentrated on overarching positive and negative sentiment metrics, my study probes deeper to uncover the potential impact of distinct emotions on IPO underpricing.
Moreover, my research underscores the qualitative depth of investor communication, noting that messages rich in financial insights significantly influence stock returns. By expanding the dataset beyond the IPO phase, I trace a dynamic sentiment evolution, revealing a consistent bullish trend pre-IPO, a shift towards caution with broader IPO interactions, and sustained optimism for firms, even when they underperform long-term.
| I examine how changes in emotion affect stock returns and demonstrate that elevated investor enthusiasm, as reflected in social media data, can induce buying pressure, leading to a temporary increase in stock prices. Subsequently, I show that these stocks, after their initial surge driven by this enthusiasm, tend to under-perform when compared to their industry peers. Turning my attention to the qualitative aspects of investor communication, I find that messages rich in financial details and those that amplify existing information have the most pronounced influence on stock dynamics. I also examine the persistence of pre-IPO enthusiasm at the individual investor level. My findings reveal a prevailing bullish sentiment among investors across multiple IPOs. However, this sentiment becomes more varied among users who have engaged with a greater number of IPOs. This varied sentiment is coupled with a noticeable temporal evolution, indicating a shift towards caution, perhaps influenced by patterns observed in IPO performances. Last, I examine the sentiment trajectory post-IPO at the firm level, and show that firms with higher initial enthusiasm consistently attract more optimism. This observation is intriguing, especially considering the long-term under-performance of these firms. This heightened enthusiasm is also reflected in post-IPO discussions, where firms with pronounced pre-IPO enthusiasm are more likely to be discussed.
| My main contribution is to the literature studying IPO returns. Some studies have examined the influence of institutional investors on IPO returns. For instance, Jiang and Li (2013) employed over-subscription rates for IPOs in Hong Kong to highlight the role that institutional investors play in determining IPO returns. On the other hand, research by Ljungqvist et al. (2006) and Ritter and Welch (2002) argued that the excitement and enthusiasm of retail investors can lead to overvaluation of an IPO stock and its first-day return, which ultimately results in long-term under-performance as the stock reverts to its fundamental value. (Footnote 3: Da et al. (2011) provide an attention-driven explanation. They argue that for an investor to become overly enthusiastic about an upcoming IPO, they must allocate attention to the equity issuance. Without attention from retail investors, these investors are less likely to impact the first-day return and long-term performance of the IPO. For an extensive literature review on IPO underpricing see Ljungqvist (2007).) Cornelli et al. (2006) further substantiated this observation using pre-IPO pricing data from European markets. What sets my paper apart is its methodology: while these studies predominantly rely on indirect proxies to gauge investor sentiment, my research adopts a more direct approach. Additionally, a unique facet of my work is the examination of the evolution of sentiment at the investor level.
| B
The dataset contains a large number of missing values, which motivated experimentation with different imputation techniques such as zero-filling, round-robin imputation (implemented in the Python package scikit-learn), and MICE [43]. The best results were achieved with round-robin imputation using scikit-learn's IterativeImputer with Bayesian ridge regression. This was the pre-processing employed in all the results on the SME dataset.
| The first quantum neural network architecture, named OrthoResNN, uses an $8\times 8$ orthogonal experimental layer implemented with a semi-diagonal loader and $X$ circuit. Note that the final output of the layer is provided by measurements. We add a skip connection by adding the input of the orthogonal layer to the output. (Footnote 3: The model cannot learn to “ignore” the orthogonal layer via the skip connection, because the orthogonal layer cannot change the magnitude of the input.) The layer is followed by a $\tanh$ activation function. This architecture is illustrated in Fig. 17.
| Our second architecture, ExpResNN, replaces the experimental layer with an $8\times 8$ Expectation-per-subspace compound layer. We use the $H$-loader to encode our data. The layer is again followed by a $\tanh$ activation function. Fig. 18 illustrates the ExpResNN architecture.
| To compare the performance of the orthogonal and compound layers to the classical baseline, we designed three neural network architectures. Each architecture had three layers: an encoding layer, an experimental layer, and a classification head. The encoding layer was a standard linear layer of size $32\times 8$ followed by a $\tanh$ activation. Its purpose is to bring the dimension of the features down to 8, which is a reasonable simulation size for both proposed quantum layers. The second layer was the experimental layer of size $8\times 8$ (described below). Finally, the third layer, the classification head, was a linear layer of size $8\times 2$ followed by softmax to predict the probabilities.
| In our experimental setup, we consider the fully connected residual layer (ResNN) as the classical benchmark. We performed the same experiment with an orthogonal layer using the semi-diagonal loader and the X circuit (OrthoResNN). Finally, we tried the expectation-per-subspace compound layer with the Hadamard loader and X circuit (ExpResNN). While the performance of the OrthoResNN and ExpResNN remained nearly the same as the FNN layer, these new layers learn the angles of $2n$ RBS gates instead of $n^2$ elements of the weight matrix, dramatically reducing the number of parameters needed. The results are shown in Table 4.
| D
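The imputation step mentioned in the first cell can be reproduced with standard scikit-learn components. The snippet below is a minimal sketch on synthetic data (the SME dataset itself is not reproduced here), using IterativeImputer with a BayesianRidge estimator as a stand-in for the round-robin imputation described above; the matrix size and missingness rate are arbitrary.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (activates IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge

# Toy feature matrix with missing entries standing in for the real data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
X[rng.random(X.shape) < 0.2] = np.nan  # knock out roughly 20% of the entries

# Round-robin imputation: each feature is regressed on the others in turn.
imputer = IterativeImputer(estimator=BayesianRidge(), max_iter=10, random_state=0)
X_filled = imputer.fit_transform(X)
print(np.isnan(X_filled).any())  # False: all gaps have been filled
```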
Since it is based on actual trades, realized volatility (RV) is the ultimate measure of market volatility, although the latter is more often associated with implied volatility, most commonly measured by the VIX index [cboevix; cboevixhistoric] – the so-called market “fear index” – that tries to predict the RV of the S&P 500 index for the following month. Its model-independent evaluation [demeterfi1999guide] is based on options contracts, which are meant to predict future stock price fluctuations [whitepaper2003cboe]. The question of how well VIX predicts future realized volatility has been of great interest to researchers [christensen1998relation; vodenska2013understanding; kownatzki2016howgood; russon2017nonlinear]. Recent results [dashti2019implied; dashti2021realized] show that VIX is only marginally better than past RV in predicting future RV. In particular, it underestimates future low volatility and, most importantly, future high volatility. In fact, while both RV and VIX exhibit scale-free power-law tails, the distribution of the ratio of RV to VIX also has a power-law tail with a relatively small power exponent [dashti2019implied; dashti2021realized], meaning that VIX is incapable of predicting large surges in volatility.
| While the standard search for Dragon Kings involves performing a linear fit of the tails of the distribution [pisarenko2012robust; janczura2012black], here we tried to broaden our analysis by also fitting the entire distribution using mGB (7) and GB2 (11) – the two members of the Generalized Beta family of distributions [liu2023rethinking; mcdonald1995generalization]. As explained in the paragraph that follows (7), the central feature of mGB is that, after exhibiting a long power-law dependence, it eventually terminates at a finite value of the variable. GB2, on the other hand, has a power-law tail that extends mGB's power-law dependence to infinity.
| The main result of this paper is that the largest values of RV are in fact nDK. We find that daily returns are the closest to the BS behavior. However, with the increase of $n$ we observe the development of “potential” DK with statistically significant deviations upward from the straight line. This trend terminates with the data points returning to the straight line and then abruptly plunging into nDK territory.
| We fit the CCDF of the full RV distribution – for the entire time span discussed in Sec. 2 – using mGB (7) and GB2 (11). The fits are shown on the log-log scale in Figs. 4–13, together with the linear fit (LF) of the tails with $RV>40$. LF excludes the end points, as prescribed in [pisarenko2012robust], that visually may be nDK candidates. (In order to mimic LF we also excluded those points in GB2 fits, which has minimal effect on GB2 fits, including the slope and KS statistic.) To make the progression of the fits as a function of $n$ clearer, we included results for $n=5$ and $n=17$, in addition to $n=1,7,21$ that we used in Sec. 2. Confidence intervals (CI) were evaluated per [janczura2012black], via inversion of the binomial distribution. $p$-values were evaluated in the framework of the U-test, which is discussed in [pisarenko2012robust] and is based on order statistics:
| It should be emphasized that RV is agnostic with respect to gains or losses in stock returns. Nonetheless, large gains and losses have habitually occurred at around the same time.
Here we wish to address the question of whether the largest values of RV fall on the power-law tail of the RV distribution. As is well known, the largest upheavals in the stock market happened on, and close to, Black Monday, which was a precursor to the Savings and Loan crisis, the Tech Bubble, the Financial Crisis and the COVID Pandemic. Plotted on a log-log scale, power-law tails of a distribution show as a straight line. If the largest RV fall on the straight line, they can be classified as Black Swans (BS). If, however, they show statistically significant deviations upward or downward from this straight line, they can be classified as Dragon Kings (DK) [sornette2009; sornette2012dragon] or negative Dragon Kings (nDK), respectively [pisarenko2012robust].
| D
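The tail diagnostic described above (a straight-line fit of the CCDF on log-log axes above a threshold such as RV > 40) can be sketched as follows. The code is illustrative only: it uses synthetic heavy-tailed data in place of the realized-volatility series, and the Hazen plotting positions are an assumption rather than the authors' choice.

```python
import numpy as np

def tail_slope(sample, threshold=40.0):
    """Least-squares slope of log10(CCDF) vs log10(x) for the tail above `threshold`."""
    x = np.sort(sample)
    # Hazen plotting positions keep the empirical CCDF strictly positive at the largest point.
    ccdf = 1.0 - (np.arange(1, x.size + 1) - 0.5) / x.size
    mask = x > threshold
    slope, intercept = np.polyfit(np.log10(x[mask]), np.log10(ccdf[mask]), deg=1)
    return slope, intercept

# Synthetic heavy-tailed stand-in for an RV series (shape and scale are invented).
rng = np.random.default_rng(1)
rv = 10.0 * rng.pareto(2.5, size=5000) + 5.0
print(tail_slope(rv))  # slope close to the negative tail exponent of the synthetic data
```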
Our paper differs from the above references since it models the interaction among bidders who adopt online learning algorithms. In this sense, it is closer in spirit to the Economics literature on learning in games. In addition, unlike the present paper, the cited references assume that the winning bidder observes her valuations and payment in each period; some of these papers leverage insights that depend on specific auction format; and none of them allow for bidders targeting different clauses. | Finally, we mention equilibrium analyses of bidding and ad exchanges that provide results related to our simulation findings. Choi and Sayedi (2019) analyze auctions in which a new entrant’s click-through rate is not known to the publisher. Despotakis et al. (2021) demonstrates that, with competing ad exchanges, a multi-layered auction involving symmetric bidders can result in a scenario where first-price auctions, and soft floors in general, yield higher revenue than second-price auctions. When multiple slots are offered, Rafieian and Yoganarasimhan (2021) shows that, when advertisers can target their bids to specific placements in second-price auctions, total surplus increases, but the effect on publisher revenues is ambiguous, the key difference being that we consider a single slot with randomly arriving queries, rather than multiple slots that are often offered for sale. Nabi et al. (2022) propose a hierarchical empirical Bayes method that learns empirical meta-priors from the data in Bayesian frameworks. They further apply their approach in a contextual bandit setting, demonstrating improvements in performance and convergence time. | The second branch instead takes the perspective of a single bidder who uses learning algorithms to guide her bidding process. Weed et al. (2016) focus on second-price auctions for a single good, and assume that the valuation can vary either stochastically or adversarially in each auction. In a similar environment, Balseiro et al. (2018) and Han et al. (2020) study contextual learning in first-price auctions, where the context is provided by the bidder’s value. For auctions in which the bidder must learn her own value (as is often the case in the settings we consider), Feng et al. (2018) proposes an improved version of the EXP3 algorithm that attains a tighter regret bound. There is also a considerable literature that studies optimal bidding with budget and/or ROI constraints using reinforcement-learning: e.g., Wu et al. (2018), Ghosh et al. (2020), and references therein, and Deng et al. (2023). Golrezaei et al. (2021) also studies the interaction between a seller and a single, budget- and ROI-constrained buyer. | To the best of our knowledge, the closest papers to our own are Kanmaz and Surer (2020), Elzayn et al. (2022), Banchio and Skrzypacz (2022), and Jeunen et al. (2022). The first reports on experiments using a multi-agent reinforcement-learning model in simple sequential (English) auctions for a single object, with a restricted bid space. Our analysis focuses on simultaneous bidding in scenarios that are representative of actual online ad auctions. The second focuses on position (multi-slot) auctions and, among other results, reports on experiments using no-regret learning (specifically, the Hedge algorithm we also use) under standard generalized second-price and Vickerey-Groves-Clarke pricing rules. Our analysis is complementary in that we allow for different targeting clauses and more complex pricing rules such as “soft floors”. 
The third studies the emergence of spontaneous collusion in standard first- and second-price auctions, under the $Q$-learning algorithm (Watkins, 1989). The fourth describes a simulation environment similar to ours that is mainly intended to help train sophisticated bidding algorithms for advertisers. We differ in that we allow for bids broadly targeting multiple queries, and focus on learning algorithms that allow us to model auctions with a large number of bidders; in addition, we demonstrate how to infer values from observed bids.
| As noted in §1, our contribution here is twofold. First, we demonstrate through simulations that, with multi-query targeting, different auction formats—and in particular auctions with a soft floor—can yield different revenues, even if bidder types are drawn from the same distribution. Second, restricting attention to single-query auctions, we consider a collection of asymmetric bid distributions that complement those analyzed by Zeithammer (2019), and do not allow for analytical equilibrium solutions. For such environments, we demonstrate how a second-price auction with a suitably chosen reserve price yields higher revenue than a soft-floor auction.
| C
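Several cells in this row refer to no-regret bidding via the Hedge algorithm. A minimal multiplicative-weights update of that kind is sketched below; the bid grid, loss process, and learning rate are invented for illustration and are not taken from any of the cited papers.

```python
import numpy as np

def hedge_update(weights, losses, eta=0.1):
    """One Hedge (multiplicative-weights) step: down-weight actions that incurred higher loss."""
    w = weights * np.exp(-eta * losses)
    return w / w.sum()

# Toy illustration: three candidate bid levels with random per-round losses.
rng = np.random.default_rng(0)
w = np.ones(3) / 3
for _ in range(100):
    losses = rng.uniform(0.0, 1.0, size=3) * np.array([1.0, 0.6, 0.9])  # bid #2 is best on average
    w = hedge_update(w, losses)
print(w)  # most of the probability mass ends up on the second bid level
```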
In this model, individuals have to make decisions sequentially, without knowing their position in the sequence (position uncertainty), but are aware of the decisions of some of their predecessors by observing a sample of past play. In the presence of position certainty, those placed in the early positions of the sequence would want to contribute, in an effort to induce some of the other group members to co-operate (Rapoport and Erev, 1994), while late players would want to free-ride on the contributions of the early players. Nevertheless, if the agents are unaware of their position in the sequence, they would condition their choice on the average payoff from all potential positions, and they would be inclined to contribute so as to induce the potential successor to do so as well. G&M show that full contribution can occur in equilibrium: given appropriate values of the parameters of the game (i.e. the return from contributions), there exists an equilibrium where all agents contribute.
| The first thing to notice is the high value of $\beta$ for all treatments, ranging between 0.819 and 0.893. This parameter is always significant and different from 0.5 ($\beta$ values close to 0.5 indicate random behaviour, while values close to unity indicate almost deterministic behaviour). There is also little variation between treatments regarding the error probability. As a general overview of the results, the fraction of behaviour consistent with the G&M type ranges between 17.6% and 25.8% across all treatments (Footnote 25: This range increases to 20% and 26.1% when a heterogeneous error probability is assumed.), the fraction of the altruists between 8.7% and 40.6%, that of conditional co-operators between 24.2% and 66.9%, and a very small and insignificant proportion of subjects is classified as free riders. In particular, when there is position uncertainty ($T_1$ and $T_2$) the vast majority of the subjects are classified either as altruists or conditional co-operators: 76.1% (64.8%) in $T_1$ ($T_2$). This is in sharp contrast with the G&M model prediction that subjects will defect if they observe at least one defection in their sample. While the behaviour of a conditional co-operator is straightforward (i.e. reciprocate to the existing contribution), the behaviour of the altruist is worth further exploration. On top of altruistic motives, a potential explanation could be that subjects were trying to signal to members of the group placed later in the sequence, and motivate them to contribute. (Footnote 26: This is in line with Cartwright and Patel (2010) who use a sequential public goods game with exogenous ordering and show that agents early enough in the sequence who believe imitation to be sufficiently likely would want to contribute.) Support for the latter is given by the estimated proportion of altruists when the position is known ($T_3$), where it drops to virtually zero (7.2% and insignificant), where due to the Treatment's characteristics, contributing unconditionally does not seem to be an appealing strategy, as is the case when only the action of a past player is revealed (Treatment 2).
If subjects expect the last player in the sequence to defect, they lack the motivation to unconditionally contribute. On the contrary, the proportion of conditional co-operators rises to 66.9% when there is position certainty. The G&M model fits well for around 1/4 of the experimental population (25.8% in both $T_2$ and $T_3$), while it accounts for 17.6% of the subjects when the sample is equal to 2 and there is position uncertainty ($T_1$). A potential explanation of this drop could be linked to the strict prediction of the model that one should ignore any contributions in the sample and defect instead, if there is at least one defection. Finally, the proportion of free riders in the experimental population is virtually zero in $T_1$ and $T_2$, as the fraction of free riders is estimated to be very low and is always insignificant. This result is in line with Dal Bó and Fréchette (2019) who find that the strategies that represent less than 10 per cent of the data
| The main intuition behind the model is that an agent who observes a sample of past decisions of immediate predecessors, without defection, would decide to contribute, hoping to influence all her successors to do so. If instead she decides to defect, then all the successors are expected to defect as well. As there is position uncertainty, the agent deals with a trade-off between inducing contributions from the remaining players in the sequence and the cost of contributing. In their main theoretical result, G&M show that incentives off the equilibrium path largely depend on the sample size that the agents observe. When an agent observes a sample of more than one previous action that contains defection, there is no way this agent can prevent further defection by choosing to contribute. On the other hand, when the sample size is equal to one (only the decision of the previous player is observed), the agent can induce further contributions from the remaining agents in the sequence by deciding to contribute. The model predicts that when the sample size is equal to one there is a mixed strategy equilibrium that can lead to full contribution. Finally, when the agents are aware of their position in the sequence, the model predicts that full contribution will unravel, as late agents in the sequence will have an incentive to free-ride.
| A sketch of the intuition behind this follows. Consider an agent who observes a sample of full co-operation. According to the equilibrium definition, this occurs on the equilibrium path, and the agent can therefore infer that all the previous agents in the sequence contributed. By contributing herself, she knows that all subsequent players will contribute as well, and therefore her expected payoff from contributing will be equal to $r-1$, and this payoff is independent of the agent's beliefs regarding her position in the sequence. On the other hand, the payoff from defecting depends on these beliefs. Consider an agent who observes a sample of $m$. This agent can therefore infer that she is not placed in the first $m$ positions, and there are equal chances of her being in any of the positions in $\{m+1,\cdots,n\}$.
The expected position is $(n+m+1)/2$, which means that she can expect that $(n+m-1)/2$ agents have already contributed. The payoff from defecting is then equal to $(r/n)(n+m-1)/2$. Combining the two gives the condition that $r$ needs to satisfy in order for the agents to be willing to contribute whenever they observe full contribution samples (condition 1).
| In this model, individuals have to make decisions sequentially, without knowing their position in the sequence (position uncertainty), but are aware of the decisions of some of their predecessors by observing a sample of past play. In the presence of position certainty, those placed in the early positions of the sequence would want to contribute, in an effort to induce some of the other group members to co-operate (Rapoport and Erev, 1994), while late players would want to free-ride on the contributions of the early players. Nevertheless, if the agents are unaware of their position in the sequence, they would condition their choice on the average payoff from all potential positions, and they would be inclined to contribute so as to induce the potential successor to do so as well. G&M show that full contribution can occur in equilibrium: given appropriate values of the parameters of the game (i.e. the return from contributions), there exists an equilibrium where all agents contribute.
| B
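The comparison sketched in the first cell above (contribute and earn $r-1$ versus defect and earn $(r/n)(n+m-1)/2$ in expectation) is easy to check numerically. In the sketch below, the group size $n=4$ and sample size $m=1$ are arbitrary choices for illustration.

```python
from fractions import Fraction

def contribute_is_weakly_better(r, n, m):
    """Check r - 1 >= (r / n) * (n + m - 1) / 2, the comparison described in the row above."""
    payoff_contribute = r - 1
    expected_payoff_defect = r * Fraction(n + m - 1, 2 * n)
    return payoff_contribute >= expected_payoff_defect

# With n = 4 players and a sample of size m = 1, contributing pays off exactly when r >= 2.
for r in (Fraction(3, 2), Fraction(2), Fraction(3)):
    print(r, contribute_is_weakly_better(r, n=4, m=1))
```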
Further, by appropriately tuning the number of walkers, steps, and coin operator parameters, multi-SSQW can be optimized for swift convergence, thereby enhancing efficiency. It is important to note, however, that achieving efficiency necessitates a deep understanding of system dynamics and careful selection and tuning of parameters, a potentially challenging task.
| Uncertainty is an inherent feature of financial markets, manifesting itself in the unpredictability of the cognitive processes of market participants, their decision-making routines, and the general macroeconomic conditions and structural dynamics of the financial market, which directly influence asset valuation. This concept is strikingly similar to quantum theory. Quantum computing is particularly promising for the financial sector, which stands to benefit significantly from this innovation. This is primarily because many financial scenarios could be resolved using quantum algorithms. Our ambitions focus on developing quantum states that encapsulate the inherent uncertainties that pervade financial markets, which can be processed using quantum computers. Additionally, we aim to devise quantum algorithms to simulate the dynamic nature of financial systems.
| There appears to be an inherent trade-off between the quantum circuit's complexity and the optimizer's efficiency in Variational Quantum Algorithms (VQA). As we increase the depth and intricacy of our quantum
| The multi-SSQW framework leverages a dual-domain computational approach, involving a Parameterized Quantum Circuit (PQC) and a classical optimizer. The PQC is composed of n+1 qubits, one designated for coin space and the remainder for position space. This setup is employed to represent and emulate the distribution of short-term financial prices. To fine-tune these parameters in alignment with empirical data, the framework employs a classical optimizer. The optimizer implements the Constrained Optimization By Linear Approximation (COBYLA) algorithm to refine the trained results toward the target distribution. The COBYLA optimizer uses the mean-square error (MSE) and KL divergence as loss functions for an enhanced approach to the targeted distribution. This methodology allows for financial simulation and for preparing the probability data as a quantum state, effectively bridging the gap between classical finance models and quantum simulation.
| Figure 4: (a) Illustrates the multi-Split-Steps Quantum Walk (multi-SSQW) setup, showing the initial state and the sequence of operations modeling investors' decision-making. The quantum state evolves iteratively to achieve a targeted distribution, analyzed by a classical optimizer. (b) Depicts the quantum circuit of $\hat{S}_+$, controlling state incrementation. (c) Shows the quantum circuit of $\hat{S}_-$, managing state decrementation. This ensemble represents a quantum approach to simulating financial market dynamics.
| B
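The optimisation loop described in this row (a classical COBYLA optimizer pushing a parameterized distribution towards an empirical target by minimising a KL-type loss) can be mocked up without any quantum hardware. In the sketch below, a softmax over free parameters merely stands in for the probability distribution produced by the parameterized quantum circuit; the target histogram and all settings are invented for illustration and do not reproduce the multi-SSQW circuit itself.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import softmax

# Stand-in for an observed short-term price histogram (6 bins, made up).
target = np.array([0.05, 0.15, 0.30, 0.25, 0.15, 0.10])

def kl(p, q, eps=1e-12):
    """KL divergence used as the training loss."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def loss(theta):
    model = softmax(theta)  # placeholder for the distribution read out from the circuit
    return kl(target, model)

res = minimize(loss, x0=np.zeros(target.size), method="COBYLA", options={"maxiter": 2000})
print(kl(target, softmax(res.x)))  # close to zero after optimisation
```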
The constant coefficient is somewhat lower (indicating a 0.32 vs 0.56 delegation rate in trivial task) | The Table shows that the median investor can potentially recoup the low (10p) cost of delegation for simple tasks, and almost recoup it for complex tasks. For high cost (100p), however, delegation is potentially cost-effective only for the very most inefficient investors in the simple task, and for no investor in the complex task. | Our experiment is intended to assess four different motives for delegating investment decisions to experts. To distinguish among the motives, the experiment controls for the cost of delegation and for task complexity, as well as for the available information about experts. | As elaborated below in our survey of previous literature, we identify four possible motives for delegation: | The third section lays out our laboratory procedures and experiment design, and spells out the hypotheses on delegation motives that we will test. Results are collected in the following section, beginning with an overview using descriptive summary statistics. Later subsections present inferential statistics to identify the strength and significance of the various motives for delegation, and to analyze which characteristics matter for the selection of experts. | B |
We call $g$ turbulent if there exist three points, $x_1$, $x_2$, and $x_3$ in $I$ such that $g(x_2)=g(x_1)=x_1$ and $g(x_3)=x_2$ with either $x_1<x_3<x_2$ or $x_2<x_3<x_1$. Moreover, we call $g$ (topologically) chaotic if some iterate of $g$ is turbulent.
| First, we clarify what we mean by a Li-Yorke chaos, a turbulence, and a topological chaos. (There are several definitions of a chaos in the literature.) The following definitions are taken from [Ruette, 2017, Def. 5.1] and [Block and Coppel, 1992, Chap. II]. Let $g$ be a continuous map of a closed interval $I$ into itself:
| It is well-known that the existence of a period three cycle implies that of a Li-Yorke chaos (by the famous Li-Yorke theorem [Li and Yorke, 1975, Thm. 1]), and this argument has been used a lot in economic literature, see [Benhabib and Day, 1980], [Benhabib and Day, 1982], [Day and Shafer, 1985], [Nishimura and Yano, 1996] for example. However, this is a bit overkill: by [Block and Coppel, 1992, Chap. II], we know that the existence of a cycle of any odd length (not necessarily of period three) implies that of a Li-Yorke chaos. In the first part of this paper, extending Proposition 1.1, we obtain:
| It is known that a map $g$ is topologically chaotic if and only if $g$ has a periodic point whose period is not a power of 2, see [Block and Coppel, 1992, Chap. II]. This implies that a map $g$ is topologically chaotic if and only if the topological entropy of $g$ is positive, see [Block and Coppel, 1992, Chap. VIII]. In the first part of this paper, we focus on a topological chaos (or a positive topological entropy) in the context of price dynamics. See [Ruette, 2017] for more characterisations and (subtle) mutual relations of various kinds of chaos.
| Suppose that $g$ is $S$-unimodal, $g$ has no attracting periodic orbit, and the critical point $c$ is non-flat. Then $g$ has a unique acim $\zeta$ and $g$ is ergodic with respect to $\zeta$ if one of the following conditions is satisfied:
| C
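The definition of turbulence in the first cell can be verified concretely on a standard example that is not taken from the row itself: the full logistic map $g(x)=4x(1-x)$ on $[0,1]$ admits the witnesses $x_1=0$, $x_2=1$, $x_3=1/2$, so it is turbulent and hence topologically chaotic.

```python
def g(x):
    return 4.0 * x * (1.0 - x)

# Turbulence witnesses for the full logistic map on [0, 1]:
x1, x2, x3 = 0.0, 1.0, 0.5
assert g(x1) == x1 and g(x2) == x1   # g(x2) = g(x1) = x1
assert g(x3) == x2 and x1 < x3 < x2  # g(x3) = x2 with x1 < x3 < x2
print("g(x) = 4x(1-x) is turbulent, hence topologically chaotic.")
```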
We also present baseline data for latency and fill-rate of transactions. Latency and fill-rate measure how quickly and reliably can traders get into their desired positions. Our dataset indicates that roughly 90% of transactions wait less than 12 seconds before their signed transactions are confirmed in a block. As we move out farther in the distribution, waiting time gets much longer—the 99.5th percentile waiting time is more than 20 blocks. Of the USDC-WETH swaps in our dataset, fewer than 0.5% fail onchain. In PEPE-WETH, interface swaps saw an onchain fail rate of nearly 10% at launch in mid-April, dropping to 5% by the end of April, and approximately 3% in August. | Transaction costs on traditional markets. In contrast, a number of works have quantified overall trading costs on traditional markets (such as equities markets). We point to | Many works have assessed transaction costs (slippage, commission, broker fees, bid ask spreads, price impact) on traditional markets. | We compare the magnitude of the different transaction cost components: gas costs, slippage, LP fees, and the price impact of swaps. | A number of works measure the dynamics of liquidity provisioning on decentralized exchanges, sometimes through the lenses of transaction costs. [16] show that lower gas fees increase liquidity repositioning and concentration, reducing price impact for small trades. They use a notion of slippage that incorporates price impact, comparing realized prices to the market mid price. [27] show that higher LP fees may reduce price impact. | B |
These companies, and other centralized cryptoasset exchanges (CEXs) like FTX, fall under the broader definition of virtual asset service providers (VASPs). They facilitate financial activity involving virtual assets (VAs), such as their exchange for other VAs or fiat currencies, their custody and transfer via cryptoasset wallets, and portfolio management services for their customers (FMA, 2021; EC, 2018, 2022; FATF, 2021).
| Figure 9: Estimation of the bitcoin holdings in Euro of VASP-5. On-chain and off-chain data are comparable only in 2015 and 2016.
| As Figure 1 shows, VASPs lie at the interface of the traditional and the crypto financial ecosystems, respectively called off-chain and on-chain financial activity in jargon.
| Figure 4: Comparison of traditional financial intermediaries with VASPs. Circles on the left represent VASPs, divided into groups as described in Figure 3, while on the right are traditional financial intermediaries. Links point to the financial functions offered by each financial intermediary. VASPs are most similar to money exchanges, brokers, and funds, rather than banks. The colors in the circles highlight what traditional intermediary each group is most similar to.
| VASPs lie at the interface of the traditional and the crypto financial ecosystems. The former encompasses financial activity with fiat currencies, i.e., legal tender money, and fiat assets, i.e., assets denominated in fiat currencies (similarly to cryptoassets being assets denominated in a cryptocurrency). It can rely on commercial banks and other traditional financial intermediaries. The latter entails financial activity executed on Distributed Ledger Technologies (DLTs) like the Bitcoin and Ethereum blockchains, and with cryptoassets such as bitcoin, ether, and the stablecoins tether (USDT), USD coin (USDC), or DAI.
| B
The aim of this paper is to develop an algorithm that uses representations (5) and (6) to approximate the set of Nash equilibria of a non-cooperative game.
| To do so, we will focus for the remaining part of this paper on convex games satisfying the following assumption.
| Usually, when considering convex games, the focus is on providing the existence of a unique equilibrium point and developing methods for finding this particular equilibrium, see e.g. [21], where additional strong convexity conditions are assumed to guarantee uniqueness of the Nash equilibrium. In this paper, we will consider convex games without additional assumptions, hence allowing for games with a unique, several, or infinitely many equilibria. Our aim is to approximate the set of Nash equilibria for any desired error bound $\epsilon>0$.
| In the following, we will make the following assumption concerning the structure of the constraint set in addition to considering convex games (Assumption 4.1).
| In this paper, we will mainly focus on convex games with a shared constraint set $\mathbb{X}$.
| A
Table 8: Parameter estimation through the CLS for French (left) and Italian (right) energy log spot price.
| Third, we address the challenge of the joint temperature and log spot energy price distribution by proposing a coupled model on the dynamics. In particular, we introduce the Brownian noise of the temperature dynamics into the energy process. This allows the integration of weather information available at the time of price formation, as suggested by Benth and Meyer-Brandis [15], while maintaining flexibility and tractability in both processes. We estimate the marginals and dependence parameters of the joint model using Conditional Least Squares estimation applied to the characteristic function. $\chi^2$ tests comparing the simulated and observed joint distributions confirm the goodness of fit of the combined model for both French and Northern Italian datasets.
| We then turn to the goodness of fit of the estimated combined Model (ETM). Figures 11 and 12 represent the $\chi^2$ test performed between the historical and simulated distributions on the (2-dimensional) empirical copula between temperature and electricity spot price residuals. To ensure reliability of the results, we perform 1,000,000 simulations and rescale the frequencies to compare with observed frequencies. We can see that the test does not globally reject the null hypotheses, which means that the dependence is correctly reproduced by Model (ETM) for both French and North Italian data.
| Figure 11: From top left to bottom right, $\chi^2$ test performed on the distributions of real (blue) and simulated (green – based on 1,000,000 simulations) ranked residuals for 4 and 25 categories for French data.
| Figure 12: From top left to bottom right, chi-square test performed on the distributions of real (blue) and simulated (green – based on 1,000,000 simulations) ranked residuals for 4 and 25 categories for North Italian data.
| B
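The goodness-of-fit step described above (comparing observed and simulated ranked residuals over a small number of categories with a chi-square statistic) can be sketched as follows. The data are synthetic stand-ins, the 2 x 2 binning mirrors the 4-category case, and the rescaling of simulated frequencies to the observed sample size follows the description in the row; none of this reproduces the paper's actual numbers.

```python
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(0)

# Stand-ins for the ranked (uniform) temperature / price residuals.
obs = rng.uniform(size=(1000, 2))
sim = rng.uniform(size=(1_000_000, 2))

bins = np.linspace(0.0, 1.0, 3)  # 2 x 2 = 4 categories
obs_freq, _, _ = np.histogram2d(obs[:, 0], obs[:, 1], bins=[bins, bins])
sim_freq, _, _ = np.histogram2d(sim[:, 0], sim[:, 1], bins=[bins, bins])
expected = sim_freq / sim_freq.sum() * obs_freq.sum()  # rescale simulated frequencies to sample size

stat, pval = chisquare(obs_freq.ravel(), f_exp=expected.ravel())
print(stat, pval)  # a high p-value means the simulated dependence matches the observed one
```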
Perpetual swaps were introduced by BitMEX in 2016 [Hayes]. They are futures contracts with no expiry. These contracts allow for high leverage, with most cryptocurrency exchanges offering leverage in the range of 100x–125x and some recent platforms allowing up to 1000x(!) leverage (reference intentionally omitted, as trading with such high leverage, especially in volatile markets like cryptocurrencies, is ill-advised). 1000x leverage implies, in the best case, that the liquidation price is 10 basis points away from the entry price and more realistically 5 basis points considering fees. Interestingly, a reduction in allowed leverage, in retrospect, can be a sign that an exchange is in distress [Reynolds]. These contracts are designed to track an underlying exchange rate, e.g. BTC/USD, such that speculators can gain exposure to that underlying while holding a collateral of their choice (usually USDT). The first perpetual swap was what is commonly referred to today as inverse perpetual. In inverse perpetual contracts, profits and losses as well as margin are paid in the base asset, e.g., BTC for the BTC/USD inverse perpetual, while the price of the contract is quoted in units of the quote asset. Except for their use as an instrument for speculation, this kind of perpetual swap was originally also used as a tool to hedge exposure to the underlying. This can be achieved by opening a short position in the contract with 1x leverage. As the inverse perpetual is coin margined, every change in the price of the underlying is offset by the short position in the inverse perpetual, which results in a stable equity curve when denominated in units of the quote asset. Before USDT was accepted as the de facto stablecoin in cryptocurrency trading, this was the primary mechanism for traders to hedge their Bitcoin exposure. | Our datasets are comprised of tick-by-tick trades, block trades, liquidations, and open interest as reported by the APIs of the respective exchanges mentioned in Table 1. We limit our attention to Bitcoin linear perpetuals quoted in USDT (https://tether.to/en/) and inverse perpetuals quoted in USD, as these are the most liquid derivatives. We focus on two periods: i) 2023/01/01 to 2023/01/31 (period 1), which is the beginning of the year and is usually a period of naturally higher trading volume, and ii) 2023/07/01 to 2023/09/30 (period 2), containing most of the summer months of 2023 and September, as it is the most recent month prior to this work, which enables us to see if our observations are still pertinent in recent data. The infrastructure as well as the collected data are proprietary; however, in the interest of encouraging reproduction of this work, we offer a few suggestions on free and open source resources that can help in that respect (please see the Appendix for further details). | As stablecoins gained in popularity, inverse perpetuals ceded market dominance to the linear perpetual swap. These pay profits and losses in the quote asset and are margined by the same, which is most often some stablecoin for the US dollar. This is not unexpected, as for most people their numéraire is some form of FIAT currency, predominantly USD, especially when it comes to financial instruments. However, the benefits of the linear perpetual are mostly limited to their use as an instrument for speculation, given that these contracts are margined by the quote asset.
This margining implies the existence of a liquidation price which necessitates active monitoring and rebalancing if used as a hedge. In addition, most exchanges have a limit on the size of the position that can be opened on linear perpetuals, which reduces the capacity of a potential hedge. | The other possibility is that trading volume is accurate but the reported open interest is incorrect. Market participants do observe the changes in open interest and attempt to infer how informed investors are being positioned in the market. The general heuristic is that if open interest is rising and the price is increasing then the aggressors, presumably informed investors, are the buyers. If on the other hand open interest is rising and the price is falling, the aggressors would be the sellers. If, however, as the price increases open interest decreases, this presumably implies shorts covering (and implies the same for the longs when price is dropping). This heuristic may or may not be valid, but its validity is less important than whether market participants pay attention to it and whether some trade according to it. Given that exchanges generate revenue by extracting fees on traded volume, it is conceivable that they could potentially generate false signals, when the market has none to offer, by modulating open interest artificially, with the expectation that this would incentivize market participants to trade more. If that is what actually takes place, it would require those signals to be as clear and large in magnitude as possible, such that all market participants notice them. In this case we would expect $\mathbb{E}_{SP}[X_{TV}\,|\,X_{TV}>0]$ to be inflated, similar to what we see for ByBit, OKX, and Binance in Table 4 and Table 5 for all sub-periods. Although such activity would be almost impossible to prove beyond reasonable doubt, it is instructive to look at the $X_{TV}$ on the tick-by-tick level during an eventful day, e.g., a significant market decline. On August 17, 2023 the Bitcoin price crashed more than 10% during the US trading session. In Figure 1, we see this event as observed on ByBit, and in Figure 2 we see the same day as it unfolded on Kraken. In both plots the red lines represent the excess total variation observed, with the left axis measuring the magnitude of the excess in USD. The right axis represents the price, and the blue line is the last traded price at the time of the open interest update. Comparing the two figures it is immediately evident that on ByBit there seem to be almost no time intervals where $X_{TV}=0$, while on Kraken this condition holds almost for the entirety of the day. The few spikes in $X_{TV}$ on Kraken could conceivably be explained by delayed reporting of liquidations or open interest, as most of them are localized in time intervals with sharp price fluctuations. | Perpetual swaps were introduced by BitMEX in 2016 [Hayes]. They are futures contracts with no expiry.
These contracts allow for high leverage, with most cryptocurrency exchanges offering leverage in the range of 100x–125x and some recent platforms allowing up to 1000x(!) leverage (reference intentionally omitted, as trading with such high leverage, especially in volatile markets like cryptocurrencies, is ill-advised). 1000x leverage implies, in the best case, that the liquidation price is 10 basis points away from the entry price and more realistically 5 basis points considering fees. Interestingly, a reduction in allowed leverage, in retrospect, can be a sign that an exchange is in distress [Reynolds]. These contracts are designed to track an underlying exchange rate, e.g. BTC/USD, such that speculators can gain exposure to that underlying while holding a collateral of their choice (usually USDT). The first perpetual swap was what is commonly referred to today as inverse perpetual. In inverse perpetual contracts, profits and losses as well as margin are paid in the base asset, e.g., BTC for the BTC/USD inverse perpetual, while the price of the contract is quoted in units of the quote asset. Except for their use as an instrument for speculation, this kind of perpetual swap was originally also used as a tool to hedge exposure to the underlying. This can be achieved by opening a short position in the contract with 1x leverage. As the inverse perpetual is coin margined, every change in the price of the underlying is offset by the short position in the inverse perpetual, which results in a stable equity curve when denominated in units of the quote asset. Before USDT was accepted as the de facto stablecoin in cryptocurrency trading, this was the primary mechanism for traders to hedge their Bitcoin exposure. | B
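Two mechanical points in this excerpt, the leverage-to-liquidation distance and the 1x-short inverse-perpetual hedge, can be illustrated with a few lines of arithmetic. The sketch below uses simplified contract conventions (no fees, no funding payments, zero maintenance margin) that are assumptions, not any exchange's actual rules.

```python
# Illustrative sketch (simplified assumptions: no fees, no funding payments,
# zero maintenance margin) of two facts discussed above:
# 1) at L-times leverage, the liquidation price sits roughly 1/L away from entry;
# 2) a 1x short in a coin-margined (inverse) perpetual keeps equity stable in USD.

def liquidation_distance_bps(leverage: float) -> float:
    """Approximate distance from entry to liquidation, in basis points."""
    return 1.0 / leverage * 10_000

print(liquidation_distance_bps(1000))  # ~10 bps, before fees

def inverse_perp_hedge_equity_usd(entry_price: float, exit_price: float,
                                  btc_collateral: float) -> float:
    """Equity in USD from holding BTC and shorting the same notional of an
    inverse perpetual with 1x leverage. PnL of an inverse-perp short is paid
    in BTC: notional_usd * (1/exit - 1/entry)."""
    notional_usd = btc_collateral * entry_price
    pnl_btc = notional_usd * (1.0 / exit_price - 1.0 / entry_price)  # short side
    return (btc_collateral + pnl_btc) * exit_price

# Equity stays at the initial USD value regardless of where the price goes.
print(inverse_perp_hedge_equity_usd(30_000, 20_000, 1.0))  # 30000.0
print(inverse_perp_hedge_equity_usd(30_000, 45_000, 1.0))  # 30000.0
```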
Boghosian et al. in [7] established that the Gini coefficient is monotone under the dynamics of the continuum model Eq. 2 of the classical Yard-Sale Model. This result was shown both for the master equation and the resulting non-linear partial integro-differential Fokker-Planck equation first derived in [4]. Thus the Gini coefficient is a Lyapunov functional [21, 26] for the Yard-Sale Model. Chorro showed via a martingale convergence theorem argument in [18] that the finite-agent system likewise approaches oligarchy (in a probabilistic sense). Börgers and Greengard in [8] produced yet simpler proofs of wealth condensation for the classical finite-agent system. Börgers and Greengard [8] and Boghosian [6] have similar results for transactions that are biased in favor of the wealthier agent. Beyond the results for the Yard-Sale Model and its variants, Cardoso et al. in [12, 13] and F. Cao and S. Motsch in [11] have shown that wealth condensation is more likely the rule than the exception for more general unbiased binary exchanges. | We now turn to the main results of the paper: That the Gini coefficient, despite being monotonically increasing under the modified Yard-Sale Model dynamics, can have a non-trivial bound on its rate of change in time. This bound may be carried over into a bound on the value of the Gini coefficient at a future time. | The paper is organized as follows. The variant of the Yard-Sale Model on which the present paper focuses is motivated and defined in Section 2. The Gini coefficient is briefly reviewed in Section 3 with particular focus on its invariance under a normalization of the equations of motion. In Section 4 it is proven both that the Gini coefficient increases monotonically in time under the induced dynamics and that its rate of increase may be bounded. This result is then re-stated for a more general class of evolutionary models. The evolutionary, integro-differential PDE are numerically solved to demonstrate the bound holding in experiment. Plots and descriptions of the numerical method are included. The asymptotics of the modified system when a redistributive tax is incorporated are derived in Section 5 and shown to match the classical Yard-Sale Model with taxation. | Despite the many methods of showing that wealth condensation occurs, we are not aware of any explicit bounds on the rate of increase of the Gini coefficient under models for which the Gini coefficient increases monotonically. | We introduced a variant of the Yard-Sale Model for which the Gini coefficient of economic inequality monotonically increases under the resulting continuum dynamics yet the rate of change in time of the Gini coefficient permits an upper bound. The way in which this bound holds is similar to the entropy – entropy production bounds for nonlinear Fokker-Planck equations. In the econophysics case, the twin results of Corollary 4.3 and Theorem 4.5 may be interpreted as the adage wealth begets wealth but with the constraint that the accumulation of wealth into a small portion of society begins to limit how quickly more can be extracted from the poor. | C |
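A minimal finite-agent sketch can make the wealth-condensation behaviour discussed in this excerpt concrete. The code below is not the paper's continuum model or its modified variant; it simulates the standard unbiased Yard-Sale exchange rule (a fraction of the poorer agent's wealth changes hands on a fair coin flip, with the fraction and population size chosen arbitrarily) and tracks the Gini coefficient, which drifts upward as the cited results predict.

```python
# Minimal finite-agent sketch (not the paper's continuum model) of unbiased
# "Yard-Sale" exchanges: each transaction moves a fraction of the POORER
# agent's wealth to a fair-coin-flip winner, and the Gini coefficient is
# tracked as it drifts toward oligarchy.
import numpy as np

def gini(w: np.ndarray) -> float:
    """Gini coefficient of a non-negative wealth vector."""
    w = np.sort(w)
    n = w.size
    cum = np.cumsum(w)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

rng = np.random.default_rng(42)
wealth = np.ones(1000)          # everyone starts with equal wealth
beta = 0.1                      # fraction of the poorer agent's wealth at stake

for step in range(200_000):
    i, j = rng.choice(wealth.size, size=2, replace=False)
    stake = beta * min(wealth[i], wealth[j])
    if rng.random() < 0.5:
        wealth[i] += stake; wealth[j] -= stake
    else:
        wealth[i] -= stake; wealth[j] += stake
    if step % 50_000 == 0:
        print(step, round(gini(wealth), 3))
```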
A common approach to mitigate the curse of dimensionality is the regression-based Monte Carlo method, which involves simulating numerous paths and then estimating the continuation value through cross-sectional regression to obtain optimal stopping rules. [1] first used spline regression to estimate the continuation value of an option. Inspired by his work, [2] and [3] further developed this idea by employing least-squares regression. Presently, the Least Squares Method (LSM) proposed by Longstaff and Schwartz has become one of the most successful methods for pricing American options and is widely used in the industry. In recent years, machine learning methods have been considered as potential alternative approaches for estimating the continuation value. Examples include kernel ridge regression [4, 5], support vector regression [6], neural networks [7, 8], regression trees [9], and Gaussian process regression [10, 11, 12]. In subsequent content, we refer to algorithms that share the same framework as LSM but may utilize different regression methods as Longstaff-Schwartz algorithms. Besides estimating the continuation value, machine learning has also been employed to directly estimate the optimal stopping time [13] and to solve high-dimensional free boundary PDEs for pricing American options [14]. | Inspired by a version of Regression Monte Carlo Methods, proposed by [10], which applied GPR to estimate the continuation value within the Longstaff-Schwartz algorithm in low-dimensional cases, we aim to extend the application of GPR to the pricing of high-dimensional American options. Prior to this, there are two potential challenges in high-dimensional scenarios. The first challenge is the unreliability of the Euclidean metric in high-dimensional spaces. Common kernels, such as the Radial Basis Function (RBF) kernel, employ the Euclidean metric to measure the similarity between two inputs; however, this may not be optimal for high-dimensional data [24]. | In this work, we will apply a deep learning approach based on Gaussian process regression (GPR) to the high-dimensional American option pricing problem. The GPR is a non-parametric Bayesian machine learning method that provides a flexible solution to regression problems. Previous studies have applied GPR to directly learn the derivatives pricing function [15] and subsequently compute the Greeks analytically [16, 17]. This paper focuses on the adoption of GPR to estimate the continuation value of American options. [10] initially integrated GPR with the regression-based Monte Carlo methods, and testing its efficacy on Bermudan options across up to five dimensions. [11] further explored the performance of GPR in high-dimensional scenarios through numerous numerical experiments. They also introduced a modified method, the GPR Monte Carlo Control Variate method, which employs the European option price as the control variate. Their method adopts GPR and a one-step Monte Carlo simulation at each time step to estimate the continuation value for a predetermined set of stock prices. In contrast, our study applies a Gaussian-based method within the Longstaff-Schwartz framework, requiring only a global set of paths and potentially reducing simulation costs. Nonetheless, direct integration of GPR with the Longstaff-Schwartz algorithm presents several challenges. 
First, GPR’s computational cost is substantial when dealing with large training sets, which are generally necessary to achieve a reliable approximation of the continuation value in high dimensional cases. Second, GPR may struggle to accurately estimate the continuation value in high-dimensional scenarios, and we will present a numerical experiment to illustrate this phenomenon in Section 5. | To overcome this issue, we incorporate the Deep Kernel Learning (DKL) method, introduced by [18, 19], which employs a deep neural network to learn a non-Euclidean metric for the kernel. The second challenge is the significant computational cost associated with processing numerous simulated paths. A large training set is generally required to achieve a reliable approximation in high-dimensional option pricing, and the computational complexity of GPR is $O(N^{3})$, where $N$ is the size of the training set. Given the substantial computational expense, we consider the adoption of a sparse variational Gaussian process. For further details on sparse variational Gaussian processes, we refer to [20, 21, 25]. In the subsequent two subsections, we will demonstrate how these machine learning techniques can be integrated into the Longstaff-Schwartz algorithm. | A common approach to mitigate the curse of dimensionality is the regression-based Monte Carlo method, which involves simulating numerous paths and then estimating the continuation value through cross-sectional regression to obtain optimal stopping rules. [1] first used spline regression to estimate the continuation value of an option. Inspired by his work, [2] and [3] further developed this idea by employing least-squares regression. Presently, the Least Squares Method (LSM) proposed by Longstaff and Schwartz has become one of the most successful methods for pricing American options and is widely used in the industry. In recent years, machine learning methods have been considered as potential alternative approaches for estimating the continuation value. Examples include kernel ridge regression [4, 5], support vector regression [6], neural networks [7, 8], regression trees [9], and Gaussian process regression [10, 11, 12]. In subsequent content, we refer to algorithms that share the same framework as LSM but may utilize different regression methods as Longstaff-Schwartz algorithms. Besides estimating the continuation value, machine learning has also been employed to directly estimate the optimal stopping time [13] and to solve high-dimensional free boundary PDEs for pricing American options [14]. | B
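For readers unfamiliar with the Longstaff-Schwartz framework referenced throughout this excerpt, a stripped-down least-squares Monte Carlo (LSM) implementation is sketched below for a one-dimensional Bermudan put under geometric Brownian motion. It uses a plain cubic polynomial regression for the continuation value rather than the GPR/DKL machinery discussed in the excerpt, and all model parameters are arbitrary.

```python
# A stripped-down Longstaff-Schwartz (LSM) sketch for a 1-D Bermudan put under
# geometric Brownian motion, using a cubic polynomial regression for the
# continuation value. Parameters are arbitrary; this is the classical LSM idea
# referenced above, not the GPR/DKL variant studied in the excerpt.
import numpy as np

def lsm_bermudan_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)

    # Simulate GBM paths at the exercise dates dt, 2*dt, ..., T.
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)

    # Backward induction: cashflows start at the terminal payoff.
    cash = np.maximum(K - S[:, -1], 0.0)
    for t in range(n_steps - 2, -1, -1):
        cash *= disc                                  # discount one period
        itm = K - S[:, t] > 0.0                       # regress on in-the-money paths only
        if not itm.any():
            continue
        x = S[itm, t]
        coeffs = np.polyfit(x, cash[itm], deg=3)      # continuation value ~ cubic in S
        continuation = np.polyval(coeffs, x)
        exercise = K - x
        ex_now = exercise > continuation
        idx = np.where(itm)[0][ex_now]
        cash[idx] = exercise[ex_now]                  # exercise replaces future cashflow
    return disc * cash.mean()                         # discount from the first date to t=0

print(round(lsm_bermudan_put(), 3))
```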
Hence, since the comonotonic coupling is optimal for submodular cost functions [24], the result follows. | The optimal coupling of the MK minimisation problem induced by any consistent scoring function for the entropic risk measure is the comonotonic coupling. | The comonotonic coupling is also optimal when the cost function is a score that elicits the $\alpha$-expectile. | The optimal coupling of the MK minimisation problem induced by the scoring function given in (14) is the comonotonic coupling. | The optimal coupling of the MK minimisation problem induced by the score given in (11) is the comonotonic coupling. | B
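The optimality of the comonotonic coupling for submodular costs, the fact invoked in this excerpt, can be checked numerically on samples. The sketch below uses $c(x,y)=(x-y)^2$ as a simple submodular stand-in for the scores mentioned above and compares the comonotone (sorted) pairing with a random pairing; the marginal distributions are arbitrary choices.

```python
# Small numerical illustration of the general fact used above: for a submodular
# cost, here c(x, y) = (x - y)**2 as a stand-in for the scores in the excerpt,
# pairing samples comonotonically (sort both marginals) yields a lower average
# cost than random pairings.
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)   # one marginal
y = rng.gamma(shape=2.0, scale=1.0, size=10_000)      # another marginal

def avg_cost(xs, ys):
    return np.mean((xs - ys) ** 2)

comonotone = avg_cost(np.sort(x), np.sort(y))
random_pairing = avg_cost(x, rng.permutation(y))
print(f"comonotone: {comonotone:.4f}  random: {random_pairing:.4f}")
# The comonotone value is the smallest over all pairings of these two samples.
```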
A DF strategy $(\Pi^{*,i},C^{*,i})_{i=1}^{n}\in\mathcal{S}^{n}$ is said to be a DF equilibrium strategy if for every $(t_{0},x_{0})\in[0,T)\times\mathbb{R}^{n}$, the corresponding outcome $(\pi^{*,i},c^{*,i})_{i=1}^{n}\in\mathcal{A}^{n}_{t_{0}}$ is an open-loop equilibrium control with respect to the initial condition $(t_{0},x_{0})$. Moreover, a DF equilibrium strategy $(\Pi^{*,i},C^{*,i})_{i=1}^{n}$ is said to be simple if the DF strategy $(\Pi^{*,i},C^{*,i})_{i=1}^{n}$ is simple. | the discount function only needs to satisfy some weak conditions; see Remark 2.1 in Section 2. Meanwhile, we adopt the asset specialization framework in [23, 7] with a common noise. A lot of the literature on equilibrium strategies under non-exponential discounting suggests that the investment strategy is independent of the discount function, which indicates that time-inconsistency does not influence the agent's portfolio policy; see, e.g., [26, 6]. Thus, instead of portfolio games, we incorporate consumption in the same spirit as [22] and focus on CARA relative performance utility. As opposed to the previous works, the presence of non-exponential discounting gives rise to time-inconsistency, which induces the failure of the principle of optimality. In order to resolve time-inconsistency, we replace each agent's optimal strategy in the time-consistent setting by its open-loop equilibrium (consistent) strategy in the time-inconsistent setting. As a result, there are two levels of game-theoretic reasoning intertwined. (1) The intra-personal equilibrium among the agent's current and future selves. (2) The equilibrium among the $n$ agents.
For tractability, we search only for DF equilibrium strategies, see 2.5. To construct the DF equilibrium strategy, we first characterize the open-loop consistent control for each single agent, given an arbitrary (but fixed) choice of competitors' controls, by an FBSDE system. By assuming that all the other agents choose DF strategies, see Section 2, we derive a closed-form representation of the strategy via a PDE system. We then construct the desired equilibrium by solving a fixed point problem. As for the MFG, the solution technique is analogous to the $n$-agent setting and the resulting MFE takes similar forms as its $n$-agent counterparts. | For the sake of brevity we use the name "equilibrium strategy", but it is different from the "closed-loop equilibrium strategies" or "subgame-perfect equilibrium strategies" discussed in the literature on time-inconsistent problems. A more appropriate name would be a "DF representation of open-loop equilibrium controls". | The contributions of our paper are as follows: first, as far as we know, our work is the first paper to incorporate consumption into a CARA portfolio game with relative performance concerns. Portfolio games under relative consumption are generally underexplored, with the exception of [22], which focuses solely on CRRA utilities under a zero discount rate. Our paper fills this gap in the literature. Second, our work can be viewed as a game-theoretic extension of the exponential utility case presented in [3]. In the special case of a single stock, the DF equilibrium strategy takes the same form as the open-loop equilibrium in [3], but with a modified risk tolerance. This effective risk tolerance parameter has already appeared in some works on a similar topic but with a time-consistent model; see, e.g., [23, 19]. Moreover, in the case with a constant discount rate, the equilibrium reduces to the solution of the classical Merton problem, which indicates that the equilibrium concept in our paper is the natural extension of equilibrium in the classical time-consistent setting to the time-inconsistent setting. Third, our work also provides a new explicitly solvable mean field game model. Since the pioneering works by [24, 21], MFGs have been actively studied and widely applied in economics, finance and engineering. To name a few recent developments in theories and applications, we refer to [12, 8, 13, 9, 10] among others. However, few studies combine MFGs with time-inconsistency, except for some linear quadratic examples; see, e.g., [28, 4]. Our result adds a new explicitly solvable non-LQ example to the intersection of these two fields. | There are many important problems in mathematical finance and economics incurring time-inconsistency, for example, the mean-variance selection problem and the investment-consumption problem with non-exponential discounting. The main approaches to handle time-inconsistency are to search for, instead of optimal strategies, time-consistent equilibrium strategies within a game-theoretic framework. Ekeland and Lazrak [14] and Ekeland and Pirvu [15] introduce the precise definition of the equilibrium strategy in a continuous-time setting for the first time. Björk et al. [5] derive an extended HJB equation to determine the equilibrium strategy in a Markovian setting. Yong [30] introduces the so-called equilibrium HJB equation to construct the equilibrium strategy in a multi-person differential game framework with a hierarchical structure.
The solution concepts considered in [5, 30] are closed-loop equilibrium strategies, and the methods to handle time-inconsistency are extensions of the classical dynamic programming approaches. In contrast to the aforementioned literature, Hu et al. [20] introduce the concept of open-loop equilibrium control by using a spike variation formulation, which is different from the closed-loop equilibrium concepts. The open-loop equilibrium control is characterized by a flow of FBSDEs, which is deduced by a duality method in the spirit of Peng's stochastic maximum principle. Some recent studies devoted to the open-loop equilibrium concept can be found in [2, 3, 29, 18]. Specifically, Alia et al. [3], closely related to our paper, study a time-inconsistent investment-consumption problem under a general discount function, and obtain an explicit representation of the equilibrium strategies for some special utility functions, which is different from most of the existing literature on the time-inconsistent investment-consumption problem, where the feedback equilibrium strategies are derived via several complicated nonlocal ODEs; see, e.g., [26, 6]. | B
Importantly, the data by the relays is self-reported and thus requires some trust in the relays, as it cannot be fully verified. However, we combine our Ethereum blockchain reward data set with our PBS data set to verify that the block values reported by the relays correspond to those received by the validators. If this is not the case, we disregard any relay bid data for that block. | Figure 1: PBS scheme visualization. In step (1), a searcher sends (bundles of) transactions to one or many builders privately; the transactions included in the bundle can be from the public Ethereum mempool, private order flow or from the searcher itself. The builder then builds a high-value block with bundles received from searchers, as well as transactions from the public mempool or private order flow. In step (2), the builder sends the block to the relay. The relay checks that the block complies with its policies. Then, if requested by the proposer, the relay passes the highest valid bid and corresponding block header on to the proposer in step (3). The proposer chooses the highest value block amongst the blocks received from the relays, signs the header, and returns it to the relay. This prompts the relay to reveal the full block and broadcast it to the public Ethereum network in step (4). | Note that most trades buy ETH or BTC for a stablecoin. As we saw in Figure 4, the prices of these two cryptocurrencies rose on Binance.com, and they were thus trading for cheaper on DEXes at the beginning of the block. For example, if we look at the price at the beginning of the block, we see that the price of the ETH-USDT Uniswap V3 pool was 1563.57, and at the end of the block, it was 1574.15. The Binance.com price was 1564.61 at the beginning of the block's slot, while it rose to 1574.63 by the time the block was proposed. Notice that the arbitrageurs drove the price almost exactly to the off-chain price, as we show to be optimal in our previous analysis (cf. Section 4.1). Additionally, some transactions exchange cryptocurrencies for cryptocurrencies (i.e., no stablecoins), such as the transactions at indices 2 and 6, which exchange ETH for BTC. The relative price increase measured in USDT of BTC was higher than that of ETH. Thus, BTC relative to ETH was also available for cheaper on DEXes at the beginning of the block. Hence, the searcher also swaps ETH for BTC. | Heuristic 2. The transaction is private, i.e., it did not enter the mempool before the block was propagated. | To obtain block times, i.e., the time at which a block was first seen in the network, and to identify private transactions, we use Ethereum network data from the Mempool Guru project [10]. The Mempool Guru project runs geographically distributed Ethereum nodes. Each node records the timestamp at which it first saw any block or transaction. The project's data collection method is detailed in [11]. We take the earliest timestamp across the nodes operated by the Mempool Guru project as the block time and consider a transaction private if no node saw the transaction before the block it is included in was seen in the network. | D
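The relay-verification step described at the start of this excerpt amounts to a simple cross-check between two data sources. The sketch below is a toy version of that filter; the dict layout, field names, and numbers are hypothetical and do not reflect any relay's actual API schema.

```python
# Toy sketch of the consistency check described above: relay-reported block
# values are kept only when they match the reward actually received by the
# validator on-chain. The dict layout and numbers here are hypothetical.
from typing import Dict

def filter_relay_bids(relay_values: Dict[int, float],
                      onchain_rewards: Dict[int, float],
                      tolerance_eth: float = 1e-9) -> Dict[int, float]:
    """Return only the relay entries whose reported value matches the on-chain
    proposer reward for the same block number (within a small tolerance)."""
    verified = {}
    for block_number, reported_value in relay_values.items():
        observed = onchain_rewards.get(block_number)
        if observed is not None and abs(observed - reported_value) <= tolerance_eth:
            verified[block_number] = reported_value
    return verified

relay_values = {17_000_001: 0.051, 17_000_002: 0.120}   # hypothetical, in ETH
onchain_rewards = {17_000_001: 0.051, 17_000_002: 0.080}
print(filter_relay_bids(relay_values, onchain_rewards))  # block 17_000_002 is dropped
```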
$\text{Metrics}_{S,t}$ | : Macroeconomic summary at time $t$. | for stock $S$ at time $t$. | : Performance metrics for stock $S$ at time $t$. | at time $t$. | C
The outcome is a regime-switching process reminiscent of the model proposed by Buffington and Elliott (2002), enriched with randomisation features. The primary focus of this section is on deriving the characteristic function of our model, which constitutes the main result. | In the following, we model the randomised component processes without time-dependence in the coefficients and such that their conditional versions are Lévy processes, which preserves stationarity and greatly simplifies the proof given. | This construction, coupled with the assumption of independence between stochastic drivers and randomisers, establishes a valuable link between the randomised processes, with coefficients influenced by random variables $\vartheta_{j}$, and what we denote their associated conditional processes. In these conditional processes, the coefficients are contingent on real numbers $\theta_{j}$, which we interpret as realizations of the randomisers, $\theta_{j}=\vartheta_{j}(\omega^{*})$ for realisations $\omega^{*}\in\Omega^{*}$. Since the conditional processes lack the additional layer of randomness, conventional results for stochastic processes can be applied to illuminate the characteristics of the randomised processes. | In addition, we define the conditional component processes $Y^{\theta}_{j}(t)$ as the stochastic processes obtained when a realisation $\theta_{j}=\vartheta_{j}(\omega^{*})$ is given, hence these processes do not exhibit randomness in their coefficients and can be used to explore characteristics of the randomised component processes. | In this section, we define the framework of random coefficients and switches between regimes. Randomness in the coefficients leads to what we denote as randomised processes, which are stochastic processes that incorporate an additional source of randomness, influencing both their volatility and drift coefficients. For each path of a randomised process, a sample is drawn from a random variable immediately after the initial time, which henceforth influences the dynamics of this path. | A
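The path-wise randomisation described in this excerpt, where a coefficient is drawn once per path and then drives that path's conditional dynamics, can be mimicked in a few lines of Monte Carlo. In the sketch below, the two-point "regime" distribution for the volatility coefficient and all numerical values are arbitrary illustrations, not the model of the excerpt.

```python
# Toy Monte Carlo illustration of path-wise randomisation as described above:
# for each path a coefficient theta is drawn once at the start and thereafter
# drives the volatility of that path's (otherwise ordinary) Brownian dynamics.
# The two-point distribution of theta and all numbers are arbitrary choices.
import numpy as np

rng = np.random.default_rng(7)
n_paths, n_steps, dt = 10_000, 252, 1.0 / 252
mu = 0.02

# Randomiser: each path gets a "regime" volatility drawn once (calm vs. turbulent).
theta = rng.choice([0.10, 0.35], size=n_paths, p=[0.7, 0.3])

# Conditional dynamics given theta: arithmetic Brownian motion with volatility theta.
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
X_T = mu * 1.0 + (theta[:, None] * dW).sum(axis=1)   # value at T = 1 year

# The unconditional law of X_T is a mixture of the two conditional Gaussian laws.
print("std of X_T:", X_T.std())
print("mixture std:", np.sqrt(0.7 * 0.10**2 + 0.3 * 0.35**2))
```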
We calibrate the HW model at the ATM implied volatility of a caplet with the same duration, for which we use the analytical implied volatility formula of Turfus and Romero-Bermúdez [2023]. | We also compare the 3M SOFR futures convexity of Eq. (5.8) with the 3M Eurodollar convexity, where both include the effects of option smile and skew. For more details about the calculation of the Eurodollar convexity, see Appendix D. | Finally, we show the difference between the convexity of 3M SOFR futures and the convexity of the Eurodollar future in Fig. 3. | In addition, we compare these results with the convexity calculated in the Hull-White model, which neglects the effects of market smile and skew, see Appendix E for more details. | Figure 3: Convexity of 3M-Eurodollar minus the convexity of 3M-SOFR contracts, both incorporating option smile and skew. | A |
∗ $p<0.05$, ∗∗ $p<0.01$, ∗∗∗ $p<0.001$ | For overtreatment, we document all relevant regression analyses in the Appendix. As expected, consumers experience more overtreatment when approaching an expert with $\mathbf{P^{m}}$ or $\mathbf{P^{e}}$ in Phase 1 (Table 10). For Phase 2, the results are similar, albeit less strong. In Skill but not Algorithm, consumers who approach an investing expert are also significantly less likely to be overtreated. Focusing on expert choices confirms the strong negative effect of $\mathbf{P^{s}}$ on overtreatment compared to both $\mathbf{P^{m}}$ and $\mathbf{P^{e}}$. Overall the results are in line with theory such that consumers should expect $\mathbf{P^{s}}$-experts to be much more likely to undertreat them. Across all rounds and conditions, intention-to-undertreat rates are 40% under $\mathbf{P^{s}}$, and only 12% and 11% for $\mathbf{P^{m}}$ and $\mathbf{P^{e}}$ respectively. On the other hand, intention-to-overtreat rates are only 22% under $\mathbf{P^{s}}$, and 48%/43% for $\mathbf{P^{e}}$/$\mathbf{P^{m}}$. Investment choices generally do not affect cheating intentions. | We now look at undertreatment from the expert's perspective. Each round, experts receive three diagnoses, and make three treatment choices. Because of diagnostic uncertainty, an expert may intend to undertreat a consumer, but does not actually do so because of a wrong diagnosis. Therefore, we define the intent-to-undertreat as experts who diagnose a big problem, but prescribe the LQT, irrespective of the consumer's actual underlying condition. The results (Table 6) confirm that across all treatments, experts' cheating behavior is strongly influenced by their chosen price menu. Experts intend to undertreat consumers significantly more often under $\mathbf{P^{s}}$, and significantly less often under the two other menus. Investment choices are largely irrelevant. There are no differences between expert types. | Expert Price Setting. Figure 6 shows the share of experts choosing a specific price vector depending on type, treatment and investment. We are primarily interested in potential strategic differences between high-ability experts who invest and those who forego investments. There are two potential effects. One, an expert may condition their pricing strategy on their current investment choice (within-subject variance).
For instance, the standard model predicts a low-ability expert to change their pricing strategy from $\mathbf{P^{m}}$ to $\mathbf{P^{e}}$ after investing. However, generating the necessary variance implies that an expert switches between different investment and pricing strategies, which is largely inconsistent with signalling behavior. Therefore, we would expect high-ability experts in Algorithm to exhibit lower within-subject variance. Second, experts who invest a lot may differ from experts who do not invest or only invest occasionally (between-subject variance). This would, e.g., be consistent with some proportion of high-ability experts foregoing the investment choice to signal their type. We first focus on the second case. Table 3 shows random effects panel regression results for price setting conditional on the number of rounds an expert chooses to invest. For low-ability experts, and Skill treatments, there are no differences. The number of investment choices does not predict price setting in either condition. For high-ability experts, there are large and significant effects of investing frequency on $\mathbf{P^{s}}$ and $\mathbf{P^{e}}$ in the Algorithm treatment. Those who often rent the algorithmic decision aid are more likely to choose $\mathbf{P^{s}}$ – the price menu that (1) has the highest consumer prices, (2) exhibits the highest expected value for experts and (3) splits the gains of trade relatively equally if experts behave honestly. On the other hand, high-ability experts who never or rarely invest into the decision aid are more likely to choose $\mathbf{P^{e}}$ – the price menu that maximizes expected consumer income if they approach a non-investing high-ability expert and cannot be imitated by low-ability experts. | Consumer Choices. How do consumers react to undertreatment with diagnostic uncertainty? Table 7 shows that experiencing undertreatment strongly predicts consumer switching in the following round. This holds for both phases and treatments. Overtreatment appears to have no or only little effect on consumer choices. Furthermore, consumers do not appear to condition their switching decision on the expert's chosen price menu. Yet, looking at the number of consumers an expert attracts confirms that the former do exhibit strong preferences against $\mathbf{P^{s}}$, and for $\mathbf{P^{m}}$ (Table 11 in the Appendix). In line with consumer switching, undertreating consumers in the previous round also significantly reduces an expert's current number of consumers. Investments do not meaningfully affect an expert's ability to attract consumers. This may be one reason why we find consistent under-investment by experts.
Finally, a three-way interaction of expert type, the expert's investment choice, and $\mathbf{P^{e}}$ suggests that high-ability experts in Algorithm attract significantly more consumers when they choose $\mathbf{P^{e}}$ and simultaneously forego the algorithmic decision aid. This result is in line with our theoretical prediction, and does not hold for either Skill or low-ability experts, nor for any other price menu. Thus, it provides supportive evidence for the hypothesis that high-ability experts may strategically alter their utilization of costly decision aids under obfuscation to influence consumer beliefs about their skill type. | B
- Downloads last month: 41
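Since the preview rows above come from a Hugging Face dataset page, a minimal loading-and-inspection sketch may be useful. The repository id below is a placeholder (the actual id is not shown in this excerpt), so it must be replaced before running.

```python
# Minimal sketch of loading and inspecting this dataset with the Hugging Face
# `datasets` library. The repository id below is a placeholder -- substitute
# the actual "<user>/<dataset>" id from the page, which is not shown here.
from datasets import load_dataset

ds = load_dataset("user/quant-finance-mcq", split="train")  # hypothetical repo id
print(ds)                 # number of rows and column names
example = ds[0]
for key, value in example.items():
    preview = str(value)[:120]
    print(f"{key}: {preview}")
```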