id (int64, 28.8k-36k) | category (stringclasses, 3 values) | text (stringlengths, 44-3.03k) | title (stringlengths, 10-236) | published (stringlengths, 19) | author (stringlengths, 6-943) | link (stringlengths, 66-127) | primary_category (stringclasses, 62 values) |
---|---|---|---|---|---|---|---|
35,800 | th | We propose a novel model for refugee housing respecting the preferences of the
accepting community and the refugees themselves. In particular, we are given a
topology representing the local community, a set of inhabitants occupying some
vertices of the topology, and a set of refugees that should be housed on the
empty vertices of the graph. Both the inhabitants and the refugees have preferences
over the structure of their neighbourhood.
We are specifically interested in the problem of finding housings such that
the preferences of every individual are met; in game-theoretic terms, we
are looking for housings that are stable with respect to some well-defined
notion of stability. We investigate conditions under which the existence of
equilibria is guaranteed and study the computational complexity of finding such
a stable outcome. As the problem is NP-hard even in very simple settings, we
employ the parameterised complexity framework to give a finer-grained view on
the problem's complexity with respect to natural parameters and structural
restrictions of the given topology. | Host Community Respecting Refugee Housing | 2023-02-27 20:42:03 | Dušan Knop, Šimon Schierreich | http://arxiv.org/abs/2302.13997v2, http://arxiv.org/pdf/2302.13997v2 | cs.GT |
35,885 | th | There is increasing regulatory interest in whether machine learning
algorithms deployed in consequential domains (e.g. in criminal justice) treat
different demographic groups "fairly." However, there are several proposed
notions of fairness, typically mutually incompatible. Using criminal justice as
an example, we study a model in which society chooses an incarceration rule.
Agents of different demographic groups differ in their outside options (e.g.
opportunity for legal employment) and decide whether to commit crimes. We show
that equalizing type I and type II errors across groups is consistent with the
goal of minimizing the overall crime rate; other popular notions of fairness
are not. | Fair Prediction with Endogenous Behavior | 2020-02-18 19:07:25 | Christopher Jung, Sampath Kannan, Changhwa Lee, Mallesh M. Pai, Aaron Roth, Rakesh Vohra | http://arxiv.org/abs/2002.07147v1, http://arxiv.org/pdf/2002.07147v1 | econ.TH |
35,801 | th | In decentralized finance ("DeFi"), automated market makers (AMMs) enable
traders to programmatically exchange one asset for another. Such trades are
enabled by the assets deposited by liquidity providers (LPs). The goal of this
paper is to characterize and interpret the optimal (i.e., profit-maximizing)
strategy of a monopolist liquidity provider, as a function of that LP's beliefs
about asset prices and trader behavior. We introduce a general framework for
reasoning about AMMs based on a Bayesian-like belief inference framework, where
LPs maintain an asset price estimate. In this model, the market maker (i.e.,
LP) chooses a demand curve that specifies the quantity of a risky asset to be
held at each dollar price. Traders arrive sequentially and submit a price bid
that can be interpreted as their estimate of the risky asset price; the AMM
responds to this submitted bid with an allocation of the risky asset to the
trader, a payment that the trader must pay, and a revised internal estimate for
the true asset price. We define an incentive-compatible (IC) AMM as one in
which a trader's optimal strategy is to submit its true estimate of the asset
price, and characterize the IC AMMs as those with downward-sloping demand
curves and payments defined by a formula familiar from Myerson's optimal
auction theory. We generalize Myerson's virtual values, and characterize the
profit-maximizing IC AMM. The optimal demand curve generally has a jump that
can be interpreted as a "bid-ask spread," which we show is caused by a
combination of adverse selection risk (dominant when the degree of information
asymmetry is large) and monopoly pricing (dominant when asymmetry is small).
This work opens up new research directions into the study of automated exchange
mechanisms through the lens of optimal auction theory and iterative belief
inference, using tools of theoretical computer science in a novel way. | A Myersonian Framework for Optimal Liquidity Provision in Automated Market Makers | 2023-03-01 06:21:29 | Jason Milionis, Ciamac C. Moallemi, Tim Roughgarden | http://dx.doi.org/10.4230/LIPIcs.ITCS.2024.80, http://arxiv.org/abs/2303.00208v2, http://arxiv.org/pdf/2303.00208v2 | cs.GT |
35,802 | th | This paper investigates the moral hazard problem in finite horizon with both
continuous and lump-sum payments, involving a time-inconsistent sophisticated
agent and a standard utility maximiser principal. Building upon the so-called
dynamic programming approach in Cvitanić, Possamaï, and Touzi [18] and the
recently available results in Hernández and Possamaï [43], we present a
methodology that covers the previous contracting problem. Our main contribution
consists in a characterisation of the moral hazard problem faced by the
principal. In particular, it shows that under relatively mild technical
conditions on the data of the problem, the supremum of the principal's expected
utility over a smaller restricted family of contracts is equal to the supremum
over all feasible contracts. Nevertheless, this characterisation yields, as far
as we know, a novel class of control problems that involve the control of a
forward Volterra equation via Volterra-type controls, and infinite-dimensional
stochastic target constraints. Despite the inherent challenges associated with
such a problem, we study the solution under three different specifications of
utility functions for both the agent and the principal, and draw qualitative
implications from the form of the optimal contract. The general case remains
the subject of future research. | Time-inconsistent contract theory | 2023-03-03 00:52:39 | Camilo Hernández, Dylan Possamaï | http://arxiv.org/abs/2303.01601v1, http://arxiv.org/pdf/2303.01601v1 | econ.TH |
35,803 | th | Information design in an incomplete information game includes a designer with
the goal of influencing players' actions through signals generated from a
designed probability distribution so that its objective function is optimized.
We consider a setting in which the designer has partial knowledge on agents'
utilities. We address the uncertainty about players' preferences by formulating
a robust information design problem against worst case payoffs. If the players
have quadratic payoffs that depend on their actions and an unknown
payoff-relevant state, and receive signals on the state that follow a Gaussian
distribution conditional on the state realization, then the information design
problem under quadratic design objectives is a semidefinite program (SDP).
Specifically, we consider ellipsoid perturbations over payoff coefficients in
linear-quadratic-Gaussian (LQG) games. We show that this leads to a tractable
robust SDP formulation. Numerical studies are carried out to identify the
relation between the perturbation levels and the optimal information
structures. | Robust Social Welfare Maximization via Information Design in Linear-Quadratic-Gaussian Games | 2023-03-09 21:40:39 | Furkan Sezer, Ceyhun Eksin | http://arxiv.org/abs/2303.05489v2, http://arxiv.org/pdf/2303.05489v2 | math.OC |
35,804 | th | Consider a normal location model $X \mid \theta \sim N(\theta, \sigma^2)$
with known $\sigma^2$. Suppose $\theta \sim G_0$, where the prior $G_0$ has
zero mean and unit variance. Let $G_1$ be a possibly misspecified prior with
zero mean and unit variance. We show that the squared error Bayes risk of the
posterior mean under $G_1$ is bounded, uniformly over $G_0, G_1, \sigma^2 > 0$. | Mean-variance constrained priors have finite maximum Bayes risk in the normal location model | 2023-03-15 17:36:08 | Jiafeng Chen | http://arxiv.org/abs/2303.08653v1, http://arxiv.org/pdf/2303.08653v1 | math.ST |
35,805 | th | Instant runoff voting (IRV) has recently gained popularity as an alternative
to plurality voting for political elections, with advocates claiming a range of
advantages, including that it produces more moderate winners than plurality and
could thus help address polarization. However, there is little theoretical
backing for this claim, with existing evidence focused on case studies and
simulations. In this work, we prove that IRV has a moderating effect relative
to plurality voting in a precise sense, developed in a 1-dimensional Euclidean
model of voter preferences. We develop a theory of exclusion zones, derived
from properties of the voter distribution, which serve to show how moderate and
extreme candidates interact during IRV vote tabulation. The theory allows us to
prove that if voters are symmetrically distributed and not too concentrated at
the extremes, IRV cannot elect an extreme candidate over a moderate. In
contrast, we show plurality can and validate our results computationally. Our
methods provide new frameworks for the analysis of voting systems, deriving
exact winner distributions geometrically and establishing a connection between
plurality voting and stick-breaking processes. | The Moderating Effect of Instant Runoff Voting | 2023-03-17 05:37:27 | Kiran Tomlinson, Johan Ugander, Jon Kleinberg | http://arxiv.org/abs/2303.09734v5, http://arxiv.org/pdf/2303.09734v5 | cs.MA |
35,806 | th | We study the complexity of finding an approximate (pure) Bayesian Nash
equilibrium in a first-price auction with common priors when the tie-breaking
rule is part of the input. We show that the problem is PPAD-complete even when
the tie-breaking rule is trilateral (i.e., it specifies item allocations when
no more than three bidders are tied, and adopts the uniform tie-breaking rule
otherwise). This is the first hardness result for equilibrium computation in
first-price auctions with common priors. On the positive side, we give a PTAS
for the problem under the uniform tie-breaking rule. | Complexity of Equilibria in First-Price Auctions under General Tie-Breaking Rules | 2023-03-29 04:57:34 | Xi Chen, Binghui Peng | http://arxiv.org/abs/2303.16388v1, http://arxiv.org/pdf/2303.16388v1 | cs.GT |
35,807 | th | Since Kopel's duopoly model was proposed about three decades ago, there are
almost no analytical results on the equilibria and their stability in the
asymmetric case. The first objective of our study is to fill this gap. This
paper analyzes the asymmetric duopoly model of Kopel analytically by using
several tools based on symbolic computations. We discuss the possibility of the
existence of multiple positive equilibria and establish necessary and
sufficient conditions for a given number of positive equilibria to exist. The
possible positions of the equilibria in Kopel's model are also explored.
Furthermore, in the asymmetric model of Kopel, if the duopolists adopt the best
response reactions or homogeneous adaptive expectations, we establish rigorous
conditions for the local stability of equilibria for the first time. The
occurrence of chaos in Kopel's model seems to be supported by observations
through numerical simulations, which, however, is challenging to prove
rigorously. The second objective is to prove the existence of snapback
repellers in Kopel's map, which implies the existence of chaos in the sense of
Li-Yorke according to Marotto's theorem. | Stability and chaos of the duopoly model of Kopel: A study based on symbolic computations | 2023-04-05 00:35:38 | Xiaoliang Li, Kongyan Chen, Wei Niu, Bo Huang | http://arxiv.org/abs/2304.02136v2, http://arxiv.org/pdf/2304.02136v2 | math.DS |
35,808 | th | This article investigates mechanism-based explanations for a well-known
empirical pattern in sociology of education, namely, that Black-White unequal
access to school resources -- defined as advanced coursework -- is the highest
in racially diverse and majority-White schools. Through an empirically
calibrated and validated agent-based model, this study explores the dynamics of
two qualitatively informed mechanisms, showing (1) that we have reason to
believe that the presence of White students in school can influence the
emergence of Black-White advanced enrollment disparities and (2) that such
influence can represent another possible explanation for the macro-level
pattern of interest. Results contribute to current scholarly accounts of
within-school inequalities, shedding light on policy strategies to improve
the educational experiences of Black students in racially integrated settings. | The presence of White students and the emergence of Black-White within-school inequalities: two interaction-based mechanisms | 2023-04-10 23:09:47 | João M. Souto-Maior | http://arxiv.org/abs/2304.04849v4, http://arxiv.org/pdf/2304.04849v4 | physics.soc-ph |
35,809 | th | The study of systemic risk is often presented through the analysis of several
measures referring to quantities used by practitioners and policy makers.
Almost invariably, those measures evaluate the size of the impact that
exogenous events can have on a financial system without analysing the nature
of the initial shock. Here we present a symmetric approach and propose a set of
measures that are based on the amount of exogenous shock that can be absorbed
by the system before it starts to deteriorate. For this purpose, we use a
linearized version of DebtRank that allows us to clearly show the onset of
financial distress towards a correct systemic risk estimation. We show how we
can explicitly compute localized and uniform exogenous shocks and explain
their behavior through spectral graph theory. We also extend the analysis to
heterogeneous shocks that have to be computed by means of Monte Carlo
simulations. We believe that our approach is more general and natural and
allows the failure risk in financial systems to be expressed in a standard way. | Systemic risk measured by systems resiliency to initial shocks | 2023-04-12 15:13:46 | Luka Klinčić, Vinko Zlatić, Guido Caldarelli, Hrvoje Štefančić | http://arxiv.org/abs/2304.05794v1, http://arxiv.org/pdf/2304.05794v1 | physics.soc-ph |
35,810 | th | Equilibrium solution concepts of normal-form games, such as Nash equilibria,
correlated equilibria, and coarse correlated equilibria, describe the joint
strategy profiles from which no player has incentive to unilaterally deviate.
They are widely studied in game theory, economics, and multiagent systems.
Equilibrium concepts are invariant under certain transforms of the payoffs. We
define an equilibrium-inspired distance metric for the space of all normal-form
games and uncover a distance-preserving equilibrium-invariant embedding.
Furthermore, we propose an additional transform which defines a
better-response-invariant distance metric and embedding. To demonstrate these
metric spaces we study $2\times2$ games. The equilibrium-invariant embedding of
$2\times2$ games has an efficient two variable parameterization (a reduction
from eight), where each variable geometrically describes an angle on a unit
circle. Interesting properties can be spatially inferred from the embedding,
including: equilibrium support, cycles, competition, coordination, distances,
best-responses, and symmetries. The best-response-invariant embedding of
$2\times2$ games, after considering symmetries, rediscovers a set of 15 games,
and their respective equivalence classes. We propose that this set of game
classes is fundamental and captures all possible interesting strategic
interactions in $2\times2$ games. We introduce a directed graph representation
and name for each class. Finally, we leverage the tools developed for
$2\times2$ games to develop game theoretic visualizations of large normal-form
and extensive-form games that aim to fingerprint the strategic interactions
that occur within. | Equilibrium-Invariant Embedding, Metric Space, and Fundamental Set of $2\times2$ Normal-Form Games | 2023-04-20 00:31:28 | Luke Marris, Ian Gemp, Georgios Piliouras | http://arxiv.org/abs/2304.09978v1, http://arxiv.org/pdf/2304.09978v1 | cs.GT |
35,811 | th | In dynamic environments, Q-learning is an automaton that (i) provides
estimates (Q-values) of the continuation values associated with each available
action; and (ii) follows the naive policy of almost always choosing the action
with highest Q-value. We consider a family of automata that are based on
Q-values but whose policy may systematically favor some actions over others,
for example through a bias that favors cooperation. In the spirit of Compte and
Postlewaite [2018], we look for equilibrium biases within this family of
Q-based automata. We examine classic games under various monitoring
technologies and find that equilibrium biases may strongly foster collusion. | Q-learning with biased policy rules | 2023-04-25 11:25:10 | Olivier Compte | http://arxiv.org/abs/2304.12647v2, http://arxiv.org/pdf/2304.12647v2 | econ.TH |
35,812 | th | The logistics of urban areas are becoming more sophisticated due to the fast
city population growth. The stakeholders are faced with the challenges of the
dynamic complexity of city logistics(CL) systems characterized by the
uncertainty effect together with the freight vehicle emissions causing
pollution. In this conceptual paper, we present a research methodology for the
environmental sustainability of CL systems that can be attained by effective
stakeholders' collaboration under non-chaotic situations and the presumption of
the human levity tendency. We propose the mathematical axioms of the
uncertainty effect while putting forward the notion of condition effectors, and
how to assign hypothetical values to them. Finally, we employ a spider network
and causal loop diagram to investigate the system's elements and their behavior
over time. | Modeling the Complexity of City Logistics Systems for Sustainability | 2023-04-27 10:21:56 | Taiwo Adetiloye, Anjali Awasthi | http://arxiv.org/abs/2304.13987v1, http://arxiv.org/pdf/2304.13987v1 | math.OC |
35,813 | th | This paper provides a systematic study of the robust Stackelberg equilibrium
(RSE), which naturally generalizes the widely adopted solution concept of the
strong Stackelberg equilibrium (SSE). The RSE accounts for any possible
up-to-$\delta$ suboptimal follower responses in Stackelberg games and is
adopted to improve the robustness of the leader's strategy. While a few
variants of robust Stackelberg equilibrium have been considered in previous
literature, the RSE solution concept we consider is importantly different -- in
some sense, it relaxes previously studied robust Stackelberg strategies and is
applicable to much broader sources of uncertainties.
We provide a thorough investigation of several fundamental properties of RSE,
including its utility guarantees, algorithmics, and learnability. We first show
that the RSE we defined always exists and thus is well-defined. Then we
characterize how the leader's utility in RSE changes with the robustness level
considered. On the algorithmic side, we show that, in sharp contrast to the
tractability of computing an SSE, it is NP-hard to obtain a fully polynomial-time
approximation scheme (FPTAS) for any constant robustness level. Nevertheless,
we develop a quasi-polynomial-time approximation scheme (QPTAS) for RSE. Finally, we
examine the learnability of the RSE in a natural learning scenario, where both
players' utilities are not known in advance, and provide almost tight sample
complexity results on learning the RSE. As a corollary of this result, we also
obtain an algorithm for learning SSE, which strictly improves a key result of
Bai et al. in terms of both utility guarantee and computational efficiency. | Robust Stackelberg Equilibria | 2023-04-28 20:19:21 | Jiarui Gan, Minbiao Han, Jibang Wu, Haifeng Xu | http://arxiv.org/abs/2304.14990v2, http://arxiv.org/pdf/2304.14990v2 | cs.GT |
35,814 | th | In the past several decades, the world's economy has become increasingly
globalized. On the other hand, there are also ideas advocating the practice of
``buy local'', by which people buy locally produced goods and services rather
than those produced farther away. In this paper, we establish a mathematical
theory of real price that determines the optimal global versus local spending
of an agent which achieves the agent's optimal tradeoff between spending and
obtained utility. Our theory of real price depends on the asymptotic analysis
of a Markov chain transition probability matrix related to the network of
producers and consumers. We show that the real price of a product or service
can be determined from the involved Markov chain matrix, and can be
dramatically different from the product's label price. In particular, we show
that the label prices of products and services are often not ``real'' or
directly ``useful'': given two products offering the same myopic utility, the
one with lower label price may not necessarily offer better asymptotic utility.
This theory shows that the globality or locality of the products and services
does have different impacts on the spending-utility tradeoff of a customer. The
established mathematical theory of real price can be used to determine whether
to adopt or not to adopt certain artificial intelligence (AI) technologies from
an economic perspective. | To AI or not to AI, to Buy Local or not to Buy Local: A Mathematical Theory of Real Price | 2023-05-09 05:43:47 | Huan Cai, Catherine Xu, Weiyu Xu | http://arxiv.org/abs/2305.05134v1, http://arxiv.org/pdf/2305.05134v1 | econ.TH |
35,815 | th | A seller is pricing identical copies of a good to a stream of unit-demand
buyers. Each buyer has a value on the good as his private information. The
seller only knows the empirical value distribution of the buyer population and
chooses the revenue-optimal price. We consider a widely studied third-degree
price discrimination model where an information intermediary with perfect
knowledge of the arriving buyer's value sends a signal to the seller, hence
changing the seller's posterior and inducing the seller to set a personalized
posted price. Prior work of Bergemann, Brooks, and Morris (American Economic
Review, 2015) has shown the existence of a signaling scheme that preserves
seller revenue, while always selling the item, hence maximizing consumer
surplus. In a departure from prior work, we ask whether the consumer surplus
generated is fairly distributed among buyers with different values. To this
end, we aim to maximize welfare functions that reward more balanced surplus
allocations.
Our main result is the surprising existence of a novel signaling scheme that
simultaneously $8$-approximates all welfare functions that are non-negative,
monotonically increasing, symmetric, and concave, compared with any other
signaling scheme. Classical examples of such welfare functions include the
utilitarian social welfare, the Nash welfare, and the max-min welfare. Such a
guarantee cannot be given by any consumer-surplus-maximizing scheme -- which
are the ones typically studied in the literature. In addition, our scheme is
socially efficient, and has the fairness property that buyers with higher
values enjoy higher expected surplus, which is not always the case for existing
schemes. | Fair Price Discrimination | 2023-05-11 20:45:06 | Siddhartha Banerjee, Kamesh Munagala, Yiheng Shen, Kangning Wang | http://arxiv.org/abs/2305.07006v1, http://arxiv.org/pdf/2305.07006v1 | cs.GT |
35,816 | th | We show that computing the optimal social surplus requires $\Omega(mn)$ bits
of communication between the website and the bidders in a sponsored search
auction with $n$ slots on the website and with tick size of $2^{-m}$ in the
discrete model, even when bidders are allowed to freely communicate with each
other. | Social Surplus Maximization in Sponsored Search Auctions Requires Communication | 2023-05-12 21:49:03 | Suat Evren | http://arxiv.org/abs/2305.07729v1, http://arxiv.org/pdf/2305.07729v1 | cs.GT |
35,817 | th | A seller wants to sell an item to n buyers. Buyer valuations are drawn i.i.d.
from a distribution unknown to the seller; the seller only knows that the
support is included in [a, b]. To be robust, the seller chooses a DSIC
mechanism that optimizes the worst-case performance relative to the first-best
benchmark. Our analysis unifies the regret and the ratio objectives.
For these objectives, we derive an optimal mechanism and the corresponding
performance in quasi-closed form, as a function of the support information and
the number of buyers n. Our analysis reveals three regimes of support
information and a new class of robust mechanisms. i.) With "low" support
information, the optimal mechanism is a second-price auction (SPA) with random
reserve, a focal class in earlier literature. ii.) With "high" support
information, SPAs are strictly suboptimal, and an optimal mechanism belongs to
a class of mechanisms we introduce, which we call pooling auctions (POOL);
whenever the highest value is above a threshold, the mechanism still allocates
to the highest bidder, but otherwise the mechanism allocates to a uniformly
random buyer, i.e., pools low types. iii.) With "moderate" support information,
a randomization between SPA and POOL is optimal.
We also characterize optimal mechanisms within nested central subclasses of
mechanisms: standard mechanisms that only allocate to the highest bidder, SPA
with random reserve, and SPA with no reserve. We show strict separations in
terms of performance across classes, implying that deviating from standard
mechanisms is necessary for robustness. | Robust Auction Design with Support Information | 2023-05-16 02:23:22 | Jerry Anunrojwong, Santiago R. Balseiro, Omar Besbes | http://arxiv.org/abs/2305.09065v2, http://arxiv.org/pdf/2305.09065v2 | econ.TH |
35,819 | th | In this paper, we navigate the intricate domain of reviewer rewards in
open-access academic publishing, leveraging the precision of mathematics and
the strategic acumen of game theory. We conceptualize the prevailing
voucher-based reviewer reward system as a two-player game, subsequently
identifying potential shortcomings that may incline reviewers towards binary
decisions. To address this issue, we propose and mathematically formalize an
alternative reward system with the objective of mitigating this bias and
promoting more comprehensive reviews. We engage in a detailed investigation of
the properties and outcomes of both systems, employing rigorous
game-theoretical analysis and deep reinforcement learning simulations. Our
results underscore a noteworthy divergence between the two systems, with our
proposed system demonstrating a more balanced decision distribution and
enhanced stability. This research not only augments the mathematical
understanding of reviewer reward systems, but it also provides valuable
insights for the formulation of policies within journal review systems. Our
contribution to the mathematical community lies in providing a game-theoretical
perspective to a real-world problem and in the application of deep
reinforcement learning to simulate and understand this complex system. | Game-Theoretical Analysis of Reviewer Rewards in Peer-Review Journal Systems: Analysis and Experimental Evaluation using Deep Reinforcement Learning | 2023-05-20 07:13:35 | Minhyeok Lee | http://arxiv.org/abs/2305.12088v1, http://arxiv.org/pdf/2305.12088v1 | cs.AI |
35,820 | th | This paper evaluates market equilibrium under different pricing mechanisms in
a two-settlement 100%-renewables electricity market. Given general probability
distributions of renewable energy, we establish game-theoretical models to
analyze equilibrium bidding strategies, market prices, and profits under
uniform pricing (UP) and pay-as-bid pricing (PAB). We prove that UP can
incentivize suppliers to withhold bidding quantities and lead to price spikes.
PAB can reduce the market price, but it may lead to a mixed-strategy price
equilibrium. Then, we present a regulated uniform pricing scheme (RUP) based on
suppliers' marginal costs that include penalty costs for real-time deviations.
We show that RUP can achieve lower yet positive prices and profits compared
with PAB in a duopoly market, which approximates the least-cost system outcome.
Simulations with synthetic and real data find that under PAB and RUP, higher
uncertainty of renewables and real-time shortage penalty prices can increase
the market price by encouraging lower bidding quantities, thereby increasing
suppliers' profits. | Uniform Pricing vs Pay as Bid in 100%-Renewables Electricity Markets: A Game-theoretical Analysis | 2023-05-21 03:50:39 | Dongwei Zhao, Audun Botterud, Marija Ilic | http://arxiv.org/abs/2305.12309v1, http://arxiv.org/pdf/2305.12309v1 | eess.SY |
35,821 | th | During epidemics people reduce their social and economic activity to lower
their risk of infection. Such social distancing strategies will depend on
information about the course of the epidemic but also on when they expect the
epidemic to end, for instance due to vaccination. Typically it is difficult to
make optimal decisions, because the available information is incomplete and
uncertain. Here, we show how optimal decision making depends on knowledge about
vaccination timing in a differential game in which individual decision making
gives rise to Nash equilibria, and the arrival of the vaccine is described by a
probability distribution. We show that the earlier the vaccination is expected
to happen and the more precisely the timing of the vaccination is known, the
stronger is the incentive to socially distance. In particular, equilibrium
social distancing only meaningfully deviates from the no-vaccination
equilibrium course if the vaccine is expected to arrive before the epidemic
would have run its course. We demonstrate how the probability distribution of
the vaccination time acts as a generalised form of discounting, with the
special case of an exponential vaccination time distribution directly
corresponding to regular exponential discounting. | Rational social distancing in epidemics with uncertain vaccination timing | 2023-05-23 05:28:14 | Simon K. Schnyder, John J. Molina, Ryoichi Yamamoto, Matthew S. Turner | http://arxiv.org/abs/2305.13618v1, http://arxiv.org/pdf/2305.13618v1 | econ.TH |
35,822 | th | Sociologists of education increasingly highlight the role of opportunity
hoarding in the formation of Black-White educational inequalities. Informed by
this literature, this article unpacks the necessary and sufficient conditions
under which the hoarding of educational resources emerges within schools. It
develops a qualitatively informed agent-based model which captures Black and
White students' competition for a valuable school resource: advanced
coursework. In contrast to traditional accounts -- which explain the emergence
of hoarding through the actions of Whites that keep valuable resources within
White communities -- simulations, perhaps surprisingly, show hoarding to arise
even when Whites do not play the role of hoarders of resources. Behind this
result is the fact that a structural inequality (i.e., racial differences in
social class) -- and not action-driven hoarding -- is the necessary condition
for hoarding to emerge. Findings, therefore, illustrate that common
action-driven understandings of opportunity hoarding can overlook the
structural foundations behind this important phenomenon. Policy implications
are discussed. | Hoarding without hoarders: unpacking the emergence of opportunity hoarding within schools | 2023-05-24 05:49:38 | João M. Souto-Maior | http://arxiv.org/abs/2305.14653v2, http://arxiv.org/pdf/2305.14653v2 | physics.soc-ph |
35,823 | th | We design TimeBoost: a practical transaction ordering policy for rollup
sequencers that takes into account both transaction timestamps and bids; it
works by creating a score from timestamps and bids, and orders transactions
based on this score.
TimeBoost is transaction-data-independent (i.e., can work with encrypted
transactions) and supports low transaction finalization times similar to a
first-come first-serve (FCFS or pure-latency) ordering policy. At the same
time, it avoids the inefficient latency competition created by an FCFS policy.
It further satisfies useful economic properties of first-price auctions that
come with a pure-bidding policy. We show through rigorous economic analyses how
TimeBoost allows players to compete on arbitrage opportunities in a way that
results in better guarantees compared to both pure-latency and pure-bidding
approaches. | Buying Time: Latency Racing vs. Bidding in Transaction Ordering | 2023-06-03 22:20:39 | Akaki Mamageishvili, Mahimna Kelkar, Jan Christoph Schlegel, Edward W. Felten | http://arxiv.org/abs/2306.02179v2, http://arxiv.org/pdf/2306.02179v2 | cs.GT |
35,886 | th | Models of social learning feature either binary signals or abstract signal
structures often deprived of micro-foundations. Both models are limited when
analyzing interim results or performing empirical analysis. We present a method
of generating signal structures which are richer than the binary model, yet are
tractable enough to perform simulations and empirical analysis. We demonstrate
the method's usability by revisiting two classical papers: (1) we discuss the
economic significance of unbounded signals in Smith and Sorensen (2000); (2) we
use experimental data from Anderson and Holt (1997) to perform econometric
analysis. Additionally, we provide a necessary and sufficient condition for the
occurrence of action cascades. | A Practical Approach to Social Learning | 2020-02-25 19:41:23 | Amir Ban, Moran Koren | http://arxiv.org/abs/2002.11017v1, http://arxiv.org/pdf/2002.11017v1 | econ.TH |
35,824 | th | In online ad markets, a rising number of advertisers are employing bidding
agencies to participate in ad auctions. These agencies are specialized in
designing online algorithms and bidding on behalf of their clients. Typically,
an agency usually has information on multiple advertisers, so she can
potentially coordinate bids to help her clients achieve higher utilities than
those under independent bidding.
In this paper, we study coordinated online bidding algorithms in repeated
second-price auctions with budgets. We propose algorithms that guarantee every
client a higher utility than the best she can get under independent bidding. We
show that these algorithms achieve maximal coalition welfare and discuss
bidders' incentives to misreport their budgets, in symmetric cases. Our proofs
combine the techniques of online learning and equilibrium analysis, overcoming
the difficulty of competing with a multi-dimensional benchmark. The performance
of our algorithms is further evaluated by experiments on both synthetic and
real data. To the best of our knowledge, we are the first to consider bidder
coordination in online repeated auctions with constraints. | Coordinated Dynamic Bidding in Repeated Second-Price Auctions with Budgets | 2023-06-13 14:55:04 | Yurong Chen, Qian Wang, Zhijian Duan, Haoran Sun, Zhaohua Chen, Xiang Yan, Xiaotie Deng | http://arxiv.org/abs/2306.07709v1, http://arxiv.org/pdf/2306.07709v1 | cs.GT |
35,825 | th | We study the classic house-swapping problem of Shapley and Scarf (1974) in a
setting where agents may have "objective" indifferences, i.e., indifferences
that are shared by all agents. In other words, if any one agent is indifferent
between two houses, then all agents are indifferent between those two houses.
The most direct interpretation is the presence of multiple copies of the same
object. Our setting is a special case of the house-swapping problem with
general indifferences. We derive a simple, easily interpretable algorithm that
produces the unique strict core allocation of the house-swapping market, if it
exists. Our algorithm runs in quadratic time, a substantial improvement
over the cubic-time methods for the more general problem. | House-Swapping with Objective Indifferences | 2023-06-16 01:16:22 | Will Sandholtz, Andrew Tai | http://arxiv.org/abs/2306.09529v1, http://arxiv.org/pdf/2306.09529v1 | econ.TH |
35,826 | th | This paper extends the Isotonic Mechanism from the single-owner to
multi-owner settings, in an effort to make it applicable to peer review where a
paper often has multiple authors. Our approach starts by partitioning all
submissions of a machine learning conference into disjoint blocks, each of
which shares a common set of co-authors. We then employ the Isotonic Mechanism
to elicit a ranking of the submissions from each author and to produce adjusted
review scores that align with both the reported ranking and the original review
scores. The generalized mechanism uses a weighted average of the adjusted
scores on each block. We show that, under certain conditions, truth-telling by
all authors is a Nash equilibrium for any valid partition of the overlapping
ownership sets. However, we demonstrate that while the mechanism's performance
in terms of estimation accuracy depends on the partition structure, optimizing
this structure is computationally intractable in general. We develop a nearly
linear-time greedy algorithm that provably finds a performant partition with
appealing robust approximation guarantees. Extensive experiments on both
synthetic data and real-world conference review data demonstrate the
effectiveness of this generalized Isotonic Mechanism. | An Isotonic Mechanism for Overlapping Ownership | 2023-06-19 23:33:25 | Jibang Wu, Haifeng Xu, Yifan Guo, Weijie Su | http://arxiv.org/abs/2306.11154v1, http://arxiv.org/pdf/2306.11154v1 | cs.GT |
35,827 | th | We study the power of menus of contracts in principal-agent problems with
adverse selection (agents can be one of several types) and moral hazard (we
cannot observe agent actions directly). For principal-agent problems with $T$
types and $n$ actions, we show that the best menu of contracts can obtain a
factor $\Omega(\max(n, \log T))$ more utility for the principal than the best
individual contract, partially resolving an open question of Guruganesh et al.
(2021). We then turn our attention to randomized menus of linear contracts,
where we likewise show that randomized linear menus can be $\Omega(T)$ better
than the best single linear contract. As a corollary, we show this implies an
analogous gap between deterministic menus of (general) contracts and randomized
menus of contracts (as introduced by Castiglioni et al. (2022)). | The Power of Menus in Contract Design | 2023-06-22 07:28:44 | Guru Guruganesh, Jon Schneider, Joshua Wang, Junyao Zhao | http://arxiv.org/abs/2306.12667v1, http://arxiv.org/pdf/2306.12667v1 | cs.GT |
35,828 | th | One cannot make truly fair decisions using integer linear programs unless one
controls the selection probabilities of the (possibly many) optimal solutions.
For this purpose, we propose a unified framework when binary decision variables
represent agents with dichotomous preferences, who only care about whether they
are selected in the final solution. We develop several general-purpose
algorithms to fairly select optimal solutions, for example, by maximizing the
Nash product or the minimum selection probability, or by using a random
ordering of the agents as a selection criterion (Random Serial Dictatorship).
As such, we embed the black-box procedure of solving an integer linear program
into a framework that is explainable from start to finish. Moreover, we study
the axiomatic properties of the proposed methods by embedding our framework
into the rich literature of cooperative bargaining and probabilistic social
choice. Lastly, we evaluate the proposed methods on a specific application,
namely kidney exchange. We find that while the methods maximizing the Nash
product or the minimum selection probability outperform the other methods on
the evaluated welfare criteria, methods such as Random Serial Dictatorship
perform reasonably well in computation times that are similar to those of
finding a single optimal solution. | Fair integer programming under dichotomous preferences | 2023-06-23 12:06:13 | Tom Demeulemeester, Dries Goossens, Ben Hermans, Roel Leus | http://arxiv.org/abs/2306.13383v1, http://arxiv.org/pdf/2306.13383v1 | cs.GT |
35,829 | th | In this paper we give the first explicit enumeration of all maximal Condorcet
domains on $n\leq 7$ alternatives. This has been accomplished by developing a
new algorithm for constructing Condorcet domains, and an implementation of that
algorithm which has been run on a supercomputer.
We follow this up with the first survey of the properties of all maximal
Condorcet domains up to degree 7, with respect to many properties studied in
the social sciences and mathematical literature. We resolve several open
questions posed by other authors, both by examples from our data and by theorems.
We give a new set of results on the symmetry properties of Condorcet domains
which unify earlier works.
Finally we discuss connections to other domain types such as non-dictatorial
domains and generalisations of single-peaked domains. All our data is made
freely available to other researchers via a new website. | Condorcet Domains of Degree at most Seven | 2023-06-28 11:05:06 | Dolica Akello-Egwell, Charles Leedham-Green, Alastair Litterick, Klas Markström, Søren Riis | http://arxiv.org/abs/2306.15993v5, http://arxiv.org/pdf/2306.15993v5 | cs.DM |
35,831 | th | The Black-Scholes-Merton model is a mathematical model for the dynamics of a
financial market that includes derivative investment instruments, and its
formula provides a theoretical price estimate of European-style options. The
model's fundamental idea is to eliminate risk by hedging the option by
purchasing and selling the underlying asset in a specific way, that is, to
replicate the payoff of the option with a portfolio (which continuously trades
the underlying) whose value at each time can be verified. One of the most
crucial, yet restrictive, assumptions for this task is that the market follows
a geometric Brownian motion, which has been relaxed and generalized in various
ways.
The concept of robust finance revolves around developing models that account
for uncertainties and variations in financial markets. Martingale Optimal
Transport, which is an adaptation of the Optimal Transport theory to the robust
financial framework, is one of the most prominent directions. In this paper, we
consider market models with arbitrarily many underlying assets whose values are
observed over arbitrarily many time periods, and demonstrate the existence of
a portfolio sub- or super-hedging a general path-dependent derivative security
in terms of trading European options and underlyings, as well as the portfolio
replicating the derivative payoff when the market model yields the extremal
price of the derivative given marginal distributions of the underlyings. In
mathematical terms, this paper resolves the question of dual attainment for the
multi-period vectorial martingale optimal transport problem. | Replication of financial derivatives under extreme market models given marginals | 2023-07-03 10:44:59 | Tongseok Lim | http://arxiv.org/abs/2307.00807v1, http://arxiv.org/pdf/2307.00807v1 | q-fin.MF |
35,832 | th | Classification algorithms are increasingly used in areas such as housing,
credit, and law enforcement in order to make decisions affecting peoples'
lives. These algorithms can change individual behavior deliberately (a fraud
prediction algorithm deterring fraud) or inadvertently (content sorting
algorithms spreading misinformation), and they are increasingly facing public
scrutiny and regulation. Some of these regulations, like the elimination of
cash bail in some states, have focused on lowering the stakes of
certain classifications. In this paper we characterize how optimal
classification by an algorithm designer can affect the distribution of behavior
in a population -- sometimes in surprising ways. We then look at the effect of
democratizing the rewards and punishments, or stakes, to algorithmic
classification to consider how a society can potentially stem (or facilitate!)
predatory classification. Our results speak to questions of algorithmic
fairness in settings where behavior and algorithms are interdependent, and
where typical measures of fairness focusing on statistical accuracy across
groups may not be appropriate. | Algorithms, Incentives, and Democracy | 2023-07-05 17:22:01 | Elizabeth Maggie Penn, John W. Patty | http://arxiv.org/abs/2307.02319v1, http://arxiv.org/pdf/2307.02319v1 | econ.TH |
35,833 | th | We study Proportional Response Dynamics (PRD) in linear Fisher markets where
participants act asynchronously. We model this scenario as a sequential process
in which in every step, an adversary selects a subset of the players that will
update their bids, subject to liveness constraints. We show that if every
bidder individually uses the PRD update rule whenever they are included in the
group of bidders selected by the adversary, then (in the generic case) the
entire dynamic converges to a competitive equilibrium of the market. Our proof
technique uncovers further properties of linear Fisher markets, such as the
uniqueness of the equilibrium for generic parameters and the convergence of
associated best-response dynamics and no-swap regret dynamics under certain
conditions. | Asynchronous Proportional Response Dynamics in Markets with Adversarial Scheduling | 2023-07-09 09:31:20 | Yoav Kolumbus, Menahem Levy, Noam Nisan | http://arxiv.org/abs/2307.04108v1, http://arxiv.org/pdf/2307.04108v1 | cs.GT |
35,834 | th | In an information aggregation game, a set of senders interact with a receiver
through a mediator. Each sender observes the state of the world and
communicates a message to the mediator, who recommends an action to the
receiver based on the messages received. The payoff of the senders and of the
receiver depend on both the state of the world and the action selected by the
receiver. This setting extends the celebrated cheap talk model in two aspects:
there are many senders (as opposed to just one) and there is a mediator. From a
practical perspective, this setting captures platforms in which strategic
experts' advice is aggregated in service of action recommendations to the user.
We aim at finding an optimal mediator/platform that maximizes the users'
welfare given highly resilient incentive compatibility requirements on the
equilibrium selected: we want the platform to be incentive compatible for the
receiver/user when selecting the recommended action, and we want it to be
resilient against group deviations by the senders/experts. We provide highly
positive answers to this challenge, manifested through efficient algorithms. | Resilient Information Aggregation | 2023-07-11 10:06:13 | Itai Arieli, Ivan Geffner, Moshe Tennenholtz | http://dx.doi.org/10.4204/EPTCS.379.6, http://arxiv.org/abs/2307.05054v1, http://arxiv.org/pdf/2307.05054v1 | econ.TH |
35,835 | th | With a novel search algorithm or assortment planning or assortment
optimization algorithm that takes into account a Bayesian approach to
information updating and two-stage assortment optimization techniques, the
current research provides a novel concept of competitiveness in the digital
marketplace. Via the search algorithm, there is competition between the
platform, vendors, and private brands of the platform. The current paper
suggests a model and discusses how competition and collusion arise in the
digital marketplace through assortment planning or assortment optimization
algorithm. Furthermore, it suggests a model of an assortment algorithm free
from collusion between the platform and the large vendors. The paper's major
conclusions are that collusive assortment may raise a product's purchase
likelihood but fail to maximize expected revenue. The proposed assortment
planning, on the other hand, maintains competitiveness while maximizing
expected revenue. | A Model of Competitive Assortment Planning Algorithm | 2023-07-16 17:15:18 | Dipankar Das | http://arxiv.org/abs/2307.09479v1, http://arxiv.org/pdf/2307.09479v1 | econ.TH |
35,853 | th | We consider manipulations in the context of coalitional games, where a
coalition aims to increase the total payoff of its members. An allocation rule
is immune to coalitional manipulation if no coalition can benefit from internal
reallocation of worth on the level of its subcoalitions
(reallocation-proofness), and if no coalition benefits from a lower worth while
all else remains the same (weak coalitional monotonicity). Replacing additivity
in Shapley's original characterization by these requirements yields a new
foundation of the Shapley value, i.e., it is the unique efficient and symmetric
allocation rule that awards nothing to a null player and is immune to
coalitional manipulations. We further find that for efficient allocation rules,
reallocation-proofness is equivalent to constrained marginality, a weaker
variant of Young's marginality axiom. Our second characterization improves upon
Young's characterization by weakening the independence requirement intrinsic to
marginality. | Coalitional Manipulations and Immunity of the Shapley Value | 2023-10-31 15:43:31 | Christian Basteck, Frank Huettner | http://arxiv.org/abs/2310.20415v1, http://arxiv.org/pdf/2310.20415v1 | econ.TH |
35,836 | th | A growing number of central authorities use assignment mechanisms to allocate
students to schools in a way that reflects student preferences and school
priorities. However, most real-world mechanisms give students an incentive to
be strategic and misreport their preferences. In this paper, we provide an
identification approach for causal effects of school assignment on future
outcomes that accounts for strategic misreporting. Misreporting may invalidate
existing point-identification approaches, and we derive sharp bounds for causal
effects that are robust to strategic behavior. Our approach applies to any
mechanism as long as there exist placement scores and cutoffs that characterize
that mechanism's allocation rule. We use data from a deferred acceptance
mechanism that assigns students to more than 1,000 university-major
combinations in Chile. Students behave strategically because the mechanism in
Chile constrains the number of majors that students submit in their preferences
to eight options. Our methodology takes that into account and partially
identifies the effect of changes in school assignment on various graduation
outcomes. | Causal Effects in Matching Mechanisms with Strategically Reported Preferences | 2023-07-26 19:35:42 | Marinho Bertanha, Margaux Luflade, Ismael Mourifié | http://arxiv.org/abs/2307.14282v1, http://arxiv.org/pdf/2307.14282v1 | econ.EM |
35,837 | th | We present a new optimization-based method for aggregating preferences in
settings where each decision maker, or voter, expresses preferences over pairs
of alternatives. The challenge is to come up with a ranking that agrees as much
as possible with the votes cast in cases when some of the votes conflict. Only
a collection of votes that contains no cycles is non-conflicting and can induce
a partial order over alternatives. Our approach is motivated by the observation
that a collection of votes that form a cycle can be treated as ties. The method
is then to remove unions of cycles of votes, or circulations, from the vote
graph and determine aggregate preferences from the remainder.
We introduce the strong maximum circulation which is formed by a union of
cycles, the removal of which guarantees a unique outcome in terms of the
induced partial order. Furthermore, it contains all the aggregate preferences
remaining following the elimination of any maximum circulation. In contrast,
the well-known, optimization-based, Kemeny method has non-unique output and can
return multiple, conflicting rankings for the same input. In addition, Kemeny's
method requires solving an NP-hard problem, whereas our algorithm is efficient,
based on network flow techniques, and runs in strongly polynomial time,
independent of the number of votes.
We address the construction of a ranking from the partial order and show that
rankings based on a convex relaxation of Kemeny's model are consistent with our
partial order. We then study the properties of removing a maximal circulation
versus a maximum circulation and establish that, while maximal circulations
will in general identify a larger number of aggregate preferences, the partial
orders induced by the removal of different maximal circulations are not unique
and may be conflicting. Moreover, finding a minimum maximal circulation is an
NP-hard problem. | The Strong Maximum Circulation Algorithm: A New Method for Aggregating Preference Rankings | 2023-07-28 20:51:05 | Nathan Atkinson, Scott C. Ganz, Dorit S. Hochbaum, James B. Orlin | http://arxiv.org/abs/2307.15702v1, http://arxiv.org/pdf/2307.15702v1 | cs.SI |
35,838 | th | We study the impact of data sharing policies on cyber insurance markets.
These policies have been proposed to address the scarcity of data about cyber
threats, which is essential to manage cyber risks. We propose a Cournot duopoly
competition model in which two insurers choose the number of policies they
offer (i.e., their production level) and also the resources they invest to
ensure the quality of data regarding the cost of claims (i.e., the data quality
of their production cost). We find that enacting mandatory data sharing
sometimes creates situations in which at most one of the two insurers invests
in data quality, whereas both insurers would invest when information sharing is
not mandatory. This raises concerns about the merits of making data sharing
mandatory. | Duopoly insurers' incentives for data quality under a mandatory cyber data sharing regime | 2023-05-29 23:19:14 | Carlos Barreto, Olof Reinert, Tobias Wiesinger, Ulrik Franke | http://dx.doi.org/10.1016/j.cose.2023.103292, http://arxiv.org/abs/2308.00795v1, http://arxiv.org/pdf/2308.00795v1 | econ.TH |
35,839 | th | We introduce a new network centrality measure founded on the Gately value for
cooperative games with transferable utilities. A directed network is
interpreted as representing control or authority relations between
players--constituting a hierarchical network. The power distribution of a
hierarchical network can be represented through a TU-game. We investigate the
properties of this TU-representation and study the Gately value of this
representation, resulting in the Gately power measure. We establish when the
Gately measure is a Core power gauge, investigate the relationship of the
Gately measure with the $\beta$-measure, and construct an axiomatisation of the Gately
measure. | Game theoretic foundations of the Gately power measure for directed networks | 2023-08-04 15:00:28 | Robert P. Gilles, Lina Mallozzi | http://arxiv.org/abs/2308.02274v1, http://arxiv.org/pdf/2308.02274v1 | cs.GT |
35,840 | th | Major advances in Machine Learning (ML) and Artificial Intelligence (AI)
increasingly take the form of developing and releasing general-purpose models.
These models are designed to be adapted by other businesses and agencies to
perform a particular, domain-specific function. This process has become known
as adaptation or fine-tuning. This paper offers a model of the fine-tuning
process where a Generalist brings the technological product (here an ML model)
to a certain level of performance, and one or more Domain-specialist(s) adapts
it for use in a particular domain. Both entities are profit-seeking and incur
costs when they invest in the technology, and they must reach a bargaining
agreement on how to share the revenue for the technology to reach the market.
For a relatively general class of cost and revenue functions, we characterize
the conditions under which the fine-tuning game yields a profit-sharing
solution. We observe that any potential domain-specialization will either
contribute, free-ride, or abstain in their uptake of the technology, and we
provide conditions yielding these different strategies. We show how methods
based on bargaining solutions and sub-game perfect equilibria provide insights
into the strategic behavior of firms in these types of interactions, and we
find that profit-sharing can still arise even when one firm has significantly
higher costs than another. We also provide methods for identifying
Pareto-optimal bargaining arrangements for a general set of utility functions. | Fine-Tuning Games: Bargaining and Adaptation for General-Purpose Models | 2023-08-08 20:01:42 | Benjamin Laufer, Jon Kleinberg, Hoda Heidari | http://arxiv.org/abs/2308.04399v2, http://arxiv.org/pdf/2308.04399v2 | cs.GT |
35,871 | th | Fairness is one of the most desirable societal principles in collective
decision-making. It has been extensively studied in the past decades for its
axiomatic properties and has received substantial attention from the multiagent
systems community in recent years for its theoretical and computational aspects
in algorithmic decision-making. However, these studies are often not
sufficiently rich to capture the intricacies of human perception of fairness,
given the ambivalent nature of real-world problems. We argue that not only should
fair solutions be deemed desirable by social planners (designers), but they
should be governed by human and societal cognition, consider perceived outcomes
based on human judgement, and be verifiable. We discuss how achieving this goal
requires a broad transdisciplinary approach ranging from computing and AI to
behavioral economics and human-AI interaction. In doing so, we identify
shortcomings and long-term challenges of the current literature of fair
division, describe recent efforts in addressing them, and more importantly,
highlight a series of open research directions. | The Fairness Fair: Bringing Human Perception into Collective Decision-Making | 2023-12-22 06:06:24 | Hadi Hosseini | http://arxiv.org/abs/2312.14402v1, http://arxiv.org/pdf/2312.14402v1 | cs.AI |
35,841 | th | In the committee selection problem, the goal is to choose a subset of size
$k$ from a set of candidates $C$ that collectively gives the best
representation to a set of voters. We consider this problem in Euclidean
$d$-space where each voter/candidate is a point and voters' preferences are
implicitly represented by Euclidean distances to candidates. We explore
fault-tolerance in committee selection and study the following three variants:
(1) given a committee and a set of $f$ failing candidates, find their optimal
replacement; (2) compute the worst-case replacement score for a given committee
under failure of $f$ candidates; and (3) design a committee with the best
replacement score under worst-case failures. The score of a committee is
determined using the well-known (min-max) Chamberlin-Courant rule: minimize the
maximum distance between any voter and its closest candidate in the committee.
Our main results include the following: (1) in one dimension, all three
problems can be solved in polynomial time; (2) in dimension $d \geq 2$, all
three problems are NP-hard; and (3) all three problems admit a constant-factor
approximation in any fixed dimension, and the optimal committee problem has an
FPT bicriterion approximation. | Fault Tolerance in Euclidean Committee Selection | 2023-08-14 19:50:48 | Chinmay Sonar, Subhash Suri, Jie Xue | http://arxiv.org/abs/2308.07268v1, http://arxiv.org/pdf/2308.07268v1 | cs.GT |
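A minimal brute-force sketch (not from the paper) of the two quantities the abstract defines: the min-max Chamberlin-Courant score of a committee, and the worst-case replacement score under failure of f candidates. It is exponential-time and meant only for tiny instances; all coordinates and helper names are hypothetical.

from itertools import combinations
import math

def cc_score(committee, voters):
    # Max over voters of the distance to their closest committee member.
    return max(min(math.dist(v, c) for c in committee) for v in voters)

def worst_case_replacement_score(committee, candidates, voters, f):
    # Worst (largest) score after an adversary removes f committee members
    # and we optimally replace them from the remaining candidates.
    committee = list(committee)
    worst = 0.0
    for fail in combinations(committee, f):
        survivors = [c for c in committee if c not in fail]
        pool = [c for c in candidates if c not in committee]
        best = min(cc_score(survivors + list(repl), voters)
                   for repl in combinations(pool, f))
        worst = max(worst, best)
    return worst

voters = [(0, 0), (1, 0), (4, 0), (5, 1)]
candidates = [(0, 0), (1, 1), (4, 0), (5, 0), (2, 0)]
committee = [(0, 0), (4, 0)]
print(cc_score(committee, voters))
print(worst_case_replacement_score(committee, candidates, voters, f=1))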
35,842 | th | Monetary conditions are frequently cited as a significant factor influencing
fluctuations in commodity prices. However, the precise channels of transmission
are less well identified. In this paper, we develop a unified theory to study
the impact of interest rates on commodity prices and the underlying mechanisms.
To that end, we extend the competitive storage model to accommodate
stochastically evolving interest rates, and establish general conditions under
which (i) a unique rational expectations equilibrium exists and can be
efficiently computed, and (ii) interest rates are negatively correlated with
commodity prices. As an application, we quantify the impact of interest rates
on commodity prices through the speculative channel, namely, the role of
speculators in the physical market whose incentive to hold inventories is
influenced by interest rate movements. Our findings demonstrate that real
interest rates have nontrivial and persistent negative effect on commodity
prices, and the magnitude of the impact varies substantially under different
market supply and interest rate regimes. | Interest Rate Dynamics and Commodity Prices | 2023-08-15 08:10:35 | Christophe Gouel, Qingyin Ma, John Stachurski | http://arxiv.org/abs/2308.07577v1, http://arxiv.org/pdf/2308.07577v1 | econ.TH |
35,843 | th | Classic optimal transport theory is built on minimizing the expected cost
between two given distributions. We propose the framework of distorted optimal
transport by minimizing a distorted expected cost. This new formulation is
motivated by concrete problems in decision theory, robust optimization, and
risk management, and it has many distinct features compared to the classic
theory. We choose simple cost functions and study different distortion
functions and their implications on the optimal transport plan. We show that on
the real line, the comonotonic coupling is optimal for the distorted optimal
transport problem when the distortion function is convex and the cost function
is submodular and monotone. Some forms of duality and uniqueness results are
provided. For inverse-S-shaped distortion functions and linear cost, we obtain
the unique form of optimal coupling for all marginal distributions, which turns
out to have an interesting ``first comonotonic, then counter-monotonic''
dependence structure; for S-shaped distortion functions a similar structure is
obtained. Our results highlight several challenges and features in distorted
optimal transport, offering a new mathematical bridge between the fields of
probability, decision theory, and risk management. | Distorted optimal transport | 2023-08-22 10:25:51 | Haiyan Liu, Bin Wang, Ruodu Wang, Sheng Chao Zhuang | http://arxiv.org/abs/2308.11238v1, http://arxiv.org/pdf/2308.11238v1 | math.OC |
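To make the key object concrete: on the real line the comonotonic coupling simply pairs the sorted samples of the two marginals, and a distorted expected cost is a Choquet integral of the cost distribution. The sketch below (not from the paper) evaluates both for a convex distortion g(u) = u^2; whether the comonotonic coupling is optimal in the paper's exact sense depends on their conditions on the cost, and the samples are hypothetical.

def choquet(costs, g):
    # Distorted expectation of equally likely nonnegative costs:
    # sum over descending order statistics c_(1) >= ... >= c_(n) of
    # c_(k) * (g(k/n) - g((k-1)/n)).
    c = sorted(costs, reverse=True)
    n = len(c)
    return sum(ck * (g(k / n) - g((k - 1) / n)) for k, ck in enumerate(c, start=1))

x = [1.0, 4.0, 6.0, 10.0]
y = [0.0, 3.0, 5.0, 20.0]
g = lambda u: u ** 2                                   # convex distortion

comonotonic = list(zip(sorted(x), sorted(y)))          # i-th smallest with i-th smallest
print("comonotonic distorted cost:",
      choquet([abs(a - b) for a, b in comonotonic], g))

other = list(zip(sorted(x), sorted(y, reverse=True)))  # counter-monotonic comparison
print("counter-monotonic distorted cost:",
      choquet([abs(a - b) for a, b in other], g))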
35,844 | th | In 1979, Weitzman introduced Pandora's box problem as a framework for
sequential search with costly inspections. Recently, there has been a surge of
interest in Pandora's box problem, particularly among researchers working at
the intersection of economics and computation. This survey provides an overview
of the recent literature on Pandora's box problem, including its latest
extensions and applications in areas such as market design, decision theory,
and machine learning. | Recent Developments in Pandora's Box Problem: Variants and Applications | 2023-08-23 19:39:14 | Hedyeh Beyhaghi, Linda Cai | http://arxiv.org/abs/2308.12242v1, http://arxiv.org/pdf/2308.12242v1 | cs.GT |
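For readers new to the framework, a minimal sketch of Weitzman's classical index policy (the starting point of this literature, not a result of the survey): each box's reservation value z_i solves E[(X_i - z_i)^+] = c_i, boxes are opened in decreasing index order, and search stops once the best value seen exceeds every remaining index. Distributions and costs below are hypothetical.

def reservation_value(values, probs, cost, lo=-1e6, hi=1e6, iters=100):
    # Solve E[(X - z)^+] = cost for z by bisection (the LHS is decreasing in z).
    def excess(z):
        return sum(p * max(v - z, 0.0) for v, p in zip(values, probs))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if excess(mid) > cost:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Each box: (support, probabilities, inspection cost)
boxes = [
    ([0.0, 100.0], [0.5, 0.5], 10.0),
    ([30.0, 60.0], [0.5, 0.5], 5.0),
    ([50.0],       [1.0],      1.0),
]
indices = [reservation_value(v, p, c) for v, p, c in boxes]
order = sorted(range(len(boxes)), key=lambda i: -indices[i])
print("indices:", indices, "open in order:", order)
# Policy: walk through `order`, drawing each X_i; stop as soon as the best
# value seen so far is at least the index of the next unopened box.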
35,845 | th | We analyse the typical structure of games in terms of the connectivity
properties of their best-response graphs. Our central result shows that almost
every game that is 'generic' (without indifferences) and has a pure Nash
equilibrium and a 'large' number of players is connected, meaning that every
action profile that is not a pure Nash equilibrium can reach every pure Nash
equilibrium via best-response paths. This has important implications for
dynamics in games. In particular, we show that there are simple, uncoupled,
adaptive dynamics for which period-by-period play converges almost surely to a
pure Nash equilibrium in almost every large generic game that has one (which
contrasts with the known fact that there is no such dynamic that leads almost
surely to a pure Nash equilibrium in every generic game that has one). We build
on recent results in probabilistic combinatorics for our characterisation of
game connectivity. | Game Connectivity and Adaptive Dynamics | 2023-09-19 16:32:34 | Tom Johnston, Michael Savery, Alex Scott, Bassel Tarbush | http://arxiv.org/abs/2309.10609v3, http://arxiv.org/pdf/2309.10609v3 | econ.TH |
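A minimal brute-force sketch (not from the paper) of the central object: build the best-response graph of a small generic game and test the connectedness property described above, namely that every profile which is not a pure Nash equilibrium can reach every pure Nash equilibrium along best-response paths. Payoffs are random and purely illustrative.

import itertools, random

def random_generic_game(n_players=3, n_actions=2, seed=0):
    rng = random.Random(seed)
    profiles = list(itertools.product(range(n_actions), repeat=n_players))
    return {s: [rng.random() for _ in range(n_players)] for s in profiles}, profiles

def best_response_edges(payoffs, profiles, n_players, n_actions):
    edges = {s: set() for s in profiles}
    for s in profiles:
        for i in range(n_players):
            best = max(range(n_actions),
                       key=lambda a: payoffs[s[:i] + (a,) + s[i+1:]][i])
            if best != s[i]:                      # unilateral best-response move
                edges[s].add(s[:i] + (best,) + s[i+1:])
    return edges

def reachable(edges, start):
    seen, stack = {start}, [start]
    while stack:
        for t in edges[stack.pop()]:
            if t not in seen:
                seen.add(t); stack.append(t)
    return seen

payoffs, profiles = random_generic_game()
edges = best_response_edges(payoffs, profiles, 3, 2)
pne = [s for s in profiles if not edges[s]]       # no improving deviation exists
connected = all(set(pne) <= reachable(edges, s) for s in profiles if s not in pne)
print("pure Nash equilibria:", pne, "connected:", connected)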
35,846 | th | Is more information always better? Or are there some situations in which more
information can make us worse off? Good (1967) argues that expected utility
maximizers should always accept more information if the information is
cost-free and relevant. But Good's argument presupposes that you are certain
you will update by conditionalization. If we relax this assumption and allow
agents to be uncertain about updating, these agents can be rationally required
to reject free and relevant information. Since there are good reasons to be
uncertain about updating, rationality can require you to prefer ignorance. | Rational Aversion to Information | 2023-09-21 06:51:52 | Sven Neth | http://dx.doi.org/10.1086/727772, http://arxiv.org/abs/2309.12374v3, http://arxiv.org/pdf/2309.12374v3 | stat.OT |
35,847 | th | Community rating is a policy that mandates uniform premium regardless of the
risk factors. In this paper, our focus narrows to the single contract
interpretation wherein we establish a theoretical framework for community
rating using Stiglitz's (1977) monopoly model in which there is a continuum of
agents. We exhibit profitability conditions and show that, under mild
regularity conditions, the optimal premium is unique and satisfies the inverse
elasticity rule. Our numerical analysis, using realistic parameter values,
reveals that under regulation, a 10% increase in indemnity is possible with
minimal impact on other variables. | Theoretical Foundations of Community Rating by a Private Monopolist Insurer: Framework, Regulation, and Numerical Analysis | 2023-09-27 00:02:00 | Yann Braouezec, John Cagnol | http://arxiv.org/abs/2309.15269v2, http://arxiv.org/pdf/2309.15269v2 | econ.TH |
35,848 | th | This paper fundamentally reformulates economic and financial theory to
include electronic currencies. The valuation of the electronic currencies will
be based on macroeconomic theory and the fundamental equation of monetary
policy, not the microeconomic theory of discounted cash flows. The view of
electronic currency as a transactional equity associated with tangible assets
of a sub-economy will be developed, in contrast to the view of stock as an
equity associated mostly with intangible assets of a sub-economy. The view will
be developed of the electronic currency management firm as an entity
responsible for coordinated monetary (electronic currency supply and value
stabilization) and fiscal (investment and operational) policies of a
substantial (for liquidity of the electronic currency) sub-economy. The risk
model used in the valuations and the decision-making will not be the
ubiquitous, yet inappropriate, exponential risk model that leads to discount
rates, but will be multi time scale models that capture the true risk. The
decision-making will be approached from the perspective of true systems control
based on a system response function given by the multi scale risk model and
system controllers that utilize the Deep Reinforcement Learning, Generative
Pretrained Transformers, and other methods of Artificial Intelligence
(DRL/GPT/AI). Finally, the sub-economy will be viewed as a nonlinear complex
physical system with both stable equilibriums that are associated with
short-term exploitation, and unstable equilibriums that need to be stabilized
with active nonlinear control based on the multi scale system response
functions and DRL/GPT/AI. | A new economic and financial theory of money | 2023-10-08 06:16:06 | Michael E. Glinsky, Sharon Sievert | http://arxiv.org/abs/2310.04986v4, http://arxiv.org/pdf/2310.04986v4 | econ.TH |
35,849 | th | We propose an operating-envelope-aware, prosumer-centric, and efficient
energy community that aggregates individual and shared community distributed
energy resources and transacts with a regulated distribution system operator
(DSO) under a generalized net energy metering tariff design. To ensure safe
network operation, the DSO imposes dynamic export and import limits, known as
dynamic operating envelopes, on end-users' revenue meters. Given the operating
envelopes, we propose an incentive-aligned community pricing mechanism under
which the decentralized optimization of community members' benefit implies the
optimization of overall community welfare. The proposed pricing mechanism
satisfies the cost-causation principle and ensures the stability of the energy
community in a coalition game setting. Numerical examples provide insights into
the characteristics of the proposed pricing mechanism and quantitative measures
of its performance. | Operating-Envelopes-Aware Decentralized Welfare Maximization for Energy Communities | 2023-10-11 06:04:34 | Ahmed S. Alahmed, Guido Cavraro, Andrey Bernstein, Lang Tong | http://arxiv.org/abs/2310.07157v1, http://arxiv.org/pdf/2310.07157v1 | eess.SY |
35,850 | th | Sustainability of common-pool resources hinges on the interplay between human
and environmental systems. However, there is still a lack of a novel and
comprehensive framework for modelling extraction of common-pool resources and
cooperation of human agents that can account for different factors that shape
the system behavior and outcomes. In particular, we still lack a critical value
for ensuring resource sustainability under different scenarios. In this paper,
we present a novel framework for studying resource extraction and cooperation
in human-environmental systems for common-pool resources. We explore how
different factors, such as resource availability and conformity effect,
influence the players' decisions and the resource outcomes. We identify
critical values for ensuring resource sustainability under various scenarios.
We demonstrate the observed phenomena are robust to the complexity and
assumptions of the models and discuss implications of our study for policy and
practice, as well as the limitations and directions for future research. | Impact of resource availability and conformity effect on sustainability of common-pool resources | 2023-10-11 18:18:13 | Chengyi Tu, Renfei Chen, Ying Fan, Xuwei Pan | http://arxiv.org/abs/2310.07577v2, http://arxiv.org/pdf/2310.07577v2 | econ.TH |
35,851 | th | Commuters looking for the shortest path to their destinations, the security
of networked computers, hedge funds trading on the same stocks, governments and
populations acting to mitigate an epidemic, or employers and employees agreeing
on a contract, are all examples of (dynamic) stochastic differential games. In
essence, game theory deals with the analysis of strategic interactions among
multiple decision-makers. The theory has had enormous impact in a wide variety
of fields, but its rigorous mathematical analysis is rather recent. It started
with the pioneering work of von Neumann and Morgenstern published in 1944.
Since then, game theory has taken centre stage in applied mathematics and
related areas. Game theory has also played an important role in unsuspected
areas: for instance in military applications, when the analysis of guided
interceptor missiles in the 1950s motivated the study of games evolving
dynamically in time. Such games (when possibly subject to randomness) are
called stochastic differential games. Their study started with the work of
Isaacs, who crucially recognised the importance of (stochastic) control theory
in the area. Over the past few decades since Isaacs's work, a rich theory of
stochastic differential game has emerged and branched into several directions.
This paper will review recent advances in the study of solvability of
stochastic differential games, with a focus on a purely probabilistic technique
to approach the problem. Unsurprisingly, the number of players involved in the
game is a major factor of the analysis. We will explain how the size of the
population impacts the analyses and solvability of the problem, and discuss
mean field games as well as the convergence of finite player games to mean
field games. | On the population size in stochastic differential games | 2023-10-15 22:06:56 | Dylan Possamaï, Ludovic Tangpi | http://arxiv.org/abs/2310.09919v1, http://arxiv.org/pdf/2310.09919v1 | math.PR |
35,852 | th | We explore a version of the minimax theorem for two-person win-lose games
with infinitely many pure strategies. In the countable case, we give a
combinatorial condition on the game which implies the minimax property. In the
general case, we prove that a game satisfies the minimax property along with
all its subgames if and only if none of its subgames is isomorphic to the
"larger number game." This generalizes a recent theorem of Hanneke, Livni and
Moran. We also propose several applications of our results outside of game
theory. | The minimax property in infinite two-person win-lose games | 2023-10-30 10:21:52 | Ron Holzman | http://arxiv.org/abs/2310.19314v1, http://arxiv.org/pdf/2310.19314v1 | cs.GT |
35,872 | th | A cake allocation is called *strongly-proportional* if it allocates each
agent a piece worth, to them, strictly more than their fair share of 1/n of the
total cake value. It is called *connected* if it allocates each agent a
connected piece. We present a necessary and sufficient condition for the
existence of a strongly-proportional connected cake-allocation among agents
with strictly positive valuations. | On Connected Strongly-Proportional Cake-Cutting | 2023-12-23 22:08:46 | Zsuzsanna Jankó, Attila Joó, Erel Segal-Halevi | http://arxiv.org/abs/2312.15326v1, http://arxiv.org/pdf/2312.15326v1 | math.CO |
35,854 | th | In the ultimatum game, the challenge is to explain why responders reject
non-zero offers, thereby defying classical rationality. Fairness and related
notions have been the main explanations so far. We explain this rejection
behavior via the following principle: if the responder regrets less about
losing the offer than the proposer regrets not offering the best option, the
offer is rejected. This principle qualifies as a rational punishing behavior
and it replaces the experimentally falsified classical rationality (the subgame
perfect Nash equilibrium) that leads to accepting any non-zero offer. The
principle is implemented via the transitive regret theory for probabilistic
lotteries. The expected utility implementation is a limiting case of this. We
show that several experimental results normally ascribed to fairness and
intent-recognition can be given an alternative explanation via rational
punishment; e.g. the comparison between "fair" and "superfair", the behavior
under raised stakes, etc. Hence we also propose experiments that can
distinguish these two scenarios (fairness versus regret-based punishment). They
assume different utilities for the proposer and responder. We focus on the
mini-ultimatum version of the game and also show how it can emerge from a more
general setup of the game. | Ultimatum game: regret or fairness? | 2023-11-07 11:54:02 | Lida H. Aleksanyan, Armen E. Allahverdyan, Vardan G. Bardakhchyan | http://arxiv.org/abs/2311.03814v1, http://arxiv.org/pdf/2311.03814v1 | econ.TH |
35,855 | th | We introduce and mathematically study a conceptual model for the dynamics of
the buyers' population in markets of perishable goods where prices are not
posted. Buyers' behaviours are driven partly by loyalty to previously visited
merchants and partly by sensitivity to the merchants' intrinsic attractiveness.
Moreover, attractiveness evolves in time depending on the relative volumes of
buyers, assuming profit/competitiveness optimisation when
favourable/unfavourable. While this negative feedback mechanism is a source of
instability that promotes oscillatory behaviour, our analysis identifies those
critical features that are responsible for the asymptotic stability of
stationary states, both in their immediate neighbourhood and globally in phase
space. In particular, we show that while full loss of clientele occurs
(depending on the initial state) in case of a bounded reactivity rate, it
cannot happen when this rate is unbounded and merchants' resilience always
prevails in this case. Altogether, our analysis provides mathematical insights
into the consequences of introducing feedback into buyer-seller interactions
and their diversified impacts on the long term levels of clientele in the
markets. | Population dynamics in fresh product markets with no posted prices | 2023-11-07 16:38:17 | Ali Ellouze, Bastien Fernandez | http://arxiv.org/abs/2311.03987v1, http://arxiv.org/pdf/2311.03987v1 | econ.TH |
35,856 | th | Formally, for common knowledge to arise in a dynamic setting, knowledge that
it has arisen must be simultaneously attained by all players. As a result, new
common knowledge is unattainable in many realistic settings, due to timing
frictions. This unintuitive phenomenon, observed by Halpern and Moses (1990),
was discussed by Arrow et al. (1987) and by Aumann (1989), was called a paradox
by Morris (2014), and has evaded satisfactory resolution for four decades. We
resolve this paradox by proposing a new definition for common knowledge, which
coincides with the traditional one in static settings but generalizes it in
dynamic settings. Under our definition, common knowledge can arise without
simultaneity, particularly in canonical examples of the Halpern-Moses paradox.
We demonstrate its usefulness by deriving for it an agreement theorem \`a la
Aumann (1976), and showing that it arises in the setting of Geanakoplos and
Polemarchakis (1982) with timing frictions added. | Common Knowledge, Regained | 2023-11-08 01:38:16 | Yannai A. Gonczarowski, Yoram Moses | http://arxiv.org/abs/2311.04374v1, http://arxiv.org/pdf/2311.04374v1 | econ.TH |
35,857 | th | We investigate the problem of approximating an incomplete preference relation
$\succsim$ on a finite set by a complete preference relation. We aim to obtain
this approximation in such a way that the choices on the basis of two
preferences, one incomplete, the other complete, have the smallest possible
discrepancy in the aggregate. To this end, we use the top-difference metric on
preferences, and define a best complete approximation of $\succsim$ as a
complete preference relation nearest to $\succsim$ relative to this metric. We
prove that such an approximation must be a maximal completion of $\succsim$,
and that it is, in fact, any one completion of $\succsim$ with the largest
index. Finally, we use these results to provide a sufficient condition for the
best complete approximation of a preference to be its canonical completion.
This leads to closed-form solutions to the best approximation problem in the
case of several incomplete preference relations of interest. | Best Complete Approximations of Preference Relations | 2023-11-11 21:45:59 | Hiroki Nishimura, Efe A. Ok | http://arxiv.org/abs/2311.06641v1, http://arxiv.org/pdf/2311.06641v1 | econ.TH |
35,858 | th | We study a repeated Principal Agent problem between a long lived Principal
and Agent pair in a prior free setting. In our setting, the sequence of
realized states of nature may be adversarially chosen, the Agent is non-myopic,
and the Principal aims for a strong form of policy regret. Following Camara,
Hartline, and Johnson, we model the Agent's long-run behavior with behavioral
assumptions that relax the common prior assumption (for example, that the Agent
has no swap regret). Within this framework, we revisit the mechanism proposed
by Camara et al., which informally uses calibrated forecasts of the unknown
states of nature in place of a common prior. We give two main improvements.
First, we give a mechanism that has an exponentially improved dependence (in
terms of both running time and regret bounds) on the number of distinct states
of nature. To do this, we show that our mechanism does not require truly
calibrated forecasts, but rather forecasts that are unbiased subject to only a
polynomially sized collection of events -- which can be produced with
polynomial overhead. Second, in several important special cases -- including
the focal linear contracting setting -- we show how to remove strong
``Alignment'' assumptions (which informally require that near-ties are always
broken in favor of the Principal) by specifically deploying ``stable'' policies
that do not have any near ties that are payoff relevant to the Principal. Taken
together, our new mechanism makes the compelling framework proposed by Camara
et al. much more powerful, now able to be realized over polynomially sized
state spaces, and while requiring only mild assumptions on Agent behavior. | Efficient Prior-Free Mechanisms for No-Regret Agents | 2023-11-14 00:13:42 | Natalie Collina, Aaron Roth, Han Shao | http://arxiv.org/abs/2311.07754v1, http://arxiv.org/pdf/2311.07754v1 | cs.GT |
35,859 | th | How can an informed sender persuade a receiver, having only limited
information about the receiver's beliefs? Motivated by research showing
generative AI can simulate economic agents, we initiate the study of
information design with an oracle. We assume the sender can learn more about
the receiver by querying this oracle, e.g., by simulating the receiver's
behavior. Aside from AI motivations such as general-purpose Large Language
Models (LLMs) and problem-specific machine learning models, alternate
motivations include customer surveys and querying a small pool of live users.
Specifically, we study Bayesian Persuasion where the sender has a
second-order prior over the receiver's beliefs. After a fixed number of queries
to an oracle to refine this prior, the sender commits to an information
structure. Upon receiving the message, the receiver takes a payoff-relevant
action maximizing her expected utility given her posterior beliefs. We design
polynomial-time querying algorithms that optimize the sender's expected utility
in this Bayesian Persuasion game. As a technical contribution, we show that
queries form partitions of the space of receiver beliefs that can be used to
quantify the sender's knowledge. | Algorithmic Persuasion Through Simulation: Information Design in the Age of Generative AI | 2023-11-30 02:01:33 | Keegan Harris, Nicole Immorlica, Brendan Lucier, Aleksandrs Slivkins | http://arxiv.org/abs/2311.18138v1, http://arxiv.org/pdf/2311.18138v1 | cs.GT |
35,860 | th | We analyze the overall benefits of an energy community cooperative game under
which distributed energy resources (DER) are shared behind a regulated
distribution utility meter under a general net energy metering (NEM) tariff.
Two community DER scheduling algorithms are examined. The first is a community
with centrally controlled DER, whereas the second is decentralized letting its
members schedule their own DER locally. For both communities, we prove that the
cooperative game's value function is superadditive, hence the grand coalition
achieves the highest welfare. We also prove the balancedness of the cooperative
game under the two DER scheduling algorithms, which means that there is a
welfare re-distribution scheme that de-incentivizes players from leaving the
grand coalition to form smaller ones. Lastly, we present five ex-post and an
ex-ante welfare re-distribution mechanisms and evaluate them in simulation, in
addition to investigating the performance of various community sizes under the
two DER scheduling algorithms. | Resource Sharing in Energy Communities: A Cooperative Game Approach | 2023-11-30 21:41:16 | Ahmed S. Alahmed, Lang Tong | http://arxiv.org/abs/2311.18792v1, http://arxiv.org/pdf/2311.18792v1 | cs.GT |
35,861 | th | The field of algorithmic fairness has rapidly emerged over the past 15 years
as algorithms have become ubiquitous in everyday life. Algorithmic fairness
traditionally considers statistical notions of fairness that algorithms might
satisfy in decisions based on noisy data. We first show that these are
theoretically disconnected from welfare-based notions of fairness. We then
discuss two individual welfare-based notions of fairness, envy freeness and
prejudice freeness, and establish conditions under which they are equivalent to
error rate balance and predictive parity, respectively. We discuss the
implications of these findings in light of the recently discovered
impossibility theorem in algorithmic fairness (Kleinberg, Mullainathan, &
Raghavan (2016), Chouldechova (2017)). | Algorithmic Fairness with Feedback | 2023-12-06 00:42:14 | John W. Patty, Elizabeth Maggie Penn | http://arxiv.org/abs/2312.03155v1, http://arxiv.org/pdf/2312.03155v1 | econ.TH |
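For concreteness, a minimal sketch (not from the paper) of the two statistical criteria the abstract relates welfare-based notions to: error rate balance (equal false positive and false negative rates across groups) and predictive parity (equal positive predictive value across groups). The toy data and helper names are hypothetical.

from collections import defaultdict

def group_rates(y_true, y_pred, group):
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
    for t, p, g in zip(y_true, y_pred, group):
        key = ("tp" if t and p else "fp" if not t and p
               else "fn" if t and not p else "tn")
        counts[g][key] += 1
    rates = {}
    for g, c in counts.items():
        rates[g] = {
            "FPR": c["fp"] / max(c["fp"] + c["tn"], 1),
            "FNR": c["fn"] / max(c["fn"] + c["tp"], 1),
            "PPV": c["tp"] / max(c["tp"] + c["fp"], 1),
        }
    return rates

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
for g, r in group_rates(y_true, y_pred, group).items():
    print(g, r)
# Error rate balance: FPR and FNR equal across groups.
# Predictive parity: PPV equal across groups.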
35,862 | th | Multiwinner voting captures a wide variety of settings, from parliamentary
elections in democratic systems to product placement in online shopping
platforms. There is a large body of work dealing with axiomatic
characterizations, computational complexity, and algorithmic analysis of
multiwinner voting rules. Although many challenges remain, significant progress
has been made in showing existence of fair and representative outcomes as well
as efficient algorithmic solutions for many commonly studied settings. However,
much of this work focuses on single-shot elections, even though in numerous
real-world settings elections are held periodically and repeatedly. Hence, it
is imperative to extend the study of multiwinner voting to temporal settings.
Recently, there have been several efforts to address this challenge. However,
these works are difficult to compare, as they model multi-period voting in very
different ways. We propose a unified framework for studying temporal fairness
in this domain, drawing connections with various existing bodies of work, and
consolidating them within a general framework. We also identify gaps in
existing literature, outline multiple opportunities for future work, and put
forward a vision for the future of multiwinner voting in temporal settings. | Temporal Fairness in Multiwinner Voting | 2023-12-07 19:38:32 | Edith Elkind, Svetlana Obraztsova, Nicholas Teh | http://arxiv.org/abs/2312.04417v2, http://arxiv.org/pdf/2312.04417v2 | cs.GT |
35,863 | th | Reducing wealth inequality and disparity is a global challenge. The economic
system is mainly divided into (1) gift and reciprocity, (2) power and
redistribution, (3) market exchange, and (4) mutual aid without reciprocal
obligations. The current inequality stems from a capitalist economy consisting
of (2) and (3). To sublimate (1), which is the human economy, to (4), the
concept of a "mixbiotic society" has been proposed in the philosophical realm.
This is a society in which free and diverse individuals, "I," mix with each
other, recognize their respective "fundamental incapability" and sublimate them
into "WE" solidarity. The economy in this society must have moral
responsibility as a coadventurer and consideration for vulnerability to risk.
Therefore, I focus on two factors of mind perception: moral responsibility and
risk vulnerability, and propose a novel model of wealth distribution following
an econophysical approach. Specifically, I developed a joint-venture model, a
redistribution model in the joint-venture model, and a "WE economy" model. A
simulation comparison of a combination of the joint ventures and redistribution
with the WE economies reveals that WE economies are effective in reducing
inequality and resilient in normalizing wealth distribution as advantages, and
susceptible to free riders as disadvantages. However, this disadvantage can be
compensated for by fostering consensus and fellowship, and by complementing it
with joint ventures. This study essentially presents the effectiveness of moral
responsibility, the complementarity between the WE economy and the joint
economy, and the direction of the economy toward reducing inequality. Future
challenges are to develop the WE economy model based on real economic analysis
and psychology, as well as to promote WE economy fieldwork for worker coops and
platform cooperatives to realize a desirable mixbiotic society. | WE economy: Potential of mutual aid distribution based on moral responsibility and risk vulnerability | 2023-12-12 04:52:45 | Takeshi Kato | http://arxiv.org/abs/2312.06927v1, http://arxiv.org/pdf/2312.06927v1 | econ.TH |
35,873 | th | The Council of the European Union (EU) is one of the main decision-making
bodies of the EU. Many decisions require a qualified majority: the support of
55% of the member states (currently 15) that represent at least 65% of the
total population. We investigate how the power distribution, based on the
Shapley-Shubik index, and the proportion of winning coalitions change if these
criteria are modified within reasonable bounds. The influence of the two
countries with about 4% of the total population each is found to be almost
flat. The level of decisiveness decreases if the population criterion is above
68% or the states criterion is at least 17. The proportion of winning
coalitions can be increased from 13.2% to 20.8% (30.1%) such that the maximal
relative change in the Shapley-Shubik indices remains below 3.5% (5.5%). Our
results are indispensable to evaluate any proposal for reforming the qualified
majority voting system. | Voting power in the Council of the European Union: A comprehensive sensitivity analysis | 2023-12-28 11:07:33 | Dóra Gréta Petróczy, László Csató | http://arxiv.org/abs/2312.16878v1, http://arxiv.org/pdf/2312.16878v1 | physics.soc-ph |
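A minimal sketch (not from the paper) of the two ingredients: a check of the double-majority rule (at least 55% of the member states representing at least 65% of the population) and a Monte-Carlo estimate of Shapley-Shubik indices as the share of random orderings in which a member is pivotal. The population figures below are hypothetical stand-ins, not the EU data used in the paper.

import random

populations = {"A": 83, "B": 67, "C": 60, "D": 47, "E": 38, "F": 19,
               "G": 17, "H": 11, "I": 10, "J": 9}      # hypothetical, in millions
states = list(populations)
total_pop = sum(populations.values())

def winning(coalition, states_share=0.55, pop_share=0.65):
    enough_states = len(coalition) >= states_share * len(states)
    enough_pop = sum(populations[s] for s in coalition) >= pop_share * total_pop
    return enough_states and enough_pop

def shapley_shubik(samples=100_000, seed=1):
    pivots = {s: 0 for s in states}
    rng = random.Random(seed)
    for _ in range(samples):
        order = states[:]
        rng.shuffle(order)
        coalition = set()
        for s in order:
            coalition.add(s)
            if winning(coalition):      # s turns the coalition into a winning one
                pivots[s] += 1
                break
    return {s: pivots[s] / samples for s in states}

print({s: round(v, 3) for s, v in shapley_shubik().items()})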
35,864 | th | Algorithmic predictions are increasingly used to inform the allocations of
goods and interventions in the public sphere. In these domains, predictions
serve as a means to an end. They provide stakeholders with insights into
likelihood of future events as a means to improve decision making quality, and
enhance social welfare. However, if maximizing welfare is the ultimate goal,
prediction is only a small piece of the puzzle. There are various other policy
levers a social planner might pursue in order to improve bottom-line outcomes,
such as expanding access to available goods, or increasing the effect sizes of
interventions.
Given this broad range of design decisions, a basic question to ask is: What
is the relative value of prediction in algorithmic decision making? How do the
improvements in welfare arising from better predictions compare to those of
other policy levers? The goal of our work is to initiate the formal study of
these questions. Our main results are theoretical in nature. We identify
simple, sharp conditions determining the relative value of prediction
vis-\`a-vis expanding access, within several statistical models that are
popular amongst quantitative social scientists. Furthermore, we illustrate how
these theoretical insights may be used to guide the design of algorithmic
decision making systems in practice. | The Relative Value of Prediction in Algorithmic Decision Making | 2023-12-13 23:52:45 | Juan Carlos Perdomo | http://arxiv.org/abs/2312.08511v1, http://arxiv.org/pdf/2312.08511v1 | cs.CY |
35,865 | th | A linear-quadratic-Gaussian (LQG) game is an incomplete information game with
quadratic payoff functions and Gaussian payoff states. This study addresses an
information design problem to identify an information structure that maximizes
a quadratic objective function. Gaussian information structures are found to be
optimal among all information structures. Furthermore, the optimal Gaussian
information structure can be determined by semidefinite programming, which is a
natural extension of linear programming. This paper provides sufficient
conditions for the optimality and suboptimality of both no and full information
disclosure. In addition, we characterize optimal information structures in
symmetric LQG games and optimal public information structures in asymmetric LQG
games, with each structure presented in a closed-form expression. | LQG Information Design | 2023-12-15 04:36:36 | Masaki Miyashita, Takashi Ui | http://arxiv.org/abs/2312.09479v1, http://arxiv.org/pdf/2312.09479v1 | econ.TH |
35,866 | th | How do we ascribe subjective probability? In decision theory, this question
is often addressed by representation theorems, going back to Ramsey (1926),
which tell us how to define or measure subjective probability by observable
preferences. However, standard representation theorems make strong rationality
assumptions, in particular expected utility maximization. How do we ascribe
subjective probability to agents which do not satisfy these strong rationality
assumptions? I present a representation theorem with weak rationality
assumptions which can be used to define or measure subjective probability for
partly irrational agents. | Better Foundations for Subjective Probability | 2023-12-15 16:52:17 | Sven Neth | http://arxiv.org/abs/2312.09796v1, http://arxiv.org/pdf/2312.09796v1 | stat.OT |
35,867 | th | Algorithmic monoculture arises when many decision-makers rely on the same
algorithm to evaluate applicants. An emerging body of work investigates
possible harms of this kind of homogeneity, but has been limited by the
challenge of incorporating market effects in which the preferences and behavior
of many applicants and decision-makers jointly interact to determine outcomes.
Addressing this challenge, we introduce a tractable theoretical model of
algorithmic monoculture in a two-sided matching market with many participants.
We use the model to analyze outcomes under monoculture (when decision-makers
all evaluate applicants using a common algorithm) and under polyculture (when
decision-makers evaluate applicants independently). All else equal, monoculture
(1) selects less-preferred applicants when noise is well-behaved, (2) matches
more applicants to their top choice, though individual applicants may be worse
off depending on their value to decision-makers and risk tolerance, and (3) is
more robust to disparities in the number of applications submitted. | Monoculture in Matching Markets | 2023-12-15 17:46:54 | Kenny Peng, Nikhil Garg | http://arxiv.org/abs/2312.09841v1, http://arxiv.org/pdf/2312.09841v1 | cs.GT |
35,868 | th | While there is universal agreement that agents ought to act ethically, there
is no agreement as to what constitutes ethical behaviour. To address this
problem, recent philosophical approaches to `moral uncertainty' propose
aggregation of multiple ethical theories to guide agent behaviour. However, one
of the foundational proposals for aggregation - Maximising Expected
Choiceworthiness (MEC) - has been criticised as being vulnerable to fanaticism;
the problem of an ethical theory dominating agent behaviour despite low
credence (confidence) in said theory. Fanaticism thus undermines the
`democratic' motivation for accommodating multiple ethical perspectives. The
problem of fanaticism has not yet been mathematically defined. Representing
moral uncertainty as an instance of social welfare aggregation, this paper
contributes to the field of moral uncertainty by 1) formalising the problem of
fanaticism as a property of social welfare functionals and 2) providing
non-fanatical alternatives to MEC, i.e. Highest k-trimmed Mean and Highest
Median. | Moral Uncertainty and the Problem of Fanaticism | 2023-12-18 19:09:09 | Jazon Szabo, Jose Such, Natalia Criado, Sanjay Modgil | http://arxiv.org/abs/2312.11589v1, http://arxiv.org/pdf/2312.11589v1 | cs.AI |
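A minimal sketch (not from the paper) contrasting MEC with the two alternatives the abstract names, highest k-trimmed mean and highest median, on a toy choiceworthiness matrix. The credences and values are hypothetical, the aggregators are shown in unweighted form (the paper's definitions may weight by credence), and the extreme low-credence theory T3 illustrates how fanaticism can drive MEC.

import statistics

credences = {"T1": 0.45, "T2": 0.45, "T3": 0.10}
# choiceworthiness[theory][option]
cw = {
    "T1": {"a": 4.0, "b": 5.0},
    "T2": {"a": 4.0, "b": 5.0},
    "T3": {"a": 1000.0, "b": -1000.0},   # low-credence theory with extreme stakes
}
options = ["a", "b"]

def mec(option):
    return sum(credences[t] * cw[t][option] for t in credences)

def trimmed_mean(option, k=1):
    vals = sorted(cw[t][option] for t in credences)
    vals = vals[k:len(vals) - k] or vals   # drop the k lowest and k highest values
    return sum(vals) / len(vals)

def median(option):
    return statistics.median(cw[t][option] for t in credences)

for rule, f in [("MEC", mec), ("trimmed mean", trimmed_mean), ("median", median)]:
    print(rule, {o: round(f(o), 2) for o in options}, "->", max(options, key=f))
# MEC picks "a" because T3 dominates despite its 0.10 credence;
# the trimmed mean and the median both pick "b".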
35,869 | th | Control barrier functions (CBFs) and safety-critical control have seen a
rapid increase in popularity in recent years, predominantly applied to systems
in aerospace, robotics and neural network controllers. Control barrier
functions can provide a computationally efficient method to monitor arbitrary
primary controllers and enforce state constraints to ensure overall system
safety. One area that has yet to take advantage of the benefits offered by CBFs
is the field of finance and economics. This manuscript re-introduces three
applications of traditional control to economics, and develops and implements
CBFs for such problems. We consider the problem of optimal advertising for the
deterministic and stochastic case and Merton's portfolio optimization problem.
Numerical simulations are used to demonstrate the effectiveness of using
traditional control solutions in tandem with CBFs and stochastic CBFs to solve
such problems in the presence of state constraints. | Stochastic Control Barrier Functions for Economics | 2023-12-20 00:34:54 | David van Wijk | http://arxiv.org/abs/2312.12612v1, http://arxiv.org/pdf/2312.12612v1 | econ.TH |
35,870 | th | May's Theorem [K. O. May, Econometrica 20 (1952) 680-684] characterizes
majority voting on two alternatives as the unique preferential voting method
satisfying several simple axioms. Here we show that by adding some desirable
axioms to May's axioms, we can uniquely determine how to vote on three
alternatives. In particular, we add two axioms stating that the voting method
should mitigate spoiler effects and avoid the so-called strong no show paradox.
We prove a theorem stating that any preferential voting method satisfying our
enlarged set of axioms, which includes some weak homogeneity and preservation
axioms, agrees with Minimax voting in all three-alternative elections, except
perhaps in some improbable knife-edged elections in which ties may arise and be
broken in different ways. | An extension of May's Theorem to three alternatives: axiomatizing Minimax voting | 2023-12-21 22:18:28 | Wesley H. Holliday, Eric Pacuit | http://arxiv.org/abs/2312.14256v1, http://arxiv.org/pdf/2312.14256v1 | econ.TH |
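A minimal sketch (not from the paper) of Minimax voting on three alternatives: compute head-to-head margins from ranked ballots and pick the candidate whose worst pairwise defeat is smallest. Minimax has several variants (margins vs. winning votes), so this margin-based version may differ in detail from the one axiomatized in the paper; the ballots are hypothetical.

candidates = ["a", "b", "c"]
# (ranking from most to least preferred, number of voters with that ranking)
ballots = [(("a", "b", "c"), 4), (("b", "c", "a"), 3), (("c", "a", "b"), 2)]

def margin(x, y):
    # (# voters preferring x over y) minus (# preferring y over x).
    m = 0
    for ranking, n in ballots:
        m += n if ranking.index(x) < ranking.index(y) else -n
    return m

def minimax_winner():
    worst_defeat = {x: max(max(margin(y, x), 0) for y in candidates if y != x)
                    for x in candidates}
    return min(candidates, key=lambda x: worst_defeat[x]), worst_defeat

print(minimax_winner())
# This profile is a Condorcet cycle; "a" wins because its worst defeat (by 1
# vote against "c") is the smallest.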
35,874 | th | The rapid ongoing development of e-commerce and interest-based websites
makes it increasingly pressing to evaluate objects' true quality accurately before
recommendation, by employing an effective reputation system. An object's
quality is often calculated from its historical information, such as
selection records or rating scores, to help visitors make decisions before
watching, reading or buying. Usually, high-quality products obtain higher
average ratings than low-quality products, regardless of rating biases or
errors. However, many empirical cases demonstrate that consumers may be misled
by rating scores added by unreliable users or by deliberate tampering. In this
case, users' reputation, i.e., their ability to rate reliably and precisely,
makes a big difference during the evaluation process. Thus, one of the main
challenges in designing reputation systems is eliminating the effects of users'
rating bias on the evaluation results. To give an objective evaluation of each
user's reputation and uncover an object's intrinsic quality, we propose an
iterative balance (IB) method to correct users' rating biases. Experiments on
two online video-provided Web sites, namely MovieLens and Netflix datasets,
show that the IB method is a highly self-consistent and robust algorithm and it
can accurately quantify movies' actual quality and users' stability of rating.
Compared with existing methods, the IB method has a greater ability to find the
"dark horses", i.e., not so popular yet good movies, in the Academy Awards. | Eliminating the effect of rating bias on reputation systems | 2018-01-17 19:24:03 | Leilei Wu, Zhuoming Ren, Xiao-Long Ren, Jianlin Zhang, Linyuan Lü | http://arxiv.org/abs/1801.05734v1, http://arxiv.org/pdf/1801.05734v1 | physics.soc-ph |
35,875 | th | Evolutionarily stable strategy (ESS) is an important solution concept in game
theory which has been applied frequently to biological models. Informally, an
ESS is a strategy that, if followed by the population, cannot be taken over by a
mutation strategy that is initially rare. Finding such a strategy has been
shown to be difficult from a theoretical complexity perspective. We present an
algorithm for the case where mutations are restricted to pure strategies, and
present experiments on several game classes including random and a
recently-proposed cancer model. Our algorithm is based on a mixed-integer
non-convex feasibility program formulation, which constitutes the first general
optimization formulation for this problem. It turns out that the vast majority
of the games included in the experiments contain ESS with small support, and
our algorithm is outperformed by a support-enumeration based approach. However
we suspect our algorithm may be useful in the future as games are studied that
have ESS with potentially larger and unknown support size. | Optimization-Based Algorithm for Evolutionarily Stable Strategies against Pure Mutations | 2018-03-01 23:08:21 | Sam Ganzfried | http://arxiv.org/abs/1803.00607v2, http://arxiv.org/pdf/1803.00607v2 | cs.GT |
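For reference, a minimal sketch of the textbook ESS condition restricted to pure mutations in a symmetric two-player game with payoff matrix A (row player's payoffs): s is stable against pure strategy j if u(s,s) > u(e_j,s), or the two are equal and u(s,e_j) > u(e_j,e_j). This is only the check the paper's optimization would certify, not its mixed-integer formulation; the Hawk-Dove game below is a standard illustration.

import numpy as np

A = np.array([[-1.0, 4.0],    # Hawk vs (Hawk, Dove), with V = 4, C = 6
              [ 0.0, 2.0]])   # Dove vs (Hawk, Dove)

def u(x, y):
    return float(x @ A @ y)

def ess_against_pure_mutations(s, tol=1e-9):
    n = len(s)
    for j in range(n):
        e = np.zeros(n); e[j] = 1.0
        if np.allclose(e, s):
            continue
        if u(s, s) > u(e, s) + tol:
            continue                               # strictly better against itself
        if abs(u(s, s) - u(e, s)) <= tol and u(s, e) > u(e, e) + tol:
            continue                               # tie, but better against the mutant
        return False
    return True

s = np.array([2/3, 1/3])   # mixed ESS of Hawk-Dove: play Hawk with probability V/C = 2/3
print(ess_against_pure_mutations(s))                       # True
print(ess_against_pure_mutations(np.array([1.0, 0.0])))    # pure Hawk: False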
35,876 | th | In an economic market, sellers, infomediaries and customers constitute an
economic network. Each seller has her own customer group and the seller's
private customers are unobservable to other sellers. Therefore, a seller can
only sell commodities among her own customers unless other sellers or
infomediaries share her sale information to their customer groups. However, a
seller is not incentivized to share others' sale information by default, which
leads to inefficient resource allocation and limited revenue for the sale. To
tackle this problem, we develop a novel mechanism called customer sharing
mechanism (CSM) which incentivizes all sellers to share each other's sale
information to their private customer groups. Furthermore, CSM also
incentivizes all customers to truthfully participate in the sale. In the end,
CSM not only allocates the commodities efficiently but also optimizes the
seller's revenue. | Customer Sharing in Economic Networks with Costs | 2018-07-18 11:55:27 | Bin Li, Dong Hao, Dengji Zhao, Tao Zhou | http://arxiv.org/abs/1807.06822v1, http://arxiv.org/pdf/1807.06822v1 | cs.GT |
35,877 | th | We consider a network of agents. Associated with each agent are her covariate
and outcome. Agents influence each other's outcomes according to a certain
connection/influence structure. A subset of the agents participate on a
platform, and hence, are observable to it. The rest are not observable to the
platform and are called the latent agents. The platform does not know the
influence structure of the observable or the latent parts of the network. It
only observes the data on past covariates and decisions of the observable
agents. Observable agents influence each other both directly and indirectly
through the influence they exert on the latent agents.
We investigate how the platform can estimate the dependence of the observable
agents' outcomes on their covariates, taking the latent agents into account.
First, we show that this relationship can be succinctly captured by a matrix
and provide an algorithm for estimating it under a suitable approximate
sparsity condition using historical data of covariates and outcomes for the
observable agents. We also obtain convergence rates for the proposed estimator
despite the high dimensionality that allows more agents than observations.
Second, we show that the approximate sparsity condition holds under the
standard conditions used in the literature. Hence, our results apply to a large
class of networks. Finally, we apply our results to two practical settings:
targeted advertising and promotional pricing. We show that by using the
available historical data with our estimator, it is possible to obtain
asymptotically optimal advertising/pricing decisions, despite the presence of
latent agents. | Latent Agents in Networks: Estimation and Targeting | 2018-08-14 22:57:55 | Baris Ata, Alexandre Belloni, Ozan Candogan | http://arxiv.org/abs/1808.04878v3, http://arxiv.org/pdf/1808.04878v3 | cs.SI |
35,878 | th | In a pathbreaking paper, Cover and Ordentlich (1998) solved a max-min
portfolio game between a trader (who picks an entire trading algorithm,
$\theta(\cdot)$) and "nature," who picks the matrix $X$ of gross-returns of all
stocks in all periods. Their (zero-sum) game has the payoff kernel
$W_\theta(X)/D(X)$, where $W_\theta(X)$ is the trader's final wealth and $D(X)$
is the final wealth that would have accrued to a $\$1$ deposit into the best
constant-rebalanced portfolio (or fixed-fraction betting scheme) determined in
hindsight. The resulting "universal portfolio" compounds its money at the same
asymptotic rate as the best rebalancing rule in hindsight, thereby beating the
market asymptotically under extremely general conditions. Smitten with this
(1998) result, the present paper solves the most general tractable version of
Cover and Ordentlich's (1998) max-min game. This obtains for performance
benchmarks (read: derivatives) that are separately convex and homogeneous in
each period's gross-return vector. For completely arbitrary (even
non-measurable) performance benchmarks, we show how the axiom of choice can be
used to "find" an exact maximin strategy for the trader. | Multilinear Superhedging of Lookback Options | 2018-10-05 01:50:42 | Alex Garivaltis | http://arxiv.org/abs/1810.02447v2, http://arxiv.org/pdf/1810.02447v2 | q-fin.PR |
35,879 | th | We show that, in a resource allocation problem, the ex ante aggregate utility
of players with cumulative-prospect-theoretic preferences can be increased over
deterministic allocations by implementing lotteries. We formulate an
optimization problem, called the system problem, to find the optimal lottery
allocation. The system problem exhibits a two-layer structure comprised of a
permutation profile and optimal allocations given the permutation profile. For
any fixed permutation profile, we provide a market-based mechanism to find the
optimal allocations and prove the existence of equilibrium prices. We show that
the system problem has a duality gap, in general, and that the primal problem
is NP-hard. We then consider a relaxation of the system problem and derive some
qualitative features of the optimal lottery structure. | Optimal Resource Allocation over Networks via Lottery-Based Mechanisms | 2018-12-03 04:04:36 | Soham R. Phade, Venkat Anantharam | http://dx.doi.org/10.1007/978-3-030-16989-3_4, http://arxiv.org/abs/1812.00501v1, http://arxiv.org/pdf/1812.00501v1 | econ.TH |
35,880 | th | Spending by the UK's National Health Service (NHS) on independent healthcare
treatment has been increased in recent years and is predicted to sustain its
upward trend with the forecast of population growth. Some have viewed this
increase as an attempt not to expand the patients' choices but to privatize
public healthcare. This debate poses a social dilemma: whether the NHS should
stop cooperating with private providers. This paper contributes to healthcare
economic modelling by investigating the evolution of cooperation among three
proposed populations: Public Healthcare Providers, Private Healthcare Providers
and Patients. The Patient population is included as a main player in the
decision-making process by expanding patient's choices of treatment. We develop
a generic basic model that measures the cost of healthcare provision based on
given parameters, such as NHS and private healthcare providers' cost of
investments in both sectors, cost of treatments and gained benefits. A
patient's costly punishment is introduced as a mechanism to enhance cooperation
among the three populations. Our findings show that cooperation can be improved
with the introduction of punishment (patient's punishment) against defecting
providers. Although punishment increases cooperation, it is very costly
considering the small improvement in cooperation in comparison to the basic
model. | Pathways to Good Healthcare Services and Patient Satisfaction: An Evolutionary Game Theoretical Approach | 2019-07-06 18:38:33 | Zainab Alalawi, The Anh Han, Yifeng Zeng, Aiman Elragig | http://dx.doi.org/10.13140/RG.2.2.30657.10086, http://arxiv.org/abs/1907.07132v1, http://arxiv.org/pdf/1907.07132v1 | physics.soc-ph |
35,881 | th | This note provides a neat and enjoyable expansion and application of the
magnificent Ordentlich-Cover theory of "universal portfolios." I generalize
Cover's benchmark of the best constant-rebalanced portfolio (or 1-linear
trading strategy) in hindsight by considering the best bilinear trading
strategy determined in hindsight for the realized sequence of asset prices. A
bilinear trading strategy is a mini two-period active strategy whose final
capital growth factor is linear separately in each period's gross return vector
for the asset market. I apply Cover's ingenious (1991) performance-weighted
averaging technique to construct a universal bilinear portfolio that is
guaranteed (uniformly for all possible market behavior) to compound its money
at the same asymptotic rate as the best bilinear trading strategy in hindsight.
Thus, the universal bilinear portfolio asymptotically dominates the original
(1-linear) universal portfolio in the same technical sense that Cover's
universal portfolios asymptotically dominate all constant-rebalanced portfolios
and all buy-and-hold strategies. In fact, like so many Russian dolls, one can
get carried away and use these ideas to construct an endless hierarchy of ever
more dominant $H$-linear universal portfolios. | A Note on Universal Bilinear Portfolios | 2019-07-23 08:55:28 | Alex Garivaltis | http://arxiv.org/abs/1907.09704v2, http://arxiv.org/pdf/1907.09704v2 | q-fin.MF |
35,882 | th | We define discounted differential privacy, as an alternative to
(conventional) differential privacy, to investigate privacy of evolving
datasets, containing time series over an unbounded horizon. We use privacy loss
as a measure of the amount of information leaked by the reports at a certain
fixed time. We observe that privacy losses are weighted equally across time in
the definition of differential privacy, and therefore the magnitude of
privacy-preserving additive noise must grow without bound to ensure
differential privacy over an infinite horizon. Motivated by the discounted
utility theory within the economics literature, we use exponential and
hyperbolic discounting of privacy losses across time to relax the definition of
differential privacy under continual observations. This implies that privacy
losses in distant past are less important than the current ones to an
individual. We use discounted differential privacy to investigate privacy of
evolving datasets using additive Laplace noise and show that the magnitude of
the additive noise can remain bounded under discounted differential privacy. We
illustrate the quality of privacy-preserving mechanisms satisfying discounted
differential privacy on smart-meter measurement time-series of real households,
made publicly available by Ausgrid (an Australian electricity distribution
company). | Temporally Discounted Differential Privacy for Evolving Datasets on an Infinite Horizon | 2019-08-12 07:25:54 | Farhad Farokhi | http://arxiv.org/abs/1908.03995v2, http://arxiv.org/pdf/1908.03995v2 | cs.CR |
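A minimal numeric sketch of the abstract's point, under my reading of it: with a Laplace mechanism of fixed scale b, the per-report privacy loss is eps = sensitivity / b, so the conventional (equally weighted) cumulative loss diverges over an unbounded horizon while an exponentially discounted sum stays bounded by eps / (1 - gamma). The paper's formal definitions may differ in detail; the parameter values are hypothetical.

sensitivity = 1.0
b = 2.0                      # fixed Laplace noise scale
eps = sensitivity / b        # per-report privacy loss
gamma = 0.9                  # exponential discount factor

for T in [10, 100, 1000]:
    undiscounted = T * eps
    discounted = sum(gamma ** (T - t) * eps for t in range(1, T + 1))
    print(f"T={T:5d}  cumulative loss={undiscounted:8.1f}  "
          f"discounted loss={discounted:6.3f}  bound={eps / (1 - gamma):.3f}")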
35,883 | th | Many real-world domains contain multiple agents behaving strategically with
probabilistic transitions and uncertain (potentially infinite) duration. Such
settings can be modeled as stochastic games. While algorithms have been
developed for solving (i.e., computing a game-theoretic solution concept such
as Nash equilibrium) two-player zero-sum stochastic games, research on
algorithms for non-zero-sum and multiplayer stochastic games is limited. We
present a new algorithm for these settings, which constitutes the first
parallel algorithm for multiplayer stochastic games. We present experimental
results on a 4-player stochastic game motivated by a naval strategic planning
scenario, showing that our algorithm is able to quickly compute strategies
constituting Nash equilibrium up to a very small degree of approximation error. | Parallel Algorithm for Approximating Nash Equilibrium in Multiplayer Stochastic Games with Application to Naval Strategic Planning | 2019-10-01 07:08:14 | Sam Ganzfried, Conner Laughlin, Charles Morefield | http://arxiv.org/abs/1910.00193v4, http://arxiv.org/pdf/1910.00193v4 | cs.GT |
35,884 | th | We describe a new complete algorithm for computing Nash equilibrium in
multiplayer general-sum games, based on a quadratically-constrained feasibility
program formulation. We demonstrate that the algorithm runs significantly
faster than the prior fastest complete algorithm on several game classes
previously studied and that its runtimes even outperform the best incomplete
algorithms. | Fast Complete Algorithm for Multiplayer Nash Equilibrium | 2020-02-12 02:42:14 | Sam Ganzfried | http://arxiv.org/abs/2002.04734v10, http://arxiv.org/pdf/2002.04734v10 | cs.GT |
35,887 | th | This paper presents an inverse reinforcement learning (IRL) framework for
Bayesian stopping time problems. By observing the actions of a Bayesian
decision maker, we provide a necessary and sufficient condition to identify if
these actions are consistent with optimizing a cost function. In a Bayesian
(partially observed) setting, the inverse learner can at best identify
optimality wrt the observed strategies. Our IRL algorithm identifies optimality
and then constructs set-valued estimates of the cost function.To achieve this
IRL objective, we use novel ideas from Bayesian revealed preferences stemming
from microeconomics. We illustrate the proposed IRL scheme using two important
examples of stopping time problems, namely, sequential hypothesis testing and
Bayesian search. As a real-world example, we illustrate using a YouTube dataset
comprising metadata from 190000 videos how the proposed IRL method predicts
user engagement in online multimedia platforms with high accuracy. Finally, for
finite datasets, we propose an IRL detection algorithm and give finite sample
bounds on its error probabilities. | Necessary and Sufficient Conditions for Inverse Reinforcement Learning of Bayesian Stopping Time Problems | 2020-07-07 17:14:12 | Kunal Pattanayak, Vikram Krishnamurthy | http://arxiv.org/abs/2007.03481v6, http://arxiv.org/pdf/2007.03481v6 | cs.LG |
35,888 | th | We consider a model of urban spatial structure proposed by Harris and Wilson
(Environment and Planning A, 1978). The model consists of fast dynamics, which
represent spatial interactions between locations by the entropy-maximizing
principle, and slow dynamics, which represent the evolution of the spatial
distribution of local factors that facilitate such spatial interactions. One
known limitation of the Harris and Wilson model is that it can have multiple
locally stable equilibria, leading to a dependence of predictions on the
initial state. To overcome this, we employ equilibrium refinement by stochastic
stability. We build on the fact that the model is a large-population potential
game and that stochastically stable states in a potential game correspond to
global potential maximizers. Unlike local stability under deterministic
dynamics, the stochastic stability approach allows a unique and unambiguous
prediction for urban spatial configurations. We show that, in the most likely
spatial configuration, the number of retail agglomerations decreases either
when shopping costs for consumers decrease or when the strength of
agglomerative effects increases. | Stochastic stability of agglomeration patterns in an urban retail model | 2020-11-13 09:31:07 | Minoru Osawa, Takashi Akamatsu, Yosuke Kogure | http://arxiv.org/abs/2011.06778v1, http://arxiv.org/pdf/2011.06778v1 | econ.TH |
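The fast/slow structure mentioned above can be made concrete with a small simulation. The sketch below uses one common parameterisation of the Harris-Wilson model (entropy-maximising flows with attractiveness exponent alpha and cost sensitivity beta, and slow adjustment of floor space toward attracted revenue); the zone counts, cost matrix, and parameter values are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of Harris-Wilson-type retail dynamics under a common
# parameterisation; parameter values and the cost matrix are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_origins, n_centres = 20, 8
O = np.ones(n_origins)                                   # spending power at each origin zone
c = rng.uniform(0.5, 2.0, size=(n_origins, n_centres))   # travel costs
alpha, beta, kappa, eps = 1.2, 1.0, 1.0, 0.05
W = rng.uniform(0.5, 1.5, size=n_centres)                # initial floor space (attractiveness)

for _ in range(5000):
    # Fast dynamics: entropy-maximising flows from origin zones to retail centres.
    A = W[None, :] ** alpha * np.exp(-beta * c)
    T = O[:, None] * A / A.sum(axis=1, keepdims=True)
    D = T.sum(axis=0)                                    # revenue attracted by each centre
    # Slow dynamics: floor space grows where revenue exceeds running cost.
    W = np.maximum(W + eps * (D - kappa * W), 0.0)

print("surviving retail centres:", int(np.sum(W > 1e-3)), "out of", n_centres)
```

Re-running the loop from different initial W typically lands in different locally stable configurations, which is exactly the multiplicity that the stochastic-stability refinement above is meant to resolve.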
35,889 | th | In this paper, we consider a network of consumers who are under the combined
influence of their neighbors and external influencing entities (the marketers).
The consumers' opinions follow hybrid dynamics in which opinion jumps are caused by
the marketing campaigns. Using the relevant static game model proposed
recently in [1], we prove that although the marketers are in competition and
therefore create tension in the network, the network reaches a consensus.
Exploiting this key result, we propose a coopetition marketing strategy which
combines the one-shot Nash equilibrium actions and a policy of no advertising.
Under reasonable sufficient conditions, it is proved that the proposed
coopetition strategy profile Pareto-dominates the one-shot Nash equilibrium
strategy. This is a very encouraging result to tackle the much more challenging
problem of designing Pareto-optimal and equilibrium strategies for the
considered dynamical marketing game. | Allocating marketing resources over social networks: A long-term analysis | 2020-11-17 12:39:52 | Vineeth S. Varma, Samson Lasaulce, Julien Mounthanyvong, Irinel-Constantin Morarescu | http://arxiv.org/abs/2011.09268v1, http://arxiv.org/pdf/2011.09268v1 | econ.TH |
35,890 | th | We study competitive location problems in a continuous setting, in which
facilities have to be placed in a rectangular domain $R$ of normalized
dimensions $1$ and $\rho\geq 1$, and distances are measured according to the
Manhattan metric. We show that the family of 'balanced' facility configurations
(in which the Voronoi cells of individual facilities are equalized with respect
to a number of geometric properties) is considerably richer in this metric than
for Euclidean distances. Our main result considers the 'One-Round Voronoi Game'
with Manhattan distances, in which first player White and then player Black
each place $n$ points in $R$; each player scores the area for which one of its
facilities is closer than the facilities of the opponent. We give a tight
characterization: White has a winning strategy if and only if $\rho\geq n$; for
all other cases, we present a winning strategy for Black. | Competitive Location Problems: Balanced Facility Location and the One-Round Manhattan Voronoi Game | 2020-11-26 16:20:21 | Thomas Byrne, Sándor P. Fekete, Jörg Kalcsics, Linda Kleist | http://arxiv.org/abs/2011.13275v2, http://arxiv.org/pdf/2011.13275v2 | cs.CG |
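To make the scoring rule of the One-Round Voronoi Game concrete, the sketch below discretises the rectangle and assigns each sample point to its nearest facility under the Manhattan metric. The grid resolution and the example point placements are arbitrary, and this is only an area-estimation helper, not a solver for either player's strategy.

```python
# Illustrative sketch: estimate One-Round Voronoi Game scores by discretising
# the 1-by-rho rectangle and assigning each sample point to the closest
# facility under the Manhattan (L1) metric.
import numpy as np

def scores(white_pts, black_pts, rho=2.0, resolution=400):
    xs = np.linspace(0, rho, resolution)
    ys = np.linspace(0, 1.0, resolution)
    X, Y = np.meshgrid(xs, ys)

    def min_l1(points):
        d = np.full(X.shape, np.inf)
        for (px, py) in points:
            d = np.minimum(d, np.abs(X - px) + np.abs(Y - py))
        return d

    dw, db = min_l1(white_pts), min_l1(black_pts)
    cell = rho * 1.0 / (resolution ** 2)        # approximate area of one grid cell
    white_area = np.sum(dw < db) * cell
    black_area = np.sum(db < dw) * cell         # exact ties (measure zero) ignored
    return white_area, black_area

# Example: White places two points, Black replies with two points.
print(scores(white_pts=[(0.5, 0.5), (1.5, 0.5)],
             black_pts=[(0.6, 0.5), (1.4, 0.5)]))
```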
35,891 | th | We study a spatial, one-shot prisoner's dilemma (PD) model in which selection
operates on both an organism's behavioral strategy (cooperate or defect) and
its choice of when to implement that strategy across a set of discrete time
slots. Cooperators evolve to fixation regularly in the model when we add time
slots to lattices and small-world networks, and their portion of the population
grows, albeit slowly, when organisms interact in a scale-free network. This
selection for cooperators occurs across a wide variety of time slots and it
does so even when a crucial condition for the evolution of cooperation on
graphs is violated--namely, when the ratio of benefits to costs in the PD does
not exceed the number of spatially-adjacent organisms. | Temporal assortment of cooperators in the spatial prisoner's dilemma | 2020-11-29 23:27:19 | Tim Johnson, Oleg Smirnov | http://arxiv.org/abs/2011.14440v1, http://arxiv.org/pdf/2011.14440v1 | q-bio.PE |
35,892 | th | Two long-lived senders play a dynamic game of competitive persuasion. Each
period, each provides information to a single short-lived receiver. When the
senders also set prices, we unearth a folk theorem: if they are sufficiently
patient, virtually any vector of feasible and individually rational payoffs can
be sustained in a subgame perfect equilibrium. Without price-setting, there is
a unique subgame perfect equilibrium. In it, patient senders provide less
information--maximally patient ones none. | Dynamic Competitive Persuasion | 2018-11-28 19:44:01 | Mark Whitmeyer | http://arxiv.org/abs/1811.11664v6, http://arxiv.org/pdf/1811.11664v6 | math.PR |
35,913 | th | The standard game-theoretic solution concept, Nash equilibrium, assumes that
all players behave rationally. If we follow a Nash equilibrium and opponents
are irrational (or follow strategies from a different Nash equilibrium), then
we may obtain an extremely low payoff. On the other hand, a maximin strategy
assumes that all opposing agents are playing to minimize our payoff (even if it
is not in their best interest), and ensures the maximal possible worst-case
payoff, but results in exceedingly conservative play. We propose a new solution
concept called safe equilibrium that models opponents as behaving rationally
with a specified probability and behaving potentially arbitrarily with the
remaining probability. We prove that a safe equilibrium exists in all
strategic-form games (for all possible values of the rationality parameters),
and prove that its computation is PPAD-hard. We present exact algorithms for
computing a safe equilibrium in both 2 and $n$-player games, as well as
scalable approximation algorithms. | Safe Equilibrium | 2022-01-12 04:45:51 | Sam Ganzfried | http://arxiv.org/abs/2201.04266v10, http://arxiv.org/pdf/2201.04266v10 | cs.GT |
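A rough way to see the solution concept above at work: for a two-player bimatrix game, fix the opponent's rationality probability p and search over one player's mixed strategies, valuing each strategy as p times the payoff against a best-responding opponent plus (1 - p) times the worst-case payoff. The sketch below does this by brute force for an invented 2x2 game; it produces an approximate safe strategy for one player only, ignores the exact definition's tie-breaking details, and is not one of the paper's algorithms.

```python
# Illustrative brute-force sketch of the safe-strategy objective for the row
# player of a 2x2 bimatrix game: the column player best-responds with
# probability p and acts adversarially with probability 1 - p.
import numpy as np

A = np.array([[0.0, 4.0], [2.0, 1.0]])    # row player's payoffs (invented example)
B = np.array([[3.0, 0.0], [1.0, 2.0]])    # column player's payoffs (invented example)
p = 0.9                                   # opponent's rationality probability

best_val, best_x = -np.inf, None
for q in np.linspace(0, 1, 2001):         # row player's probability of action 0
    x = np.array([q, 1 - q])
    col_payoffs = x @ B                   # column player's payoff per column
    rational_col = np.argmax(col_payoffs) # best response of a rational opponent
    rational_value = (x @ A)[rational_col]
    worst_value = np.min(x @ A)           # adversarial opponent minimises our payoff
    val = p * rational_value + (1 - p) * worst_value
    if val > best_val:
        best_val, best_x = val, x

print("approximate safe strategy:", np.round(best_x, 4), "objective:", round(best_val, 3))
```

With p = 0 the objective collapses to the maximin value, while larger p increasingly weights the payoff obtained against a rational best-responding opponent, matching the two extremes contrasted in the abstract.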
35,893 | th | The design of mechanisms that encourage pro-social behaviours in populations
of self-regarding agents is recognised as a major theoretical challenge within
several areas of social, life and engineering sciences. When interference from
external parties is considered, several heuristics have been identified as
capable of engineering a desired collective behaviour at a minimal cost.
However, these studies neglect the diverse nature of contexts and social
structures that characterise real-world populations. Here we analyse the impact
of diversity by means of scale-free interaction networks with high and low
levels of clustering, and test various interference mechanisms using
simulations of agents facing a cooperative dilemma. Our results show that
interference on scale-free networks is not trivial and that distinct levels of
clustering react differently to each interference mechanism. As such, we argue
that no tailored response fits all scale-free networks and identify which
mechanisms are more efficient at fostering cooperation in both types of
networks. Finally, we discuss the pitfalls of considering reckless interference
mechanisms. | Exogenous Rewards for Promoting Cooperation in Scale-Free Networks | 2019-05-13 13:57:38 | Theodor Cimpeanu, The Anh Han, Francisco C. Santos | http://dx.doi.org/10.1162/isal_a_00181, http://arxiv.org/abs/1905.04964v2, http://arxiv.org/pdf/1905.04964v2 | cs.GT |
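The contrast above between scale-free networks with low and high clustering can be reproduced with standard generators. The sketch below uses the Barabási-Albert and Holme-Kim models from networkx as one plausible choice; the paper's exact generation procedure and the interference mechanisms themselves are not reproduced, and the sizes and parameters are arbitrary.

```python
# Illustrative sketch: scale-free interaction structures with low and high
# clustering, as contrasted in the abstract.
import networkx as nx

n, m = 1000, 2
low_clustering = nx.barabasi_albert_graph(n, m, seed=42)           # BA model
high_clustering = nx.powerlaw_cluster_graph(n, m, p=0.9, seed=42)  # Holme-Kim model

print("average clustering (BA):       ", round(nx.average_clustering(low_clustering), 3))
print("average clustering (Holme-Kim):", round(nx.average_clustering(high_clustering), 3))
```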
35,894 | th | The observed architecture of ecological and socio-economic networks differs
significantly from that of random networks. From a network science standpoint,
non-random structural patterns observed in real networks call for an
explanation of their emergence and an understanding of their potential systemic
consequences. This article focuses on one of these patterns: nestedness. Given
a network of interacting nodes, nestedness can be described as the tendency for
nodes to interact with subsets of the interaction partners of better-connected
nodes. Known for more than $80$ years in biogeography, nestedness has been
found in systems as diverse as ecological mutualistic organizations, world
trade, inter-organizational relations, among many others. This review article
focuses on three main pillars: the existing methodologies to observe nestedness
in networks; the main theoretical mechanisms conceived to explain the emergence
of nestedness in ecological and socio-economic networks; the implications of a
nested topology of interactions for the stability and feasibility of a given
interacting system. We survey results from variegated disciplines, including
statistical physics, graph theory, ecology, and theoretical economics.
Nestedness was found to emerge both in bipartite networks and, more recently,
in unipartite ones; this review is the first comprehensive attempt to unify
both streams of studies, usually disconnected from each other. We believe that
the truly interdisciplinary endeavour -- while rooted in a complex systems
perspective -- may inspire new models and algorithms whose realm of application
will undoubtedly transcend disciplinary boundaries. | Nestedness in complex networks: Observation, emergence, and implications | 2019-05-18 17:12:52 | Manuel Sebastian Mariani, Zhuo-Ming Ren, Jordi Bascompte, Claudio Juan Tessone | http://dx.doi.org/10.1016/j.physrep.2019.04.001, http://arxiv.org/abs/1905.07593v1, http://arxiv.org/pdf/1905.07593v1 | physics.soc-ph |
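For readers wanting a concrete handle on the pattern described above, the sketch below computes NODF, one widely used nestedness metric, on a toy binary interaction matrix. The metric choice and the toy matrix are illustrative; the review surveys several alternative measures and null models that this sketch does not cover.

```python
# Illustrative sketch: the NODF nestedness metric computed on a toy bipartite
# 0/1 interaction matrix (rows and columns could be, e.g., plants and pollinators).
import numpy as np
from itertools import combinations

def nodf(M):
    """NODF in [0, 100] for a binary interaction matrix (rows x columns)."""
    def paired(matrix):
        degrees = matrix.sum(axis=1)
        vals = []
        for i, j in combinations(range(matrix.shape[0]), 2):
            hi, lo = (i, j) if degrees[i] > degrees[j] else (j, i)
            if degrees[i] == degrees[j] or degrees[lo] == 0:
                vals.append(0.0)                    # decreasing fill is required
            else:
                shared = np.sum(matrix[hi] * matrix[lo])
                vals.append(100.0 * shared / degrees[lo])
        return vals
    M = np.asarray(M)
    return np.mean(paired(M) + paired(M.T))

perfectly_nested = np.array([[1, 1, 1, 1],
                             [1, 1, 1, 0],
                             [1, 1, 0, 0],
                             [1, 0, 0, 0]])
print("NODF:", round(nodf(perfectly_nested), 1))    # 100.0 for this toy matrix
```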
35,895 | th | The Sharing Economy (which includes Airbnb, Apple, Alibaba, Uber, WeWork,
eBay, Didi Chuxing, and Amazon) blossomed across the world, triggered structural
changes in industries, and significantly affected international capital flows,
primarily by disobeying a wide variety of statutes and laws in many countries.
These firms also illegally reduced and changed the nature of competition in many
industries, often to the detriment of social welfare. This article develops new
dynamic pricing models for the SEOs and derives some stability properties of
mixed games and dynamic algorithms which eliminate antitrust liability and also
reduce deadweight losses, greed, regret, and GPS manipulation. The new dynamic
pricing models contravene the Myerson-Satterthwaite impossibility theorem. | Complexity, Stability Properties of Mixed Games and Dynamic Algorithms, and Learning in the Sharing Economy | 2020-01-18 04:09:36 | Michael C. Nwogugu | http://arxiv.org/abs/2001.08192v1, http://arxiv.org/pdf/2001.08192v1 | cs.GT
35,896 | th | Successful algorithms have been developed for computing Nash equilibrium in a
variety of finite game classes. However, solving continuous games -- in which
the pure strategy space is (potentially uncountably) infinite -- is far more
challenging. Nonetheless, many real-world domains have continuous action
spaces, e.g., where actions refer to an amount of time, money, or other
resource that is naturally modeled as being real-valued as opposed to integral.
We present a new algorithm for approximating Nash equilibrium strategies in
continuous games. In addition to two-player zero-sum games, our algorithm also
applies to multiplayer games and games with imperfect information. We
experiment with our algorithm on a continuous imperfect-information Blotto
game, in which two players distribute resources over multiple battlefields.
Blotto games have frequently been used to model national security scenarios and
have also been applied to electoral competition and auction theory. Experiments
show that our algorithm is able to quickly compute close approximations of Nash
equilibrium strategies for this game. | Algorithm for Computing Approximate Nash Equilibrium in Continuous Games with Application to Continuous Blotto | 2020-06-12 22:53:18 | Sam Ganzfried | http://arxiv.org/abs/2006.07443v5, http://arxiv.org/pdf/2006.07443v5 | cs.GT |
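As background for the Blotto application above (and explicitly not the paper's algorithm, which targets the continuous, imperfect-information game directly), the sketch below discretises a symmetric Blotto game with three equal-value battlefields and runs plain fictitious play on the resulting finite matrix game; the budget, battlefield count, and iteration budget are arbitrary.

```python
# Illustrative baseline: fictitious play on a discretised, symmetric Blotto game
# with 3 equal-value battlefields and an integer budget.
import itertools
import numpy as np

budget, fields = 10, 3
actions = [a for a in itertools.product(range(budget + 1), repeat=fields)
           if sum(a) == budget]                 # all integer allocations of the budget

def payoff(a, b):
    """Row player's score: +1 per battlefield won, 0.5 per tie."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0 for x, y in zip(a, b))

U = np.array([[payoff(a, b) for b in actions] for a in actions])

counts_row = np.ones(len(actions))              # fictitious-play action counts
counts_col = np.ones(len(actions))
for _ in range(20000):
    br_row = np.argmax(U @ (counts_col / counts_col.sum()))
    br_col = np.argmin((counts_row / counts_row.sum()) @ U)   # constant-sum: column minimises
    counts_row[br_row] += 1
    counts_col[br_col] += 1

avg = counts_row / counts_row.sum()
for idx in np.argsort(avg)[-5:][::-1]:          # five most frequently played allocations
    print(actions[idx], round(avg[idx], 3))
```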
35,897 | th | In this work, we provide a general mathematical formalism to study the
optimal control of an epidemic, such as the COVID-19 pandemic, via incentives
to lockdown and testing. In particular, we model the interplay between the
government and the population as a principal-agent problem with moral hazard,
à la Cvitanić, Possamaï, and Touzi [27], while an epidemic is spreading
according to dynamics given by compartmental stochastic SIS or SIR models, as
proposed respectively by Gray, Greenhalgh, Hu, Mao, and Pan [45] and Tornatore,
Buccellato, and Vetro [88]. More precisely, to limit the spread of a virus, the
population can decrease the transmission rate of the disease by reducing
interactions between individuals. However, this effort, which cannot be
perfectly monitored by the government, comes at a social and monetary cost for
the population. To mitigate this cost, and thus encourage the lockdown of the
population, the government can put in place an incentive policy, in the form of
a tax or subsidy. In addition, the government may also implement a testing
policy in order to know more precisely the spread of the epidemic within the
country, and to isolate infected individuals. In terms of technical results, we
demonstrate the optimal form of the tax, indexed to the proportion of infected
individuals, as well as the optimal effort of the population, namely the
transmission rate chosen in response to this tax. The government's optimisation
problem then boils down to solving a Hamilton-Jacobi-Bellman equation.
Numerical results confirm that if a tax policy is implemented, the population
is encouraged to significantly reduce its interactions. If the government also
adjusts its testing policy, less effort is required on the population side,
individuals can interact almost as usual, and the epidemic is largely contained
by the targeted isolation of positively-tested individuals. | Incentives, lockdown, and testing: from Thucydides's analysis to the COVID-19 pandemic | 2020-09-01 17:36:28 | Emma Hubert, Thibaut Mastrolia, Dylan Possamaï, Xavier Warin | http://dx.doi.org/10.1007/s00285-022-01736-0, http://arxiv.org/abs/2009.00484v2, http://arxiv.org/pdf/2009.00484v2 | q-bio.PE |
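The control lever in the model above is the transmission rate chosen by the population. As a minimal illustration only (the paper's stochastic SIS/SIR dynamics, the principal-agent layer, and the HJB characterisation are not reproduced), the sketch below compares deterministic SIS trajectories with and without a lockdown-style reduction of the transmission rate; all parameter values are invented.

```python
# Minimal deterministic SIS sketch: the population's effort reduces the
# transmission rate beta; everything else is held fixed.
import numpy as np

def simulate_sis(beta, gamma=0.1, i0=0.01, days=365, dt=0.1):
    infected = i0
    path = [infected]
    for _ in range(int(days / dt)):
        new_infections = beta * infected * (1 - infected)
        recoveries = gamma * infected
        infected = min(max(infected + dt * (new_infections - recoveries), 0.0), 1.0)
        path.append(infected)
    return np.array(path)

no_effort = simulate_sis(beta=0.25)        # baseline transmission rate
with_effort = simulate_sis(beta=0.12)      # transmission rate reduced by lockdown effort

print("final infected share, no effort:  ", round(no_effort[-1], 3))
print("final infected share, with effort:", round(with_effort[-1], 3))
```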
35,898 | th | On-line firms deploy suites of software platforms, where each platform is
designed to interact with users during a certain activity, such as browsing,
chatting, socializing, emailing, driving, etc. The economic and incentive
structure of this exchange, as well as its algorithmic nature, have not been
explored to our knowledge. We model this interaction as a Stackelberg game
between a Designer and one or more Agents. We model an Agent as a Markov chain
whose states are activities; we assume that the Agent's utility is a linear
function of the steady-state distribution of this chain. The Designer may
design a platform for each of these activities/states; if a platform is adopted
by the Agent, the transition probabilities of the Markov chain are affected,
and so is the objective of the Agent. The Designer's utility is a linear
function of the steady state probabilities of the accessible states minus the
development cost of the platforms. The underlying optimization problem of the
Agent -- how to choose the states for which to adopt the platform -- is an MDP.
If this MDP has a simple yet plausible structure (the transition probabilities
from one state to another only depend on the target state and the recurrent
probability of the current state), the Agent's problem can be solved by a greedy
algorithm. The Designer's optimization problem (designing a custom suite for
the Agent so as to optimize, through the Agent's optimum reaction, the
Designer's revenue), is NP-hard to approximate within any finite ratio;
however, the structured special case above, while still NP-hard, admits an FPTAS. These results
generalize from a single Agent to a distribution of Agents with finite support,
as well as to the setting where the Designer must find the best response to the
existing strategies of other Designers. We discuss other implications of our
results and directions of future research. | The Platform Design Problem | 2020-09-14 02:53:19 | Christos Papadimitriou, Kiran Vodrahalli, Mihalis Yannakakis | http://arxiv.org/abs/2009.06117v2, http://arxiv.org/pdf/2009.06117v2 | cs.GT |
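The model above treats the Agent as a Markov chain over activities, with both parties' utilities linear in the chain's stationary distribution. The sketch below illustrates that bookkeeping: it computes the stationary distribution before and after an assumed platform adoption perturbs the transition matrix, then evaluates the linear utilities net of a development cost. The matrices, utility vectors, and cost are invented examples, and the sketch does not implement the paper's greedy or FPTAS algorithms.

```python
# Illustrative sketch of the Agent-as-Markov-chain bookkeeping: utilities are
# linear in the stationary distribution, and adopting a platform changes the
# transition probabilities. All numbers are invented.
import numpy as np

def stationary(P):
    """Stationary distribution of a row-stochastic transition matrix."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    return v / v.sum()

# Activities: 0 = browsing, 1 = chatting, 2 = shopping (labels are illustrative).
P_before = np.array([[0.6, 0.3, 0.1],
                     [0.4, 0.5, 0.1],
                     [0.5, 0.2, 0.3]])
# A platform for activity 2 makes that state stickier and more frequently visited.
P_after = np.array([[0.5, 0.3, 0.2],
                    [0.3, 0.5, 0.2],
                    [0.3, 0.2, 0.5]])

agent_utility = np.array([0.2, 0.5, 0.8])      # Agent's value per activity
designer_utility = np.array([0.1, 0.1, 1.0])   # Designer's revenue per activity
platform_cost = 0.05                            # development cost of the platform

for label, P in [("before adoption", P_before), ("after adoption", P_after)]:
    pi = stationary(P)
    agent = float(agent_utility @ pi)
    designer = float(designer_utility @ pi) - (platform_cost if "after" in label else 0.0)
    print(f"{label}: stationary={np.round(pi, 3)}, agent={agent:.3f}, designer={designer:.3f}")
```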
35,899 | th | After the first lockdown in response to the COVID-19 outbreak, many countries
faced difficulties in balancing infection control with economics. Due to
limited prior knowledge, economists began researching this issue using
cost-benefit analysis and found that infection control processes significantly
affect economic efficiency. A UK study used economic parameters to numerically
demonstrate an optimal balance in the process, including keeping the infected
population stationary. However, universally applicable knowledge, which is
indispensable for the guiding principles of infection control, has not yet been
clearly developed because of the methodological limitations of simulation
studies. Here, we propose a simple model and theoretically prove the universal
result of economic irreversibility by applying the idea of thermodynamics to
pandemic control. This means that delaying infection control measures is more
expensive than implementing them early while keeping the infected population
stationary. This implies that once the infected population
increases, society cannot return to its previous state without extra
expenditures. This universal result is analytically obtained by focusing on the
infection-spreading phase of pandemics, and is applicable not just to COVID-19,
regardless of "herd immunity." It also confirms the numerical observation of
stationary infected populations in its optimally efficient process. Our
findings suggest that economic irreversibility is a guiding principle for
balancing infection control with economic effects. | Economic irreversibility in pandemic control processes: Rigorous modeling of delayed countermeasures and consequential cost increases | 2020-10-01 14:28:45 | Tsuyoshi Hondou | http://dx.doi.org/10.7566/JPSJ.90.114007, http://arxiv.org/abs/2010.00305v9, http://arxiv.org/pdf/2010.00305v9 | q-bio.PE |
35,900 | th | Heifetz, Meier and Schipper (HMS) present a lattice model of awareness. The
HMS model is syntax-free, which precludes the simple option of relying on a formal
language to induce lattices, and represents uncertainty and unawareness with
one entangled construct, making it difficult to assess the properties of
either. Here, we present a model based on a lattice of Kripke models, induced
by atom subset inclusion, in which uncertainty and unawareness are separate. We
show the models to be equivalent by defining transformations between them which
preserve formula satisfaction, and obtain completeness through our and HMS'
results. | Awareness Logic: A Kripke-based Rendition of the Heifetz-Meier-Schipper Model | 2020-12-24 00:24:06 | Gaia Belardinelli, Rasmus K. Rendsvig | http://dx.doi.org/10.1007/978-3-030-65840-3_3, http://arxiv.org/abs/2012.12982v1, http://arxiv.org/pdf/2012.12982v1 | cs.AI |