id | category | text | title | published | author | link | primary_category |
---|---|---|---|---|---|---|---|
35,484 |
th
|
The revision game is a recent model of real-time situations in which players
dynamically prepare and revise their actions before a deadline at which
payoffs are realized. It sits at the cutting edge of dynamic game theory and
applies to many real-world scenarios, such as eBay auctions, stock markets,
elections, online games, and crowdsourcing. In this work, we identify a new
class of strategies for revision games, which we call Limited Retaliation
strategies. A limited retaliation strategy stipulates that (1) players first
follow a recommended cooperative plan; (2) if anyone deviates from the plan,
the limited retaliation player retaliates by using the defection action for a
limited duration; (3) after the retaliation, the limited retaliation player
returns to the cooperative plan. A limited retaliation strategy has three key
features. It is cooperative, sustaining a high level of social welfare. It is
vengeful, deterring the opponent from betrayal by threatening future
retaliation. It is yet forgiving, since it resumes cooperation after a proper
retaliation. The cooperativeness and vengefulness make it constitute a
cooperative subgame perfect equilibrium, while the forgiveness makes it
tolerant of occasional mistakes. Limited retaliation strategies show
significant advantages over Grim Trigger, which is currently the only known
strategy for revision games. Besides its contribution as a new robust and
welfare-optimizing equilibrium strategy, our results on limited retaliation
strategies can also be used to explain how easily cooperation can arise, and
why forgiveness emerges, in real-world multi-agent interactions. In addition,
limited retaliation strategies are simple to derive and computationally
efficient, making them easy to use in algorithm design and implementation in
many multi-agent systems.
|
Cooperation, Retaliation and Forgiveness in Revision Games
|
2021-12-04 10:40:09
|
Dong Hao, Qi Shi, Jinyan Su, Bo An
|
http://arxiv.org/abs/2112.02271v4, http://arxiv.org/pdf/2112.02271v4
|
cs.GT
|
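The three-step recipe in the abstract above translates directly into code. Below is a minimal sketch of the cooperate/retaliate/forgive pattern in a repeated prisoner's dilemma; the paper's setting is revision games, so the game loop, the retaliation length `k`, and the opponent model here are illustrative assumptions, not the paper's model:

```python
# Hypothetical illustration of a limited retaliation (LR) strategy in a
# repeated prisoner's dilemma. The paper studies revision games, so treat
# this only as a sketch of the cooperate/retaliate/forgive pattern.
COOPERATE, DEFECT = "C", "D"

class LimitedRetaliation:
    def __init__(self, k):
        self.k = k              # retaliation length (assumed parameter)
        self.punish_left = 0

    def act(self):
        return DEFECT if self.punish_left > 0 else COOPERATE

    def observe(self, opponent_action):
        if self.punish_left > 0:
            self.punish_left -= 1         # work through the punishment phase
        elif opponent_action == DEFECT:
            self.punish_left = self.k     # retaliate for k rounds, then forgive

class OneShotDefector:
    """Cooperates except for a single defection in round 3."""
    def __init__(self):
        self.t = 0
    def act(self):
        self.t += 1
        return DEFECT if self.t == 3 else COOPERATE
    def observe(self, opponent_action):
        pass

def play(p1, p2, rounds=8):
    history = []
    for _ in range(rounds):
        a1, a2 = p1.act(), p2.act()
        p1.observe(a2)
        p2.observe(a1)
        history.append((a1, a2))
    return history

print(play(LimitedRetaliation(k=2), OneShotDefector()))
```

With `k=2`, the strategy defects for exactly two rounds after the opponent's lone defection and then resumes cooperation, in contrast to Grim Trigger, which would defect forever.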
35,485 |
th
|
In a crowdsourcing contest, a principal holding a task posts it to a crowd.
People in the crowd then compete with each other to win the rewards. Although
in real life, a crowd is usually networked and people influence each other via
social ties, existing crowdsourcing contest theories do not aim to answer how
interpersonal relationships influence people's incentives and behaviors and
thereby affect the crowdsourcing performance. In this work, we take people's
social ties as a key factor in modeling and designing agents' incentives in
crowdsourcing contests. We establish two contest mechanisms by
which the principal can impel the agents to invite their neighbors to
contribute to the task. The first mechanism has a symmetric Bayesian Nash
equilibrium, and it is very simple for agents to play and easy for the
principal to predict the contest performance. The second mechanism has an
asymmetric Bayesian Nash equilibrium, and agents' equilibrium behaviors show
great diversity that is strongly related to their social relations. The
Bayesian Nash equilibrium analysis of these new mechanisms reveals that,
besides agents' intrinsic abilities, the social relations among them also play
a central role in decision-making. Moreover, we design an effective algorithm
to automatically compute the Bayesian Nash equilibrium of the invitation
crowdsourcing contest and further adapt it to a large graph dataset. Both
theoretical and empirical results show that the new invitation crowdsourcing
contests can substantially increase the number of participants, allowing the
principal to obtain significantly better solutions without a large
advertising expenditure.
|
Social Sourcing: Incorporating Social Networks Into Crowdsourcing Contest Design
|
2021-12-06 12:18:18
|
Qi Shi, Dong Hao
|
http://dx.doi.org/10.1109/TNET.2022.3223367, http://arxiv.org/abs/2112.02884v2, http://arxiv.org/pdf/2112.02884v2
|
cs.AI
|
35,486 |
th
|
In statistical decision theory, a model is said to be Pareto optimal (or
admissible) if no other model carries less risk for at least one state of
nature while presenting no more risk for others. How can you rationally
aggregate/combine a finite set of Pareto optimal models while preserving Pareto
efficiency? This question is nontrivial because weighted model averaging does
not, in general, preserve Pareto efficiency. This paper presents an answer in
four logical steps: (1) A rational aggregation rule should preserve Pareto
efficiency. (2) Due to the complete class theorem, Pareto optimal models must
be Bayesian, i.e., they minimize a risk in which the true state of nature is
averaged with respect to some prior. Therefore, each Pareto optimal model can be
associated with a prior, and Pareto efficiency can be maintained by aggregating
Pareto optimal models through their priors. (3) A prior can be interpreted as a
preference ranking over models: prior $\pi$ prefers model A over model B if the
average risk of A is lower than the average risk of B. (4) A
rational/consistent aggregation rule should preserve this preference ranking:
If both priors $\pi$ and $\pi'$ prefer model A over model B, then the prior
obtained by aggregating $\pi$ and $\pi'$ must also prefer A over B. Under these
four steps, we show that all rational/consistent aggregation rules are as
follows: Give each individual Pareto optimal model a weight, introduce a weak
order/ranking over the set of Pareto optimal models, aggregate a finite set of
models S as the model associated with the prior obtained as the weighted
average of the priors of the highest-ranked models in S. This result shows that
all rational/consistent aggregation rules must follow a generalization of
hierarchical Bayesian modeling. Following our main result, we present
applications to kernel smoothing, time-depreciating models, and voting
mechanisms.
|
Aggregation of Pareto optimal models
|
2021-12-08 11:21:15
|
Hamed Hamze Bajgiran, Houman Owhadi
|
http://arxiv.org/abs/2112.04161v1, http://arxiv.org/pdf/2112.04161v1
|
econ.TH
|
35,487 |
th
|
In this discussion draft, we explore several duopoly games with players facing
quadratic costs, where the market is assumed to have isoelastic demand.
Unlike the usual approaches based on numerical computations, the
methods used in the present work are built on symbolic computations, which can
produce analytical and rigorous results. Our investigations show that the
stability regions are enlarged for the games considered in this work compared
to their counterparts with linear costs, which generalizes the classical
results of F. M. Fisher, "The stability of the Cournot oligopoly solution: The
effects of speeds of adjustment and increasing marginal costs," The Review of
Economic Studies, 28(2):125--135, 1961.
|
Stability of Cournot duopoly games with isoelastic demands and quadratic costs
|
2021-12-11 13:52:07
|
Xiaoliang Li, Li Su
|
http://arxiv.org/abs/2112.05948v2, http://arxiv.org/pdf/2112.05948v2
|
cs.SC
|
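As an illustration of the symbolic-computation approach described above, the following SymPy sketch derives the first-order conditions of a duopoly with isoelastic demand $p = 1/(q_1+q_2)$ and quadratic costs $c q_i^2$, then inspects the local stability of a gradient adjustment process with speed $k$; these functional forms and dynamics are assumptions for illustration, not necessarily the paper's exact setup:

```python
# A SymPy sketch of the symbolic approach: isoelastic demand p = 1/(q1 + q2)
# and quadratic costs c*q_i**2 are assumed for illustration; the adjustment
# dynamics below (gradient dynamics with speed k) are likewise an assumption.
import sympy as sp

q1, q2, c, k = sp.symbols("q1 q2 c k", positive=True)
profit1 = q1 / (q1 + q2) - c * q1**2
profit2 = q2 / (q1 + q2) - c * q2**2

# First-order conditions for an interior Nash equilibrium, solved exactly.
foc = [sp.diff(profit1, q1), sp.diff(profit2, q2)]
eq = sp.solve(foc, [q1, q2], dict=True)
print("equilibrium:", eq)

# Local stability of q_i(t+1) = q_i(t) + k*q_i(t)*d(profit_i)/dq_i:
# the equilibrium is locally stable when all eigenvalues of the Jacobian
# lie inside the unit circle.
m = sp.Matrix([q1 + k * q1 * foc[0], q2 + k * q2 * foc[1]])
J_star = sp.simplify(m.jacobian([q1, q2]).subs(eq[0]))
print("eigenvalues at equilibrium:", list(J_star.eigenvals()))
```

Because everything stays symbolic in $c$ and $k$, the eigenvalue condition yields an exact stability region rather than a numerical estimate, which is the point of the paper's methodology.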
35,496 |
th
|
Static stability in economic models means negative incentives for deviation
from equilibrium strategies, which we expect to assure a return to equilibrium,
i.e., dynamic stability, as long as agents respond to incentives. There have
been many attempts to prove this link, especially in evolutionary game theory,
yielding both negative and positive results. This paper presents a universal
and intuitive approach to this link. We prove that static stability assures
dynamic stability if agents' choices of switching strategies are rationalizable
by introducing costs and constraints in those switching decisions. This idea
guides us to define \textit{net gains} from switches as the payoff improvement
after deducting the costs. Under rationalizable dynamics, an agent maximizes
the expected net gain subject to the constraints. We prove that the aggregate
maximized expected net gain works as a Lyapunov function. This also explains
the reasons behind the known negative results. While our analysis here is confined
to myopic evolutionary dynamics in population games, our approach is applicable
to more complex situations.
|
Net gains in evolutionary dynamics: A unifying and intuitive approach to dynamic stability
|
2018-05-13 18:23:19
|
Dai Zusai
|
http://arxiv.org/abs/1805.04898v9, http://arxiv.org/pdf/1805.04898v9
|
math.OC
|
35,497 |
th
|
Efficient computability is an important property of solution concepts in
matching markets. We consider the computational complexity of finding and
verifying various solution concepts in trading networks-multi-sided matching
markets with bilateral contracts-under the assumption of full substitutability
of agents' preferences. It is known that outcomes that satisfy trail stability
always exist and can be found in linear time. Here we consider a slightly
stronger solution concept in which agents can simultaneously offer an upstream
and a downstream contract. We show that deciding the existence of outcomes
satisfying this solution concept is an NP-complete problem even in a special
(flow network) case of our model. It follows that the existence of stable
outcomes (immune to deviations by arbitrary sets of agents) is also an NP-hard
problem in trading networks (and in flow networks). Finally, we show that even
verifying whether a given outcome is stable is NP-complete in trading networks.
|
Complexity of Stability in Trading Networks
|
2018-05-22 20:42:34
|
Tamás Fleiner, Zsuzsanna Jankó, Ildikó Schlotter, Alexander Teytelboym
|
http://arxiv.org/abs/1805.08758v2, http://arxiv.org/pdf/1805.08758v2
|
cs.CC
|
35,499 |
th
|
In principal-agent models, a principal offers a contract to an agent to
perform a certain task. The agent exerts a level of effort that maximizes her
utility. The principal is oblivious to the agent's chosen level of effort, and
conditions her wage only on possible outcomes. In this work, we consider a
model in which the principal is unaware of the agent's utility and action
space: she sequentially offers contracts to identical agents, and observes the
resulting outcomes. We present an algorithm for learning the optimal contract
under mild assumptions. We bound the number of samples needed for the principal
to obtain a contract that is within $\epsilon$ of her optimal net profit for every
$\epsilon > 0$. Our results are robust even when considering risk-averse agents.
Furthermore, we show that when there are only two possible outcomes or the
agent is risk-neutral, the algorithm's outcome approximates the optimal
contract described in the classical theory.
|
Learning Approximately Optimal Contracts
|
2018-11-16 13:05:42
|
Alon Cohen, Moran Koren, Argyrios Deligkas
|
http://arxiv.org/abs/1811.06736v2, http://arxiv.org/pdf/1811.06736v2
|
cs.GT
|
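A minimal sketch of the sampling idea behind such contract learning: estimate the principal's expected net profit for each contract on an epsilon-grid from observed outcomes, then keep the empirically best one. The agent response model `agent_outcome`, the outcome values, and the grid below are hypothetical; the paper's algorithm and sample bounds are more refined:

```python
# A sketch of the sampling idea: estimate the principal's expected net profit
# for each contract on an epsilon-grid from observed outcomes, then keep the
# empirically best one. The agent model `agent_outcome`, the outcome values,
# and the grid are hypothetical; the paper's algorithm is more refined.
import random

OUTCOME_VALUES = [0.0, 10.0]     # principal's value for each outcome (assumed)

def agent_outcome(wage_high):
    """Hypothetical agent: a higher wage for the good outcome induces more
    effort, which raises the probability of the good outcome."""
    effort = min(1.0, wage_high / 8.0)
    return 1 if random.random() < effort else 0

def estimate_net_profit(wage_high, samples=2000):
    total = 0.0
    for _ in range(samples):
        o = agent_outcome(wage_high)
        total += OUTCOME_VALUES[o] - (wage_high if o == 1 else 0.0)
    return total / samples

random.seed(0)
grid = [0.5 * i for i in range(21)]            # candidate contracts
print("empirically best wage:", max(grid, key=estimate_net_profit))
```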
35,500 |
th
|
We study the incentive properties of envy-free mechanisms for the allocation
of rooms and payments of rent among financially constrained roommates. Each
agent reports her values for rooms, her housing earmark (soft budget), and an
index that reflects the difficulty the agent experiences from having to pay
over this amount. Then an envy-free allocation for these reports is
recommended. The complete information non-cooperative outcomes of each of these
mechanisms are exactly the envy-free allocations with respect to true
preferences if and only if the admissible budget violation indices have a
bound.
|
Expressive mechanisms for equitable rent division on a budget
|
2019-02-08 07:35:33
|
Rodrigo A. Velez
|
http://arxiv.org/abs/1902.02935v3, http://arxiv.org/pdf/1902.02935v3
|
econ.TH
|
35,501 |
th
|
Evaluation of systemic risk in networks of financial institutions generally
requires information about inter-institution financial exposures. In the
framework of the DebtRank algorithm, we introduce an approximate method of systemic risk
evaluation which requires only node properties, such as total assets and
liabilities, as inputs. We demonstrate that this approximation captures a large
portion of the systemic risk measured by DebtRank. Furthermore, using Monte Carlo
simulations, we investigate network structures that can amplify systemic risk.
Indeed, while no topology is in a general sense {\em a priori} more stable if the
market is liquid [1], larger complexity is detrimental to overall
stability [2]. Here we find that the measure of scalar assortativity correlates
well with the level of systemic risk. In particular, network structures with high
systemic risk are scalar assortative, meaning that risky banks are mostly
exposed to other risky banks. Network structures with low systemic risk are
scalar disassortative, with interactions of risky banks with stable banks.
|
Controlling systemic risk - network structures that minimize it and node properties to calculate it
|
2019-02-22 16:28:26
|
Sebastian M. Krause, Hrvoje Štefančić, Vinko Zlatić, Guido Caldarelli
|
http://dx.doi.org/10.1103/PhysRevE.103.042304, http://arxiv.org/abs/1902.08483v1, http://arxiv.org/pdf/1902.08483v1
|
q-fin.RM
|
35,502 |
th
|
We provide conditions for stable equilibrium in Cournot duopoly models with
tax evasion and time delay. We prove that our conditions actually imply
asymptotically stable equilibrium and delay independence. Conditions include
the same marginal cost and equal probability for evading taxes. We give
examples of cost and inverse demand functions satisfying the proposed
conditions. Some economic interpretations of our results are also included.
|
Conditions for stable equilibrium in Cournot duopoly models with tax evasion and time delay
|
2019-05-08 00:35:45
|
Raul Villafuerte-Segura, Eduardo Alvarado-Santos, Benjamin A. Itza-Ortiz
|
http://dx.doi.org/10.1063/1.5131266, http://arxiv.org/abs/1905.02817v2, http://arxiv.org/pdf/1905.02817v2
|
math.OC
|
35,503 |
th
|
We consider the problem of a decision-maker searching for information on
multiple alternatives when information is learned on all alternatives
simultaneously. The decision-maker has a running cost of searching for
information, and has to decide when to stop searching for information and
choose one alternative. The expected payoff of each alternative evolves as a
diffusion process when information is being learned. We present necessary and
sufficient conditions for the solution, establishing existence and uniqueness.
We show that the optimal boundary where search is stopped (free boundary) is
star-shaped, and present an asymptotic characterization of the value function
and the free boundary. We show properties of how the distance between the free
boundary and the diagonal varies with the number of alternatives, and how the
free boundary under parallel search relates to the one under sequential search,
with and without economies of scale on the search costs.
|
Parallel Search for Information
|
2019-05-16 03:54:49
|
T. Tony Ke, Wenpin Tang, J. Miguel Villas-Boas, Yuming Zhang
|
http://arxiv.org/abs/1905.06485v2, http://arxiv.org/pdf/1905.06485v2
|
econ.TH
|
35,504 |
th
|
In finite games mixed Nash equilibria always exist, but pure equilibria may
fail to exist. To assess the relevance of this nonexistence, we consider games
where the payoffs are drawn at random. In particular, we focus on games where a
large number of players can each choose one of two possible strategies, and the
payoffs are i.i.d. with the possibility of ties. We provide asymptotic results
about the random number of pure Nash equilibria, such as fast growth and a
central limit theorem, with bounds for the approximation error. Moreover, by
using a new link between percolation models and game theory, we describe in
detail the geometry of Nash equilibria and show that, when the probability of
ties is small, a best-response dynamics reaches a Nash equilibrium with a
probability that quickly approaches one as the number of players grows. We show
that a multitude of phase transitions depend only on a single parameter of the
model, that is, the probability of having ties.
|
Pure Nash Equilibria and Best-Response Dynamics in Random Games
|
2019-05-26 11:08:35
|
Ben Amiet, Andrea Collevecchio, Marco Scarsini, Ziwen Zhong
|
http://arxiv.org/abs/1905.10758v4, http://arxiv.org/pdf/1905.10758v4
|
cs.GT
|
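A quick way to see the best-response dynamics of the abstract above at work is to simulate it; the sketch below uses $n$ players with two actions each and i.i.d. payoffs discretized so that ties occur, with all parameters illustrative:

```python
# A simulation sketch in the spirit of the abstract: n players, two actions
# each, i.i.d. payoffs discretized so that ties occur; best-response dynamics
# stops at a pure Nash equilibrium. All parameters are illustrative.
import itertools, random

def random_game(n, levels=10):
    # payoff[i][profile] = player i's payoff; fewer levels => more ties
    profiles = list(itertools.product((0, 1), repeat=n))
    return [{p: random.randrange(levels) for p in profiles} for _ in range(n)]

def best_response_dynamics(payoff, n, max_steps=10_000):
    profile = tuple(random.randrange(2) for _ in range(n))
    for _ in range(max_steps):
        for i in range(n):
            flipped = profile[:i] + (1 - profile[i],) + profile[i + 1:]
            if payoff[i][flipped] > payoff[i][profile]:
                profile = flipped
                break
        else:
            return profile         # no profitable deviation: pure Nash
    return None                    # cycled without finding an equilibrium

random.seed(1)
n = 8
print("pure Nash reached:", best_response_dynamics(random_game(n), n))
```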
35,507 |
th
|
In the current book I suggest an off-road path to the subject of optimal
transport. I tried to avoid prior knowledge of analysis, PDE theory and
functional analysis, as much as possible. Thus I concentrate on discrete and
semi-discrete cases, and always assume compactness for the underlying spaces.
However, some fundamental knowledge of measure theory and convexity is
unavoidable. In order to make it as self-contained as possible I included an
appendix with some basic definitions and results. I believe that any graduate
student in mathematics, as well as advanced undergraduate students, can read
and understand this book. Some chapters (in particular in Parts II \& III) can
also be interesting for experts. Starting with the most fundamental, fully
discrete problem I attempted to place optimal transport as a particular case of
the celebrated stable marriage problem. From there we proceed to the partition
problem, which can be formulated as a transport from a continuous space to a
discrete one. Applications to information theory and game theory (cooperative
and non-cooperative) are introduced as well.
Finally, the general case of transport between two compact measure spaces is
introduced as a coupling between two semi-discrete transports.
|
Semi-discrete optimal transport
|
2019-11-11 18:44:44
|
Gershon Wolansky
|
http://arxiv.org/abs/1911.04348v4, http://arxiv.org/pdf/1911.04348v4
|
math.OC
|
35,509 |
th
|
Assortment optimization is an important problem that arises in many
industries such as retailing and online advertising, where the goal is to find a
subset of products from a universe of substitutable products that maximizes
the seller's expected revenue. One of the key challenges in this problem is to
model the customer substitution behavior. Many parametric random utility
maximization (RUM) based choice models have been considered in the literature.
However, in all these models, the probability of purchase increases as we add
more products to an assortment. This is not true in general, and in many
settings more choices hurt sales. This is commonly referred to as choice
overload. In this paper we attempt to address this limitation in RUM through a
generalization of the Markov chain based choice model considered in Blanchet et
al. (2016). As a special case, we show that our model reduces to a
generalization of MNL with no-purchase attractions dependent on the assortment
S and strictly increasing with the size of the assortment S. While we show that
assortment optimization under this model is NP-hard, we present a fully
polynomial-time approximation scheme (FPTAS) under reasonable assumptions.
|
A Generalized Markov Chain Model to Capture Dynamic Preferences and Choice Overload
|
2019-11-15 19:02:16
|
Kumar Goutam, Vineet Goyal, Agathe Soret
|
http://arxiv.org/abs/1911.06716v4, http://arxiv.org/pdf/1911.06716v4
|
econ.TH
|
35,510 |
th
|
Distortion-based analysis has established itself as a fruitful framework for
comparing voting mechanisms. $m$ voters and $n$ candidates are jointly embedded in
an (unknown) metric space, and the voters submit rankings of candidates by
non-decreasing distance from themselves. Based on the submitted rankings, the
social choice rule chooses a winning candidate; the quality of the winner is
the sum of the (unknown) distances to the voters. The rule's choice will in
general be suboptimal, and the worst-case ratio between the cost of its chosen
candidate and the optimal candidate is called the rule's distortion. It was
shown in prior work that every deterministic rule has distortion at least 3,
while the Copeland rule and related rules guarantee worst-case distortion at
most 5; a very recent result gave a rule with distortion $2+\sqrt{5} \approx
4.236$.
We provide a framework based on LP-duality and flow interpretations of the
dual which provides a simpler and more unified way for proving upper bounds on
the distortion of social choice rules. We illustrate the utility of this
approach with three examples. First, we give a fairly simple proof of a strong
generalization of the upper bound of 5 on the distortion of Copeland, to social
choice rules with short paths from the winning candidate to the optimal
candidate in generalized weak preference graphs. A special case of this result
recovers the recent $2+\sqrt{5}$ guarantee. Second, using this generalized
bound, we show that the Ranked Pairs and Schulze rules have distortion
$\Theta(\sqrt{n})$. Finally, our framework naturally suggests a combinatorial
rule that is a strong candidate for achieving distortion 3, which had also been
proposed in recent work. We prove that the distortion bound of 3 would follow
from any of three combinatorial conjectures we formulate.
|
An Analysis Framework for Metric Voting based on LP Duality
|
2019-11-17 09:34:11
|
David Kempe
|
http://arxiv.org/abs/1911.07162v3, http://arxiv.org/pdf/1911.07162v3
|
cs.GT
|
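For reference, the Copeland rule central to the abstract above scores each candidate by pairwise majority wins; a minimal sketch follows, assuming ballots are given as full rankings (the input format is an assumption):

```python
# A minimal sketch of the Copeland rule discussed above: score candidates by
# pairwise majority wins (half a point for pairwise ties). Ballots are full
# rankings; the input format is an assumption.
from itertools import combinations

def copeland_winner(ballots, candidates):
    wins = {c: 0.0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_over_b = sum(1 for r in ballots if r.index(a) < r.index(b))
        b_over_a = len(ballots) - a_over_b
        if a_over_b > b_over_a:
            wins[a] += 1
        elif b_over_a > a_over_b:
            wins[b] += 1
        else:
            wins[a] += 0.5
            wins[b] += 0.5
    return max(candidates, key=lambda c: wins[c])

ballots = [("a", "b", "c"), ("b", "c", "a"), ("a", "c", "b")]
print(copeland_winner(ballots, ["a", "b", "c"]))   # -> "a"
```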
35,512 |
th
|
In distortion-based analysis of social choice rules over metric spaces, one
assumes that all voters and candidates are jointly embedded in a common metric
space. Voters rank candidates by non-decreasing distance. The mechanism,
receiving only this ordinal (comparison) information, should select a candidate
approximately minimizing the sum of distances from all voters. It is known that
while the Copeland rule and related rules guarantee distortion at most 5, many
other standard voting rules, such as Plurality, Veto, or $k$-approval, have
distortion growing unboundedly in the number $n$ of candidates.
Plurality, Veto, or $k$-approval with small $k$ require less communication
from the voters than all deterministic social choice rules known to achieve
constant distortion. This motivates our study of the tradeoff between the
distortion and the amount of communication in deterministic social choice
rules.
We show that any one-round deterministic voting mechanism in which each voter
communicates only the candidates she ranks in a given set of $k$ positions must
have distortion at least $\frac{2n-k}{k}$; we give a mechanism achieving an
upper bound of $O(n/k)$, which matches the lower bound up to a constant. For
more general communication-bounded voting mechanisms, in which each voter
communicates $b$ bits of information about her ranking, we show a slightly
weaker lower bound of $\Omega(n/b)$ on the distortion.
For randomized mechanisms, it is known that Random Dictatorship achieves
expected distortion strictly smaller than 3, almost matching a lower bound of
$3-\frac{2}{n}$ for any randomized mechanism that only receives each voter's
top choice. We close this gap, by giving a simple randomized social choice rule
which only uses each voter's first choice, and achieves expected distortion
$3-\frac{2}{n}$.
|
Communication, Distortion, and Randomness in Metric Voting
|
2019-11-19 10:15:37
|
David Kempe
|
http://arxiv.org/abs/1911.08129v2, http://arxiv.org/pdf/1911.08129v2
|
cs.GT
|
35,514 |
th
|
We consider the facility location problem in the one-dimensional setting
where each facility can serve a limited number of agents from the algorithmic
and mechanism design perspectives. From the algorithmic perspective, we prove
that the corresponding optimization problem, where the goal is to locate
facilities to minimize either the total cost to all agents or the maximum cost
of any agent, is NP-hard. However, we show that the problem is fixed-parameter
tractable, and the optimal solution can be computed in polynomial time whenever
the number of facilities is bounded, or when all facilities have identical
capacities. We then consider the problem from a mechanism design perspective
where the agents are strategic and need not reveal their true locations. We
show that several natural mechanisms studied in the uncapacitated setting
either lose strategyproofness or a bound on the solution quality for the total
or maximum cost objective. We then propose new mechanisms that are
strategyproof and achieve approximation guarantees that almost match the lower
bounds.
|
Facility Location Problem with Capacity Constraints: Algorithmic and Mechanism Design Perspectives
|
2019-11-22 05:14:34
|
Haris Aziz, Hau Chan, Barton E. Lee, Bo Li, Toby Walsh
|
http://arxiv.org/abs/1911.09813v1, http://arxiv.org/pdf/1911.09813v1
|
cs.GT
|
35,516 |
th
|
A fundamental result in cake cutting states that for any number of players
with arbitrary preferences over a cake, there exists a division of the cake
such that every player receives a single contiguous piece and no player is left
envious. We generalize this result by showing that it is possible to partition
the players into groups of any desired sizes and divide the cake among the
groups, so that each group receives a single contiguous piece and no player
finds the piece of another group better than that of the player's own group.
|
How to Cut a Cake Fairly: A Generalization to Groups
|
2020-01-10 10:19:18
|
Erel Segal-Halevi, Warut Suksompong
|
http://dx.doi.org/10.1080/00029890.2021.1835338, http://arxiv.org/abs/2001.03327v3, http://arxiv.org/pdf/2001.03327v3
|
econ.TH
|
35,517 |
th
|
We model the production of complex goods in a large supply network. Each firm
sources several essential inputs through relationships with other firms.
Individual supply relationships are at risk of idiosyncratic failure, which
threatens to disrupt production. To protect against this, firms multisource
inputs and strategically invest to make relationships stronger, trading off the
cost of investment against the benefits of increased robustness. A supply
network is called fragile if aggregate output is very sensitive to small
aggregate shocks. We show that supply networks of intermediate productivity are
fragile in equilibrium, even though this is always inefficient. The endogenous
configuration of supply networks provides a new channel for the powerful
amplification of shocks.
|
Supply Network Formation and Fragility
|
2020-01-12 08:13:38
|
Matthew Elliott, Benjamin Golub, Matthew V. Leduc
|
http://arxiv.org/abs/2001.03853v7, http://arxiv.org/pdf/2001.03853v7
|
econ.TH
|
35,519 |
th
|
How should one combine noisy information from diverse sources to make an
inference about an objective ground truth? This frequently recurring, normative
question lies at the core of statistics, machine learning, policy-making, and
everyday life. It has been called "combining forecasts", "meta-analysis",
"ensembling", and the "MLE approach to voting", among other names. Past studies
typically assume that noisy votes are independently and identically distributed
(i.i.d.), but this assumption is often unrealistic. Instead, we assume that
votes are independent but not necessarily identically distributed and that our
ensembling algorithm has access to certain auxiliary information related to the
underlying model governing the noise in each vote. In our present work, we: (1)
define our problem and argue that it reflects common and socially relevant real
world scenarios, (2) propose a multi-arm bandit noise model and count-based
auxiliary information set, (3) derive maximum likelihood aggregation rules for
ranked and cardinal votes under our noise model, (4) propose, alternatively, to
learn an aggregation rule using an order-invariant neural network, and (5)
empirically compare our rules to common voting rules and naive
experience-weighted modifications. We find that our rules successfully use
auxiliary information to outperform the naive baselines.
|
Objective Social Choice: Using Auxiliary Information to Improve Voting Outcomes
|
2020-01-28 00:21:19
|
Silviu Pitis, Michael R. Zhang
|
http://arxiv.org/abs/2001.10092v1, http://arxiv.org/pdf/2001.10092v1
|
cs.MA
|
35,520 |
th
|
While fictitious play is guaranteed to converge to Nash equilibrium in
certain game classes, such as two-player zero-sum games, it is not guaranteed
to converge in non-zero-sum and multiplayer games. We show that fictitious play
in fact leads to better Nash equilibrium approximation than (counterfactual)
regret minimization, which has recently produced superhuman play for
multiplayer poker, over a variety of game classes and sizes. We also show that when
fictitious play is run several times from random initializations, it is able to
solve several known challenge problems on which the standard version is known
not to converge, including Shapley's classic counterexample. These provide some
of the first positive results for fictitious play in these settings, despite
the fact that worst-case theoretical results are negative.
|
Empirical Analysis of Fictitious Play for Nash Equilibrium Computation in Multiplayer Games
|
2020-01-30 06:47:09
|
Sam Ganzfried
|
http://arxiv.org/abs/2001.11165v8, http://arxiv.org/pdf/2001.11165v8
|
cs.GT
|
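A minimal sketch of (simultaneous) fictitious play on a two-player zero-sum matrix game, where each player best-responds to the empirical frequency of the opponent's past actions; the rock-paper-scissors payoffs and step count are illustrative:

```python
# A sketch of simultaneous fictitious play on a two-player zero-sum matrix
# game: each player best-responds to the empirical frequency of the
# opponent's past actions. Rock-paper-scissors payoffs and the step count
# are illustrative.
import numpy as np

A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]], dtype=float)    # row player's payoffs

def fictitious_play(A, steps=20_000):
    counts_row = np.ones(A.shape[0])       # smoothed action counts
    counts_col = np.ones(A.shape[1])
    for _ in range(steps):
        r = np.argmax(A @ (counts_col / counts_col.sum()))   # row best response
        c = np.argmin((counts_row / counts_row.sum()) @ A)   # col best response
        counts_row[r] += 1
        counts_col[c] += 1
    return counts_row / counts_row.sum(), counts_col / counts_col.sum()

x, y = fictitious_play(A)
print(np.round(x, 3), np.round(y, 3))      # both approach (1/3, 1/3, 1/3)
```

In zero-sum games like this one, the empirical frequencies are guaranteed to converge to equilibrium; the paper's point is that the same simple dynamics also approximates equilibria well beyond this class.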
35,551 |
th
|
We state and prove Kuhn's equivalence theorem for a new representation of
games, the intrinsic form. First, we introduce games in intrinsic form where
information is represented by $\sigma$-fields over a product set. For this
purpose, we adapt to games the intrinsic representation that Witsenhausen
introduced in control theory. Those intrinsic games do not require an explicit
description of the play temporality, as opposed to extensive form games on
trees. Second, we prove, for this new and more general representation of games,
that behavioral and mixed strategies are equivalent under perfect recall
(Kuhn's theorem). As the intrinsic form replaces the tree structure with a
product structure, the handling of information is easier. This makes the
intrinsic form a new valuable tool for the analysis of games with information.
|
Kuhn's Equivalence Theorem for Games in Intrinsic Form
|
2020-06-26 10:35:21
|
Benjamin Heymann, Michel de Lara, Jean-Philippe Chancelier
|
http://arxiv.org/abs/2006.14838v1, http://arxiv.org/pdf/2006.14838v1
|
math.OC
|
35,521 |
th
|
We study the design of a decision-making mechanism for resource allocation
over a multi-agent system in a dynamic environment. Agents' privately observed
preference over resources evolves over time and the population is dynamic due
to the adoption of stopping rules. The proposed model designs the rules of
encounter for agents participating in the dynamic mechanism by specifying an
allocation rule and three payment rules to elicit agents' coupled decisions
of honest preference reporting and optimal stopping over multiple
periods. The mechanism provides a special posted-price payment rule that
depends only on each agent's realized stopping time to directly influence the
population dynamics. This letter focuses on the theoretical implementability of
the rules in perfect Bayesian Nash equilibrium and characterizes necessary and
sufficient conditions to guarantee agents' honest equilibrium behaviors over
periods. We provide the design principles to construct the payments in terms of
the allocation rules and identify the restrictions of the designer's ability to
influence the population dynamics. The established conditions reduce the
designer's problem of finding multiple rules to that of determining an optimal
allocation rule.
|
Implementability of Honest Multi-Agent Sequential Decision-Making with Dynamic Population
|
2020-03-06 16:06:47
|
Tao Zhang, Quanyan Zhu
|
http://arxiv.org/abs/2003.03173v2, http://arxiv.org/pdf/2003.03173v2
|
eess.SY
|
35,523 |
th
|
We consider a robust version of the revenue maximization problem, where a
single seller wishes to sell $n$ items to a single unit-demand buyer. In this
robust version, the seller knows the buyer's marginal value distribution for
each item separately, but not the joint distribution, and prices the items to
maximize revenue in the worst case over all compatible correlation structures.
We devise a computationally efficient (polynomial in the support size of the
marginals) algorithm that computes the worst-case joint distribution for any
choice of item prices. And yet, in sharp contrast to the additive buyer case
(Carroll, 2017), we show that it is NP-hard to approximate the optimal choice
of prices to within any factor better than $n^{1/2-\epsilon}$. For the special
case of marginal distributions that satisfy the monotone hazard rate property,
we show how to guarantee a constant fraction of the optimal worst-case revenue
using item pricing; this pricing equates revenue across all possible
correlations and can be computed efficiently.
|
Escaping Cannibalization? Correlation-Robust Pricing for a Unit-Demand Buyer
|
2020-03-12 20:24:56
|
Moshe Babaioff, Michal Feldman, Yannai A. Gonczarowski, Brendan Lucier, Inbal Talgam-Cohen
|
http://arxiv.org/abs/2003.05913v2, http://arxiv.org/pdf/2003.05913v2
|
cs.GT
|
35,524 |
th
|
Over the last few decades, electricity markets around the world have adopted
multi-settlement structures, allowing for balancing of supply and demand as
more accurate forecast information becomes available. Given increasing
uncertainty due to adoption of renewables, more recent market design work has
focused on optimization of expectation of some quantity, e.g. social welfare.
However, social planners and policy makers are often risk averse, so that such
risk neutral formulations do not adequately reflect prevailing attitudes
towards risk, nor explain the decisions that follow. Hence we incorporate the
commonly used risk measure conditional value at risk (CVaR) into the central
planning objective, and study how a two-stage market operates when the
individual generators are risk neutral. Our primary result is to show existence
(by construction) of a sequential competitive equilibrium (SCEq) in this
risk-aware two-stage market. Given equilibrium prices, we design a market
mechanism which achieves social cost minimization, assuming that agents are
non-strategic.
|
A Risk Aware Two-Stage Market Mechanism for Electricity with Renewable Generation
|
2020-03-13 08:19:43
|
Nathan Dahlin, Rahul Jain
|
http://arxiv.org/abs/2003.06119v1, http://arxiv.org/pdf/2003.06119v1
|
eess.SY
|
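For reference, the CVaR objective the paper builds on can be illustrated with an empirical estimator: at level $\alpha$, CVaR is roughly the mean cost over the worst $(1-\alpha)$ tail. This sketch illustrates the risk measure only, not the paper's two-stage market model; the lognormal cost samples are an assumption:

```python
# An empirical illustration of the CVaR objective used in the paper: at level
# alpha, CVaR is (roughly) the mean cost over the worst (1 - alpha) tail.
# The lognormal cost samples are an assumption for demonstration only.
import numpy as np

def cvar(costs, alpha=0.95):
    costs = np.asarray(costs, dtype=float)
    var = np.quantile(costs, alpha)        # value at risk: the alpha-quantile
    return costs[costs >= var].mean()      # average of the worst tail

rng = np.random.default_rng(0)
costs = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
print("mean cost :", costs.mean())
print("CVaR(0.95):", cvar(costs, 0.95))    # much larger than the mean
```

Replacing an expectation with CVaR in the planner's objective is exactly what makes the formulation risk averse: heavy-tailed outcomes dominate the tail average even when they barely move the mean.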
35,525 |
th
|
In this paper we consider a local service-requirement assignment problem
named exact capacitated domination from an algorithmic point of view. This
problem aims to find a solution (a Nash equilibrium) to a game-theoretic model
of public good provision. In the problem we are given a capacitated graph, a
graph with a parameter defined on each vertex that is interpreted as the
capacity of that vertex. The objective is to find a DP-Nash subgraph: a
spanning bipartite subgraph with partite sets D and P, called the D-set and
P-set respectively, such that no vertex in P is isolated and that each vertex
in D is adjacent to a number of vertices equal to its capacity. We show that
whether a capacitated graph has a unique DP-Nash subgraph can be decided in
polynomial time. However, we also show that the nearby problem of deciding
whether a capacitated graph has a unique D-set is co-NP-complete.
|
Exact capacitated domination: on the computational complexity of uniqueness
|
2020-03-16 13:47:10
|
Gregory Gutin, Philip R Neary, Anders Yeo
|
http://arxiv.org/abs/2003.07106v3, http://arxiv.org/pdf/2003.07106v3
|
math.CO
|
35,526 |
th
|
Motivated by empirical evidence that individuals within group decision making
simultaneously aspire to maximize utility and avoid inequality, we propose a
criterion based on the entropy-norm pair for the geometric selection of strict Nash
equilibria in n-person games. For this, we introduce a mapping of an n-person
set of Nash equilibrium utilities in an Entropy-Norm space. We suggest that the
most suitable group choice is the equilibrium closest to the largest
entropy-norm pair of a rescaled Entropy-Norm space. Successive application of
this criterion permits an ordering of the possible Nash equilibria in an
n-person game that simultaneously accounts for the equality and utility of
players' payoffs. Limitations of this approach for certain exceptional cases are
discussed. In addition, the criterion proposed is applied and compared with the
results of a group decision making experiment.
|
Entropy-Norm space for geometric selection of strict Nash equilibria in n-person games
|
2020-03-20 15:30:57
|
A. B. Leoneti, G. A. Prataviera
|
http://dx.doi.org/10.1016/j.physa.2020.124407, http://arxiv.org/abs/2003.09225v1, http://arxiv.org/pdf/2003.09225v1
|
physics.soc-ph
|
35,768 |
th
|
A menu description presents a mechanism to player $i$ in two steps. Step (1)
uses the reports of other players to describe $i$'s menu: the set of $i$'s
potential outcomes. Step (2) uses $i$'s report to select $i$'s favorite outcome
from her menu. Can menu descriptions better expose strategyproofness, without
sacrificing simplicity? We propose a new, simple menu description of Deferred
Acceptance. We prove that -- in contrast with other common matching mechanisms
-- this menu description must differ substantially from the corresponding
traditional description. We demonstrate, with a lab experiment on two
elementary mechanisms, the promise and challenges of menu descriptions.
|
Strategyproofness-Exposing Mechanism Descriptions
|
2022-09-27 07:31:42
|
Yannai A. Gonczarowski, Ori Heffetz, Clayton Thomas
|
http://arxiv.org/abs/2209.13148v2, http://arxiv.org/pdf/2209.13148v2
|
econ.TH
|
35,527 |
th
|
Reinforcement learning algorithms describe how an agent can learn an optimal
action policy in a sequential decision process, through repeated experience. In
a given environment, the agent's policy provides it with running and terminal
rewards. As in online learning, the agent learns sequentially. As in
multi-armed bandit problems, when an agent picks an action, it cannot infer
ex post the rewards that other action choices would have induced. In
reinforcement learning, its actions have consequences: they influence not only rewards, but also future
states of the world. The goal of reinforcement learning is to find an optimal
policy -- a mapping from the states of the world to the set of actions, in
order to maximize cumulative reward, which is a long-term objective. Exploring
might be suboptimal over a short-term horizon but can lead to optimal
long-term behavior. Many problems of optimal control, popular in economics for more
than forty years, can be expressed in the reinforcement learning framework, and
recent advances in computational science, provided in particular by deep
learning algorithms, can be used by economists in order to solve complex
behavioral problems. In this article, we survey the state of the art in
reinforcement learning techniques, and present applications in economics, game
theory, operations research, and finance.
|
Reinforcement Learning in Economics and Finance
|
2020-03-23 01:31:35
|
Arthur Charpentier, Romuald Elie, Carl Remlinger
|
http://arxiv.org/abs/2003.10014v1, http://arxiv.org/pdf/2003.10014v1
|
econ.TH
|
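As a taste of the techniques surveyed above, here is a minimal tabular Q-learning sketch; the toy chain environment, rewards, and hyperparameters are illustrative assumptions:

```python
# A minimal tabular Q-learning sketch of the kind surveyed above. The toy
# chain environment (walk right to reach a rewarding terminal state),
# rewards, and hyperparameters are illustrative assumptions.
import random

N_STATES, ACTIONS = 5, (0, 1)              # actions: 0 = left, 1 = right

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward, s2 == N_STATES - 1

def q_learning(episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy exploration: occasionally try a random action.
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[s][act])
            s2, r, done = step(s, a)
            # Temporal-difference update toward the one-step bootstrap target.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

random.seed(0)
for s, q in enumerate(q_learning()):
    print(s, [round(v, 2) for v in q])
```

The epsilon-greedy step is the exploration/exploitation trade-off the abstract describes: short-term suboptimal moves buy information that improves the long-run policy.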
35,528 |
th
|
In 1979, Hylland and Zeckhauser \cite{hylland} gave a simple and general
scheme for implementing a one-sided matching market using the power of a
pricing mechanism. Their method has nice properties -- it is incentive
compatible in the large and produces an allocation that is Pareto optimal --
and hence it provides an attractive, off-the-shelf method for running an
application involving such a market. With matching markets becoming ever more
prevalent and impactful, it is imperative to finally settle the computational
complexity of this scheme.
We present the following partial resolution:
1. A combinatorial, strongly polynomial time algorithm for the special case
of $0/1$ utilities.
2. An example that has only irrational equilibria, hence proving that this
problem is not in PPAD. Furthermore, its equilibria are disconnected, hence
showing that the problem does not admit a convex programming formulation.
3. A proof of membership of the problem in the class FIXP.
We leave open the (difficult) question of determining if the problem is
FIXP-hard. Settling the status of the special case when utilities are in the
set $\{0, {\frac 1 2}, 1 \}$ appears to be even more difficult.
|
Computational Complexity of the Hylland-Zeckhauser Scheme for One-Sided Matching Markets
|
2020-04-03 05:53:09
|
Vijay V. Vazirani, Mihalis Yannakakis
|
http://arxiv.org/abs/2004.01348v6, http://arxiv.org/pdf/2004.01348v6
|
cs.GT
|
35,529 |
th
|
We consider the sale of a single item to multiple buyers by a
revenue-maximizing seller. Recent work of Akbarpour and Li formalizes
\emph{credibility} as an auction desideratum, and proves that the only optimal,
credible, strategyproof auction is the ascending price auction with reserves
(Akbarpour and Li, 2019).
In contrast, when buyers' valuations are MHR, we show that the mild
additional assumption of a cryptographically secure commitment scheme suffices
for a simple \emph{two-round} auction which is optimal, strategyproof, and
credible (even when the number of bidders is only known by the auctioneer).
We extend our analysis to the case when buyer valuations are
$\alpha$-strongly regular for any $\alpha > 0$, up to arbitrary $\varepsilon$
in credibility. Interestingly, we also prove that this construction cannot be
extended to regular distributions, nor can the $\varepsilon$ be removed with
multiple bidders.
|
Credible, Truthful, and Two-Round (Optimal) Auctions via Cryptographic Commitments
|
2020-04-03 17:43:02
|
Matheus V. X. Ferreira, S. Matthew Weinberg
|
http://dx.doi.org/10.1145/3391403.3399495, http://arxiv.org/abs/2004.01598v2, http://arxiv.org/pdf/2004.01598v2
|
cs.GT
|
35,530 |
th
|
A fundamental property of choice functions is stability, which, loosely
speaking, prescribes that choice sets are invariant under adding and removing
unchosen alternatives. We provide several structural insights that improve our
understanding of stable choice functions. In particular, (i) we show that every
stable choice function is generated by a unique simple choice function, which
never excludes more than one alternative, (ii) we completely characterize which
simple choice functions give rise to stable choice functions, and (iii) we
prove a strong relationship between stability and a new property of tournament
solutions called local reversal symmetry. Based on these findings, we provide
the first concrete tournament---consisting of 24 alternatives---in which the
tournament equilibrium set fails to be stable. Furthermore, we prove that there
is no more discriminating stable tournament solution than the bipartisan set
and that the bipartisan set is the unique most discriminating tournament
solution which satisfies standard properties proposed in the literature.
|
On the Structure of Stable Tournament Solutions
|
2020-04-03 19:16:00
|
Felix Brandt, Markus Brill, Hans Georg Seedig, Warut Suksompong
|
http://dx.doi.org/10.1007/s00199-016-1024-x, http://arxiv.org/abs/2004.01651v1, http://arxiv.org/pdf/2004.01651v1
|
econ.TH
|
35,531 |
th
|
We propose a Condorcet consistent voting method that we call Split Cycle.
Split Cycle belongs to the small family of known voting methods satisfying the
anti-vote-splitting criterion of independence of clones. In this family, only
Split Cycle satisfies a new criterion we call immunity to spoilers, which
concerns adding candidates to elections, as well as the known criteria of
positive involvement and negative involvement, which concern adding voters to
elections. Thus, in contrast to other clone-independent methods, Split Cycle
mitigates both "spoiler effects" and "strong no show paradoxes."
|
Split Cycle: A New Condorcet Consistent Voting Method Independent of Clones and Immune to Spoilers
|
2020-04-06 02:20:17
|
Wesley H. Holliday, Eric Pacuit
|
http://dx.doi.org/10.1007/s11127-023-01042-3, http://arxiv.org/abs/2004.02350v10, http://arxiv.org/pdf/2004.02350v10
|
cs.GT
|
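A sketch of Split Cycle using a maximin-path reading of its defeat relation: candidate $a$ defeats $b$ when the margin of $a$ over $b$ is positive and exceeds the strength of the strongest path from $b$ back to $a$, and the winners are the undefeated candidates. Treat this as an approximation of the paper's cycle-based definition, with a toy margin matrix:

```python
# A sketch of Split Cycle via a maximin-path reading of its defeat relation:
# a defeats b when margin(a, b) > 0 and margin(a, b) exceeds the strength of
# the strongest path from b back to a, and winners are the undefeated
# candidates. This is an illustration; see the paper for the official
# cycle-based definition.
def split_cycle_winners(margin, candidates):
    # margin[(a, b)] = (# voters preferring a to b) - (# preferring b to a)
    INF = float("-inf")
    strength = {(a, b): margin[(a, b)] if margin[(a, b)] > 0 else INF
                for a in candidates for b in candidates if a != b}
    for k in candidates:                # Floyd-Warshall maximin (widest path)
        for a in candidates:
            for b in candidates:
                if a != b and k not in (a, b):
                    via = min(strength[(a, k)], strength[(k, b)])
                    strength[(a, b)] = max(strength[(a, b)], via)
    defeated = set()
    for a in candidates:
        for b in candidates:
            if a != b and margin[(a, b)] > 0 and margin[(a, b)] > strength[(b, a)]:
                defeated.add(b)
    return [c for c in candidates if c not in defeated]

# Majority cycle a > b > c > a; the weakest edge (a over b, margin 2) is
# "split", so b is the unique undefeated candidate.
m = {("a", "b"): 2, ("b", "a"): -2,
     ("b", "c"): 4, ("c", "b"): -4,
     ("c", "a"): 6, ("a", "c"): -6}
print(split_cycle_winners(m, ["a", "b", "c"]))   # -> ["b"]
```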
35,533 |
th
|
We study a resource allocation setting where $m$ discrete items are to be
divided among $n$ agents with additive utilities, and the agents' utilities for
individual items are drawn at random from a probability distribution. Since
common fairness notions like envy-freeness and proportionality cannot always be
satisfied in this setting, an important question is when allocations satisfying
these notions exist. In this paper, we close several gaps in the line of work
on asymptotic fair division. First, we prove that the classical round-robin
algorithm is likely to produce an envy-free allocation provided that
$m=\Omega(n\log n/\log\log n)$, matching the lower bound from prior work. We
then show that a proportional allocation exists with high probability as long
as $m\geq n$, while an allocation satisfying envy-freeness up to any item (EFX)
is likely to be present for any relation between $m$ and $n$. Finally, we
consider a related setting where each agent is assigned exactly one item and
the remaining items are left unassigned, and show that the transition from
non-existence to existence with respect to envy-free assignments occurs at
$m=en$.
|
Closing Gaps in Asymptotic Fair Division
|
2020-04-12 11:21:09
|
Pasin Manurangsi, Warut Suksompong
|
http://dx.doi.org/10.1137/20M1353381, http://arxiv.org/abs/2004.05563v1, http://arxiv.org/pdf/2004.05563v1
|
cs.GT
|
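For reference, the round-robin algorithm analyzed in the abstract above is a few lines of code: agents take turns picking their favorite remaining item. The random additive utilities in the demo are illustrative:

```python
# A sketch of the round-robin procedure analyzed above: agents take turns
# picking their favorite remaining item. The random additive utilities in
# the demo are illustrative.
import random

def round_robin(utilities):
    n, m = len(utilities), len(utilities[0])
    remaining = set(range(m))
    bundles = [[] for _ in range(n)]
    for t in range(m):
        agent = t % n
        item = max(remaining, key=lambda j: utilities[agent][j])
        remaining.remove(item)
        bundles[agent].append(item)
    return bundles

random.seed(0)
n, m = 3, 12
U = [[random.random() for _ in range(m)] for _ in range(n)]
print(round_robin(U))
```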
35,534 |
th
|
We present an analysis of the Proof-of-Work consensus algorithm, used on the
Bitcoin blockchain, using a Mean Field Game framework. Using a master equation,
we provide an equilibrium characterization of the total computational power
devoted to mining the blockchain (hashrate). From a simple setting we show how
the master equation approach allows us to enrich the model by relaxing most of
the simplifying assumptions. The essential structure of the game is preserved
across all the enrichments. In deterministic settings, the hashrate ultimately
reaches a steady state in which it increases at the rate of technological
progress. In stochastic settings, there exists a target for the hashrate for
every possible random state. As a consequence, we show that in equilibrium the
security of the underlying blockchain either $(i)$ is constant, or $(ii)$
increases with the demand for the underlying cryptocurrency.
|
Mean Field Game Approach to Bitcoin Mining
|
2020-04-17 13:57:33
|
Charles Bertucci, Louis Bertucci, Jean-Michel Lasry, Pierre-Louis Lions
|
http://arxiv.org/abs/2004.08167v1, http://arxiv.org/pdf/2004.08167v1
|
econ.TH
|
35,535 |
th
|
This paper develops the category $\mathbf{NCG}$. Its objects are
node-and-choice games, which include essentially all extensive-form games. Its
morphisms allow arbitrary transformations of a game's nodes, choices, and
players, as well as monotonic transformations of the utility functions of the
game's players. Among the morphisms are subgame inclusions. Several
characterizations and numerous properties of the isomorphisms are derived. For
example, it is shown that isomorphisms preserve the game-theoretic concepts of
no-absentmindedness, perfect-information, and (pure-strategy) Nash-equilibrium.
Finally, full subcategories are defined for choice-sequence games and
choice-set games, and relationships among these two subcategories and
$\mathbf{NCG}$ itself are expressed and derived via isomorphic inclusions and
equivalences.
|
The Category of Node-and-Choice Extensive-Form Games
|
2020-04-23 17:41:59
|
Peter A. Streufert
|
http://arxiv.org/abs/2004.11196v2, http://arxiv.org/pdf/2004.11196v2
|
econ.TH
|
35,536 |
th
|
We study search, evaluation, and selection of candidates of unknown quality
for a position. We examine the effects of "soft" affirmative action policies
increasing the relative percentage of minority candidates in the candidate
pool. We show that, while meant to encourage minority hiring, such policies may
backfire if the evaluation of minority candidates is noisier than that of
non-minorities. This may occur even if minorities are at least as qualified and
as valuable as non-minorities. The results provide a possible explanation for
why certain soft affirmative action policies have proved counterproductive,
even in the absence of (implicit) biases.
|
Soft Affirmative Action and Minority Recruitment
|
2020-04-30 20:01:35
|
Daniel Fershtman, Alessandro Pavan
|
http://arxiv.org/abs/2004.14953v1, http://arxiv.org/pdf/2004.14953v1
|
econ.TH
|
35,537 |
th
|
We introduce an algorithmic decision process for multialternative choice that
combines binary comparisons and Markovian exploration. We show that a
preferential property, transitivity, makes it testable.
|
Multialternative Neural Decision Processes
|
2020-05-03 16:19:37
|
Carlo Baldassi, Simone Cerreia-Vioglio, Fabio Maccheroni, Massimo Marinacci, Marco Pirazzini
|
http://arxiv.org/abs/2005.01081v5, http://arxiv.org/pdf/2005.01081v5
|
cs.AI
|
35,538 |
th
|
In this paper, we consider a discrete-time stochastic Stackelberg game with a
single leader and multiple followers. The followers and the leader have
private types, conditionally independent given the action and previous state,
that evolve as controlled Markov processes. The objective
is to compute the stochastic Stackelberg equilibrium of the game where the
leader commits to a dynamic strategy. Each follower's strategy is a best
response to the leader's strategy and the other followers' strategies, while
the leader's strategy is optimal given that the followers play their best responses.
In general, computing such an equilibrium involves solving a fixed-point equation
for the whole game. In this paper, we present a backward recursive algorithm
that computes such strategies by solving smaller fixed-point equations for each
time $t$. Based on this algorithm, we compute the stochastic Stackelberg
equilibrium of a security example and of a dynamic information design example
used in~\cite{El17} (beeps).
|
Sequential decomposition of stochastic Stackelberg games
|
2020-05-05 11:24:28
|
Deepanshu Vasal
|
http://arxiv.org/abs/2005.01997v2, http://arxiv.org/pdf/2005.01997v2
|
math.OC
|
35,539 |
th
|
In [1], the authors considered a general finite-horizon model of a dynamic game
of asymmetric information, where N players have types evolving as independent
Markov processes, and each player perfectly observes its own type and the
actions of all players. The authors present a sequential decomposition
algorithm to find all structured perfect Bayesian equilibria of the game. The
algorithm consists of solving a class of fixed-point equations for each time
$t$ and belief $\pi_t$, whose existence was left as an open question. In this
paper, we prove the existence of solutions to these fixed-point equations for
compact metric spaces.
|
Existence of structured perfect Bayesian equilibrium in dynamic games of asymmetric information
|
2020-05-12 10:37:44
|
Deepanshu Vasal
|
http://arxiv.org/abs/2005.05586v2, http://arxiv.org/pdf/2005.05586v2
|
cs.GT
|
35,607 |
th
|
The paper proposes a natural measure space of zero-sum perfect information
games with upper semicontinuous payoffs. Each game is specified by the game
tree, and by the assignment of the active player and of the capacity to each
node of the tree. The payoff in a game is defined as the infimum of the
capacity over the nodes that have been visited during the play. The active
player, the number of children, and the capacity are drawn from a given joint
distribution independently across the nodes. We characterize the cumulative
distribution function of the value $v$ using the fixed points of the so-called
value generating function. The characterization leads to a necessary and
sufficient condition for the event $v \geq k$ to occur with positive
probability. We also study probabilistic properties of the set of Player I's
$k$-optimal strategies and the corresponding plays.
|
Random perfect information games
|
2021-04-21 16:38:03
|
János Flesch, Arkadi Predtetchinski, Ville Suomala
|
http://arxiv.org/abs/2104.10528v1, http://arxiv.org/pdf/2104.10528v1
|
cs.GT
|
35,540 |
th
|
In a two-player zero-sum graph game the players move a token throughout a
graph to produce an infinite path, which determines the winner or payoff of the
game. Traditionally, the players alternate turns in moving the token. In {\em
bidding games}, however, the players have budgets, and in each turn, we hold an
"auction" (bidding) to determine which player moves the token: both players
simultaneously submit bids and the higher bidder moves the token. The bidding
mechanisms differ in their payment schemes. Bidding games were largely studied
with variants of {\em first-price} bidding in which only the higher bidder pays
his bid. We focus on {\em all-pay} bidding, where both players pay their bids.
Finite-duration all-pay bidding games were studied and shown to be technically
more challenging than their first-price counterparts. We study for the first
time, infinite-duration all-pay bidding games. Our most interesting results are
for {\em mean-payoff} objectives: we portray a complete picture for games
played on strongly-connected graphs. We study both pure (deterministic) and
mixed (probabilistic) strategies and completely characterize the optimal sure
and almost-sure (with probability $1$) payoffs that the players can
respectively guarantee. We show that mean-payoff games under all-pay bidding
exhibit the intriguing mathematical properties of their first-price
counterparts; namely, an equivalence with {\em random-turn games} in which in
each turn, the player who moves is selected according to a (biased) coin toss.
The equivalences for all-pay bidding are more intricate and unexpected than for
first-price bidding.
|
Infinite-Duration All-Pay Bidding Games
|
2020-05-12 11:51:46
|
Guy Avni, Ismaël Jecker, Đorđe Žikelić
|
http://arxiv.org/abs/2005.06636v2, http://arxiv.org/pdf/2005.06636v2
|
econ.TH
|
35,541 |
th
|
We consider the problem of dynamic information design with one sender and one
receiver, where the sender observes a private state of the system and takes an
action to send a signal, based on its observation, to the receiver. Based on this
signal, the receiver takes an action that determines rewards for both the
sender and the receiver and controls the state of the system. In this technical
note, we show that this problem can be considered as a problem of dynamic game
of asymmetric information and its perfect Bayesian equilibrium (PBE) and
Stackelberg equilibrium (SE) can be analyzed using the algorithms presented in
[1], [2] by the same author (among others). We then extend this model when
there is one sender and multiple receivers and provide algorithms to compute a
class of equilibria of this game.
|
Dynamic information design
|
2020-05-13 20:26:08
|
Deepanshu Vasal
|
http://arxiv.org/abs/2005.07267v1, http://arxiv.org/pdf/2005.07267v1
|
econ.TH
|
35,542 |
th
|
Stable matching in a community consisting of $N$ men and $N$ women is a
classical combinatorial problem that has been the subject of intense
theoretical and empirical study since its introduction in 1962 in a seminal
paper by Gale and Shapley.
When the input preference profile is generated from a distribution, we study
the output distribution of two stable matching procedures:
women-proposing-deferred-acceptance and men-proposing-deferred-acceptance. We
show that the two procedures are ex-ante equivalent: that is, under certain
conditions on the input distribution, their output distributions are identical.
In terms of technical contributions, we generalize (to the non-uniform case)
an integral formula, due to Knuth and Pittel, which gives the probability that
a fixed matching is stable. Using an inclusion-exclusion principle on the set
of rotations, we give a new formula which gives the probability that a fixed
matching is the women-/men-optimal stable matching. We show that these two
probabilities are equal, via integration by substitution.
|
Two-Sided Random Matching Markets: Ex-Ante Equivalence of the Deferred Acceptance Procedures
|
2020-05-18 13:49:39
|
Simon Mauras
|
http://dx.doi.org/10.1145/3391403.3399448, http://arxiv.org/abs/2005.08584v1, http://arxiv.org/pdf/2005.08584v1
|
cs.GT
|
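For reference, a compact sketch of men-proposing deferred acceptance (Gale-Shapley), one of the two procedures whose output distributions the paper compares; the small preference lists are arbitrary:

```python
# A compact sketch of men-proposing deferred acceptance (Gale-Shapley), one
# of the two procedures whose output distributions the paper compares. The
# small preference lists are arbitrary.
def deferred_acceptance(men_prefs, women_prefs):
    n = len(men_prefs)
    rank = [{m: r for r, m in enumerate(prefs)} for prefs in women_prefs]
    next_choice = [0] * n          # index into each man's preference list
    engaged_to = [None] * n        # engaged_to[w] = w's current partner
    free_men = list(range(n))
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        cur = engaged_to[w]
        if cur is None:
            engaged_to[w] = m
        elif rank[w][m] < rank[w][cur]:    # w prefers the new proposer
            engaged_to[w] = m
            free_men.append(cur)
        else:
            free_men.append(m)
    return {engaged_to[w]: w for w in range(n)}

men = [[0, 1, 2], [1, 0, 2], [0, 2, 1]]
women = [[1, 0, 2], [0, 1, 2], [2, 1, 0]]
print(deferred_acceptance(men, women))     # man -> woman, a stable matching
```

Swapping the roles of the two sides gives the women-proposing variant; the paper's result is that, under suitable distributional assumptions on the preferences, the two variants induce the same distribution over output matchings.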
35,543 |
th
|
We consider two models of computation for a Tarski order-preserving function
$f$ related to fixed points in a complete lattice: the oracle function model
and the polynomial function model. In both models, we give the first
polynomial-time algorithm for finding a Tarski fixed point. In addition, we
provide a matching oracle bound for determining uniqueness in the oracle
function model and prove that the problem is co-NP-hard in the polynomial
function model. The
existence of the pure Nash equilibrium in supermodular games is proved by
Tarski's fixed point theorem. Exploring the difference between supermodular
games and Tarski's fixed point, we also develop the computational results for
finding one pure Nash equilibrium and determining the uniqueness of the
equilibrium in supermodular games.
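For intuition only, here is the textbook bottom-up iteration that finds the
least fixed point of a monotone map on the finite lattice $\{0,\dots,N\}^d$;
it runs in time proportional to the lattice height, not the polynomial time
achieved by the paper's algorithm, and the example map is an illustrative
assumption.

    def least_fixed_point(f, bottom):
        # Kleene iteration: bottom <= f(bottom) <= f(f(bottom)) <= ...
        # For monotone f on a finite lattice this increasing chain reaches
        # the least fixed point after at most height-of-lattice steps.
        x = bottom
        while True:
            y = f(x)
            if y == x:
                return x
            x = y

    # A monotone map on {0,...,10}^2 ordered coordinatewise
    f = lambda v: (min(10, max(v[0], v[1] // 2 + 3)), min(10, v[1] // 2 + 2))
    print(least_fixed_point(f, (0, 0)))   # -> (4, 3)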
|
Computations and Complexities of Tarski's Fixed Points and Supermodular Games
|
2020-05-20 06:32:37
|
Chuangyin Dang, Qi Qi, Yinyu Ye
|
http://arxiv.org/abs/2005.09836v1, http://arxiv.org/pdf/2005.09836v1
|
cs.GT
|
35,544 |
th
|
If agents cooperate only within small groups of some bounded sizes, is there
a way to partition the population into small groups such that no collection of
agents can do better by forming a new group? This paper revisits the f-core in
a transferable utility setting. By providing a new formulation of the problem,
we build a link between the f-core and transportation theory. This link allows
us to establish an exact existence result and a characterization of the f-core
for a general class of agents, as well as some improvements in computing the
f-core in the finite-type case.
|
Cooperation in Small Groups -- an Optimal Transport Approach
|
2020-05-22 18:56:08
|
Xinyang Wang
|
http://arxiv.org/abs/2005.11244v1, http://arxiv.org/pdf/2005.11244v1
|
econ.TH
|
35,545 |
th
|
In fair division problems, we are given a set $S$ of $m$ items and a set $N$
of $n$ agents with individual preferences, and the goal is to find an
allocation of items among agents so that each agent finds the allocation fair.
There are several established fairness concepts and envy-freeness is one of the
most extensively studied ones. However, envy-free allocations do not always
exist when items are indivisible, and this has motivated relaxations of
envy-freeness: envy-freeness up to one item (EF1) and envy-freeness up to any
item (EFX) are two well-studied relaxations. We consider the problem of finding
EF1 and EFX allocations for utility functions that are not necessarily
monotone, and propose four possible extensions of different strength to this
setting.
In particular, we present a polynomial-time algorithm for finding an EF1
allocation for two agents with arbitrary utility functions. An example is given
showing that EFX allocations need not exist for two agents with non-monotone,
non-additive, identical utility functions. However, when all agents have
monotone (not necessarily additive) identical utility functions, we prove that
an EFX allocation of chores always exists. As a step toward understanding the
general case, we discuss two subclasses of utility functions: Boolean utilities
that are $\{0,+1\}$-valued functions, and negative Boolean utilities that are
$\{0,-1\}$-valued functions. For the latter, we give a polynomial time
algorithm that finds an EFX allocation when the utility functions are
identical.
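To make the fairness notion concrete, the sketch below checks one natural EF1
variant for possibly non-monotone utilities (envy must be removable by
dropping a single item from either bundle); the exact extension adopted in the
paper may differ, and the additive example is illustrative.

    def is_ef1(bundles, utility):
        # bundles: list of sets of items; utility(i, S): agent i's value for S
        n = len(bundles)
        for i in range(n):
            for j in range(n):
                if i == j or utility(i, bundles[i]) >= utility(i, bundles[j]):
                    continue  # no envy from i toward j
                fixable = any(
                    utility(i, bundles[i]) >= utility(i, bundles[j] - {g})
                    for g in bundles[j]
                ) or any(
                    # dropping an item from one's own bundle can also remove
                    # envy when utilities are non-monotone
                    utility(i, bundles[i] - {g}) >= utility(i, bundles[j])
                    for g in bundles[i]
                )
                if not fixable:
                    return False
        return True

    # Two agents, additive utilities over four items (a toy sanity check)
    values = [{0: 5, 1: 1, 2: 3, 3: 1}, {0: 2, 1: 4, 2: 2, 3: 4}]
    u = lambda i, S: sum(values[i][g] for g in S)
    print(is_ef1([{0, 3}, {1, 2}], u))   # True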
|
Envy-free Relaxations for Goods, Chores, and Mixed Items
|
2020-06-08 12:25:31
|
Kristóf Bérczi, Erika R. Bérczi-Kovács, Endre Boros, Fekadu Tolessa Gedefa, Naoyuki Kamiyama, Telikepalli Kavitha, Yusuke Kobayashi, Kazuhisa Makino
|
http://arxiv.org/abs/2006.04428v1, http://arxiv.org/pdf/2006.04428v1
|
econ.TH
|
35,546 |
th
|
We survey the design of elections that are resilient to attempted
interference by third parties. For example, suppose votes have been cast in an
election between two candidates, and then each vote is randomly changed with a
small probability, independently of the other votes. It is desirable to keep
the outcome of the election the same, regardless of the changes to the votes.
It is well known that the US electoral college system is about 5 times more
likely to have a changed outcome due to vote corruption, when compared to a
majority vote. In fact, Mossel, O'Donnell and Oleszkiewicz proved in 2005 that
the majority voting method is most stable to this random vote corruption, among
voting methods where each person has a small influence on the election. We
discuss some recent progress on the analogous result for elections between more
than two candidates. In this case, plurality should be most stable to
corruption in votes. We also survey results on adversarial election
manipulation (where an adversary can select particular votes to change, perhaps
in a non-random way), and we briefly discuss ranked choice voting methods
(where a vote is a ranked list of candidates).
|
Designing Stable Elections: A Survey
|
2020-06-09 21:59:48
|
Steven Heilman
|
http://arxiv.org/abs/2006.05460v2, http://arxiv.org/pdf/2006.05460v2
|
math.PR
|
35,547 |
th
|
A population of voters must elect representatives among themselves to decide
on a sequence of possibly unforeseen binary issues. Voters care only about the
final decision, not the elected representatives. The disutility of a voter is
proportional to the fraction of issues, where his preferences disagree with the
decision.
While an issue-by-issue vote by all voters would maximize social welfare, we
are interested in how well the preferences of the population can be
approximated by a small committee.
We show that a $k$-sortition (a random committee of $k$ voters with the
majority vote within the committee) leads to an outcome within the factor
$1+O(1/k)$ of the optimal social cost for any number of voters $n$, any number
of issues $m$, and any preference profile.
For a small number of issues $m$, the social cost can be made even closer to
optimal by delegation procedures that weigh committee members according to
their number of followers. However, for large $m$, we demonstrate that the
$k$-sortition is the worst-case optimal rule within a broad family of
committee-based rules that take into account metric information about the
preference profile of the whole population.
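The $1+O(1/k)$ guarantee is easy to probe numerically; below is a minimal
Monte Carlo sketch, with an assumed random preference model (the theorem
itself holds for any profile).

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k, trials = 1001, 20, 9, 500

    def cost(prefs, decisions):
        # disutility: average fraction of issues a voter disagrees with
        return np.mean(prefs != decisions[None, :])

    # illustrative model: each issue j is favored with its own probability
    prefs = (rng.random((n, m)) < rng.random(m)[None, :]).astype(int)
    optimal = (prefs.mean(axis=0) > 0.5).astype(int)   # full majority vote

    sortition_costs = []
    for _ in range(trials):
        committee = rng.choice(n, size=k, replace=False)
        dec = (prefs[committee].mean(axis=0) > 0.5).astype(int)
        sortition_costs.append(cost(prefs, dec))
    print(f"optimal cost: {cost(prefs, optimal):.4f}, "
          f"mean {k}-sortition cost: {np.mean(sortition_costs):.4f}")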
|
Representative Committees of Peers
|
2020-06-14 11:20:47
|
Reshef Meir, Fedor Sandomirskiy, Moshe Tennenholtz
|
http://arxiv.org/abs/2006.07837v1, http://arxiv.org/pdf/2006.07837v1
|
cs.GT
|
35,548 |
th
|
We study the problem of modeling the purchase of multiple products and utilizing
it to display optimized recommendations for online retailers and e-commerce
platforms.
We present a parsimonious multi-purchase family of choice models called the
Bundle-MVL-K family, and develop a binary search based iterative strategy that
efficiently computes optimized recommendations for this model. We establish the
hardness of computing optimal recommendation sets, and derive several
structural properties of the optimal solution that aid in speeding up
computation. This is one of the first attempts at operationalizing the
multi-purchase class of choice models. We show one of the first quantitative
links between modeling multiple purchase behavior and revenue gains. The
efficacy of our modeling and optimization techniques compared to competing
solutions is shown using several real world datasets on multiple metrics such
as model fitness, expected revenue gains and run-time reductions. For example,
the expected revenue benefit of taking multiple purchases into account is
observed to be $\sim5\%$ in relative terms for the Ta Feng and UCI shopping
datasets, when compared to the MNL model for instances with $\sim 1500$
products. Additionally, across $6$ real world datasets, the test log-likelihood
fits of our models are on average $17\%$ better in relative terms. Our work
contributes to the study of multi-purchase decisions, the analysis of consumer
demand, and the retailer's optimization problem. The simplicity of our models
and the iterative nature of our optimization technique allow practitioners to meet
stringent computational constraints while increasing their revenues in
practical recommendation applications at scale, especially in e-commerce
platforms and other marketplaces.
|
Multi-Purchase Behavior: Modeling, Estimation and Optimization
|
2020-06-15 02:47:14
|
Theja Tulabandhula, Deeksha Sinha, Saketh Reddy Karra, Prasoon Patidar
|
http://arxiv.org/abs/2006.08055v2, http://arxiv.org/pdf/2006.08055v2
|
cs.IR
|
35,549 |
th
|
We consider an odd-sized "jury", which votes sequentially between two states
of Nature (say A and B, or Innocent and Guilty) with the majority opinion
determining the verdict. Jurors have private information in the form of a
signal in [-1,+1], with higher signals indicating A more likely. Each juror has
an ability in [0,1], which is proportional to the probability of A given a
positive signal, an analog of Condorcet's p for binary signals. We assume that
jurors vote honestly for the alternative they view more likely, given their
signal and prior voting, because they are experts who want to enhance their
reputation (after their vote and the actual state of Nature are revealed). For a
fixed set of jury abilities, the reliability of the verdict depends on the
voting order. For a jury of size three, the optimal ordering is always as
follows: middle ability first, then highest ability, then lowest. For
sufficiently heterogeneous juries, sequential voting is more reliable than
simultaneous voting and is in fact optimal (allowing for non-honest voting).
When average ability is fixed, verdict reliability is increasing in
heterogeneity.
For medium-sized juries, we find through simulation that the median ability
juror should still vote first and the remaining ones should have increasing and
then decreasing abilities.
|
Optimizing Voting Order on Sequential Juries: A Median Voter Theorem and Beyond
|
2020-06-24 23:58:23
|
Steve Alpern, Bo Chen
|
http://arxiv.org/abs/2006.14045v2, http://arxiv.org/pdf/2006.14045v2
|
econ.TH
|
35,550 |
th
|
This paper studies competitions with rank-based reward among a large number
of teams. Within each sizable team, we consider a mean-field contribution game
in which each team member contributes to the jump intensity of a common Poisson
project process; across all teams, a mean field competition game is formulated
on the rank of the completion time, namely the jump time of Poisson project
process, and the reward to each team is paid based on its ranking. On the layer
of teamwise competition game, three optimization problems are introduced when
the team size is determined by: (i) the team manager; (ii) the central planner;
(iii) the team members' voting as a partnership. We propose a relative
performance criteria for each team member to share the team's reward and
formulate some special cases of mean field games of mean field games, which are
new to the literature. In all problems with homogeneous parameters, the
equilibrium control of each worker and the equilibrium or optimal team size can
be computed in an explicit manner, allowing us to analytically examine the
impacts of some model parameters and discuss their economic implications. Two
numerical examples are also presented to illustrate the parameter dependence
and the comparison between different team-size decision-making schemes.
|
Teamwise Mean Field Competitions
|
2020-06-24 17:13:43
|
Xiang Yu, Yuchong Zhang, Zhou Zhou
|
http://arxiv.org/abs/2006.14472v2, http://arxiv.org/pdf/2006.14472v2
|
cs.GT
|
35,552 |
th
|
The study of network formation is pervasive in economics, sociology, and many
other fields. In this paper, we model network formation as a `choice' that is
made by nodes in a network to connect to other nodes. We study these `choices'
using discrete-choice models, in which an agent chooses between two or more
discrete alternatives. We employ the `repeated-choice' (RC) model to study
network formation. We argue that the RC model overcomes important limitations
of the multinomial logit (MNL) model, which gives one framework for studying
network formation, and that it is well-suited to this task. We
also illustrate how to use the RC model to accurately study network formation
using both synthetic and real-world networks. Using edge-independent synthetic
networks, we also compare the performance of the MNL model and the RC model. We
find that the RC model estimates the data-generation process of our synthetic
networks more accurately than the MNL model. In a patent citation network,
which forms sequentially, we present a case study of a qualitatively
interesting scenario -- the fact that new patents are more likely to cite
older, more cited, and similar patents -- for which employing the RC model
yields interesting insights.
|
Mixed Logit Models and Network Formation
|
2020-06-30 07:01:02
|
Harsh Gupta, Mason A. Porter
|
http://arxiv.org/abs/2006.16516v5, http://arxiv.org/pdf/2006.16516v5
|
cs.SI
|
35,553 |
th
|
In a 1983 paper, Yannelis-Prabhakar rely on Michael's selection theorem to
guarantee a continuous selection in the context of the existence of maximal
elements and equilibria in abstract economies. In this tribute to Nicholas
Yannelis, we root this paper in Chapter II of Yannelis' 1983 Rochester Ph.D.
dissertation, and identify its pioneering application of the paracompactness
condition to current and ongoing work of Yannelis and his co-authors, and to
mathematical economics more generally. We move beyond the literature to provide
a necessary and sufficient condition for upper semi-continuous local and global
selections of correspondences, and to provide application to five domains of
Yannelis' interests: Berge's maximum theorem, the Gale-Nikaido-Debreu lemma,
the Gale-McKenzie survival assumption, Shafer's non-transitive setting, and the
Anderson-Khan-Rashid approximate existence theorem. The last resonates with
Chapter VI of Yannelis' dissertation.
|
The Yannelis-Prabhakar Theorem on Upper Semi-Continuous Selections in Paracompact Spaces: Extensions and Applications
|
2020-06-30 13:56:20
|
M. Ali Khan, Metin Uyanik
|
http://dx.doi.org/10.1007/s00199-021-01359-4, http://arxiv.org/abs/2006.16681v1, http://arxiv.org/pdf/2006.16681v1
|
econ.TH
|
35,554 |
th
|
Simulated Annealing is the crowning glory of Markov Chain Monte Carlo Methods
for the solution of NP-hard optimization problems in which the cost function is
known. Here, by replacing the Metropolis engine of Simulated Annealing with a
reinforcement-learning variation, which we call the Macau Algorithm, we show
that the Simulated Annealing heuristic can also be very effective when the cost
function is unknown and has to be learned by an artificial agent.
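For contrast with the learned-cost variant, here is the standard
Metropolis-driven Simulated Annealing loop on a known cost function; the toy
landscape and the cooling schedule are illustrative assumptions, and the Macau
Algorithm itself (not reproduced here) replaces the acceptance step with a
reinforcement-learning rule.

    import math, random

    def simulated_annealing(cost, neighbor, x0, T0=1.0, cooling=0.995,
                            steps=5000):
        # Metropolis acceptance: always accept improvements, accept
        # worsening moves with probability exp(-delta / T), cool T slowly.
        x, fx, T = x0, cost(x0), T0
        best, fbest = x, fx
        for _ in range(steps):
            y = neighbor(x)
            fy = cost(y)
            if fy <= fx or random.random() < math.exp(-(fy - fx) / T):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
            T *= cooling
        return best, fbest

    # Toy instance: minimize a rugged one-dimensional landscape
    random.seed(42)
    cost = lambda x: x * x + 10 * math.sin(3 * x)
    step = lambda x: x + random.gauss(0, 0.5)
    print(simulated_annealing(cost, step, x0=8.0))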
|
Ergodic Annealing
|
2020-08-01 13:17:11
|
Carlo Baldassi, Fabio Maccheroni, Massimo Marinacci, Marco Pirazzini
|
http://arxiv.org/abs/2008.00234v1, http://arxiv.org/pdf/2008.00234v1
|
cs.AI
|
35,555 |
th
|
Motivated by an equilibrium problem, we establish the existence of a solution
for a family of Markovian backward stochastic differential equations with
quadratic nonlinearity and discontinuity in $Z$. Using unique continuation and
backward uniqueness, we show that the set of discontinuity has measure zero. In
a continuous-time stochastic model of an endowment economy, we prove the
existence of an incomplete Radner equilibrium with nondegenerate endogenous
volatility.
|
Radner equilibrium and systems of quadratic BSDEs with discontinuous generators
|
2020-08-08 14:55:17
|
Luis Escauriaza, Daniel C. Schwarz, Hao Xing
|
http://arxiv.org/abs/2008.03500v3, http://arxiv.org/pdf/2008.03500v3
|
math.PR
|
35,556 |
th
|
We propose six axioms concerning when one candidate should defeat another in
a democratic election involving two or more candidates. Five of the axioms are
widely satisfied by known voting procedures. The sixth axiom is a weakening of
Kenneth Arrow's famous condition of the Independence of Irrelevant Alternatives
(IIA). We call this weakening Coherent IIA. We prove that the five axioms plus
Coherent IIA single out a method of determining defeats studied in our recent
work: Split Cycle. In particular, Split Cycle provides the most resolute
definition of defeat among any satisfying the six axioms for democratic defeat.
In addition, we analyze how Split Cycle escapes Arrow's Impossibility Theorem
and related impossibility results.
|
Axioms for Defeat in Democratic Elections
|
2020-08-16 00:43:51
|
Wesley H. Holliday, Eric Pacuit
|
http://arxiv.org/abs/2008.08451v4, http://arxiv.org/pdf/2008.08451v4
|
econ.TH
|
35,557 |
th
|
We consider a discrete-time dynamic search game in which a number of players
compete to find an invisible object that is moving according to a time-varying
Markov chain. We examine the subgame perfect equilibria of these games. The
main result of the paper is that the set of subgame perfect equilibria is
exactly the set of greedy strategy profiles, i.e. those strategy profiles in
which the players always choose an action that maximizes their probability of
immediately finding the object. We discuss several variations and extensions of
the model.
|
Search for a moving target in a competitive environment
|
2020-08-21 22:08:16
|
Benoit Duvocelle, János Flesch, Hui Min Shi, Dries Vermeulen
|
http://arxiv.org/abs/2008.09653v2, http://arxiv.org/pdf/2008.09653v2
|
math.OC
|
35,558 |
th
|
We introduce a discrete-time search game, in which two players compete to
find an object first. The object moves according to a time-varying Markov chain
on finitely many states. The players know the Markov chain and the initial
probability distribution of the object, but do not observe the current state of
the object. The players are active in turns. The active player chooses a state,
and this choice is observed by the other player. If the object is in the chosen
state, this player wins and the game ends. Otherwise, the object moves
according to the Markov chain and the game continues at the next period.
We show that this game admits a value, and for any error term
$\varepsilon>0$, each player has a pure (subgame-perfect)
$\varepsilon$-optimal strategy. Interestingly, a $0$-optimal strategy does not
always exist. The $\varepsilon$-optimal strategies are robust in the sense
that they are $2\varepsilon$-optimal on all finite but sufficiently long
horizons, and also $2\varepsilon$-optimal in the discounted version of the
game provided that the discount factor is close to 1. We derive results on the
analytic and structural properties of the value and the $\varepsilon$-optimal
strategies. Moreover, we examine the performance of the finite truncation
strategies, which are easy to calculate and to implement. We devote special
attention to the important time-homogeneous case, where additional results
hold.
|
A competitive search game with a moving target
|
2020-08-27 13:12:17
|
Benoit Duvocelle, János Flesch, Mathias Staudigl, Dries Vermeulen
|
http://arxiv.org/abs/2008.12032v1, http://arxiv.org/pdf/2008.12032v1
|
cs.GT
|
35,559 |
th
|
Agent-based modeling (ABM) is a powerful paradigm to gain insight into social
phenomena. One area to which ABM has rarely been applied is coalition
formation. Traditionally, coalition formation is modeled using cooperative
game theory. In this paper, a heuristic algorithm is developed that can be
embedded into an ABM to allow the agents to find coalitions. The resultant
coalition structures are
comparable to those found by cooperative game theory solution approaches,
specifically, the core. A heuristic approach is required due to the
computational complexity of finding a cooperative game theory solution which
limits its application to about only a score of agents. The ABM paradigm
provides a platform in which simple rules and interactions between agents can
produce a macro-level effect without the large computational requirements. As
such, it can be an effective means for approximating cooperative game solutions
for large numbers of agents. Our heuristic algorithm combines agent-based
modeling and cooperative game theory to help find agent partitions that are
members of a game's core solution. The accuracy of our heuristic algorithm can
be determined by comparing its outcomes to the actual core solutions. This
comparison is achieved by developing an experiment that uses a specific example of
a cooperative game called the glove game. The glove game is a type of exchange
economy game. Finding the traditional cooperative game theory solutions is
computationally intensive for large numbers of players because each possible
partition must be compared to each possible coalition to determine the core
set; hence our experiment only considers games of up to nine players. The
results indicate that our heuristic approach achieves a core solution over 90%
of the time for the games considered in our experiment.
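A brute-force core check for the glove game makes the benchmark in the
experiment concrete; everything below is a standard textbook construction
rather than the paper's heuristic, and the instance is illustrative.

    from itertools import combinations

    def glove_value(coalition, gloves):
        # worth of a coalition: number of complete left-right pairs it holds
        left = sum(1 for i in coalition if gloves[i] == "L")
        return min(left, len(coalition) - left)

    def in_core(payoff, gloves):
        players = range(len(gloves))
        if abs(sum(payoff) - glove_value(players, gloves)) > 1e-9:
            return False  # must distribute the grand coalition's worth exactly
        for r in range(1, len(gloves)):
            for S in combinations(players, r):
                if sum(payoff[i] for i in S) < glove_value(S, gloves) - 1e-9:
                    return False  # coalition S would profitably secede
        return True

    gloves = ["L"] * 3 + ["R"] * 5       # left gloves are scarce
    print(in_core([1.0] * 3 + [0.0] * 5, gloves))  # True: scarce side wins all
    print(in_core([3 / 8] * 8, gloves))            # False: an L-R pair blocks

The exhaustive loop over coalitions is exactly the exponential cost that, as
the abstract notes, limits such checks to about nine players.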
|
Finding Core Members of Cooperative Games using Agent-Based Modeling
|
2020-08-30 20:38:43
|
Daniele Vernon-Bido, Andrew J. Collins
|
http://arxiv.org/abs/2009.00519v1, http://arxiv.org/pdf/2009.00519v1
|
cs.MA
|
35,560 |
th
|
Data are invaluable. How can we assess the value of data objectively,
systematically and quantitatively? Pricing data, or information goods in
general, has been studied and practiced in dispersed areas with diverse principles, such
as economics, marketing, electronic commerce, data management, data mining and
machine learning. In this article, we present a unified, interdisciplinary and
comprehensive overview of this important direction. We examine various
motivations behind data pricing, understand the economics of data pricing and
review the development and evolution of pricing models according to a series of
fundamental principles. We discuss both digital products and data products. We
also consider a series of challenges and directions for future work.
|
A Survey on Data Pricing: from Economics to Data Science
|
2020-09-09 22:31:38
|
Jian Pei
|
http://dx.doi.org/10.1109/TKDE.2020.3045927, http://arxiv.org/abs/2009.04462v2, http://arxiv.org/pdf/2009.04462v2
|
econ.TH
|
35,561 |
th
|
This work researches the impact of including a wider range of participants in
the strategy-making process on the performance of organizations which operate
in either moderately or highly complex environments. Agent-based simulation
demonstrates that the increased number of ideas generated by larger and more
diverse crowds, and their subsequent preference aggregation, lead to rapid discovery of
higher peaks in the organization's performance landscape. However, this is not
the case when the expansion in the number of participants is small. The results
confirm the most frequently mentioned benefit in the Open Strategy literature:
the discovery of better performing strategies.
|
On the Effectiveness of Minisum Approval Voting in an Open Strategy Setting: An Agent-Based Approach
|
2020-09-07 17:50:35
|
Joop van de Heijning, Stephan Leitner, Alexandra Rausch
|
http://arxiv.org/abs/2009.04912v2, http://arxiv.org/pdf/2009.04912v2
|
cs.AI
|
35,562 |
th
|
Complexity and limited ability have a profound effect on how we learn and make
decisions under uncertainty. Using the theory of finite automaton to model
belief formation, this paper studies the characteristics of optimal learning
behavior in small and big worlds, where the complexity of the environment is
low and high, respectively, relative to the cognitive ability of the decision
maker. Optimal behavior is well approximated by the Bayesian benchmark in a
very small world but departs further from it as the world gets bigger. In addition, in big
worlds, the optimal learning behavior could exhibit a wide range of
well-documented non-Bayesian learning behavior, including the use of
heuristics, correlation neglect, persistent over-confidence, inattentive
learning, and other behaviors of model simplification or misspecification.
These results establish a clear and testable relationship among the prominence
of non-Bayesian learning behavior, complexity, and cognitive ability.
|
Learning in a Small/Big World
|
2020-09-24 22:25:02
|
Benson Tsz Kin Leung
|
http://arxiv.org/abs/2009.11917v8, http://arxiv.org/pdf/2009.11917v8
|
econ.TH
|
35,563 |
th
|
Consider the set of probability measures with given marginal distributions on
the product of two complete, separable metric spaces, seen as a correspondence
when the marginal distributions vary. In problems of optimal transport,
continuity of this correspondence from marginal to joint distributions is often
desired, in light of Berge's Maximum Theorem, to establish continuity of the
value function in the marginal distributions, as well as stability of the set
of optimal transport plans. Bergin (1999) established the continuity of this
correspondence, and in this note, we present a novel and considerably shorter
proof of this important result. We then examine an application to an assignment
game (transferable utility matching problem) with unknown type distributions.
|
On the Continuity of the Feasible Set Mapping in Optimal Transport
|
2020-09-27 16:17:26
|
Mario Ghossoub, David Saunders
|
http://arxiv.org/abs/2009.12838v1, http://arxiv.org/pdf/2009.12838v1
|
q-fin.RM
|
35,564 |
th
|
Evolutionary game theory has proven to be an elegant framework providing many
fruitful insights in population dynamics and human behaviour. Here, we focus on
the aspect of behavioural plasticity and its effect on the evolution of
populations. We consider games with only two strategies in both well-mixed
infinite and finite populations settings. We assume that individuals might
exhibit behavioural plasticity referred to as incompetence of players. We study
the effect of such heterogeneity on the outcome of local interactions and,
ultimately, on global competition. For instance, a strategy that was dominated
before can become desirable from the selection perspective when behavioural
plasticity is taken into account. Furthermore, it can ease conditions for a
successful fixation in infinite populations' invasions. We demonstrate our
findings on the examples of the Prisoner's Dilemma and the Snowdrift game, where we
define conditions under which cooperation can be promoted.
|
The role of behavioural plasticity in finite vs infinite populations
|
2020-09-28 12:14:58
|
M. Kleshnina, K. Kaveh, K. Chatterjee
|
http://arxiv.org/abs/2009.13160v1, http://arxiv.org/pdf/2009.13160v1
|
q-bio.PE
|
35,565 |
th
|
We analyze statistical discrimination in hiring markets using a multi-armed
bandit model. Myopic firms face workers arriving with heterogeneous observable
characteristics. The association between the worker's skill and characteristics
is unknown ex ante; thus, firms need to learn it. Laissez-faire causes
perpetual underestimation: minority workers are rarely hired, and therefore,
the underestimation tends to persist. Even a marginal imbalance in the
population ratio frequently results in perpetual underestimation. We propose
two policy solutions: a novel subsidy rule (the hybrid mechanism) and the
Rooney Rule. Our results indicate that temporary affirmative actions
effectively alleviate discrimination stemming from insufficient data.
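A stripped-down simulation conveys the perpetual-underestimation mechanism;
the parameters, the shared pessimistic prior, and the small exploration rate
are illustrative assumptions, not the paper's exact model or its hybrid
mechanism.

    import numpy as np

    rng = np.random.default_rng(7)

    def hiring_market(T=2000, share_minority=0.1, explore=0.02):
        outside, true_skill = 0.55, 0.6   # same true skill in both groups
        sums = {"maj": 0.5, "min": 0.5}   # one pessimistic prior pseudo-draw
        counts = {"maj": 1, "min": 1}
        hires = {"maj": 0, "min": 0}
        for _ in range(T):
            g = "min" if rng.random() < share_minority else "maj"
            # myopic rule: hire if the group's estimated skill beats the
            # outside option; a tiny exploration rate generates some data
            if sums[g] / counts[g] > outside or rng.random() < explore:
                hires[g] += 1
                sums[g] += rng.normal(true_skill, 0.3)  # skill seen on hire
                counts[g] += 1
        return hires, {g: round(sums[g] / counts[g], 3) for g in sums}

    print(hiring_market())

Because the minority group arrives rarely, it generates far less data, so an
unlucky early estimate persists much longer, mirroring the imbalance effect
described above.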
|
On Statistical Discrimination as a Failure of Social Learning: A Multi-Armed Bandit Approach
|
2020-10-02 19:20:14
|
Junpei Komiyama, Shunya Noda
|
http://arxiv.org/abs/2010.01079v6, http://arxiv.org/pdf/2010.01079v6
|
econ.TH
|
35,566 |
th
|
The Glosten-Milgrom model describes a single asset market, where informed
traders interact with a market maker, in the presence of noise traders. We
derive an analogy between this financial model and a Szil\'ard information
engine by {\em i)} showing that the optimal work extraction protocol in the
latter coincides with the pricing strategy of the market maker in the former
and {\em ii)} defining a market analogue of the physical temperature from the
analysis of the distribution of market orders. Then we show that the expected
gain of informed traders is bounded above by the product of this market
temperature with the amount of information that informed traders have, in exact
analogy with the corresponding formula for the maximal expected amount of work
that can be extracted from a cycle of the information engine. This suggests
that recent ideas from information thermodynamics may shed light on financial
markets, and lead to generalised inequalities, in the spirit of the extended
second law of thermodynamics.
|
Information thermodynamics of financial markets: the Glosten-Milgrom model
|
2020-10-05 13:36:07
|
Léo Touzo, Matteo Marsili, Don Zagier
|
http://dx.doi.org/10.1088/1742-5468/abe59b, http://arxiv.org/abs/2010.01905v2, http://arxiv.org/pdf/2010.01905v2
|
cond-mat.stat-mech
|
35,567 |
th
|
Linear Fisher markets are a fundamental economic model with applications in
fair division as well as large-scale Internet markets. In the
finite-dimensional case of $n$ buyers and $m$ items, a market equilibrium can
be computed using the Eisenberg-Gale convex program. Motivated by large-scale
Internet advertising and fair division applications, this paper considers a
generalization of a linear Fisher market where there is a finite set of buyers
and a continuum of items. We introduce generalizations of the Eisenberg-Gale
convex program and its dual to this infinite-dimensional setting, which leads
to Banach-space optimization problems. We establish existence of optimal
solutions, strong duality, as well as necessity and sufficiency of KKT-type
conditions. All these properties are established via non-standard arguments,
which circumvent the limitations of duality theory in optimization over
infinite-dimensional Banach spaces. Furthermore, we show that there exists a
pure equilibrium allocation, i.e., a division of the item space. When the item
space is a closed interval and buyers have piecewise linear valuations, we show
that the Eisenberg-Gale-type convex program over the infinite-dimensional
allocations can be reformulated as a finite-dimensional convex conic program,
which can be solved efficiently using off-the-shelf optimization software based
on primal-dual interior-point methods. Based on our convex conic reformulation,
we develop the first polynomial-time cake-cutting algorithm that achieves
Pareto optimality, envy-freeness, and proportionality. For general buyer
valuations or a very large number of buyers, we propose computing market
equilibrium using stochastic dual averaging, which finds approximate
equilibrium prices with high probability. Finally, we discuss how the above
results easily extend to the case of quasilinear utilities.
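In the finite-dimensional case, the Eisenberg-Gale program is a few lines with
an off-the-shelf solver; below is a sketch using cvxpy (assumed installed),
with an illustrative two-buyer instance. The continuum-of-items formulation
and the conic reformulation of the paper are not reproduced here.

    import cvxpy as cp
    import numpy as np

    v = np.array([[2.0, 1.0, 0.5],     # valuations: 2 buyers, 3 items
                  [1.0, 2.0, 1.5]])
    B = np.array([1.0, 1.0])           # budgets
    x = cp.Variable(v.shape, nonneg=True)
    u = cp.sum(cp.multiply(v, x), axis=1)        # buyers' utilities
    supply = cp.sum(x, axis=0) <= 1              # one unit of each item
    prob = cp.Problem(cp.Maximize(B @ cp.log(u)), [supply])
    prob.solve()
    print("allocation:\n", np.round(x.value, 3))
    # the duals of the supply constraints are the equilibrium prices
    print("equilibrium prices:", np.round(supply.dual_value, 3))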
|
Infinite-Dimensional Fisher Markets and Tractable Fair Division
|
2020-10-07 00:05:49
|
Yuan Gao, Christian Kroer
|
http://arxiv.org/abs/2010.03025v5, http://arxiv.org/pdf/2010.03025v5
|
cs.GT
|
35,568 |
th
|
I characterize the consumer-optimal market segmentation in competitive
markets where multiple firms sell differentiated products to consumers with
unit demand. This segmentation is public, in that each firm observes the same
market segments, and takes a simple form: in each market segment, there is a
dominant firm favored by all consumers in that segment. By segmenting the
market, all but the dominant firm maximally compete to poach the consumer's
business, setting price equal to marginal cost. Information, thus, is being
used to amplify competition. This segmentation simultaneously generates an
efficient allocation and delivers to each firm its minimax profit.
|
Using Information to Amplify Competition
|
2020-10-12 00:02:42
|
Wenhao Li
|
http://arxiv.org/abs/2010.05342v2, http://arxiv.org/pdf/2010.05342v2
|
econ.GN
|
35,569 |
th
|
Empirical Revenue Maximization (ERM) is one of the most important price
learning algorithms in auction design: as the literature shows it can learn
approximately optimal reserve prices for revenue-maximizing auctioneers in both
repeated auctions and uniform-price auctions. However, in these applications
the agents who provide inputs to ERM have incentives to manipulate the inputs
to lower the outputted price. We generalize the definition of an
incentive-awareness measure proposed by Lavi et al. (2019), to quantify the
reduction of ERM's outputted price due to a change of $m\ge 1$ out of $N$ input
samples, and provide specific convergence rates of this measure to zero as $N$
goes to infinity for different types of input distributions. By adopting this
measure, we construct an efficient, approximately incentive-compatible, and
revenue-optimal learning algorithm using ERM in repeated auctions against
non-myopic bidders, and show approximate group incentive-compatibility in
uniform-price auctions.
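ERM itself is a one-liner over the empirical distribution, which makes the
manipulation channel easy to see; the sample values below are illustrative.

    def erm_reserve(samples):
        # pick the price p maximizing empirical revenue p * Pr[value >= p];
        # an optimal p is always one of the observed sample values
        vals = sorted(samples, reverse=True)
        best_p, best_rev = 0.0, 0.0
        for i, p in enumerate(vals):   # price vals[i] sells to i+1 of N samples
            rev = p * (i + 1) / len(vals)
            if rev > best_rev:
                best_p, best_rev = p, rev
        return best_p

    print(erm_reserve([2.0, 0.5, 0.4, 0.3]))      # 2.0
    # shading the single highest sample to 0 drags the price down sharply,
    # the kind of change the incentive-awareness measure quantifies
    print(erm_reserve([0.0, 0.5, 0.4, 0.3]))      # 0.3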
|
A Game-Theoretic Analysis of the Empirical Revenue Maximization Algorithm with Endogenous Sampling
|
2020-10-12 11:20:35
|
Xiaotie Deng, Ron Lavi, Tao Lin, Qi Qi, Wenwei Wang, Xiang Yan
|
http://arxiv.org/abs/2010.05519v1, http://arxiv.org/pdf/2010.05519v1
|
cs.GT
|
35,570 |
th
|
We examine a problem of demand for insurance indemnification, when the
insured is sensitive to ambiguity and behaves according to the Maxmin-Expected
Utility model of Gilboa and Schmeidler (1989), whereas the insurer is a
(risk-averse or risk-neutral) Expected-Utility maximizer. We characterize
optimal indemnity functions both with and without the customary ex ante
no-sabotage requirement on feasible indemnities, and for both concave and
linear utility functions for the two agents. This allows us to provide a
unifying framework in which we examine the effects of the no-sabotage
condition, marginal utility of wealth, belief heterogeneity, as well as
ambiguity (multiplicity of priors) on the structure of optimal indemnity
functions. In particular, we show how the singularity in beliefs leads to an
optimal indemnity function that involves full insurance on an event to which
the insurer assigns zero probability, while the decision maker assigns it a
positive probability. We examine several illustrative examples, and we provide
numerical studies for the case of a Wasserstein and a Rényi ambiguity set.
|
Optimal Insurance under Maxmin Expected Utility
|
2020-10-14 23:06:04
|
Corina Birghila, Tim J. Boonen, Mario Ghossoub
|
http://arxiv.org/abs/2010.07383v1, http://arxiv.org/pdf/2010.07383v1
|
q-fin.RM
|
35,571 |
th
|
This paper models the US-China trade conflict and attempts to analyze the
(optimal) strategic choices. In contrast to the existing literature on the
topic, we employ the expected utility theory and examine the conflict
mathematically. In both perfect information and incomplete information games,
we show that expected net gains diminish as the utility of winning increases
because of the costs incurred during the struggle. We find that the best
response function exists for China but not for the US during the conflict. We
argue that the less the US coerces China to change its existing trade
practices, the higher the US expected net gains. China's best choice is to
maintain the status quo, and any further aggression in its policy and behavior
will aggravate the situation.
|
Modeling the US-China trade conflict: a utility theory approach
|
2020-10-23 15:31:23
|
Yuhan Zhang, Cheng Chang
|
http://arxiv.org/abs/2010.12351v1, http://arxiv.org/pdf/2010.12351v1
|
econ.GN
|
35,572 |
th
|
EIP-1559 is a proposal to make several tightly coupled additions to
Ethereum's transaction fee mechanism, including variable-size blocks and a
burned base fee that rises and falls with demand. This report assesses the
game-theoretic strengths and weaknesses of the proposal and explores some
alternative designs.
|
Transaction Fee Mechanism Design for the Ethereum Blockchain: An Economic Analysis of EIP-1559
|
2020-12-02 00:48:57
|
Tim Roughgarden
|
http://arxiv.org/abs/2012.00854v1, http://arxiv.org/pdf/2012.00854v1
|
cs.GT
|
35,573 |
th
|
In an election campaign, candidates must decide how to optimally allocate
their efforts/resources among the regions of a country. As a result,
the outcome of the election will depend on the players' strategies and the
voters' preferences. In this work, we present a zero-sum game where two
candidates decide how to invest a fixed resource in a set of regions, while
considering their sizes and biases. We explore the Majority System (MS) as well
as the Electoral College (EC) voting systems. We prove equilibrium existence
and uniqueness under MS in a deterministic model; in addition, closed-form
expressions for the equilibrium are provided when fixing the subset of regions
and relaxing the non-negativity constraint on investments. For the stochastic
case, we use Monte Carlo simulations to compute the players' payoffs, together
with their gradients and Hessians. For the EC, given the lack of equilibria in
pure strategies, we propose an iterative algorithm to find equilibria in mixed
strategies in a
subset of the simplex lattice. We illustrate numerical instances under both
election systems, and contrast players' equilibrium strategies. Finally, we
show that polarization induces candidates to focus on larger regions with
negative biases under MS, whereas candidates concentrate on swing states under
EC.
|
On the Resource Allocation for Political Campaigns
|
2020-12-05 00:15:18
|
Sebastián Morales, Charles Thraves
|
http://arxiv.org/abs/2012.02856v1, http://arxiv.org/pdf/2012.02856v1
|
cs.GT
|
35,574 |
th
|
An increasing number of politicians are relying on cheaper, easier to access
technologies such as online social media platforms to communicate with their
constituency. These platforms present a cheap and low-barrier channel of
communication to politicians, potentially intensifying political competition by
allowing many to enter political races. In this study, we demonstrate that
lowering costs of communication, which allows many entrants to come into a
competitive market, can strengthen an incumbent's position when the newcomers
compete by providing more information to the voters. We show an asymmetric
bad-news-good-news effect where early negative news hurts the challengers more
than the positive news benefits them, such that in aggregate, an incumbent
politician's chances of winning are higher with more entrants in the market. Our
findings indicate that communication through social media and other platforms
can intensify competition; however, incumbency advantage may be strengthened
rather than weakened as an outcome of a higher number of entrants into a
political market.
|
Competition, Politics, & Social Media
|
2020-12-06 20:15:55
|
Benson Tsz Kin Leung, Pinar Yildirim
|
http://arxiv.org/abs/2012.03327v1, http://arxiv.org/pdf/2012.03327v1
|
econ.GN
|
35,575 |
th
|
This is an expanded version of the lecture given at the AMS Short Course on
Mean Field Games, on January 13, 2020 in Denver CO. The assignment was to
discuss applications of Mean Field Games in finance and economics. I need to
admit upfront that several of the examples reviewed in this chapter were
already discussed in book form. Still, they are here accompanied by
discussions of, and references to, works which appeared over the last three
years. Moreover, several completely new sections are added to show how recent
developments in financial engineering and economics can benefit from being
viewed through the lens of the Mean Field Game paradigm. The new financial
engineering applications deal with bitcoin mining and the energy markets, while
the new economic applications concern models offering a smooth transition
between macro-economics and finance, and contract theory.
|
Applications of Mean Field Games in Financial Engineering and Economic Theory
|
2020-12-09 09:57:20
|
Rene Carmona
|
http://arxiv.org/abs/2012.05237v1, http://arxiv.org/pdf/2012.05237v1
|
q-fin.GN
|
35,576 |
th
|
We examine the long-term behavior of a Bayesian agent who has a misspecified
belief about the time lag between actions and feedback, and learns about the
payoff consequences of his actions over time. Misspecified beliefs about time
lags result in attribution errors, which have no long-term effect when the
agent's action converges, but can lead to arbitrarily large long-term
inefficiencies when his action cycles. Our proof uses concentration
inequalities to bound the frequency of action switches, a technique that is
useful for studying learning problems with history dependence. We apply our methods to study
a policy choice game between a policy-maker who has a correctly specified
belief about the time lag and the public who has a misspecified belief.
|
Misspecified Beliefs about Time Lags
|
2020-12-14 06:45:43
|
Yingkai Li, Harry Pei
|
http://arxiv.org/abs/2012.07238v1, http://arxiv.org/pdf/2012.07238v1
|
econ.TH
|
35,577 |
th
|
We analyze how interdependencies between organizations in financial networks
can lead to multiple possible equilibrium outcomes. A multiplicity arises if
and only if there exists a certain type of dependency cycle in the network that
allows for self-fulfilling chains of defaults. We provide necessary and
sufficient conditions for banks' solvency in any equilibrium. Building on these
conditions, we characterize the minimum bailout payments needed to ensure
systemic solvency, as well as how solvency can be ensured by guaranteeing a
specific set of debt payments. Bailout injections needed to eliminate
self-fulfilling cycles of defaults (credit freezes) are fully recoverable,
while those needed to prevent cascading defaults outside of cycles are not. We
show that the minimum bailout problem is computationally hard, but provide an
upper bound on optimal payments and show that the problem has intuitive
solutions in specific network structures such as those with disjoint cycles or
a core-periphery structure.
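The multiplicity can be seen in a two-bank example with the standard
Eisenberg-Noe-style clearing map, used here purely as an illustration (the
paper's model and bailout analysis are richer): iterating from full payment
reaches the greatest equilibrium, iterating from zero reaches the least.

    import numpy as np

    def clearing_payments(L, e, start_full=True, tol=1e-10):
        # L[i, j]: face value i owes j; e[i]: outside assets of bank i
        pbar = L.sum(axis=1)                       # total obligations
        Pi = np.divide(L, pbar[:, None],
                       out=np.zeros_like(L), where=pbar[:, None] > 0)
        p = pbar.copy() if start_full else np.zeros_like(pbar)
        while True:
            new = np.minimum(pbar, e + Pi.T @ p)   # pay all owed, or all held
            if np.max(np.abs(new - p)) < tol:
                return new
            p = new

    # A two-bank dependency cycle with no outside assets: both equilibria exist
    L = np.array([[0.0, 1.0], [1.0, 0.0]])
    e = np.array([0.0, 0.0])
    print(clearing_payments(L, e, start_full=True))    # [1. 1.] credit flows
    print(clearing_payments(L, e, start_full=False))   # [0. 0.] credit freeze

The gap between the two fixed points is exactly the self-fulfilling default
cycle discussed above, and a bailout that breaks the cycle is recoverable.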
|
Credit Freezes, Equilibrium Multiplicity, and Optimal Bailouts in Financial Networks
|
2020-12-22 09:02:09
|
Matthew O. Jackson, Agathe Pernoud
|
http://arxiv.org/abs/2012.12861v2, http://arxiv.org/pdf/2012.12861v2
|
cs.GT
|
35,578 |
th
|
The controversies around the 2020 US presidential elections certainly cast
serious doubt on the efficiency of the current voting system in representing
the people's will. Is naive Plurality voting suitable in an extremely
polarized political environment? Alternative voting schemes are gradually
gaining public support, wherein the voters rank their choices instead of just
voting for their first preference. However, they do not capture certain crucial aspects
of voter preferences like disapprovals and negativities against candidates. I
argue that these unexpressed negativities are the predominant source of
polarization in politics. I propose a voting scheme with an explicit expression
of these negative preferences, so that we can simultaneously decipher the
popularity as well as the polarity of each candidate. The winner is picked by
an optimal tradeoff between the most popular and the least polarizing
candidate. By penalizing the candidates for their polarization, we can
discourage divisive campaign rhetoric and pave the way for potential third
party candidates.
|
Negative votes to depolarize politics
|
2020-12-26 04:05:24
|
Karthik H. Shankar
|
http://dx.doi.org/10.1007/s10726-022-09799-6, http://arxiv.org/abs/2012.13657v1, http://arxiv.org/pdf/2012.13657v1
|
econ.TH
|
35,579 |
th
|
We consider games of chance played by someone with external capital that
cannot be applied to the game and determine how this affects risk-adjusted
optimal betting. Specifically, we focus on Kelly optimization as a metric,
optimizing the expected logarithm of total capital including both capital in
play and the external capital. For games with multiple rounds, we determine the
optimal strategy through dynamic programming and construct a close
approximation through the WKB method. The strategy can be described in terms of
short-term utility functions, with risk aversion depending on the ratio of the
amount in the game to the external money. Thus, a rational player's behavior
varies between conservative play that approaches the Kelly strategy as they
are able to invest a larger fraction of total wealth, and extremely aggressive
play that maximizes linear expectation when a larger portion of their capital
is locked away. Because expected future productivity always counts as an
external resource, this runs counter to the conventional wisdom that
super-Kelly betting is a ruinous proposition.
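A grid-search sketch of the underlying single-round optimization, with
illustrative parameters (win probability p, odds b, and the split between
in-game and locked-away capital):

    import math

    def optimal_bet(game_capital, external, p=0.55, b=1.0, grid=10001):
        # maximize E[log(total wealth)] over the fraction f of *in-game*
        # capital wagered; `external` cannot be staked but enters the log
        best_f, best_val = 0.0, -float("inf")
        for i in range(grid):
            f = i / (grid - 1)
            w_win = external + game_capital * (1 + f * b)
            w_lose = external + game_capital * (1 - f)
            if w_lose <= 0:
                continue           # ruin of usable wealth is never optimal
            val = p * math.log(w_win) + (1 - p) * math.log(w_lose)
            if val > best_val:
                best_f, best_val = f, val
        return best_f

    print(optimal_bet(1.0, external=0.0))   # ~0.10: classic Kelly, p - q
    print(optimal_bet(1.0, external=9.0))   # ~1.0: far more aggressive play

With nine units locked away, the player stakes the entire in-game unit, the
aggressive regime described in the abstract.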
|
Calculated Boldness: Optimizing Financial Decisions with Illiquid Assets
|
2020-12-27 02:33:33
|
Stanislav Shalunov, Alexei Kitaev, Yakov Shalunov, Arseniy Akopyan
|
http://arxiv.org/abs/2012.13830v1, http://arxiv.org/pdf/2012.13830v1
|
q-fin.PM
|
35,580 |
th
|
Toward explaining the persistence of biased inferences, we propose a
framework to evaluate competing (mis)specifications in strategic settings.
Agents with heterogeneous (mis)specifications coexist and draw Bayesian
inferences about their environment through repeated play. The relative
stability of (mis)specifications depends on their adherents' equilibrium
payoffs. A key mechanism is the learning channel: the endogeneity of perceived
best replies due to inference. We characterize when a rational society is only
vulnerable to invasion by some misspecification through the learning channel.
The learning channel leads to new stability phenomena, and can confer an
evolutionary advantage to otherwise detrimental biases in economically relevant
applications.
|
Evolutionarily Stable (Mis)specifications: Theory and Applications
|
2020-12-30 05:33:15
|
Kevin He, Jonathan Libgober
|
http://arxiv.org/abs/2012.15007v4, http://arxiv.org/pdf/2012.15007v4
|
econ.TH
|
35,581 |
th
|
We analyze the performance of the best-response dynamic across all
normal-form games using a random games approach. The playing sequence -- the
order in which players update their actions -- is essentially irrelevant in
determining whether the dynamic converges to a Nash equilibrium in certain
classes of games (e.g. in potential games) but, when evaluated across all
possible games, convergence to equilibrium depends on the playing sequence in
an extreme way. Our main asymptotic result shows that the best-response dynamic
converges to a pure Nash equilibrium in a vanishingly small fraction of all
(large) games when players take turns according to a fixed cyclic order. By
contrast, when the playing sequence is random, the dynamic converges to a pure
Nash equilibrium if one exists in almost all (large) games.
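A toy-scale replication sketch (illustrative game sizes; the theorem concerns
asymptotically large games): run best-response dynamics under a cyclic versus
a uniformly random playing sequence and count how often the final profile is a
pure Nash equilibrium.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    def ends_at_pure_nash(payoffs, order, updates=600):
        # payoffs[i]: player i's payoff tensor, one axis per player
        n = len(payoffs)
        a = [0] * n
        for _, i in zip(range(updates), order):
            idx = tuple(a[:i] + [slice(None)] + a[i + 1:])
            a[i] = int(np.argmax(payoffs[i][idx]))   # myopic best response
        # after the update budget, test whether we sit at a pure Nash point
        for i in range(n):
            idx = tuple(a[:i] + [slice(None)] + a[i + 1:])
            if payoffs[i][tuple(a)] < payoffs[i][idx].max():
                return False
        return True

    n_players, n_actions, n_games = 3, 4, 300
    games = [[rng.random((n_actions,) * n_players) for _ in range(n_players)]
             for _ in range(n_games)]
    cyclic = sum(ends_at_pure_nash(g, itertools.cycle(range(n_players)))
                 for g in games)
    randomized = sum(
        ends_at_pure_nash(g, iter(lambda: int(rng.integers(n_players)), None))
        for g in games)
    print(f"cyclic: {cyclic}/{n_games}, random: {randomized}/{n_games}")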
|
Best-response dynamics, playing sequences, and convergence to equilibrium in random games
|
2021-01-12 01:32:48
|
Torsten Heinrich, Yoojin Jang, Luca Mungo, Marco Pangallo, Alex Scott, Bassel Tarbush, Samuel Wiese
|
http://arxiv.org/abs/2101.04222v3, http://arxiv.org/pdf/2101.04222v3
|
econ.TH
|
35,582 |
th
|
The classic paper of Shapley and Shubik \cite{Shapley1971assignment}
characterized the core of the assignment game using ideas from matching theory
and LP-duality theory and their highly non-trivial interplay. Whereas the core
of this game is always non-empty, that of the general graph matching game can
be empty.
This paper salvages the situation by giving an imputation in the
$2/3$-approximate core for the latter. This bound is best possible, since it is
the integrality gap of the natural underlying LP. Our profit allocation method
goes further: the multiplier on the profit of an agent is often better than ${2
\over 3}$ and lies in the interval $[{2 \over 3}, 1]$, depending on how
severely constrained the agent is.
Next, we provide new insights showing how discerning the core imputations of
assignment games are by studying them through the lens of complementary slackness.
We present a relationship between the competitiveness of individuals and teams
of agents and the amount of profit they accrue in imputations that lie in the
core, where by {\em competitiveness} we mean whether an individual or a team is
matched in every/some/no maximum matching. This also sheds light on the
phenomenon of degeneracy in assignment games, i.e., when the maximum weight
matching is not unique.
The core is a quintessential solution concept in cooperative game theory. It
contains all ways of distributing the total worth of a game among agents in
such a way that no sub-coalition has incentive to secede from the grand
coalition. Our imputation, in the $2/3$-approximate core, implies that a
sub-coalition will gain at most a $3/2$ factor by seceding, and less in typical
cases.
|
The General Graph Matching Game: Approximate Core
|
2021-01-19 03:53:22
|
Vijay V. Vazirani
|
http://arxiv.org/abs/2101.07390v4, http://arxiv.org/pdf/2101.07390v4
|
cs.GT
|
35,583 |
th
|
In the era of a growing population, systemic changes to the world, and the
rising risk of crises, humanity has been facing an unprecedented challenge of
resource scarcity. Confronting and addressing the issues concerning the scarce
resource's conservation, competition, and stimulation by grappling its
characteristics and adopting viable policy instruments calls the
decision-maker's attention with a paramount priority. In this paper, we develop
the first general decentralized cross-sector supply chain network model that
captures the unique features of scarce resources under a unifying fiscal policy
scheme. We formulate the problem as a network equilibrium model with
finite-dimensional variational inequality theories. We then characterize the
network equilibrium with a set of classic theoretical properties, as well as
with a set of properties that are novel to the network games application
literature, namely, the lowest eigenvalue of the game Jacobian. Lastly, we
provide a series of illustrative examples, including a medical glove supply
network, to showcase how our model can be used to investigate the efficacy of
the imposed policies in relieving supply chain distress and stimulating
welfare. Our managerial insights inform and expand the political dialogues on
fiscal policy design, public resource legislation, social welfare
redistribution, and supply chain practice toward sustainability.
|
Relief and Stimulus in A Cross-sector Multi-product Scarce Resource Supply Chain Network
|
2021-01-23 01:48:41
|
Xiaowei Hu, Peng Li
|
http://arxiv.org/abs/2101.09373v3, http://arxiv.org/pdf/2101.09373v3
|
econ.TH
|
35,584 |
th
|
Data sharing issues pervade online social and economic environments. To
foster social progress, it is important to develop models of the interaction
between data producers and consumers that can promote the rise of cooperation
between the involved parties. We formalize this interaction as a game, the data
sharing game, based on the Iterated Prisoner's Dilemma and deal with it through
multi-agent reinforcement learning techniques. We consider several strategies
for how the citizens may behave, depending on the degree of centralization
sought. Simulations suggest mechanisms for cooperation to take place and, thus,
achieve maximum social utility: data consumers should perform some kind of
opponent modeling, or a regulator should transfer utility between both players
and incentivise them.
|
Data sharing games
|
2021-01-26 14:29:01
|
Víctor Gallego, Roi Naveiro, David Ríos Insua, Wolfram Rozas
|
http://arxiv.org/abs/2101.10721v1, http://arxiv.org/pdf/2101.10721v1
|
cs.GT
|
35,585 |
th
|
The ELLIS PhD program is a European initiative that supports excellent young
researchers by connecting them to leading researchers in AI. In particular, PhD
students are supervised by two advisors from different countries: an advisor
and a co-advisor. In this work we summarize the procedure that, in its final
step, matches students to advisors in the ELLIS 2020 PhD program. The steps of
the procedure are based on the extensive literature of two-sided matching
markets and the college admissions problem [Knuth and De Bruijn, 1997, Gale and
Shapley, 1962, Roth and Sotomayor, 1992]. We introduce PolyGS, an algorithm for
the case of two-sided markets with quotas on both sides (also known as
many-to-many markets) which we use throughout the selection procedure of
pre-screening, interview matching and final matching with advisors. The
algorithm returns a stable matching in the sense that no unmatched persons
prefer to be matched together rather than with their current partners (given
their indicated preferences). Roth [1984] gives evidence that only stable
matchings are likely to be adhered to over time. Additionally, the matching is
student-optimal. Preferences are constructed based on the rankings each side
gives to the other side and the overlaps of research fields. We present and
discuss the matchings that the algorithm produces in the ELLIS 2020 PhD
program.
|
Two-Sided Matching Markets in the ELLIS 2020 PhD Program
|
2021-01-28 18:50:15
|
Maximilian Mordig, Riccardo Della Vecchia, Nicolò Cesa-Bianchi, Bernhard Schölkopf
|
http://arxiv.org/abs/2101.12080v3, http://arxiv.org/pdf/2101.12080v3
|
cs.GT
|
35,586 |
th
|
How cooperation evolves and manifests itself in the thermodynamic or infinite
player limit of social dilemma games is a matter of intense speculation.
Various analytical methods have been proposed to analyze the thermodynamic
limit of social dilemmas. In this work, we compare two analytical methods,
i.e., Darwinian evolution and Nash equilibrium mapping, with a numerical
agent-based approach. For completeness, we also give results for another
analytical method, Hamiltonian dynamics. In contrast to Hamiltonian dynamics,
which involves the maximization of payoffs of all individuals, in Darwinian
evolution, the payoff of a single player is maximized with respect to its
interaction with the nearest neighbor. While the Hamiltonian dynamics method
utterly fails as compared to Nash equilibrium mapping, the Darwinian evolution
method gives a false positive for game magnetization (the net difference
between the fraction of cooperators and defectors) when payoffs obey the
condition $a + d = b + c$, where $a, d$ denote the diagonal elements and
$b, c$ the off-diagonal elements of a symmetric social dilemma game payoff
matrix. When either $a + d \neq b + c$ or when one looks at the average payoff
per player,
the Darwinian evolution method fails, much like the Hamiltonian dynamics
approach. On the other hand, the Nash equilibrium mapping and numerical
agent-based method agree well for both game magnetization and average payoff
per player for the social dilemmas in question, i.e., the Hawk-Dove game and
the Public goods game. This paper thus brings to light the inconsistency of the
Darwinian evolution method vis-a-vis both Nash equilibrium mapping and a
numerical agent-based approach.
|
Nash equilibrium mapping vs Hamiltonian dynamics vs Darwinian evolution for some social dilemma games in the thermodynamic limit
|
2021-02-27 22:13:49
|
Colin Benjamin, Arjun Krishnan U M
|
http://dx.doi.org/10.1140/epjb/s10051-023-00573-4, http://arxiv.org/abs/2103.00295v2, http://arxiv.org/pdf/2103.00295v2
|
cond-mat.stat-mech
|
35,587 |
th
|
We study equilibrium distancing during epidemics. Distancing reduces the
individual's probability of getting infected but comes at a cost. It creates a
single-peaked epidemic, flattens the curve and decreases the size of the
epidemic. We examine more closely the effects of distancing on the outset, the
peak and the final size of the epidemic. First, we define a behavioral basic
reproduction number and show that it is concave in the transmission rate. The
infection, therefore, spreads only if the transmission rate is in the
intermediate region. Second, the peak of the epidemic is non-monotonic in the
transmission rate. A reduction in the transmission rate can lead to an increase
of the peak. On the other hand, a decrease in the cost of distancing always
flattens the curve. Third, both an increase in the infection rate and an
increase in the cost of distancing increase the size of the epidemic. Our
results have important implications for the modeling of interventions.
Imposing restrictions on the infection rate has qualitatively different
effects on the trajectory of the epidemic than imposing assumptions on the cost of
distancing. The interventions that affect interactions rather than the
transmission rate should, therefore, be modeled as changes in the cost of
distancing.
|
Epidemics with Behavior
|
2021-02-28 22:11:31
|
Satoshi Fukuda, Nenad Kos, Christoph Wolf
|
http://arxiv.org/abs/2103.00591v1, http://arxiv.org/pdf/2103.00591v1
|
econ.GN
|
35,588 |
th
|
The economic approach to determine optimal legal policies involves maximizing
a social welfare function. We propose an alternative: a consent-approach that
seeks to promote consensual interactions and deter non-consensual interactions.
The consent-approach does not rest upon inter-personal utility comparisons or
value judgments about preferences. It does not require any additional
information relative to the welfare-approach. We highlight the contrast between
the welfare-approach and the consent-approach using a stylized model inspired
by seminal cases of harassment and the #MeToo movement. The social welfare
maximizing penalty for harassment in our model can be zero under the
welfare-approach but not under the consent-approach.
|
Welfare v. Consent: On the Optimal Penalty for Harassment
|
2021-03-01 06:52:41
|
Ratul Das Chaudhury, Birendra Rai, Liang Choon Wang, Dyuti Banerjee
|
http://arxiv.org/abs/2103.00734v2, http://arxiv.org/pdf/2103.00734v2
|
econ.GN
|
35,589 |
th
|
When does society eventually learn the truth, or take the correct action, via
observational learning? In a general model of sequential learning over social
networks, we identify a simple condition for learning dubbed excludability.
Excludability is a joint property of agents' preferences and their information.
When required to hold for all preferences, it is equivalent to information
having "unbounded beliefs", which demands that any agent can individually
identify the truth, even if only with small probability. But unbounded beliefs
may be untenable with more than two states: e.g., it is incompatible with the
monotone likelihood ratio property. Excludability reveals that what is crucial
for learning, instead, is that a single agent must be able to displace any
wrong action, even if she cannot take the correct action. We develop two
classes of preferences and information that jointly satisfy excludability: (i)
for a one-dimensional state, preferences with single-crossing differences and a
new informational condition, directionally unbounded beliefs; and (ii) for a
multi-dimensional state, Euclidean preferences and subexponential
location-shift information.
|
Beyond Unbounded Beliefs: How Preferences and Information Interplay in Social Learning
|
2021-03-04 02:31:19
|
Navin Kartik, SangMok Lee, Tianhao Liu, Daniel Rappoport
|
http://arxiv.org/abs/2103.02754v4, http://arxiv.org/pdf/2103.02754v4
|
econ.TH
|
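For intuition on what bounded beliefs can cost, the sketch below simulates the textbook binary herding model (Bikhchandani-Hirshleifer-Welch), in which signals of bounded precision q can trigger cascades onto the wrong action. It illustrates that failure mode, not the paper's excludability condition itself; the precision q and the tie-breaking rule are assumed conventions.

```python
import numpy as np

rng = np.random.default_rng(0)

def herding_run(q=0.7, n_agents=200):
    """Sequential learning with a binary state and signals of precision q.
    Each agent observes all past actions plus one private signal and acts on
    the sign of the combined log-likelihood ratio. Returns True if the last
    agent takes the correct action."""
    theta = rng.integers(0, 2)                 # true state
    llr_signal = np.log(q / (1 - q))           # LLR carried by one signal
    public_llr, action = 0.0, None
    for _ in range(n_agents):
        s = rng.random() < (q if theta == 1 else 1 - q)   # private signal
        total = public_llr + (llr_signal if s else -llr_signal)
        if total > 0:
            action = 1
        elif total < 0:
            action = 0
        else:
            action = 1 if s else 0             # tie: follow own signal
        # the action reveals the signal only outside a cascade, i.e. while
        # the public belief does not swamp any single private signal
        if abs(public_llr) <= llr_signal:
            public_llr += llr_signal if action == 1 else -llr_signal
    return action == theta

correct = np.mean([herding_run() for _ in range(2000)])
print(f"society herds on the correct action in {correct:.1%} of runs")
```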
35,641 |
th
|
We determine winners and losers of immigration using a general equilibrium
search and matching model in which native and non-native employees, who are
heterogeneous with respect to their skill level, produce different types of
goods. Unemployment benefits and the provision of public goods are financed by
a progressive taxation on wages and profits. The estimation of the baseline
model for Italy shows that the presence of non-natives in 2017 led real wages
of low and high-skilled employees to be 4% lower and 8% higher, respectively.
Profits of employers in the low-skilled market were 6% lower, while those of
employers in the high-skilled market were 10% higher. At the aggregate level,
total GDP was 14% higher, GDP per worker and the per capita provision of public
goods were 4% higher, while government revenues and social security
contributions rose by 70 billion euros and 18 billion euros, respectively.
|
Winners and losers of immigration
|
2021-07-14 11:28:43
|
Davide Fiaschi, Cristina Tealdi
|
http://arxiv.org/abs/2107.06544v2, http://arxiv.org/pdf/2107.06544v2
|
econ.GN
|
35,590 |
th
|
We study learning dynamics in distributed production economies such as
blockchain mining, peer-to-peer file sharing and crowdsourcing. These economies
can be modelled as multi-product Cournot competitions or all-pay auctions
(Tullock contests) when individual firms have market power, or as Fisher
markets with quasi-linear utilities when every firm has negligible influence on
market outcomes. In the former case, we provide a formal proof that Gradient
Ascent (GA) can be Li-Yorke chaotic for a step size as small as $\Theta(1/n)$,
where $n$ is the number of firms. In stark contrast, for the Fisher market
case, we derive a Proportional Response (PR) protocol that converges to market
equilibrium. The positive results on the convergence of the PR dynamics are
obtained in full generality, in the sense that they hold for Fisher markets
with \emph{any} quasi-linear utility functions. Conversely, the chaos results
for the GA dynamics are established even in the simplest possible setting of
two firms and one good, and they hold for a wide range of price functions with
different demand elasticities. Our findings suggest that by considering
multi-agent interactions from a market rather than a game-theoretic
perspective, we can formally derive natural learning protocols which are stable
and converge to effective outcomes rather than being chaotic.
|
Learning in Markets: Greed Leads to Chaos but Following the Price is Right
|
2021-03-15 19:48:30
|
Yun Kuen Cheung, Stefanos Leonardos, Georgios Piliouras
|
http://arxiv.org/abs/2103.08529v2, http://arxiv.org/pdf/2103.08529v2
|
cs.GT
|
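A sketch of Proportional Response dynamics in the simplest setting where its convergence is classical, a linear Fisher market; the paper's contribution is a PR protocol for the more general quasi-linear utilities, which this sketch does not implement. The utilities and budgets below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_buyers, n_goods = 4, 3
U = rng.uniform(0.5, 2.0, (n_buyers, n_goods))   # linear utility coefficients
B = np.ones(n_buyers)                             # unit budgets
bids = np.full((n_buyers, n_goods), 1.0 / n_goods)

for t in range(500):
    p = bids.sum(axis=0)          # price of each good = total bids placed on it
    x = bids / p                  # proportional allocation
    gains = U * x                 # utility each buyer derives from each good
    # proportional response: next bid on a good is proportional to the share
    # of the buyer's utility that the good currently provides
    new_bids = B[:, None] * gains / gains.sum(axis=1, keepdims=True)
    if np.abs(new_bids - bids).max() < 1e-10:
        break
    bids = new_bids

print("equilibrium prices:", np.round(p, 4))
```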
35,591 |
th
|
Previous research on two-dimensional extensions of Hotelling's location game
has argued that spatial competition leads to maximum differentiation in one
dimension and minimum differentiation in the other. We expand on existing
models to allow for endogenous entry into the market. We find that competition
may lead to the min/max finding of previous work but may also lead to maximum
differentiation in both dimensions. The critical issue in determining the
degree of differentiation is whether existing firms are seeking to deter the
entry of a new firm or to maximize profits within an existing, stable market.
|
Differentiation in a Two-Dimensional Market with Endogenous Sequential Entry
|
2021-03-20 01:27:00
|
Jeffrey D. Michler, Benjamin M. Gramig
|
http://arxiv.org/abs/2103.11051v1, http://arxiv.org/pdf/2103.11051v1
|
econ.GN
|
35,592 |
th
|
We introduce a one-parameter family of polymatrix replicators defined in a
three-dimensional cube and study its bifurcations. For a given interval of
parameters, this family exhibits suspended horseshoes and persistent strange
attractors. The proof relies on the existence of a homoclinic cycle to the
interior equilibrium. We also describe the phenomenological steps responsible
for the transition from regular to chaotic dynamics in our system (route to
chaos).
|
Persistent Strange attractors in 3D Polymatrix Replicators
|
2021-03-20 23:52:42
|
Telmo Peixe, Alexandre A. Rodrigues
|
http://dx.doi.org/10.1016/j.physd.2022.133346, http://arxiv.org/abs/2103.11242v2, http://arxiv.org/pdf/2103.11242v2
|
math.DS
|
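A generic integrator for a (2,2,2) polymatrix replicator, whose phase space is a product of three 1-simplices, i.e. a three-dimensional cube as in the abstract; the payoff matrix below is a random illustrative choice, not the paper's one-parameter family.

```python
import numpy as np
from scipy.integrate import solve_ivp

# three groups with two strategies each; each group lives on a 1-simplex
groups = [slice(0, 2), slice(2, 4), slice(4, 6)]
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))            # illustrative payoff matrix (assumption)

def polymatrix_rhs(t, x):
    f = A @ x                          # payoff to each strategy
    dx = np.empty_like(x)
    for g in groups:
        avg = x[g] @ f[g]              # group-average payoff
        dx[g] = x[g] * (f[g] - avg)    # replicator dynamics within each group
    return dx

x0 = np.array([0.3, 0.7, 0.5, 0.5, 0.6, 0.4])   # one point per group simplex
sol = solve_ivp(polymatrix_rhs, (0.0, 200.0), x0, rtol=1e-9)
print("final state:", np.round(sol.y[:, -1], 4))
```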
35,593 |
th
|
Product personalization opens the door to price discrimination. A rich
product line allows firms to better tailor products to consumers' tastes, but
the mere choice of a product carries valuable information about consumers that
can be leveraged for price discrimination. We study this trade-off in an
upstream-downstream model, where a consumer buys a good of variable quality
upstream, followed by an indivisible good downstream. The downstream firm's use
of the consumer's purchase history for price discrimination introduces a novel
distortion: The upstream firm offers a subset of the products that it would
offer if, instead, it could jointly design its product line and downstream
pricing. By controlling the degree of product personalization, the upstream
firm curbs the ratcheting forces that result from the consumer facing
downstream price discrimination.
|
Purchase history and product personalization
|
2021-03-22 01:17:35
|
Laura Doval, Vasiliki Skreta
|
http://arxiv.org/abs/2103.11504v5, http://arxiv.org/pdf/2103.11504v5
|
econ.TH
|
35,594 |
th
|
We propose a new forward electricity market framework that admits
heterogeneous market participants with second-order cone strategy sets, who
accurately express the nonlinearities in their costs and constraints through
conic bids, and a network operator facing conic operational constraints. In
contrast to the prevalent linear-programming-based electricity markets, we
highlight how the inclusion of second-order cone constraints improves
uncertainty-, asset- and network-awareness of the market, which is key to the
successful transition towards an electricity system based on weather-dependent
renewable energy sources. We analyze our general market-clearing proposal using
conic duality theory to derive efficient spatially-differentiated prices for
the multiple commodities, comprising energy and flexibility services. Under
the assumption of perfect competition, we prove the equivalence of the
centrally-solved market-clearing optimization problem to a competitive spatial
price equilibrium involving a set of rational and self-interested participants
and a price setter. Finally, under common assumptions, we prove that moving
towards conic markets does not incur the loss of desirable economic properties
of markets, namely market efficiency, cost recovery and revenue adequacy. Our
numerical studies focus on the specific use case of uncertainty-aware market
design and demonstrate that the proposed conic market brings advantages over
existing alternatives within the linear programming market framework.
|
Moving from Linear to Conic Markets for Electricity
|
2021-03-22 21:26:33
|
Anubhav Ratha, Pierre Pinson, Hélène Le Cadre, Ana Virag, Jalal Kazempour
|
http://arxiv.org/abs/2103.12122v3, http://arxiv.org/pdf/2103.12122v3
|
econ.TH
|
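A toy single-node example of conic market clearing, assuming cvxpy is available: each generator carries an apparent-power (second-order cone) limit, and the dual variable of the balance constraint recovers the energy price (up to sign convention). The numbers are illustrative, and the network, uncertainty and flexibility products of the paper are omitted.

```python
import cvxpy as cp
import numpy as np

demand = 100.0
cost = np.array([20.0, 50.0])          # linear marginal costs
g = cp.Variable(2, nonneg=True)        # active power output
q = cp.Variable(2)                     # reactive power output (illustrative)
s_max = np.array([80.0, 120.0])        # apparent-power ratings

balance = cp.sum(g) == demand
constraints = [balance]
for i in range(2):
    # sqrt(g_i^2 + q_i^2) <= s_max_i, canonicalized to a second-order cone
    constraints.append(cp.norm(cp.hstack([g[i], q[i]])) <= s_max[i])

prob = cp.Problem(cp.Minimize(cost @ g), constraints)
prob.solve()

print("dispatch:", np.round(g.value, 2))
print("energy price (dual of balance):", float(balance.dual_value))
```

With the cheap unit capped at 80 by its cone constraint, the expensive unit is marginal, so the balance dual reflects its marginal cost.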
35,595 |
th
|
We introduce a model of the diffusion of an epidemic with demographically
heterogeneous agents interacting socially on a spatially structured network.
Contagion-risk averse agents respond behaviorally to the diffusion of the
infections by limiting their social interactions. Schools and workplaces also
respond by allowing students and employees to attend and work remotely. The
spatial structure induces local herd immunities along socio-demographic
dimensions, which significantly affect the dynamics of infections. We study
several non-pharmaceutical interventions; e.g., i) lockdown rules, which set
thresholds on the spread of the infection for the closing and reopening of
economic activities; ii) neighborhood lockdowns, leveraging granular
(neighborhood-level) information to improve the effectiveness of public health
policies; iii) selective lockdowns, which restrict social interactions by
location (in the network) and by the demographic characteristics of the agents.
Substantiating a "Lucas critique" argument, we assess the cost of naive
discretionary policies that ignore agents' and firms' behavioral responses.
|
Spatial-SIR with Network Structure and Behavior: Lockdown Rules and the Lucas Critique
|
2021-03-25 15:30:00
|
Alberto Bisin, Andrea Moro
|
http://arxiv.org/abs/2103.13789v3, http://arxiv.org/pdf/2103.13789v3
|
econ.GN
|
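A schematic version of the simulation described above, assuming an Erdős-Rényi contact network, homogeneous agents and a threshold lockdown rule with hysteresis; the paper's demographic heterogeneity, spatial structure and remote-work responses are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
N, avg_deg = 2000, 8
beta, gamma = 0.08, 0.05               # per-contact infection, recovery prob.
on_thresh, off_thresh = 0.02, 0.005    # prevalence thresholds for the lockdown rule
lockdown_cut = 0.7                     # share of transmission suppressed in lockdown

# Erdős-Rényi contact network stored as neighbor lists
adj = [[] for _ in range(N)]
rows, cols = np.where(np.triu(rng.random((N, N)) < avg_deg / (N - 1), k=1))
for i, j in zip(rows, cols):
    adj[i].append(j); adj[j].append(i)

state = np.zeros(N, dtype=int)         # 0 = S, 1 = I, 2 = R
state[rng.choice(N, 10, replace=False)] = 1
locked = False
for t in range(300):
    prev = (state == 1).mean()
    # hysteresis: close above on_thresh, reopen only below off_thresh
    locked = prev > on_thresh if not locked else prev > off_thresh
    eff_beta = beta * (1 - lockdown_cut) if locked else beta
    new_state = state.copy()
    for i in np.where(state == 1)[0]:
        for j in adj[i]:
            if state[j] == 0 and rng.random() < eff_beta:
                new_state[j] = 1
        if rng.random() < gamma:
            new_state[i] = 2
    state = new_state

print(f"final size: {(state == 2).mean():.2%}, lockdown active at end: {locked}")
```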
35,596 |
th
|
In recent years, prominent blockchain systems such as Bitcoin and Ethereum
have experienced explosive growth in transaction volume, leading to frequent
surges in demand for limited block space and causing transaction fees to
fluctuate by orders of magnitude. Existing systems sell space using first-price
auctions; however, users find it difficult to estimate how much they need to
bid in order to get their transactions accepted onto the chain. If they bid too
low, their transactions can have long confirmation times. If they bid too high,
they pay larger fees than necessary.
In light of these issues, new transaction fee mechanisms have been proposed,
most notably EIP-1559, aiming to provide better usability. EIP-1559 is a
history-dependent mechanism that relies on block utilization to adjust a base
fee. We propose an alternative design -- a {\em dynamic posted-price mechanism}
-- which uses not only block utilization but also observable bids from past
blocks to compute a posted price for subsequent blocks. We show its potential
to reduce price volatility by providing examples for which the prices of
EIP-1559 are unstable while the prices of the proposed mechanism are stable.
More generally, whenever the demand for the blockchain stabilizes, we ask if
our mechanism is able to converge to a stable state. Our main result provides
sufficient conditions in a probabilistic setting for which the proposed
mechanism is approximately welfare optimal and the prices are stable. Our main
technical contribution towards establishing stability is an iterative algorithm
that, given oracle access to a Lipschitz continuous and strictly concave
function $f$, converges to a fixed point of $f$.
|
Dynamic Posted-Price Mechanisms for the Blockchain Transaction Fee Market
|
2021-03-26 00:41:13
|
Matheus V. X. Ferreira, Daniel J. Moroz, David C. Parkes, Mitchell Stern
|
http://dx.doi.org/10.1145/3479722.3480991, http://arxiv.org/abs/2103.14144v2, http://arxiv.org/pdf/2103.14144v2
|
cs.GT
|
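The final technical ingredient, an iteration converging to a fixed point of a Lipschitz continuous and strictly concave f, can be sketched with a generic damped (Krasnoselskii-Mann-style) update; the paper's algorithm and its guarantees are more specific than this illustration.

```python
import math

def damped_fixed_point(f, x0, alpha=0.5, tol=1e-12, max_iter=100_000):
    """Iterate x <- (1 - alpha) * x + alpha * f(x) until convergence.
    Averaging with 0 < alpha < 1 stabilizes iterates that could oscillate
    under the plain map x <- f(x)."""
    x = x0
    for _ in range(max_iter):
        x_next = (1 - alpha) * x + alpha * f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# Example: f is Lipschitz and strictly concave on [0, inf)
f = lambda x: math.sqrt(x + 1.0)
p = damped_fixed_point(f, x0=0.0)
print(p, abs(f(p) - p))   # fixed point of sqrt(x+1) is the golden ratio ~1.618
```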
35,597 |
th
|
This paper shows the usefulness of Perov's contraction principle, which
generalizes Banach's contraction principle to a vector-valued metric, for
studying dynamic programming problems in which the discount factor can be
stochastic. The discounting condition $\beta<1$ is replaced by $\rho(B)<1$,
where $B$ is an appropriate nonnegative matrix and $\rho$ denotes the spectral
radius. Blackwell's sufficient condition is also generalized in this setting.
Applications to asset pricing and optimal savings are discussed.
|
Perov's Contraction Principle and Dynamic Programming with Stochastic Discounting
|
2021-03-26 02:14:58
|
Alexis Akira Toda
|
http://dx.doi.org/10.1016/j.orl.2021.09.001, http://arxiv.org/abs/2103.14173v2, http://arxiv.org/pdf/2103.14173v2
|
econ.TH
|
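A toy numerical check of the condition $\rho(B)<1$, assuming a two-state Markov discount process: even with a discount factor above one in one state, the spectral radius can stay below one, and the linear value recursion V = r + B V still converges.

```python
import numpy as np

# Exogenous discount state z follows a Markov chain with transition matrix P;
# the discount factor beta(z) is allowed to exceed 1 in some states.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
beta = np.array([0.97, 1.01])          # second state discounts above 1
r = np.array([1.0, 2.0])               # per-period reward in each state

B = np.diag(beta) @ P                  # the matrix B in the condition rho(B) < 1
rho = max(abs(np.linalg.eigvals(B)))
print(f"spectral radius rho(B) = {rho:.4f}")   # < 1, so V = r + B V contracts

V = np.zeros(2)
for _ in range(10_000):
    V_new = r + B @ V
    if np.abs(V_new - V).max() < 1e-12:
        break
    V = V_new
print("fixed point:", V, "direct solve:", np.linalg.solve(np.eye(2) - B, r))
```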
35,598 |
th
|
We propose a simple rule of thumb for countries that have embarked on a
vaccination campaign while still needing to keep non-pharmaceutical
interventions (NPIs) in place because of the ongoing spread of SARS-CoV-2. If
the aim is to keep the death rate from increasing, NPIs can be loosened once
the vaccination rate exceeds twice the growth rate of new cases. If the aim is
to keep the pressure on hospitals under control, the vaccination rate has to be
about four times higher. These simple rules can be derived from the
observation that the risk of death or a severe course requiring hospitalization
from a COVID-19 infection increases exponentially with age and that the sizes
of age cohorts decrease linearly at the top of the population pyramid.
Protecting the over 60-year-olds, which constitute approximately one-quarter of
the population in Europe (and most OECD countries), reduces the potential loss
of life by 95 percent.
|
When to end a lockdown? How fast must vaccination campaigns proceed in order to keep health costs in check?
|
2021-03-29 15:20:34
|
Claudius Gros, Thomas Czypionka, Daniel Gros
|
http://dx.doi.org/10.1098/rsos.211055, http://arxiv.org/abs/2103.15544v4, http://arxiv.org/pdf/2103.15544v4
|
q-bio.PE
|
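The rule of thumb translates into a one-line check, sketched below under the assumption that the vaccination rate and the growth rate of new cases are measured in the same units (e.g., people per day).

```python
def npis_can_be_loosened(vaccination_rate, case_growth_rate, target="deaths"):
    """Rule of thumb from the abstract above: loosen NPIs when vaccinations
    outpace case growth by a factor of 2 (to hold the death rate) or 4 (to
    keep hospital pressure under control). Units must match on both sides."""
    factor = {"deaths": 2.0, "hospitals": 4.0}[target]
    return vaccination_rate > factor * case_growth_rate

# Example: 50k vaccinations/day vs new cases growing by 20k/day
print(npis_can_be_loosened(50_000, 20_000, "deaths"))     # True  (50k > 2 * 20k)
print(npis_can_be_loosened(50_000, 20_000, "hospitals"))  # False (50k < 4 * 20k)
```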