---
abstract: 'The Casimir effect is a force arising in the macroscopic world as a result of radiation pressure of vacuum fluctuations. It thus plays a key role in the emerging domain of nano-electro-mechanical systems (NEMS). This role is reviewed in the present paper, with discussions of the influence of the material properties of the mirrors, as well as the geometry dependence of the Casimir effect between corrugated mirrors. In particular, the lateral component of the Casimir force and restoring torque between metal plates with misaligned corrugations are evaluated.'
author:
- 'Cyriaque Genet$^1$, Astrid Lambrecht$^2$, and Serge Reynaud$^2$'
title: The Casimir effect in the nanoworld
---
Introduction {#intro}
============
The Casimir force was predicted in 1948 by H.B.G. Casimir as an attractive force between two perfectly reflecting, plane and parallel mirrors in vacuum [@Casimir48]. The force has since been measured in a number of experiments with increasing control of the experimental conditions. This control has been considered an important aim, as it should allow an accurate comparison between theoretical predictions and experimental observations [@Milonni94; @LamoreauxResource99; @Reynaud01; @GenetIAP02; @LambrechtPoincare]. These advances have been reviewed in a number of papers, for example [@LambrechtPoincare; @Bordag01; @Milton05], and in a special issue of the New Journal
of Physics [@NJP06].
Meanwhile, it has been realized that the Casimir force is a dominant force at micron or sub-micron distances, and hence an important aspect in the domain of micro- and nano-oscillators (MEMS, NEMS) [@BuksPRB2001; @ChanScience2001; @ChanPRL2001] now emerging from modern nanofabrication techniques [@EkinciRSI2005]. While the Casimir force was primarily considered as a source of stiction between mobile parts, it is now also recognized as an essential source of actuation to be exploited in the design of MEMS and NEMS.
In both fundamental and technological contexts, it is extremely important to take into account the real experimental situations which largely differ from the ideal conditions considered by Casimir. We review below some theoretical tools which have shown their efficiency for a general formulation of the Casimir effect, accounting for the material properties of the interacting plates as well as for the effect of non planar boundary geometries.
Idealized Casimir force {#sec:1}
=======================
The Casimir force and energy between two perfectly reflecting, plane and parallel mirrors immersed in quantum vacuum have the following forms $$\begin{aligned}
F_{\rm Cas} = \frac{\pi^{2}\hbar c}{240}\frac{A}{L^{4}} \ \ , \ \
E_{\rm Cas} = - \frac{\pi^{2}\hbar c}{720}\frac{A}{L^{3}}. \label{FEcas}\end{aligned}$$ These expressions correspond to an attractive force
$F_{\rm Cas}$ and a binding energy $E_{\rm Cas}$. Remarkably, they depend only on geometrical quantities, the area $A$ of the mirrors and their distance $L$ ($A\gg L^{2}$), and on fundamental constants, the Planck constant $\hbar$ and the speed of light $c$.
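To fix orders of magnitude, here is a minimal numerical sketch (our own illustration, not part of the original paper) evaluating Eq.(\[FEcas\]) for mirrors of area $A=1\,$cm$^2$ separated by $L=1\,\mu$m; the resulting force, of order $10^{-7}\,$N, is small but within reach of modern force sensors.

```python
# Illustrative sketch (not from the paper): magnitude of the ideal Casimir
# force and energy of Eq. (FEcas) for A = 1 cm^2 and L = 1 micron.
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s

def casimir_force(A, L):
    """Ideal Casimir force between perfect mirrors, in newtons."""
    return math.pi**2 * hbar * c / 240 * A / L**4

def casimir_energy(A, L):
    """Ideal Casimir binding energy, in joules (negative)."""
    return -math.pi**2 * hbar * c / 720 * A / L**3

A, L = 1e-4, 1e-6  # 1 cm^2 mirrors, 1 micron apart
print(casimir_force(A, L))   # ~ 1.3e-7 N
print(casimir_energy(A, L))  # ~ -4.3e-14 J
```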
Imperfect reflection {#sec:3}
====================
Experiments are performed with real metallic mirrors, which are good reflectors only at frequencies below their plasma frequency $\omega_{\rm P}$, itself determined by the properties of the conduction electrons in the metal. The effect of imperfect reflection on the Casimir force and energy was recognized a long time ago [@Lifshitz56; @Schwinger78], though it has been described with good accuracy only recently [@Lamoreaux99; @Lambrecht00; @KlimPRA00]. We recall below the scattering theory of the Casimir force which has been developed and used to this aim [@Jaekel91; @GenetPRA03; @LambrechtNJP06].
We begin with perfectly plane and parallel mirrors, separated by a distance $L$. The two mirrors form a Fabry-Perot cavity, and the fluctuations of the intracavity fields propagating back and forth along the cavity axis can be calculated in terms of the fluctuations of the incoming free-space fields. The field modes are characterized by their frequency $\omega$, their transverse wavevector ${\bf k}$ with components $k_{x},k_{y}$ in the plane of the mirrors, and by their polarization $p$. Time invariance of the problem, as well as transverse spatial translation invariance (along $x$ and $y$), ensure that the frequency, the transverse wavevector and the polarization are conserved quantities throughout the scattering process on the cavity. The scattering couples only the free vacuum modes with opposite signs for the component $k_{z}$ of the wavevector along the longitudinal $z$ axis of the cavity. We write $r_{\bf k}^{p}[\omega]_{i}$ for the reflection amplitude of mirror $i=1,2$ as seen from the inner side of the cavity. These amplitudes obey the general physical properties of causality, unitarity and high-frequency transparency.
The spectral density of the vacuum intracavity fields is changed with respect to that of the free fields outside the cavity. The ratio of the energy inside the cavity to the energy outside the cavity is fixed, for a given mode, by the following function $$\begin{aligned}
g_{\bf k}^{p}[\omega] = \frac{1-|\rho_{\bf k}^{p}[\omega]|^{2}}{|1-\rho_{\bf k}^{p}[\omega]|^{2}} \ \ , \ \ \rho_{\bf k}^{p}[\omega] = r_{\bf k}^{p}[\omega]_{1}r_{\bf k}^{p}[\omega]_{2}e^{2ik_{z}L}.\end{aligned}$$ This statement constitutes a theorem which has been demonstrated for lossless as well as lossy mirrors [@GenetPRA03; @Barnett96]. It does not depend on the state of the fields and is therefore valid for vacuum fluctuations as well as for thermal fluctuations, assuming
thermal equilibrium. We do not discuss here the issue of thermal dependence of the Casimir effect (see for example the recent review [@Brevik06]) and restrict our attention to the zero temperature limit.
The force is the difference in radiation pressure between inner and outer faces of the mirrors, integrated over all the modes. Using analyticity properties, the force and energy may be written as integrals over imaginary frequencies $\omega =i\xi$ $$\begin{aligned}
F&=&\frac{\hbar A}{\pi} \sum_{p}\int\frac{{\rm d}^{2}{\bf k}}{4\pi^{2}}\int_{0}^{\infty}{\rm d}\xi \kappa[i\xi]\frac{\rho_{\bf k}^{p}[i\xi]}{1-\rho_{\bf k}^{p}[i\xi]} \ \ , \nonumber \\
E&=&\frac{\hbar A}{2\pi}\sum_{p}\int\frac{{\rm d}^{2}{\bf k}}{4\pi^{2}}\int_{0}^{\infty}{\rm d}\xi \ln \left(1-\rho_{\bf k}^{p}[i\xi]\right). \label{FEscatt}\end{aligned}$$ $\kappa[i\xi]=\sqrt{{\bf k}^{2}+\xi^{2} / c^{2}}$ is the longitudinal component of the wavevector evaluated for imaginary frequencies.
The expressions (\[FEscatt\]) are regular for any physical model of the reflection amplitudes. High-frequency transparency of any real mirror ensures that the integrals are convergent, and free from the divergences usually associated with the infiniteness of vacuum energy. They reproduce the Lifshitz expression for the Casimir force [@Lifshitz56; @Schwinger78] when assuming that the metal plates have a large optical thickness, with reflection amplitudes given by the Fresnel laws at the vacuum-bulk interface $$\begin{aligned}
r_{\bf k}^{\rm TE}[i\xi]&=&-\frac{\sqrt{\xi^{2}\left(\varepsilon[i\xi]-1\right)+c^{2}\kappa^{2}}-c\kappa}{\sqrt{\xi^{2}\left(\varepsilon[i\xi]-1\right)+c^{2}\kappa^{2}}+c\kappa} \ \ , \nonumber \\
r_{\bf k}^{\rm TM}[i\xi]&=&-\frac{\sqrt{\xi^{2}\left(\varepsilon[i\xi]-1\right)+c^{2}\kappa^{2}}-c\kappa\varepsilon[i\xi]}{\sqrt{\xi^{2}\left(\varepsilon[i\xi]-1\right)+c^{2}\kappa^{2}}+c\kappa\varepsilon[i\xi]}. \label{Fresnel}\end{aligned}$$ Here $\varepsilon[i\xi]$ is the dielectric function
describing the optical response of the material inside the mirrors. Taken together, relations (\[FEscatt\]) and (\[Fresnel\]) reproduce the Lifshitz expression [@Lifshitz56]. They are known to tend to the original Casimir expression in the limit $\varepsilon \rightarrow \infty$, which describes perfectly reflecting mirrors [@Schwinger78].
We may emphasize at this point that relations (\[FEscatt\]) are more general than the Lifshitz expression which, incidentally, was not originally written in terms of reflection amplitudes [@Katz77]. They are valid for example for non-local optical responses of the mirrors, provided the reflection amplitudes are replaced by their possibly more complicated expressions. The only limitation, discussed below, is associated with the assumption of specular scattering.
Finite conductivity corrections {#sec:4}
===============================
We now review the corrections to the Casimir expression coming from the finite conductivity of the bulk material. Here, these corrections are deduced from relations (\[FEscatt\]), assuming the Fresnel laws (\[Fresnel\]) for a local optical response of the bulk material. The dielectric function may be given by a simple description of the conduction electrons in terms of a plasma model $$\begin{aligned}
\varepsilon[i\xi]=1+\frac{\omega_{\rm P}^{2}}{\xi^2},\end{aligned}$$ characterized by a plasma frequency $\omega_{\rm P}$ and wavelength $\lambda_{\rm P}\equiv 2\pi c/\omega_{\rm P}$. It may also be given by a more realistic representation based upon tabulated optical data, which includes the contribution of interband electrons [@Lambrecht00].
The corrections to the Casimir effect are conveniently represented in terms of factors measuring the reduction of the force and energy with respect to the ideal limit of perfect mirrors $$\begin{aligned}
F&=&\eta_{\rm F}F_{\rm Cas} \ \ , \ \ \eta_{\rm F} < 1 \ \ \textrm{and} \nonumber \\
E&=&\eta_{\rm E}E_{\rm Cas} \ \ , \ \ \eta_{\rm E} < 1.\end{aligned}$$ The results of the calculations are plotted in Fig.(\[fig:1\]) for Au-covered mirrors, as $\eta_{\rm F}$ versus the ratio of the cavity length $L$ to the plasma wavelength $\lambda_{\rm P}$.
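To illustrate how Eqs.(\[FEscatt\]) and (\[Fresnel\]) are evaluated in practice, the following sketch (our own, not code from the paper) computes $\eta_{\rm F}$ for two identical plasma-model mirrors; the integration cutoffs and the small lower bound on $\xi$ are numerical conveniences, and the value $\lambda_{\rm P}=136\,$nm is the gold/copper value quoted below.

```python
# Sketch (not from the paper): reduction factor eta_F from Eqs. (FEscatt)
# and (Fresnel) with the plasma model, for two identical mirrors.
import numpy as np
from scipy.integrate import dblquad

hbar, c = 1.054571817e-34, 2.99792458e8
lambda_P = 136e-9                       # plasma wavelength (Au, Cu)
omega_P = 2 * np.pi * c / lambda_P      # plasma frequency

def integrand(xi, k, L):
    kappa = np.sqrt(k**2 + (xi / c) ** 2)
    eps = 1.0 + (omega_P / xi) ** 2     # plasma-model dielectric function
    root = np.sqrt(xi**2 * (eps - 1.0) + (c * kappa) ** 2)
    r_te = -(root - c * kappa) / (root + c * kappa)
    r_tm = -(root - c * kappa * eps) / (root + c * kappa * eps)
    damp = np.exp(-2.0 * kappa * L)
    total = 0.0
    for r in (r_te, r_tm):              # sum over the two polarizations
        rho = r * r * damp              # rho = r_1 r_2 e^{-2 kappa L}
        total += kappa * rho / (1.0 - rho)
    return k * total                    # d^2k = 2 pi k dk (isotropy)

def eta_F(L):
    # F/A = hbar/(2 pi^2) Int k dk Int dxi sum_p kappa rho/(1-rho)
    val, _ = dblquad(integrand, 0.0, 50.0 / L,                # k, cut at 50/L
                     lambda k: 1e10, lambda k: 50.0 * c / L,  # xi, avoids xi=0
                     args=(L,))
    force_per_area = hbar / (2.0 * np.pi**2) * val
    ideal = np.pi**2 * hbar * c / (240.0 * L**4)              # perfect mirrors
    return force_per_area / ideal

print(eta_F(1e-6))    # close to 1: L >> lambda_P
print(eta_F(100e-9))  # clearly below 1: L comparable to lambda_P
```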
For metals used in recent experiments, the plasma wavelength lies around $0.1\mu$m ($136$nm for Au and Cu). At large distances $L\gg\lambda_{\rm P}$, the ideal Casimir formula is recovered ($\eta_{\rm F}\rightarrow 1$), as expected. At short distances, a significant reduction of the force is obtained, with a change in the power law for the variation of the force with distance. This change can be understood as the result of the Coulomb interaction of surface plasmons at the two vacuum interfaces [@GenetAFLdB04; @Henkel04]. This interpretation may actually be generalized to arbitrary distances at the price of a full electromagnetic
treatment of the plasmonic as well as ordinary photonic modes [@Intravaia05; @Intravaia07]. The plasma model is sufficient for a first description of the variation of the force with distance but it is not sufficient for a precise comparison.
First, the relaxation of the conduction electrons has to be accounted for. Then, interband transitions are reached for metals like Au, Cu or Al at photon energies of a few eV, and their effect on the optical response has to be taken into account when evaluating the Casimir force at short (sub-micron) distances. This can be done by using tabulated optical data which are integrated using causality relations [@Lambrecht00]. The result of the corresponding evaluation is shown in Fig.(\[fig:1\]). It is worth stressing that the calculations are sensitive to the existing differences between different tabulated sets of optical data [@Pirozhenko06]. This means that an imperfect knowledge of the optical properties of the mirrors used in the experiment is a source of uncertainty in the experiment-theory comparison. Ideally, if the aim is to have a reliable theoretical evaluation of the Casimir force to be compared with experiments, it is necessary to measure the reflection amplitudes *in situ*.
Silicon slab mirrors {#sec:5}
====================
As stressed
in the introduction, the relevance of the Casimir effect for nanosystems calls for a precise understanding not only of the influence of material optical properties on the Casimir force, but also of the influence of geometrical parameters, such as the thickness of the coatings [@IannuzziPNAS2004; @LisantiPNAS2005] or the thickness of the mirrors themselves. In this context, structures made of silicon, the reference material used in nano-fabrication processes, are particularly interesting to study [@LambrechtEPL2007; @Chen07].
The reflection amplitude corresponding to a slab of finite thickness $D$ is different from the bulk expression and is given through a Fabry-Perot formula $$\begin{aligned}
r_{\bf k}^{p}[i\xi]_{\rm slab}&=&r_{\bf k}^{p}[i\xi]\frac{1-e^{-2\delta}}{1-(r_{\bf k}^{p}[i\xi])^{2}e^{-2\delta}} \ \ , \nonumber \\
\delta &=&\frac{D}{c}\sqrt{\xi^{2}(\varepsilon[i\xi]-1)+c^{2}\kappa^{2}}. \label{Rslab}\end{aligned}$$ $r_{\bf k}^{p}[i\xi]$ is the bulk reflection amplitude given by (\[Fresnel\]). Using these reflection amplitudes to calculate the Casimir force between two Si slabs, interesting behaviours which differ from the situation of metallic mirrors have been noted [@LambrechtEPL2007]. In particular, it was shown that the material thickness has a stronger influence on the Casimir force for Si slabs than for Au slabs. For Si, the force decreases as soon as the slab separation $L$ is larger than the slab thickness $D$, as seen in Fig.(\[fig:2\]).
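The slab amplitude of Eq.(\[Rslab\]) is a one-line transcription in code; the sketch below (argument names are our own) also makes the two limiting regimes explicit: $\delta\rightarrow\infty$ recovers the bulk amplitude, while $\delta\rightarrow 0$ makes the slab transparent.

```python
# Sketch: Fabry-Perot slab reflection amplitude of Eq. (Rslab), built from a
# previously computed bulk Fresnel amplitude r_bulk (argument names are ours).
import numpy as np

def r_slab(r_bulk, xi, kappa, eps, D, c=2.99792458e8):
    delta = D / c * np.sqrt(xi**2 * (eps - 1.0) + (c * kappa) ** 2)
    damp = np.exp(-2.0 * delta)
    # delta -> inf gives r_bulk (opaque slab); delta -> 0 gives 0 (transparent)
    return r_bulk * (1.0 - damp) / (1.0 - r_bulk**2 * damp)
```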
In contrast to metals
which become perfect reflectors in the limit of zero frequency, Si is a semiconductor with a finite transverse plasma frequency $\omega_{0}$ corresponding to a cut-off wavelength $\lambda_{0}=2\pi c / \omega_{0}\sim 286$nm. For cavity lengths $L$ smaller than this cut-off wavelength, Si tends to become transparent. The associated optical thickness $\delta$ given in Eq.(\[Rslab\]) is large, so that the Si slab behaves like a bulk Si mirror with low reflectivity at high frequency. The Casimir force is then much smaller than the perfect reflection limit of Eq.(\[FEcas\]). On the other hand, at low frequencies $\omega\ll\omega_{0}$, one will have $\delta\ll 1$ together with $c\kappa\rightarrow 0$, low frequencies being predominant at large distances. In this latter case, the slab is transparent again, and the Casimir force between two Si slabs is decreased when $L\geq D$. This result can have interesting consequences for nanostructures, as it opens a way to control the magnitude of the Casimir force and possibly eliminate an unwanted Casimir source of stiction. From a fundamental point of view, it also offers a new solution for studying the comparison between experiment and theory of the Casimir force [@Chen07].
Geometry and the Casimir effect {#sec:6}
===============================
Geometry effects are expected to lead
to a rich variety of behaviours in Casimir physics [@Balian7778; @Plunien86; @Balian0304]. Recent advances make it possible to explore this interplay, from both the experimental and theoretical points of view. This also offers new possibilities for tailoring the Casimir force through specific designs [@EmigEPL03].
Force and energy evaluations between non planar mirrors are commonly obtained using the so-called proximity-force approximation (PFA) [@Derjaguin68; @Langbein71]. This approximation amounts to an averaging of plane-plane contributions over the distribution of local interplate separations defined by the chosen geometry. For the energy, the PFA leads to $$\begin{aligned}
E_{\rm PFA} = \int\frac{{\rm d}^{2}{\bf r}}{A}E_{\rm PP}\left(\ell\right) \ , \
\ell \equiv L-h_{1}({\bf r})-h_{2}({\bf r}), \label{PFArough}\end{aligned}$$ with $h_{1}({\bf r})$ and $h_{2}({\bf r})$ the surface profiles of the two mirrors. Such profiles can be described by their spectra evaluated over the surface $A$ of the mirrors $$\begin{aligned}
\int \frac{{\rm d}^{2} {\bf r}}{A}h_{i}({\bf r})h_{j}({\bf r})=\int\frac{{\rm d}^{2}{\bf k}}{(2\pi)^{2}}h_{i}[{\bf k}]h_{j}[-{\bf k}] \ , \ i,j=1,2 \end{aligned}$$ with $h_{i}[{\bf k}]$ the Fourier transform of $h_{i}({\bf r})$, and by the associated correlation lengths $\ell_{\rm C}$. When the deformation amplitudes are smaller than the other length scales, they can be considered as perturbations. A second-order expansion in the profiles can thus be performed, leading
to $$\begin{aligned}
E_{\rm PFA} = E_{\rm PP}+\frac{1}{2}\frac{\partial^{2}E_{\rm PP}}{\partial L^{2}}\int \frac{{\rm d}^{2} {\bf r}}{A}(h_{1}({\bf r})+h_{2}({\bf r}))^{2}. \label{PFAgen}\end{aligned}$$ The trivial first-order term has been discarded, assuming that the deformations have zero spatial averages $\int {\rm d}^{2}{\bf r}h_{i=1,2}({\bf r}) / A =0$.
The evaluation of the effect of geometry through the PFA, based on a summation procedure over local contributions, assumes some additivity property of the Casimir effect, whereas the Casimir force is known not to be additive. The PFA can only be accurate for surfaces which can be considered as nearly plane with respect to the other relevant scales, such as the separation distance $L$ [@GenetEPL03]. For example, it allows one to calculate the Casimir force in the plane-sphere (PS) configuration as $$\begin{aligned}
F_{\rm PS}=\frac{2\pi R}{A}E_{\rm PP}, \ \ \textrm{with} \ \ L\ll R, \label{PFAPS}\end{aligned}$$ where $E_{\rm PP}$ is the Casimir energy in the plane-plane (PP) geometry. Most recent experiments are performed in the plane-sphere geometry which is much simpler to control than the plane-plane configuration. The PFA is here expected to be valid provided the radius $R$ of the sphere is much larger than the distance $L$ of closest approach.
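Although the paper does not spell it out, Eq.(\[PFAPS\]) follows from a standard derivation sketch: applying Eq.(\[PFArough\]) to a sphere of radius $R$ at closest distance $L$ from a plane, and writing $\mathcal{E}_{\rm PP}=E_{\rm PP}/A$ for the energy per unit area, $$\begin{aligned}
E_{\rm PS}(L) = \int_{0}^{R}2\pi r\,{\rm d}r\ \mathcal{E}_{\rm PP}\left(L+R-\sqrt{R^{2}-r^{2}}\right)\simeq 2\pi R\int_{L}^{\infty}{\rm d}\ell\ \mathcal{E}_{\rm PP}(\ell),\end{aligned}$$ where the substitution $\ell=L+R-\sqrt{R^{2}-r^{2}}$ gives ${\rm d}\ell\simeq r\,{\rm d}r/R$ for $r\ll R$, and the upper bound has been extended to infinity since $\mathcal{E}_{\rm PP}$ decays rapidly with distance. The force $F_{\rm PS}=-\partial E_{\rm PS}/\partial L=2\pi R\,\mathcal{E}_{\rm PP}(L)$ then reproduces Eq.(\[PFAPS\]).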
But the PFA certainly fails for describing more general surface profiles. As far
as plate deformations are concerned, it can only be valid in the limit $\ell_{\rm C}\gg L$, which corresponds to a trivial specular description of the reflection process on the surfaces [@MaiaNetoPRA05]. For the general case, a description of non-specular scattering processes on the mirrors is available for analyzing the connection between geometry and the Casimir effect [@MaiaNetoPRA05]. An expression for the Casimir energy between parallel mirrors with arbitrary surface profiles has been derived in [@LambrechtNJP06; @RodriguesPRA2007] $$\begin{aligned}
E=\hbar\int\limits_{0}^{\infty}\frac{{\rm d}\xi}{2\pi}{\rm Tr} \ {\rm ln}\left(1-\rm{R}_{1}\left(i\xi\right)e^{-K\left(i\xi\right)L}\rm{R}_{2}\left(i\xi\right)e^{-K\left(i\xi\right)L}\right)\ \ \label{formule}\end{aligned}$$ This expression is based on non-specular reflection matrices $\rm{R}_{1}$ and $\rm{R}_{2}$ associated with each mirror. While the operator $e^{-K\left(i\xi\right)L}$, which corresponds to propagation of the field between the two mirrors, is diagonal in the plane-wave basis with elements given by $K(i\xi) =\sqrt{{\bf k}^2+\xi^2 / c^{2}}$, the two matrices $\rm{R}_{1}$ and $\rm{R}_{2}$ are non-diagonal in this basis. This corresponds to a mixing of polarizations and wavevectors, due to non-specular diffraction on the gratings formed by the profiles of the mirror surfaces.
As reviewed below, this formula (\[formule\]) has been used to evaluate the effect of surface roughness [@MaiaNetoPRA05] or corrugations [@RodriguesEPL2006; @RodriguesPRL2006] on the Casimir force. Analytical expressions have been derived through a
perturbative treatment, with the roughness or the corrugation amplitudes taken as the smallest length scales involved in the problem. The effect of the optical response of the metal has been included in these calculations. It is worth stressing that this formula has a wider range of validity. It can in principle describe structured plates with large corrugation amplitudes, as well as material properties not limited to a simple plasma model. The only task for a quantitative evaluation of the Casimir force or energy is to obtain the actual form of the reflection operators $\rm{R}_{1}$ and $\rm{R}_{2}$ to be inserted into Eq.(\[formule\]).
Roughness correction {#sec:7}
====================
A correction to the Casimir force that must be accounted for is the effect of surface roughness, intrinsic to any real mirror. This effect is analyzed in recent experiments through procedures based on the PFA [@BordagPLA95; @KlimPRA99]. The general formula (\[formule\]) has been used to go beyond this approximation [@MaiaNetoPRA05]. As already stressed, the roughness amplitude must be the smallest length scale for perturbation theory to hold. Meanwhile, the plasma wavelength, the mirror separation and the roughness correlation length may have arbitrary relative values with respect to each other.
We recall that the roughness profiles
are defined with respect to reference mirror planes separated by the mean distance $L$. We assume that profiles have zero averages and show no cross-correlations. We also suppose that the area $A$ of each plate is large enough to include many correlation areas ($A\gg \ell_{\rm C}^{2}$), so that surface averages are identical to statistical averages. Up to second order in the profiles, the correction to the Casimir energy may thus be written as follows $$\begin{aligned}
\delta E_{\rm PP}=\int\frac{{\rm d}^{2}{\bf k}}{(2\pi)^{2}}G_{\rm rough}[{\bf k}]\sigma[{\bf k}]. \label{beyondPFArug}\end{aligned}$$ Here $\sigma[{\bf k}]$ is the roughness spectrum summed over the two plates, and $G_{\rm rough}[{\bf k}]$ is the spectral sensitivity of the Casimir energy to roughness. Due to cylindrical symmetry with respect to rotations in the transverse plane, it only depends on $k=|{\bf k}|$. This dependence reveals that the roughness correction does not only depend on the root-mean-square (rms) roughness, but also on the spectral distribution of the roughness. Fig.(\[fig:3\]) displays $G_{\rm rough}[k]$ normalized by $E_{\rm PP}$, as calculated for Au-covered mirrors and for various interplate distances.
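To make the role of the correlation length concrete, consider for illustration a Gaussian correlation model (a common choice, not taken from the paper) $$\begin{aligned}
\langle h({\bf r})h(0)\rangle = \sigma_{\rm rms}^{2}\,e^{-r^{2}/\ell_{\rm C}^{2}} \ \ \Rightarrow \ \ \sigma[{\bf k}]=\pi\ell_{\rm C}^{2}\sigma_{\rm rms}^{2}\,e^{-k^{2}\ell_{\rm C}^{2}/4}.\end{aligned}$$ Inserted into Eq.(\[beyondPFArug\]), this spectrum shows that $\delta E_{\rm PP}$ samples $G_{\rm rough}[k]$ only for $k\lesssim 2/\ell_{\rm C}$; the correction therefore reduces to a pure mean-square-roughness dependence only when $G_{\rm rough}$ is essentially flat over this range, which is precisely the PFA situation discussed below.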
The rich behaviour of $G_{\rm rough}[k]$ as a function of the length scales is discussed in [@MaiaNetoEPL05].
What we want to stress here is that this
function describes deviations from the PFA. The width of the roughness spectrum $\sigma[{\bf k}]$ is indeed fixed by the inverse of the correlation length $\ell_{\rm C}$. When this spectrum is contained in the region where $G_{\rm rough}[k]$ remains close to its specular limit $G_{\rm rough}[0]$, we can approximate Eq.(\[beyondPFArug\]) as proportional to the mean-square roughness $$\begin{aligned}
\delta E_{\rm PP}\simeq G_{\rm rough}[0]\langle h_{1}^{2}+h_{2}^{2}\rangle. \label{PFArug}\end{aligned}$$ This corresponds effectively to the PFA expression, as a consequence of a theorem proved in [@MaiaNetoPRA05] $$\begin{aligned}
G_{\rm rough}[k\rightarrow 0]= \frac{1}{2}\frac{\partial^{2}E_{\rm PP}}{\partial L^{2}}.
\label{PFtheorem}\end{aligned}$$ Equation (\[PFtheorem\]) is nothing but a properly stated “Proximity Force Theorem”. It should however not be confused with the “Proximity Force Approximation” (\[PFArug\]), which is a good approximation only for smooth enough mirrors, that is, for large enough roughness correlation lengths $\ell_{\rm C}$.
In the general case, the PFA result (\[PFArug\]) underestimates the effect of roughness. When performing the experiment-theory comparison, one therefore has to carefully assess the roughness correction by measuring the roughness spectra *in situ* and using the roughness sensitivity function as given in [@MaiaNetoPRA05; @MaiaNetoEPL05]. The PFA can only be used if $\ell_{\rm C}$ has been proven to be large enough or, more loosely, when the
roughness correction has been estimated to have a negligible value.
Lateral force between corrugated plates {#sec:8}
=======================================
As the roughness effect remains a small correction to the Casimir force, it seems difficult to measure the deviation from the PFA regime and check its agreement with theory. Fortunately, there exists an experimental configuration offering more promising perspectives as a potential probe of the non-trivial interplay between the Casimir effect and geometry.
This configuration corresponds to periodically corrugated metallic plates placed face to face in vacuum, so that a lateral component of the Casimir force arises due to the breaking of transverse translational invariance [@Golestanian]. A recent experiment has demonstrated the feasibility of a lateral force measurement at separation distances of the order of $\sim 100$nm [@ChenPRL02]. Since it would disappear in the absence of corrugation, the lateral force should not be considered as a small correction to the otherwise dominant normal Casimir force, as was the case for the study of roughness. As we will see below, the deviation from the PFA appears as a factor in front of the lateral force, so that a precise measurement of this force would test in a crucial manner the interplay between the Casimir
effect and geometry [@RodriguesPRL2006]. As the experiments are performed at short distances, they cannot be described within the assumption of perfect reflection, for which analytical results are available [@EmigPRA03; @EmigPRL05]. Again, the general scattering formula (\[formule\]) makes it possible to estimate the lateral force for arbitrary relative values of the length scales $\lambda_{\rm C}$, $\lambda_{\rm P}$ and $L$, provided the corrugation amplitudes $a_{i=1,2}$ remain the smallest length scales of the problem.
We consider two metallic mirrors, both sinusoidally corrugated along one direction, with the same corrugation wavelength $\lambda_{\rm C}$, separated by a distance $L$ and facing each other with a relative spatial mismatch $b$ between the corrugation crests (see Fig.(\[fig:4\])).
The profiles $h_{i=1,2}({\bf r})$, ${\bf r}=(x,y)$, of the two uniaxial (along $y$) corrugated mirrors are defined by the two functions $h_{1} = a_{1}\cos\left(k_{\rm C}x\right)$ and $h_{2}=a_{2}\cos\left(k_{\rm C}\left(x-b\right)\right)$, with $k_{\rm C}= 2\pi / \lambda_{\rm C}$ the wavevector associated with the corrugation wavelength $\lambda_{\rm C}$. We take both profiles with zero spatial averages. At second order in the corrugations, cross-terms of the form $a_{1}a_{2}$ appear which contribute to the lateral force, because the energy depends on the transverse mismatch $b$.
This fact, a consequence of the correlation between the two
corrugation profiles, stands in contrast to the case of roughness, where the effect was associated with quadratic terms $h_{i=1,2}^{2}$. It implies that the evaluation of the lateral force only involves first-order non-specular amplitudes calculated on each mirror separately. The full calculation gives the second-order correction to the Casimir energy induced by the corrugations $$\begin{aligned}
\delta E_{\rm PP}=A\frac{a_{1}a_{2}}{2}\cos (k b) G_{\rm C}[k].
\label{secorder}\end{aligned}$$ The function $G_{\rm C}[{\bf k}]$ is given in [@RodriguesPRL2006] and depends only on the modulus $k$ of ${\bf k}$. Here again, the PFA regime is recovered in the $k\rightarrow 0$ limit: $$\begin{aligned}
G_{\rm C}[k\rightarrow 0]=\frac{1}{2} \frac{\partial^{2}E_{\rm PP}}{\partial L^{2}}.\end{aligned}$$ This theorem is ensured, for any specific model of the material medium, by the fact that $G_{\rm C}$ is given for $k\rightarrow 0$ by the specular limit of non-specular reflection amplitudes [@MaiaNetoPRA05].
In order to compare with experiments, we consider the expression of the lateral force in the plane-sphere configuration. It is derived from the plane-plane configuration using the PFA, reliable as long as $L\ll R$. In fact, there is no interplay between curvature and corrugation provided $RL\gg\lambda_{\rm C}^{2}$, a condition met in the experiment reported in [@ChenPRL02].
From Eq.(\[PFAPS\]), the lateral force in the plane-sphere
geometry is eventually given as [@RodriguesPRL2006] $$\begin{aligned}
F_{\rm PS}^{\rm lat}=-\frac{\partial }{\partial b}E_{\rm PS}^{\rm lat}=\pi a_{1}a_{2}kR\sin (kb )
\int_{L}^{\infty}{\rm d}L^{\prime}G_{\rm C}[k,L^{\prime}].\end{aligned}$$ The force is plotted in Fig.(\[fig:5\]) as a function of $k$, with the length scales $\lambda_{\rm C}$, $\lambda_{\rm P}$ and $L$ fitting the experimental values [@ChenPRL02]. As the corrugation amplitudes in the experiment are not small enough to meet the perturbation condition, theory and experiment can unfortunately not be compared directly. The plot in Fig.(\[fig:5\]) nevertheless shows the interesting fact that the length scales taken from the experiment, with $k$ indicated by the vertical dashed line, clearly fall outside the PFA sector of the perturbative calculation. For related implications, we refer the reader to the discussions in [@RodriguesPRA2007].
It appears clearly from the figure that the PFA overestimates the magnitude of the lateral force for arbitrary $k$. We also note that the PFA prediction for the force scales as $k$ when $k$ increases from zero. At larger values of $k$, in contrast, the lateral force decreases. This is due to the one-way propagation factor separating the two first-order non-specular reflections at each plate, given as a decreasing exponential $e^{-kL}$ in the high-$k$ limit [@RodriguesPRL2006]. It follows that there is a
maximal force when $k$ is varied. It corresponds to $k=9\times 10^{-3}$nm$^{-1}$ with the other length scales corresponding to the experiment. The ratio $L / \lambda_{\rm C} = 1 / \pi$ thus falls outside the PFA sector, which confirms that a lateral force measurement is an excellent candidate for probing deviations from the PFA.
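A crude back-of-the-envelope estimate (ours, not from the references) makes the position of this optimum plausible: interpolating between the small-$k$ behaviour $F_{\rm PS}^{\rm lat}\propto k$ and the large-$k$ suppression $e^{-kL}$, $$\begin{aligned}
F_{\rm PS}^{\rm lat}\sim k\,e^{-kL} \ \ \Rightarrow \ \ k_{\rm max}\simeq \frac{1}{L} \ \ \Leftrightarrow \ \ \frac{L}{\lambda_{\rm C}}\simeq\frac{1}{2\pi},\end{aligned}$$ within a factor of order unity of the value $L/\lambda_{\rm C}=1/\pi$ obtained from the full calculation.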
Torque {#sec:9}
======
Another interesting observable for exploring the non-trivial geometry dependence of the Casimir energy is the torque arising when the corrugations of the two plates are misaligned. With this angular mismatch between the corrugations, rotational symmetry is broken, which induces a restoring torque between the plates.
The calculations are quite similar to those for aligned corrugated surfaces, in particular because the same non-specular reflection coefficients are used to describe each plate. The second-order correction is still given by the sensitivity function $G_{\rm C}[{\bf k}]$, which depends only on the modulus of the corrugation wavevector ${\bf k}$. The difference with the lateral force case lies only in the fact that the corrugation profiles $h_{i=1,2}({\bf r})=a_{i}\cos ({\bf k}_{i}\cdot {\bf r} - kb_{i})$ correspond to different corrugation wavevectors ${\bf k}_{i=1,2}$, which however have the same modulus $k=2\pi /\lambda_{\rm C}$. The angular mismatch between ${\bf
k}_{1}$ and ${\bf k}_{2}$ is given by the angle $\theta$. The parameters $b_{i}$ represent lateral displacements with respect to the configuration with a line of maximum height at the origin. We assume that the corrugation $h_{2}$ is restricted to a rectangular section of area $L_{x}L_{y}$ centered at $x=b_{2},y=0$ and much larger than $L^{2}$, so that diffraction at the borders can be neglected. With these assumptions, and in the limit of long corrugation lines $kL_{y}\gg 1$ with $L_{x}$ smaller than or of the order of $L_{y}$, the energy correction per unit area is given in [@RodriguesEPL2006] as $$\begin{aligned}
\frac{\delta E_{\rm PP}}{L_{x}L_{y}}=\frac{a_{1}a_{2}}{2}G_{\rm C}[k]\cos (kb)\frac{\sin (kL_{y}\theta /2)}{kL_{y}\theta /2}.
\label{torque}\end{aligned}$$ The spatial coefficient $b=b_{2}\cos \theta -b_{1}$ is the relative lateral displacement along the direction ${\bf k}_{1}$. As expected by symmetry, this correction is invariant under the transformation $\theta \rightarrow - \theta$ and $\theta\rightarrow \pi -\theta$ due to the fact that the corrugation lines have no orientation. The case $\theta =0$ corresponds to the result of pure lateral displacement discussed in the preceding section.
The scale of the energy variation with $b$ and $\theta$ is set by the parameter $ \lambda_{\rm C} / L_{y}$. In fact, if plate $2$ is released after a rotation of $\theta
>\lambda_{\rm C} / L_{y}$, its subsequent motion is a combination of rotation and lateral displacements. Rotation is favored over lateral displacements for $\theta <\lambda_{\rm C} / L_{y}$ (see Fig.(1) in [@RodriguesEPL2006]). The torque $\tau = - \partial \delta E_{\rm PP} / \partial\theta $ is evaluated in [@RodriguesEPL2006] for corrugated Au mirrors, with corrugation amplitudes $a_{1}=a_{2}=14$nm, corrugation length $L_{y}=24\mu$m, and a separation distance $L=1\mu$m. It is maximum at $\theta = 0.66 \lambda_{\rm C} / L_{y}$ and is plotted in Fig.(\[fig:6\]) as a function of $k$. It starts increasing linearly with $k$ in the $k\rightarrow 0$ PFA sector and, for the same reason as the lateral force, decreases exponentially in the high-$k$ limit.
As is clear from Fig.(\[fig:6\]), the PFA overestimates the magnitude of the torque by a factor of order $2$ at the peak value of the torque. The discrepancy even increases with $k$, since smaller values of $k$ correspond to smoother surfaces. The conditions are thus met for a direct experimental demonstration of a non-trivial effect of geometry.
Fig.(\[fig:6\]) also displays the torque evaluated between perfectly reflecting corrugated mirrors [@EmigPRA03]. The corresponding deviation with respect to the calculation given by Eq.(\[torque\]) stresses that at
a separation distance of $L=1\mu$m, the optical response of the metal must be accounted for in an accurate evaluation of the torque. The perfect conductor limit is reached only if the plasma wavelength $\lambda_{\rm P}$ is the smallest length scale (apart from the corrugation amplitudes) of the problem.
Conclusion
==========
New perspectives for studying the interplay between the Casimir effect and geometry are today clearly visible. The theoretical formalism is increasingly well mastered, so that a rich variety of configurations can be studied. Meanwhile, novel experimental capabilities are available, allowing one to address challenging questions. Proposals have recently been made for measuring the torque between birefringent dielectric disks [@MundayPRA2005]. A measurement between metallic corrugated mirrors seems more easily accessible, with the torque turning out to be up to three orders of magnitude higher than the torque between dielectric plates, for comparable separation distances. At the same time, alternative routes are being explored to probe quantum vacuum geometrical effects [@RodriguezPRL2007]. Cold atom techniques also look particularly promising, as they should allow one to see deviations from the PFA on the lateral component of the Casimir-Polder force, with a Bose-Einstein condensate used as a local probe trapped close
to a corrugated surface [@DalvitPRL08]. These trends suggest that demonstrations of non-trivial effects of geometry should be within reach.
[99]{}
H.B.G. Casimir, Proc. K. Ned. Akad. Wet. **51**, 793 (1948)
M.J. Sparnaay, in *Physics in the Making* eds. A. Sarlemijn and M.J. Sparnaay (North-Holland, 1989) p. 235 and references therein
S.K.L. Lamoreaux, Phys. Rev. Lett. **78**, 5 (1997)
U. Mohideen and A. Roy, Phys. Rev. Lett. **81**, 4549 (1998)
B.W. Harris, F. Chen, and U. Mohideen, Phys. Rev. **A62**, 052109 (2000)
Th. Ederth, Phys. Rev. **A62**, 062104 (2000)
G. Bressi, G. Carugno, R. Onofrio, and G. Ruoso, Phys. Rev. Lett. **88**, 041804 (2002)
R.S. Decca, D. López, E. Fischbach, and D.E. Krause, Phys. Rev. Lett. **91**, 050402 (2003) and references therein
F. Chen, G.L. Klimchitskaya, U. Mohideen, and V.M. Mostepanenko, Phys. Rev. **A69**, 022117 (2004)
R.S. Decca, D. López, E. Fischbach, G.L. Klimchitskaya, D.E. Krause, and V.M. Mostepanenko, Annals Phys. **318**, 37 (2005)
R.S. Decca, D. López, E. Fischbach, G.L. Klimchitskaya, D.E. Krause, and V.M. Mostepanenko, Phys. Rev. **D75**, 077101 (2007)
P.W. Milonni, *The quantum vacuum* (Academic, 1994)
S.K. Lamoreaux, Resource Letter in Am. J. Phys. **67**, 850 (1999)
S. Reynaud, A. Lambrecht, C. Genet and M.T. Jaekel, C. R.
---
abstract: 'Process mining techniques focus on extracting insight in processes from event logs. In many cases, events recorded in the event log are too fine-grained, causing process discovery algorithms to discover incomprehensible process models or process models that are not representative of the event log. We show that when process discovery algorithms are only able to discover an unrepresentative process model from a low-level event log, structure in the process can in some cases still be discovered by first abstracting the event log to a higher level of granularity. This gives rise to the challenge to bridge the gap between an original low-level event log and a desired high-level perspective on this log, such that a more structured or more comprehensible process model can be discovered. We show that supervised learning can be leveraged for the event abstraction task when annotations with high-level interpretations of the low-level events are available for a subset of the sequences (i.e., traces). We present a method to generate feature vector representations of events based on XES extensions, and describe an approach to abstract events in an event log with Conditional Random Fields using these event features. Furthermore, we propose a sequence-focused metric to
evaluate supervised event abstraction results that fits closely to the tasks of process discovery and conformance checking. We conclude this paper by demonstrating the usefulness of supervised event abstraction for obtaining more structured and/or more comprehensible process models using both real life event data and synthetic event data.'
author:
- '\'
- '\'
bibliography:
- 'IEEEabrv.bib'
- 'bibliography.bib'
title: Event Abstraction for Process Mining using Supervised Learning Techniques
---
Process Mining, Event Abstraction, Probabilistic Graphical Models
Introduction
============
Process mining is a fast-growing discipline that combines knowledge and techniques from computational intelligence, data mining, process modeling and process analysis [@Aalst2011]. Process mining focuses on the analysis of event logs, which consist of sequences of real-life events observed from process executions, originating e.g. from ERP systems. An important subfield of process mining is process discovery, which is concerned with the task of finding a process model that is representative of the behavior seen in an event log. Many different process discovery algorithms exist ([@Aalst2004; @Gunther2007; @Werf2008; @Weijters2011; @Leemans2013]), and many different types of process models can be discovered by process discovery methods, including BPMN models, Petri nets, process trees, and statecharts.
![An excerpt of a “spaghetti”-like process model.[]{data-label="fig:spaghetti"}](spaghetti){width="48.00000%"}
As event logs are often not generated specifically for the application of process mining, the granularity of the events in the event log at hand might be too low. It is vital for successful application of process discovery techniques to have event logs at an appropriate level of abstraction. When the input event log is too low-level, process discovery techniques might produce a process model with one or more undesired properties. First of all, the resulting process model might be “spaghetti”-like, as shown in Figure \[fig:spaghetti\], consisting of an uninterpretable mess of nodes and connections. The aim of process discovery is to discover a structured, “lasagna”-like, process model as shown in Figure \[fig:lasagna\], which is much more interpretable than a “spaghetti”-like model. Secondly, the activities in the process model might have too specific, non-meaningful, names. Third, as we show in Section \[sec:motivating\_example\], process discovery algorithms are sometimes not able to discover a process model that represents the low-level event log well, while being able to discover a representative process model from a corresponding high-level event log. These problems illustrate the need for a method to abstract overly low-level event logs into higher-level event logs.
![A structured, or
“lasagna”-like, process model.[]{data-label="fig:lasagna"}](lasagna){width="25.00000%"}
Several methods have been explored within the process mining field that address the challenge of abstracting low-level events to higher-level events ([@Bose2009; @Gunther2010; @Dongen2010]). Existing event abstraction methods rely on unsupervised learning techniques that abstract low-level events into high-level events by clustering groups of low-level events into one high-level event. However, using unsupervised learning introduces two new problems. First, it is unclear how to label high-level events that are obtained by clustering low-level events. Current techniques require the user / process analyst to provide high-level event labels themselves based on domain knowledge, or generate long labels by concatenating the labels of all low-level events incorporated in the cluster. However, long concatenated labels quickly become unreadable for larger clusters, and it is far from trivial for a user to come up with sensible labels manually. In addition, unsupervised learning approaches to event abstraction give no guidance with respect to the desired level of abstraction. Many existing event abstraction methods contain one or more parameters to control the degree to which events are clustered into higher-level events. Finding the right level of abstraction providing meaningful results is often a matter of trial-and-error.
In some cases, training data
with high-level target labels of low-level events are available, or can be obtained, for a subset of the traces. In many settings, obtaining high-level annotations for all traces in an event log is infeasible or too expensive. Learning a supervised learning model on the set of traces where high-level target labels are available, and applying that model to other low-level traces where no high-level labels are available, allows us to build a high-level interpretation of a low-level event log, which can then be used as input for process mining techniques.
In this paper we describe a method for supervised event abstraction that enables process discovery from too fine-grained event logs. This method can be applied to any event log where high-level training labels of low-level events are available for a subset of the traces in the event log. We start by giving an overview of related work from the activity recognition field in Section \[sec:related\]. In Section \[sec:preliminaries\] we introduce basic concepts and definitions used throughout the rest of the paper. Section \[sec:motivating\_example\] explains the problem of not being able to mine representative process models from low-level data in more detail. In Section \[sec:features\] we describe a method
to automatically retrieve a feature vector representation of an event that can be used with supervised learning techniques, making use of certain aspects of the XES standard definition for event logs [@Gunther2014]. In the same section we describe a supervised learning method to map low-level events onto target high-level events. Sections \[sec:case\_1\] and \[sec:case\_2\] show the added value of the described supervised event abstraction method for process mining on a real-life event log from a smart home environment and on a synthetic log from a digital photocopier, respectively. Section \[sec:conclusion\] concludes the paper.
Related Work {#sec:related}
============
Supervised event abstraction is an unexplored problem in process mining. A related field is activity recognition within the field of ubiquitous computing. Activity recognition focuses on the task of detecting human activity from either passive sensors [@Kasteren2008; @Tapia2004], wearable sensors [@Bao2004; @Kwapisz2011], or cameras [@Poppe2010]. Activity recognition methods generally work on discrete time windows over the time series of sensor values and aim to map each time window onto the correct type of human activity, e.g. *eating* or *sleeping*. Activity recognition methods can be classified into probabilistic approaches [@Kasteren2008; @Tapia2004; @Bao2004; @Kwapisz2011] and approaches based on ontology reasoning [@Chen2009; @Riboni2011]. The
strength of probabilistic approaches compared to methods based on ontology reasoning is their ability to handle noise, uncertainty and incompleteness in sensor data [@Chen2009].
Tapia [@Tapia2004] was the first to explore supervised learning methods to infer human activity from passive sensors, using a naive Bayes classifier. More recently, probabilistic graphical models started to play an important role in the activity recognition field [@Kasteren2008; @Kasteren2007]. Van Kasteren et al. [@Kasteren2008] explored the use of Conditional Random Fields [@Lafferty2001] and Hidden Markov Models [@Rabiner1986]. Van Kasteren and Kr[ö]{}se [@Kasteren2007] applied Bayesian Networks [@Friedman1997] to the activity recognition task. Kim et al. [@Kim2010] found Hidden Markov Models to be incapable of capturing long-range or transitive dependencies between observations, which results in difficulties recognizing multiple interacting activities (concurrent or interwoven). Conditional Random Fields do not possess these limitations.
The main differences between existing work in activity recognition and the approach presented in this paper lie in the input data on which they can be applied and the generality of the approach. Activity recognition techniques consider the input data to be a multidimensional time series of sensor values over time, based on which time windows are mapped onto human activities. An appropriate time window
size is determined based on domain knowledge of the data set. In supervised event abstraction we aim for a generic method that works for XES event logs in general. A time-window-based approach contrasts with our aim for generality, as no single time window size will be appropriate for all event logs. Furthermore, the durations of the events within a single event log might differ drastically (e.g. one event might take seconds, while another event takes months), in which case time-window-based approaches will either miss short events in the case of larger time windows, or resort to very large numbers of time windows, resulting in very long computation times. Therefore, we map each individual low-level event to a high-level event and do not use time windows. In a smart home environment context with passive sensors, each change in a binary sensor value can be considered to be a low-level event.
Preliminaries {#sec:preliminaries}
=============
In this section we introduce basic concepts used throughout the paper.
We use the usual sequence definition, and denote a sequence by listing its elements, e.g. we write $\langle a_1,a_2,\dots,a_{n} \rangle$ for a (finite) sequence $s:\{1,\dots,n\}\to S$ of elements from some alphabet $S$, where
$s(i)=a_i$ for any $i \in \{1,\dots,n\}$.
XES Event Logs
--------------
We use the XES standard definition of event logs, an overview of which is shown in Figure \[fig:XES\_metamodel\]. XES defines an event *log* as a set of *traces*, each of which is in itself a sequence of *event*s. The log, traces and events can all contain one or more *attribute*s, which consist of a *key* and a *value* of a certain type. Event or trace attributes may be *global*, which indicates that the attribute needs to be defined for each event or trace respectively. A log contains one or more *classifier*s, which can be seen as labeling functions on the events of a log, defined on global event attributes. *Extension*s define a set of attributes on log, trace, or event level, in such a way that the semantics of these attributes are clearly defined. One can view XES extensions as a specification of attributes that events, traces, or event logs themselves frequently contain. XES defines the following standard extensions:
Concept
: [Specifies the generally understood name of the event/trace/log (attribute ’Concept:name’).]{}
Lifecycle
: [Specifies the lifecycle phase (attribute ’Lifecycle:transition’) that the event represents in a transactional model of their generating activity. The
*Lifecycle* extension also specifies a standard transactional model for activities.]{}
Organizational
: [Specifies three attributes for events, which identify the actor having caused the event (attribute ’Organizational:resource’), his role in the organization (attribute ’Organizational:role’), and the group or department within the organization where he is located (attribute ’Organizational:group’).]{}
Time
: [Specifies the date and time at which an event occurred (attribute ’Time:timestamp’).]{}
Semantic
: [Allows definition of an activity meta-model that specifies higher-level aggregate views on events (attribute ’Semantic:modelReference’).]{}
![XES event log meta-model, as defined in [@Gunther2014].[]{data-label="fig:XES_metamodel"}](XES_metamodel){width="0.95\linewidth"}
We introduce a special attribute of type *String* with key *label*, which represents a high-level version of the generally understood name of an event. The *concept* name of an event is then considered to be the low-level name of the event. The *Semantic* extension closely resembles the *label* attribute; however, by specifying relations between low-level and high-level events in a meta-model, the *Semantic* extension assumes that all instances of a low-level event type belong to the same high-level event type. The *label* attribute specifies the high-level label for each event individually, allowing for example one low-level event of low-level type *Dishes & cups cabinet* to be of high-level type *Taking medicine*, and another
low-level event of the same type to be of high-level type *Eating*. Note that for some traces high-level annotations might be available, in which case their events contain the *label* attribute, while other traces might not be annotated. Inferring the *label* attribute of unannotated traces from the information present in the annotated traces yields a high-level interpretation of those traces, which enables their use for process discovery and conformance checking at a high level.
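As a minimal sketch of how such attributes can be turned into features for a supervised learner (the attribute keys follow the XES standard extensions listed above; the particular encoding is our illustrative choice, not the exact feature definition of Section \[sec:features\]):

```python
# Sketch: encode one XES event as a feature dictionary using the standard
# extensions; the 'label' attribute, when present, is the learning target.
from datetime import datetime

def event_features(event: dict) -> dict:
    ts = event.get("time:timestamp")
    return {
        "concept:name": event.get("concept:name", "?"),          # low-level name
        "lifecycle:transition": event.get("lifecycle:transition", "?"),
        "org:resource": event.get("org:resource", "?"),
        "hour_of_day": ts.hour if isinstance(ts, datetime) else -1,
        "weekday": ts.weekday() if isinstance(ts, datetime) else -1,
    }

event = {"concept:name": "Dishes & cups cabinet",
         "time:timestamp": datetime(2016, 3, 1, 8, 30),
         "label": "Taking medicine"}  # high-level annotation (target)
print(event_features(event))
```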
Petri nets
----------
A process modeling notation frequently used as output of process discovery techniques is the Petri net. Petri nets are directed bipartite graphs consisting of transitions and places, connected by arcs. Transitions represent activities, while places represent the status of the system before and after execution of a transition. Labels are assigned to transitions to indicate the type of activity that they represent. A special label $\tau$ is used to represent invisible transitions, which are only used for routing purposes and do not represent any real activity.
\[def:lpn\] A labeled Petri net is a tuple $N=(P,T,F,R,\ell)$ where $P$ is a finite set of places, $T$ is a *finite set* of transitions such that $P \cap T = \emptyset$, $F \subseteq (P \times T) \cup (T \times P)$ is a set of directed arcs, called the flow relation, $R$ is a finite set of labels representing event types, $\tau \notin R$ is a label representing an invisible action, and $\ell:T\rightarrow R\cup \{\tau\}$ is a labeling function that assigns a label to each transition.
The state of a Petri net is defined w.r.t. the state that a process instance can be in during its execution. A state of a Petri net is captured by the marking of its places with tokens. In a given state, each place is either empty, or it contains a certain number of tokens. A transition is enabled in a given marking if all places with an outgoing arc to this transition contain at least one token. Once a transition fires (i.e. is executed), a token is removed from all places with outgoing arcs to the firing transition and a token is put in all places with incoming arcs from the firing transition, leading to a new state.
A marked Petri net is a pair $(N,M)$, where $N=(P,T,F,R,\ell)$ is a labeled Petri net and where $M \in \mathbb{B}(P)$ denotes the marking of $N$. For $n \in (P \cup T)$ we
use $\bullet n$ and $n \bullet$ to denote the set of inputs and outputs of n respectively. Let $C(s,e)$ indicate the number of occurrences (count) of element $e$ in multiset $s$. A transition $t\in T$ is enabled in a marking $M$ of net $N$ if $\forall p \in \bullet t : C(M,p)>0$. An enabled transition $t$ may fire, removing one token from each of the input places $\bullet t$ and producing one token for each of the output places $t\bullet$.
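The enabling and firing rules are easy to state operationally. The following sketch (our own illustration, not tooling from the paper) encodes markings as multisets of places and replays the topmost subnet of Figure \[fig:double\_flower\]:

```python
# Sketch: token-game semantics of a Petri net; markings are multisets (Counter).
from collections import Counter

class PetriNet:
    def __init__(self, pre, post):
        self.pre, self.post = pre, post   # input/output places per transition

    def enabled(self, marking, t):
        # enabled iff every input place holds at least one token
        return all(marking[p] > 0 for p in self.pre[t])

    def fire(self, marking, t):
        assert self.enabled(marking, t), f"{t} is not enabled"
        m = marking.copy()
        for p in self.pre[t]:
            m[p] -= 1                     # consume one token per input place
        for p in self.post[t]:
            m[p] += 1                     # produce one token per output place
        return m

# The 'Taking medicine' subnet (transitions t1..t5, places p1..p6)
net = PetriNet(
    pre={"t1": ["p1"], "t2": ["p2"], "t3": ["p3"], "t4": ["p4", "p5"], "t5": ["p6"]},
    post={"t1": ["p2", "p3"], "t2": ["p4"], "t3": ["p5"], "t4": ["p6"], "t5": ["p2", "p5"]},
)
m = Counter({"p1": 1})
for t in ["t1", "t2", "t3", "t4"]:
    m = net.fire(m, t)
print(+m)  # Counter({'p6': 1}): the final marking annotated with f
```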
Figure \[fig:double\_flower\] shows three Petri nets, with circles representing places and squares representing transitions. The black squares represent invisible transitions, or $\tau$ transitions. Places annotated with an **f** belong to the final marking, indicating that the process execution can terminate in this marking.
The topmost Petri net in Figure \[fig:double\_flower\] initially has one token in the place $p1$, indicated by the dot. Firing of silent transition $t1$ takes the token from $p1$ and puts a token in both $p2$ and $p3$, enabling both $t2$ and $t3$. When $t2$ fires, it takes the token from $p2$ and puts a token in $p4$. When $t3$ fires, it takes the token from $p3$ and puts a token in $p5$. After $t2$ and
$t3$ have both fired, resulting in a token in both $p4$ and $p5$, $t4$ is enabled. Executing $t4$ takes the token from both $p4$ and $p5$, and puts a token in $p6$. The **f** indicates that the process execution can stop in the marking consisting of this place. Alternatively, $t5$ can fire, taking the token from $p6$ and placing a token in both $p2$ and $p5$, which allows for execution of $MC$ and $W$ to reach the marking consisting of $p6$ again. We refer the interested reader to [@Reisig2012] for an extensive review of Petri nets.
Conditional Random Field {#sec:crf}
------------------------
We view the recognition of high-level event labels as a sequence labeling task in which each event is classified as one of the higher-level events from a high-level event alphabet. Conditional Random Fields (CRFs) [@Lafferty2001] are a type of probabilistic graphical model which has become popular in the fields of language processing and computer vision for the task of sequence labeling. A Conditional Random Field models the conditional probability distribution of the label sequence given an observation sequence using a log-linear model. We use Linear-chain Conditional Random Fields, a subclass of Conditional Random Fields that has been widely used
for sequence labeling tasks, which takes the following form:\
$p(y|x) = \frac{1}{Z(x)}\exp\left(\sum_{t=1}^{n}\sum_k\lambda_k f_k(t,y_{t-1},y_t,x)\right)$\
where $Z(x)$ is the normalization factor, $X=\langle x_1,\dots,x_n\rangle$ is an observation sequence, $Y=\langle y_1,\dots,y_n\rangle$ is the associated label sequence, and $f_k$ and $\lambda_k$ are feature functions and their weights, respectively. Feature functions, which can be binary or real valued, are defined on the observations and are used to compute label probabilities. In contrast to Hidden Markov Models [@Rabiner1986], feature functions are not assumed to be mutually independent.
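As an illustration of how a linear-chain CRF can be applied to the event abstraction task, the following sketch uses the third-party `sklearn-crfsuite` package (a common CRF implementation; the paper does not prescribe a specific library, and the toy features and labels below are placeholders):

```python
# Sketch: train a linear-chain CRF on annotated traces (feature dicts per
# event), then abstract an unannotated low-level trace to high-level labels.
import sklearn_crfsuite

X_train = [[{"concept:name": "Medicine cabinet"},
            {"concept:name": "Dishes & cups cabinet"},
            {"concept:name": "Water"}]]
y_train = [["Taking medicine", "Taking medicine", "Taking medicine"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100, all_possible_transitions=True)
crf.fit(X_train, y_train)

X_new = [[{"concept:name": "Medicine cabinet"}, {"concept:name": "Water"}]]
print(crf.predict(X_new))  # e.g. [['Taking medicine', 'Taking medicine']]
```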
Motivating Example {#sec:motivating_example}
==================
[Figure \[fig:double\_flower\]: three Petri nets showing the *Taking medicine* subprocess (top, transitions $t1$ to $t5$ over places $p1$ to $p6$), the *Eating* subprocess (bottom left), and the high-level process alternating between *Taking medicine* and *Eating* (bottom right).]
Figure \[fig:double\_flower\] shows on a simple example how a process can be structured at a high level | 1 | member_57 |
while this structure is not discoverable from a low-level log of this process. The bottom right Petri net shows the example process at a high-level. The high-level process model allows for any finite length alternating sequence of *Taking medicine* and *Eating* activities. The *Taking medicine* high-level activity is defined as a subprocess, corresponding to the topmost Petri net, which consists of low-level events *Medicine cabinet (MC)*, *Dishes & cups cabinet (DCC)*, and *Water (W)*. The *Eating* high-level event is also defined as a subprocess, shown in the bottom left Petri net, which consists of low-level events *Dishes & cups cabinet (DCC)* and *Cutlery drawer (CD)* that can occur an arbitrary number of times in any order and low-level event *Dishwasher (D)* which occurs exactly once, but at an arbitrary point in the *Eating* process.
When we apply the Inductive Miner process discovery algorithm [@Leemans2013] to low-level traces generated by the hierarchical process of Figure \[fig:double\_flower\], we obtain the process model shown in Figure \[fig:merged\_flower\]. The obtained process model allows for almost all possible sequences over the alphabet $\{CD,D,DCC,MC,W\}$, as the only constraint introduced by the model is that *DCC* and *D* are required to be executed starting from the initial | 1 | member_57 |
marking to end up with the same marking. Firing of all other transitions in the model can be skipped. Behaviorally this model is very close to the so called “flower” model [@Aalst2011], the model that allows for all behavior over its alphabet. The alternating structure between *Taking medicine* and *Eating* that was present in the high-level process in Figure \[fig:double\_flower\] cannot be observed in the process model in Figure \[fig:merged\_flower\]. This is caused by high variance in start and end events of the high-level event subprocesses of *Taking medicine* and *Eating* as well as by the overlap in event types between these two subprocesses.
![image](Capture_IM_Red_read){width="75.00000%"}
Had the event log consisted of the high-level *Eating* and *Taking medicine* events, process discovery techniques would have had no problem discovering the alternating structure in the bottom right Petri net of Figure \[fig:double\_flower\]. To discover the high-level alternating structure from a low-level event log it is necessary to first abstract the events in the event log. Through supervised learning techniques the mapping from low-level events to high-level events can be learned from examples, without requiring a hand-made ontology. Similar approaches have been explored in activity recognition in the field of ubiquitous computing, where low-level sensor signals are mapped to high-level activities from a human behavior perspective. The input data in this setting are continuous time series from sensors. Change points in these time series are triggered by low-level activities like *opening/closing the fridge door*, and the annotations of the higher-level events (e.g. *cooking*) are often obtained through manual activity diaries. In contrast to unsupervised event abstraction, the annotations in supervised event abstraction provide guidance both on how to label high-level events and on the target level of abstraction.
Event Abstraction as a Sequence Labeling Task {#sec:features}
=============================================
In this section we describe an approach to supervised abstraction of events based on Conditional Random Fields. Additionally, we describe feature functions on XES event logs in a general way by using XES extensions. Figure \[fig:overview\] provides a conceptual overview of the supervised event abstraction method. The approach takes two inputs, 1) a set of annotated traces, which are traces where the high-level event that a low-level event belongs to (the *label* attribute of the low-level event) is known for all low-level events in the trace, and 2) a set of unannotated traces, which are traces where the low-level events are not mapped to | 1 | member_57 |
high-level events. Conditional Random Fields are trained on the annotated traces to create a probabilistic mapping from low-level events to high-level events. This mapping, once obtained, can be applied to the unannotated traces in order to estimate the corresponding high-level event for each low-level event (its *label* attribute). Often sequences of low-level events in the traces with high-level annotations will have the same *label* attribute. We make the working assumption that no high-level events are executed in parallel. This enables us to interpret a sequence of identical *label* attribute values as a single instance of a high-level event. To obtain a true high-level log, we *collapse* sequences of events with the same value for the *label* attribute into two events with this value as *concept* name, where the first event has *lifecycle* 'start' and the second has *lifecycle* 'complete'. Table \[tab:collapse\] illustrates this collapsing procedure through an input and output event log.
  Case   Time:timestamp        Concept:name            label
  ------ --------------------- ----------------------- -----------------
  1      03/11/2015 08:45:23   Medicine cabinet        Taking medicine
  1      03/11/2015 08:46:11   Dishes & cups cabinet   Taking medicine
  1      03/11/2015 08:46:45   Water                   Taking medicine
  1      03/11/2015 08:47:59   Dishes & cups cabinet   Eating
  1      03/11/2015 08:47:89   Dishwasher              Eating
  1      03/11/2015 17:10:58   Dishes & cups cabinet   Taking medicine
  1      03/11/2015 17:10:69   Medicine cabinet        Taking medicine
  1      03/11/2015 17:11:18   Water                   Taking medicine
  Case   Time:timestamp        Concept:name      Lifecycle:transition
  ------ --------------------- ----------------- ----------------------
  1      03/11/2015 08:45:23   Taking medicine   Start
  1      03/11/2015 08:46:45   Taking medicine   Complete
  1      03/11/2015 08:47:59   Eating            Start
  1      03/11/2015 08:47:89   Eating            Complete
  1      03/11/2015 17:10:58   Taking medicine   Start
  1      03/11/2015 17:11:18   Taking medicine   Complete
\[tab:collapse\]
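The collapsing step itself is mechanical. A minimal sketch, with events modeled as plain dicts (keys mirror the table columns) rather than full XES objects:

```python
def collapse(events):
    """Collapse runs of events sharing a label into start/complete pairs.

    `events` is one trace: a list of dicts with keys 'case', 'timestamp'
    and 'label', ordered by timestamp.
    """
    collapsed, run = [], []
    for event in events:
        if run and event["label"] != run[-1]["label"]:
            collapsed.extend(_emit(run))   # a run ended; emit its two events
            run = []
        run.append(event)
    if run:
        collapsed.extend(_emit(run))
    return collapsed

def _emit(run):
    first, last = run[0], run[-1]
    return [
        {"case": first["case"], "timestamp": first["timestamp"],
         "concept": first["label"], "lifecycle": "start"},
        {"case": last["case"], "timestamp": last["timestamp"],
         "concept": last["label"], "lifecycle": "complete"},
    ]
```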
The method described in this section is implemented and available for use as a plugin for the ProM 6 [@Verbeek2010] process mining toolkit and is based on the GRMM [@Sutton2006] implementation of Conditional Random Fields.
We now show for each XES extension how it can be translated into useful feature functions for event abstraction. Note that we do not limit ourselves to XES logs that contain all XES extensions; when a log contains a subset of the extensions, a subset of the feature functions will be available for the supervised learning step. This approach leads to a feature space of unknown size, potentially causing problems related to the curse of dimensionality; therefore, we use L1-regularized Conditional Random Fields. L1 regularization causes the vector of feature weights to be sparse, meaning that only a small fraction of the features have a non-zero weight and are actually used by the prediction model. Since the L1-norm is non-differentiable, we use OWL-QN [@Andrew2007] to optimize the model.
![Conceptual overview of Supervised Event Abstraction.[]{data-label="fig:overview"}](overview_method){width="50.00000%"}
From a XES Log to a Feature Space
---------------------------------
### Concept extension
The low-level labels of the preceding events in a trace can contain useful contextual information for high-level label classification. Based on the n-gram of $n$ last-seen events in a trace, we can calculate the probability that the current event has a label $l$. A multinoulli distribution is estimated for each n-gram of $n$ consecutive events, based on the training data. The Conditional Random Field model requires feature functions with numerical range. A concept extension based feature function with two parameters, $n$ and $l$, is valued with the multinoulli-estimated probability of the current event having high-level label $l$ given the n-gram of the last $n$ low-level event labels.
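The multinoulli estimates behind these feature functions amount to conditional frequency counting on the annotated traces. A sketch (the n-gram here includes the current event; whether to include it is a design choice):

```python
from collections import Counter, defaultdict

def estimate_ngram_label_probs(traces, n):
    """Estimate P(high-level label | n-gram of low-level event names).

    `traces` is a list of annotated traces, each a list of
    (low_level_name, high_level_label) pairs.
    """
    counts = defaultdict(Counter)
    for trace in traces:
        names = [name for name, _ in trace]
        for t, (_, label) in enumerate(trace):
            ngram = tuple(names[max(0, t - n + 1): t + 1])
            counts[ngram][label] += 1
    return {ngram: {label: c / sum(ctr.values()) for label, c in ctr.items()}
            for ngram, ctr in counts.items()}
```

The feature function for parameters $(n, l)$ then simply looks up the estimated probability of $l$ for the n-gram observed at the current position (defaulting to zero, or a smoothed value, for unseen n-grams).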
### Organizational extension
Similar to the concept extension feature functions, multinoulli distributions can be estimated on the training set for n-grams of *resource*, *role*, or *group* attributes of the last $n$ events. Likewise, an organizational extension based feature function with three parameters, n-gram size $n$, $o\in\{resource,role,group\}$, and label | 1 | member_57 |
$l$, is valued with the multinoulli-estimated probability of label $l$ given the n-gram of the last $n$ event resources/roles/groups.
### Time extension
In terms of time, there are several potentially existing patterns. A certain high-level event might, for example, be concentrated in certain parts of a day, a week, or a month. This concentration cannot, however, be modeled with a single Gaussian distribution, as it might be the case that a high-level event has high probability to occur in the morning or in the evening, but low probability to occur in the afternoon in-between. Therefore we use a Gaussian Mixture Model (GMM) to model the probability of a high-level label $l$ given the timestamp. The Bayesian Information Criterion (BIC) [@Schwarz1978] is used to determine the number of components of the GMM, which gives the model an incentive not to combine more Gaussians in the mixture than needed. A GMM is estimated on training data, modeling the probabilities of each label based on the time passed since the start of the day, week or month. A time extension based feature function with two parameters, $t\in\{day,week,month,\dots\}$ and label $l$, is valued with the GMM-estimated probability of label $l$ given
the $t$ view on the event timestamp.
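Assuming a scikit-learn-style GMM is available, fitting such a density with BIC-based model selection looks as follows (a sketch; the actual plugin is Java-based):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_time_gmm(event_times, max_components=8):
    """Fit a GMM to 1-D event times (e.g. seconds since start of day) for
    one label, choosing the number of components by minimal BIC."""
    X = np.asarray(event_times, dtype=float).reshape(-1, 1)
    best, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
        bic = gmm.bic(X)
        if bic < best_bic:
            best, best_bic = gmm, bic
    return best

# Feature value for label l at time t: the density under l's fitted model,
# e.g. float(np.exp(gmm.score_samples([[t]]))[0]).
```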
### Lifecycle extension & Time extension
The XES standard [@Gunther2014] defines several lifecycle stages of a process. When an event log possesses both the lifecycle extension and the time extension, time differences can be calculated between different stages of the life cycle of a single activity. For a *complete* event for example, one could calculate the time difference with the associated *start* event of the same activity. Finding the associated *start* event becomes nontrivial when multiple instances of the same activity are in parallel, as it is then unknown which *complete* event belongs to which *start* event. We assume consecutive lifecycle steps of activities running in parallel to occur in the same order as the preceding lifecycle step. For example, when we observe two *start* events of an activity of type *A* in a row, followed by two *complete* events of type *A*, we assume the first *complete* to belong to the first *start*, and the second *complete* to belong to the second *start*.
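This first-in-first-out assumption is easy to implement with one queue of open *start* events per activity name (a sketch, using the same dict-based event encoding as before):

```python
from collections import defaultdict, deque

def match_lifecycles(events):
    """Pair each 'complete' with the earliest unmatched 'start' of the same
    activity, following the FIFO assumption described above."""
    open_starts = defaultdict(deque)
    pairs = []
    for event in events:                      # ordered by timestamp
        key = event["concept"]
        if event["lifecycle"] == "start":
            open_starts[key].append(event)
        elif event["lifecycle"] == "complete" and open_starts[key]:
            pairs.append((open_starts[key].popleft(), event))
    return pairs
```

The time difference within each returned pair is then the quantity on which the GMM below is estimated.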
We estimate a Gaussian Mixture Model (GMM) for each tuple of two lifecycle steps for a certain activity on the time differences between those two lifecycle steps for this | 1 | member_57 |
activity. A feature based on both the lifecycle and the time extension, with a label parameter $l$ and lifecycle $c$, is valued with the GMM-estimated probability of label $l$ given the time between the current event and lifecycle $c$. Bayesian Information Criterion (BIC) [@Schwarz1978] is again used to determine the number of components of the GMM.
Evaluating High-level Event Predictions for Process Mining Applications
-----------------------------------------------------------------------
Existing approaches in the field of activity recognition take as input time windows where each time window is represented by a feature vector that describes the sensor activity or status during that time window. Hence, evaluation methods in the activity recognition field are window-based, using evaluation metrics like the percentage of correctly classified time slices [@Tapia2004; @Kasteren2007; @Kasteren2008]. There are two reasons to deviate from this evaluation methodology in a process mining setting. First, our method operates on events instead of time windows. Second, the accuracy of the resulting high level sequences is much more important for many process mining techniques (e.g. process discovery, conformance checking) than the accuracy of predicting each individual minute of the day.
We use *Levenshtein similarity*, which expresses the degree to which two traces are similar using a metric based on the Levenshtein (edit) distance between them.
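A sketch of one standard way to turn edit distance into a similarity in $[0,1]$ (the exact normalization used in our evaluation may differ):

```python
def levenshtein(a, b):
    # Classic O(len(a)*len(b)) dynamic program over two sequences.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def levenshtein_similarity(a, b):
    # 1.0 for identical traces, 0.0 for maximally different ones.
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))
```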
---
abstract: |
In the design of incentive compatible mechanisms, a common approach is to enforce incentive compatibility as constraints in programs that optimize over feasible mechanisms. Such constraints are often imposed on sparsified representations of the type spaces, such as their discretizations or samples, in order for the program to be manageable. In this work, we explore limitations of this approach, by studying whether all dominant strategy incentive compatible mechanisms on a set $T$ of discrete types can be extended to the convex hull of $T$.
Dobzinski, Fu and Kleinberg (2015) answered the question affirmatively for all settings where types are single dimensional. It is not difficult to show that the same holds when the set of feasible outcomes is downward closed. In this work we show that the question has a negative answer for certain non-downward-closed settings with multi-dimensional types. This result should call for caution in the use of the said approach to enforcing incentive compatibility beyond single-dimensional preferences and downward closed feasible outcomes.
author:
- |
Taylor Lundy and Hu Fu\
University of British Columbia\
[email protected], [email protected]\
bibliography:
- 'ref.bib'
date: May 2019
title: Limitations of Incentive Compatibility on Discrete Type Spaces
| 1 | member_58 |
---
Introduction {#sec:intro}
============
Mechanism design studies optimization problems with private inputs from strategic agents. An agent’s input, known as her *type*, is her private information including her valuation for the social outcomes, which are to be decided upon by the mechanism. A mechanism needs to solicit such information to achieve certain goals, e.g. maximizing welfare, revenue, surplus or fairness measures. It needs to provide its participants with correct incentives, via both social outcomes and payments, so that the agents find it in their best interests to reveal their types.
Dominant strategy incentive compatibility (DSIC) is one of the strongest and most widely used solution concepts that guarantee such incentives. Under DSIC, every participant, no matter what type she possesses and no matter what types the other participants report to the mechanism, will maximize her utility by revealing her true type. Not only is this a strong guarantee for the mechanism designer that true information will be reported and optimized over, it also alleviates the burden of strategizing from the participating agents: telling the truth is a dominant strategy regardless of the other agents' types or strategies. Partly thanks to this strong incentive guarantee, the two fundamental auctions, namely, the
VCG auction that maximizes social welfare [@Vic61; @Clarke71; @Groves73], and Myerson’s auction that maximizes expected revenue for selling a single item [@Myerson81], have been foundational in both the theory and practice of mechanism design.
As the scope of mechanism design expands beyond the classical settings, incentive compatible mechanisms that are optimal for various objectives often lack the simple structures that characterize the VCG and Myerson's mechanisms. By and large, there have been two approaches to the design of incentive compatible mechanisms. The first approach focuses on classes of mechanisms that, by their simple structures, have obvious incentive guarantees. For example, in a multi-item auction, a sequential pricing mechanism puts prices on items and asks each agent in turn to choose her favorite items among those that remain; the bidders are not asked about their values, and choosing utility-maximizing items (according to their true values) is the obvious strategy to adopt (see, e.g., [@CHMS10]; [@CMS10]; [@FGL15]). Another example is to optimize over parameterized "VCG-like" mechanisms which inherit incentive properties from the VCG mechanism (e.g. [@sandholm2015automated]). This approach is often used to search for mechanisms whose performance is a factor away from being optimal, since the optimal mechanism or its very close approximations are
often not within the class of mechanisms being searched over.
The second approach forgoes structures that are easily interpretable, and exhaustively searches for the optimal mechanism. This is exemplified by solving mathematical programs (typically linear or convex programs) whose feasible regions contain the set of all incentive compatible mechanisms (see, e.g. [@conitzer2002complexity]; [@DFK15]; [@DDT17];[@FH18]). Typically, incentive requirements are hardwired as constraints in such programs.
Difficulty arises in the second approach when one would like to adopt strong incentive guarantees such as DSIC, which need at least one constraint per profile of types to specify. When the space of possible types is a continuum, this gives rise to uncountably many constraints. While this does not always make the program impossible to solve, it considerably complicates the task. One way to work around this is to discretize the type space and only impose incentive compatibility (IC) constraints on the set of discrete types used to represent the type space. Discretization is also embodied in the idea of a given prior distribution over a set of discrete types, on which the optimization can then be based (e.g. [@conitzer2002complexity]; [@DFK15]). The most common motivation for such prior distributions is that they naturally result from
samples (e.g. from past observations or market research) from an underlying distribution, whereas the true distribution may be supported on a continuum of types. This approach motivates the question we study in this work.
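To make the discretization approach concrete, the sketch below solves a toy instance with `scipy.optimize.linprog`: a single item, a single agent, and three discrete types with a hypothetical uniform prior. Variables are an allocation probability and a payment per discrete type; there is one IC constraint per ordered pair of types, plus IR constraints.

```python
import numpy as np
from scipy.optimize import linprog

types = [1.0, 2.0, 3.0]   # discrete valuations (hypothetical)
prior = [1/3, 1/3, 1/3]   # prior over the discrete types
n = len(types)

# Variables: x_0..x_{n-1} (allocation probabilities), p_0..p_{n-1} (payments).
c = np.zeros(2 * n)
c[n:] = -np.array(prior)  # maximize expected revenue = minimize -sum_i f_i p_i

A_ub, b_ub = [], []
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        # IC: t_i x_i - p_i >= t_i x_j - p_j, rewritten as a <= 0 constraint.
        row = np.zeros(2 * n)
        row[i], row[n + i] = -types[i], 1.0
        row[j], row[n + j] = types[i], -1.0
        A_ub.append(row); b_ub.append(0.0)
    # IR: t_i x_i - p_i >= 0.
    row = np.zeros(2 * n)
    row[i], row[n + i] = -types[i], 1.0
    A_ub.append(row); b_ub.append(0.0)

bounds = [(0, 1)] * n + [(0, None)] * n
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)
print(res.x[:n], res.x[n:])  # optimal grid mechanism (a posted price of 2 here)
```

The resulting mechanism is incentive compatible only on the three grid types; whether its behavior can be preserved by some mechanism that is DSIC on the whole interval is exactly the extension question studied in this work.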
#### Questions we study.
In this work we aim to answer the question: when one has a mechanism that is DSIC on a discretized subset of a type space, can one always find a mechanism that has the same behavior on the subset and yet is DSIC on the whole type space? To make the question more concrete, we study the natural case where the whole type space is the convex hull of the discrete subset. To make the presentation easier, in the following we denote by $\typespaces$ the discrete subset of types, and $\operatorname{Conv}(\typespaces)$ its convex hull.
We consider the question a fundamental one for the second approach to mechanism design that we described above. When one optimizes for a mechanism with IC constraints imposed only on $\typespaces$, if the resulting mechanism cannot be *extended* to the original type space, it loses incentive guarantees when put to use.
One objection may be that, given a mechanism that is DSIC on $\typespaces$, one may always run | 1 | member_58 |
it on $\operatorname{Conv}(\typespaces)$, by restricting the “bidding language”, so that a type in $\operatorname{Conv}(\typespaces)$ but not in $\typespaces$ has to report a type in $\typespaces$. Such mechanisms, however, may lose the incentive guarantee which makes DSIC mechanisms attractive in the first place. Unless one can show that agents with types not in $\typespaces$ have a dominant strategy in such mechanisms, such agents need to strategize over which types in $\typespaces$ to report, depending on the types and strategies of their opponents. In the scenario where $\typespaces$ is a set of samples from a continuous distribution, the vast majority of types may not be in $\typespaces$ and have no incentive guarantee, which is clearly undesirable. In settings where the agents’ types are single dimensional, @DFK15 showed that “restricting the bidding language” does turn any mechanism DSIC on $\typespaces$ into a mechanism DSIC on any superset of $\typespaces$: each type in the superset has a dominant strategy, and by the revelation principle this gives rise to a DSIC mechanism that extends the given mechanism’s behavior on $\typespaces$. To the best of our knowledge, no such guarantees are known beyond single dimensional settings.
#### Our Results.
For agents with multi-dimensional types, we first | 1 | member_58 |
give a condition under which any DSIC mechanism on $\typespaces$ can be extended to a DSIC mechanism on $\operatorname{Conv}(\typespaces)$, via an argument that is different from @DFK15’s yet still straightforward (Theorem \[thm:swap\]). In particular, the condition is satisfied whenever the set of feasible outcomes is *downward closed* (Theorem \[thm:downward\]).
Our main result, however, is a construction of a set $\typespaces$ of multi-dimensional types and a DSIC mechanism on it, for which we show that no DSIC mechanism on $\operatorname{Conv}(\typespaces)$ can output the same social outcomes on types in $\typespaces$. The impossibility result stands even if the extension mechanism is allowed to be randomized. This shows that, without conditions such as single-dimensional types or a downward closed set of feasible outcomes, designing incentive compatible mechanisms by focusing on a discrete subset of types can be a questionable approach to designing mechanisms for the whole type space: there may not be any mechanism DSIC on the whole type space which behaves the same way on the subset.
Near the end, we give a multi-dimensional setting where the expected *revenue* of a mechanism with only correct incentives for a set $\typespace$ of types can be unboundedly more than the revenue of a mechanism | 1 | member_58 |
for $\operatorname{Conv}(\typespace)$. This example is much less involved than our main result, because revenue-optimal mechanisms are meaningful only when they do not overcharge any reported type and guarantee non-negative utility. This constraint can be much more stringent when imposed for all types in $\operatorname{Conv}(\typespace)$ than for $\typespace$ only.
Related Works {#sec:related}
-------------
For multi-dimensional preferences, an allocation rule for any fixed agent is implementable if and only if it satisfies the so-called *cyclic monotonicity* property [@Rochet87]. When the type space is convex, it turns out the weaker condition of *weak monotonicity* suffices for implementability [@SY05; @AK14]. It is notable that the two solution concepts we compare in Section \[sec:mapping\] precisely correspond to the case where the type space is convex and that where it is not. However, nowhere in our arguments do we make use of this beautiful fact.
Another closely related body of work is the literature on automated mechanism design. In automated mechanism design, mechanisms are optimized for the setting and objective automatically, using information about agents' type distributions. When this line of work was introduced by @conitzer2002complexity, the input for this problem was an explicit description of the agents' distributions; recent work has moved towards replacing this explicit description with samples from the agents' type distribution [@likhodedov2004methods; @likhodedov2005approximating; @sandholm2015automated]. Our work highlights how interpolating between discrete samples can affect not only the objective but the implementability of the mechanism itself. Luckily, the research on sample-based automated mechanism design is able to avoid the pitfalls of only having discrete samples. It does this either by optimizing over parameterized families of mechanisms which are guaranteed to be implementable on the entire type space, or by working in settings where the addition of new types has no effect on the objective (i.e., downward closed settings; see [@sandholm2015automated; @guo2010computationally; @balcan2018; @morgenstern2016learning]). However, our work points out some difficulties that might arise if one wishes to take a more general, non-parameterized approach to automated mechanism design in settings which are not downward closed.
There are a variety of well-studied settings that are not downward closed in which extending a type space to its convex hull could cause problems. One commonly studied non-downward-closed setting arises from the job scheduling problem introduced in @nisan2001algorithmic and later built upon by @schedule2 and @ashlagi2012optimal: in this problem every job must eventually be scheduled, so the set of feasible solutions is not downward closed. Another example of a non-downward-closed setting is one-sided matching markets, in which every agent must be matched with exactly one good. An example of a one-sided matching market is the fair housing allocation studied by @matchmarket. Finally, the facility location problem from @devanur2005strategyproof is also not downward closed.
Preliminaries {#sec:prelim}
=============
We consider a setting with $N$ agents where each agent $i$ has a private type $\typei$ from her type space $\typespacei \subseteq \mathbb R_+^m$. The type profile $\types = (\typei[1], \ldots, \typei[N])$ denotes the vector of all agents’ types, from the joint type space $\typespaces \coloneqq \prod_i \typespacei$.
We adopt the standard shorthand notation to write $\typesmi \coloneqq (\type_{1}, \ldots, \type_{i-1}, \type_{i+1}, \ldots, \type_{n})$ from $\typespacesmi \coloneqq \prod_{j \neq i} \typespace_{j}$.
An outcome (or, interchangeably, allocation) for agent $i$ lies in $\mathbb R_+^m$; for an outcome $\alloci$, the agent with type $\typei$ has value $\langle \typei, \alloci \rangle = \sum_{j = 1}^m \type_{ij} \alloc_{ij}$. A social outcome is denoted by a vector $(\alloci[1], \ldots, \alloci[N]) \in \mathbb R_+^{mN}$. The set of all feasible social outcomes (or allocations) is denoted ${\mathscr{F}}\subseteq \mathbb R_+^{mN}$.
For example, in a single-item auction, $m = 1$, each $\typei \in \mathbb R_+$ represents agent $i$'s value for the item, and ${\mathscr{F}}\subseteq \mathbb R_+^N$ consists of the all-zero vector (representing not selling) and the $N$ standard basis vectors (each representing selling to the corresponding agent). As another example, in an $m$-unit auction with unit-demand buyers, ${\mathscr{F}}$ is the set of all integral points in $\{(\alloci[1], \ldots, \alloci[N]) \in \mathbb R_+^{mN} \mid \sum_{i = 1}^N \alloci[ij] \leq 1, j = 1, 2, \ldots, m\}$.
#### Mechanisms.
A (direct revelation) mechanism consists of an *allocation rule* $\allocs: \typespaces \to {\mathscr{F}}$ and a *payment rule* $\pays: \typespaces \to \mathbb R_+^N$. The mechanism elicits type reports from the agents, and on reported type profile $\types$, decides on an allocation $\allocs(\types) \in {\mathscr{F}}$, with each agent $i$ making a payment of $\payi(\types)$. In general, allocation rules can be randomized, in which case $\allocs(\types)$ is a randomized variable supported on ${\mathscr{F}}$. $\allocs(\cdot)$ induces allocation rule $\alloci(\cdot)$ for each agent $i$: for all $\types$, $\alloci(\types) \in \mathbb R_+^m$ is the vector consisting of the $[(i-1)m + 1]$-st to the $im$-th coordinates in $\allocs(\types)$. When $\allocs(\types)$ is a random variable, so are $\alloci(\types)$’s.
When $\allocs(\types)$ is deterministic, we write $\allocs(\types) = \mathbf{y} \in {\mathscr{F}}$ as a shorthand for $\operatorname{\mathbf{Pr}}\left[ \allocs(\types) = \mathbf y \right] = 1$.

Agents have quasi-linear utilities, that is, when reporting type $\typei'$, agent $i$'s utility is $\operatorname{\mathbf E}\left[ \langle \typei, \alloci(\typei', \typesmi) \rangle \right] - \payi(\typei', \typesmi)$ (where the expectation is taken over the randomness in $\allocs(\types)$).
A mechanism is *dominant strategy incentive compatible* (DSIC) if, for all $\types \in \typespaces$ and for all $\typei' \in \typespacei$, $$\operatorname{\mathbf E}\left[ \langle \typei, \alloc_{i}(\typei, \typesmi) \rangle \right] - \payi(\typei, \typesmi) \geq \operatorname{\mathbf E}\left[ \langle \typei, \alloc_{i}(\typei', \typesmi) \rangle \right] - \payi(\typei', \typesmi).$$ An allocation rule $\allocs$ is said to be DSIC implementable or simply DSIC if there is a payment rule $\pays$ such that $(\allocs, \pays)$ is a DSIC mechanism. In this case, we say $\allocs$ is implemented by payment rule $\pays$.
#### Extensions.
Given a subset $S \subseteq \mathbb R^n$, we denote by $\operatorname{Conv}(S)$ the convex hull of $S$.
\[def:extension\] An allocation rule $\extallocs: \operatorname{Conv}(\typespaces) \to {\mathscr{F}}$ is an *extension* of an allocation rule $\allocs: \typespaces \to {\mathscr{F}}$ if for all $\types \in \typespaces$, $\extallocs(\types)$ has the same distribution as $\allocs(\types)$. Similarly, a payment rule $\extpays: \operatorname{Conv}(\typespaces) \to \mathbb R_+^N$ is an extension of payment rule $\pays: \typespaces \to \mathbb R_+^N$ if for all $\types \in \typespaces$, $\extpays(\types) = \pays(\types)$.
In Definition \[def:extension\], if $\allocs(\cdot)$ is deterministic, then $\extallocs(\cdot)$ being an extension simply means $\extallocs(\types) = \allocs(\types)$ for all $\types \in \typespaces$.
#### Downward closed settings.
The feasible allocation set ${\mathscr{F}}$ is *downward closed* if $\mathbf y \in {\mathscr{F}}$ entails $\allocs \in {\mathscr{F}}$ for all $\allocs \preceq \mathbf y$, where $\allocs \preceq \mathbf y$ denotes $\alloci[j] \leq y_j$ for $j = 1, \cdots, mN$.
#### Weak monotonicity.
A well-known necessary condition for an allocation rule to be DSIC implementable is weak monotonicity:
An allocation rule $\allocs: \typespaces \to {\mathscr{F}}$ is *weakly monotone* if for each agent $i$, any $\typei, \typei' \in \typespacei$ and $\typesmi \in \typespacesmi$, $$\begin{aligned}
\operatorname{\mathbf E}\left[ \langle \typei - \typei', \alloci(\typei,\typesmi) - \alloci(\typei', \typesmi) \rangle \right] \geq 0.
\end{aligned}$$
An allocation rule is implementable only if it is weakly monotone.
In fact, @SY05 showed that, if $\typespaces$ is convex, then weak monotonicity is also a sufficient condition for DSIC implementability.
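For finite type spaces, weak monotonicity can be verified mechanically. A sketch for deterministic rules, with the rule given as a dict from type profiles to per-agent allocation vectors (all names are hypothetical helpers):

```python
import numpy as np

def is_weakly_monotone(alloc, type_spaces):
    """Check <t_i - t_i', x_i(t_i, t_-i) - x_i(t_i', t_-i)> >= 0 for every
    agent i, pair of own types, and opponent profile.

    `alloc[profile]` is a tuple of allocation vectors, one per agent;
    `type_spaces[i]` lists agent i's (tuple-encoded) types, and `alloc`
    is assumed to cover the full product of the type spaces.
    """
    for prof in alloc:
        for i in range(len(type_spaces)):
            for t_alt in type_spaces[i]:
                alt = prof[:i] + (t_alt,) + prof[i + 1:]
                diff_t = np.subtract(prof[i], t_alt)
                diff_x = np.subtract(alloc[prof][i], alloc[alt][i])
                if float(np.dot(diff_t, diff_x)) < -1e-9:
                    return False
    return True
```

Randomized rules are handled the same way with expected allocations in place of $\alloci$.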
#### Revenue.
A mechanism $(\allocs, \pays)$ is ex post *individually rational* (IR) if for each agent $i$ and for every $\types \in \typespaces$, $\langle \typei, \alloc_{i}(\types) \rangle - \payi(\types) \geq 0$.
Given a distribution $D$ on $\typespaces$ and a mechanism $(\allocs, \pays)$ that is DSIC and ex post IR, the *expected revenue* of the mechanism is $\operatorname{\mathbf E}_{\types \sim D}\left[ \sum_i \payi(\types) \right]$. The *optimal* revenue is the maximum expected revenue achievable among all DSIC, ex post IR mechanisms.
DSIC Convex Extensions {#sec:map}
======================
\[sec:mapping\] Before presenting our main result on the impossibility of extending DSIC allocation rules, we first complement @DFK15's result in the single-dimensional setting with a simple observation for multi-dimensional preference settings:
whenever the feasible allocation space is downward closed, any DSIC allocation rule on a type space can be extended to its convex hull by another DSIC allocation rule.
\[thm:downward\] If the set of feasible allocations ${\mathscr{F}}$ is downward closed, for any DSIC allocation rule $\allocs$ on a type space $\typespaces$, there is a DSIC extension $\extallocs$ of $\allocs$ on $\operatorname{Conv}(\typespaces)$. If $\allocs$ is implemented with a payment rule $\pays$, $\extallocs$ can be implemented by an extension $\extpays$ of $\pays$. If $\pays$ is individually rational on $\typespaces$, so is $\extpays$ on $\operatorname{Conv}(\typespaces)$.
If we do not require the statement about individual rationality, extensibility is guaranteed by an even weaker condition, which we call *single swap feasible*.
A feasible allocation set ${\mathscr{F}}$ is *single swap feasible* (SSF) if for every agent $i$ there exists an allocation ${\boldsymbol{x}^{\mathrm{ssf}}}(i) \in {\mathscr{F}}$ such that for any $\allocs' \in {\mathscr{F}}$, $(\alloci',{\boldsymbol{x}^{\mathrm{ssf}}}_{-i}(i)) \in {\mathscr{F}}$.
Intuitively, ${\boldsymbol{x}^{\mathrm{ssf}}}(i)$ is a feasible allocation vector such that if we replace its $i^{\text{th}}$ element with the $i^{\text{th}}$ element from any other feasible allocation, the resulting allocation is still feasible. If ${\mathscr{F}}$ is a product space or is downward closed, it must be SSF. [^1]
\[thm:swap\] If the set of feasible allocations ${\mathscr{F}}$ is SSF, then for any DSIC allocation rule $\allocs$ on a type space $\typespaces$, there is a DSIC extension $\extallocs$ of $\allocs$ on $\operatorname{Conv}(\typespaces)$.
The proofs of both Theorem \[thm:downward\] and Theorem \[thm:swap\] can be found in the supplementary materials. The main result of this paper is that without such a condition a DSIC extension may not exist.
\[thm:rand\] There is a two agent type space $\randtypespaces$ with a DSIC allocation rule $\randallocs$, such that $\randallocs$ cannot be extended by a DSIC allocation rule to $\operatorname{Conv}(\randtypespaces)$.
We prove the theorem in two steps. We first present a setting with three-dimensional preferences for which we show the non-existence of *deterministic* extensions. We then build on the construction, lifting it to a higher dimension, where we strengthen the argument and show the non-existence of extensions that even allow randomization.
Non-existence of deterministic extensions
-----------------------------------------
We first present type space $\dettypespaces = \dettypespacei[1] \times \dettypespacei[2]$ and the allocation rule $\detallocs$, and then show that $\detallocs$ is DSIC and yet cannot be extended by any deterministic DSIC allocation rule on $\operatorname{Conv}(\dettypespaces)$.
The two agents have identical type spaces: for $i = 1, 2$, $\dettypespacei = \dettypespace \coloneqq \{A = [1, 0, 0], B = [0, 1, 0], C = [0, 0, 1], D = [\tfrac 1 3, \tfrac 1 3, \tfrac 1 3]\}$. A visual representation of this type space and its convex hull can be found in Figure 1 in the supplementary materials.
The allocation rule $\detallocs$ is also symmetric, in the sense that $\detalloci[1](\typei[1], \typei[2]) = \detalloci[2](\typei[2], \typei[1])$ for any $\typei[1], \typei[2] \in \dettypespace$. We summarize $\detalloci[1]$ with the diagram below. The rows are indexed by agent $1$’s own type $V_1$, and the columns by agent $2$’s type $V_2$: $$\begin{aligned}
\begin{blockarray}{c cccc}
& A & B & C & D \\
\begin{block}{c(cccc)}
A & [1,1,0] & [2,0,2] & [3,0,3] & [4,0,4] \\
B & [0,1,1] & [2,2,0] & [3,3,0] & [4,4,0] \\
C & [1,0,1] & [0,2,2] & [0,3,3] & [0,4,4] \\
D & [0,1,1] & [2,2,0] & [3,3,0] & [4,4,0] \\
\end{block}
\end{blockarray} \end{aligned}$$
The set of all feasible allocations is then ${\mathscr{F}}= \{(\detalloci[1](V_1, V_2), \detalloci[2](V_1, V_2) )\}_{V_1, V_2 \in \dettypespace}$. We hasten to point out that ${\mathscr{F}}$ is *not* the product between the two agents' respective sets of feasible allocations. For example, $[1, 1, 0, 1, 1, 0]$ is in ${\mathscr{F}}$ as it is $\detallocs(A, A)$, but $[1, 1, 0, 0, 1, 1]$ is not.
This is important for the proof.
$\detallocs$ is DSIC implementable.
Let the payment be 0 for both agents and all type profiles. As the allocation and payment rules are both symmetric, consider either agent $i$. If $\typei[-i] = A$, the maximum value agent $i$ could get, when her type is $A$, $B$, or $C$, is 1, attained with truthful bidding. For $\typei = D$, the four allocations all give the same value $\tfrac 2 3$. Similar arguments hold when $\typei[-i]$ is $B$, $C$ or $D$.
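The claim is also easy to confirm by brute force; a sketch encoding the table above and checking, for agent $1$ (agent $2$ follows by symmetry), that truthful reporting maximizes value under zero payments:

```python
import numpy as np

TYPES = {"A": (1, 0, 0), "B": (0, 1, 0), "C": (0, 0, 1),
         "D": (1/3, 1/3, 1/3)}
X1 = {  # x^det_1(V_1, V_2), transcribed from the diagram above
    ("A", "A"): (1, 1, 0), ("A", "B"): (2, 0, 2), ("A", "C"): (3, 0, 3), ("A", "D"): (4, 0, 4),
    ("B", "A"): (0, 1, 1), ("B", "B"): (2, 2, 0), ("B", "C"): (3, 3, 0), ("B", "D"): (4, 4, 0),
    ("C", "A"): (1, 0, 1), ("C", "B"): (0, 2, 2), ("C", "C"): (0, 3, 3), ("C", "D"): (0, 4, 4),
    ("D", "A"): (0, 1, 1), ("D", "B"): (2, 2, 0), ("D", "C"): (3, 3, 0), ("D", "D"): (4, 4, 0),
}

for v2 in TYPES:
    for v1 in TYPES:
        truthful = np.dot(TYPES[v1], X1[(v1, v2)])
        best = max(np.dot(TYPES[v1], X1[(r, v2)]) for r in TYPES)
        assert truthful >= best - 1e-9, (v1, v2)
print("x^det is DSIC with zero payments for agent 1.")
```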
\[thm:det\] There exists no deterministic DSIC extension of $\detallocs$.
Before proving Theorem \[thm:det\], we make several preparatory observations.
A key difficulty with multi-dimensional preferences is the lack of a payment identity à la Myerson [@Myerson81]. In order to argue that any extension of an allocation rule is not DSIC, one has to either check the many cyclic (or weak) monotonicity conditions, or show that no payment rule can support the extension in a DSIC mechanism. We designed $\dettypespaces$ and $\detallocs$ carefully so that the allocations "lock" the payment rules.
\[lem:equal-pay\] For any allocation rule $\extallocs$ that is an extension of $\detallocs$, if $\extallocs$ can be implemented by a DSIC mechanism with payment rule $\pays$, | 1 | member_58 |
then for any $V \in \{A, B, C, D\}$, $\payi[1](A, V) = \payi[1](B, V) = \payi[1](C, V) = \payi[1](D, V)$, and $\payi[2](V, A) = \payi[2](V, B) = \payi[2](V, C) = \payi[2](V, D)$.
We prove the lemma for agent $1$, and the statement for agent $2$ follows by symmetry.
By DSIC, for any $V \in \dettypespace$, we have $$\begin{aligned}
\langle A , \detalloci[1](A,V)\rangle & -\payi[1](A,V) \\
& \geq \langle A , \detalloci[1](C,V)\rangle -\payi[1](C,V); \\
\langle B , \detalloci[1](B,V)\rangle & -\payi[1](B,V) \\
& \geq \langle B , \detalloci[1](A,V)\rangle -\payi[1](A,V);
\\
\langle C, \detalloci[1](C,V)\rangle & -\payi[1](C,V) \\
& \geq \langle C , \detalloci[1](B,V)\rangle -\payi[1](B,V).\end{aligned}$$ Note that $$\begin{aligned}
\langle A , \detalloci[1](A,V)\rangle = \langle A , \detalloci[1](C,V)\rangle; \\
\langle C , \detalloci[1](C,V)\rangle = \langle C , \detalloci[1](B,V)\rangle; \\
\langle B , \detalloci[1](B,V)\rangle = \langle B , \detalloci[1](A,V)\rangle.
\end{aligned}$$ Therefore $\payi[1](A,V) \leq \payi[1](C,V) \leq \payi[1](B, V) \leq \payi[1](A, V)$. Hence all inequalities are tight and we have $\payi[1](C,V) = \payi[1](A,V) = \payi[1](B,V)$.
For type $D$’s payment, note that $\detalloci[1](B, V)= \detalloci[1](D, V)$. If $\payi[1](B, V) \neq \payi[1](D, V)$, one of $B$ and $D$ must be incentivized to misreport the other type. Therefore $\payi[1](B, V) = \payi[1](D, V)$.
In figure 2 in the supplementary materials we | 1 | member_58 |
give a $3$ dimensional visualization of the agent’s value for each allocation which gives intuition into the proof of \[lem:equal-pay\].
\[lem:fixed-menu\] If $\extallocs$ is a deterministic DSIC extension of $\detallocs$, then for $\typei[1] = \frac{1}{3}A + \frac{1}{3}B + \frac{1}{3}D$ and any $V_2 \in \dettypespace$, $\extalloci[1](\typei[1], V_2) \in \{\extalloci[1](A, V_2), \extalloci[1](B, V_2), \extalloci[1](C, V_2), \extalloci[1](D, V_2)\}$.
For the sake of contradiction, assume $\extallocs$ is a DSIC extension of $\detallocs$, implementable by payment rule $\pays$, and for $\typei[1]$ and $V_2 \in \dettypespace$, $\extalloci[1](\typei[1], V_2) = \detalloci[1](V_1, V_2')$ for some $(V_1, V_2') \in \dettypespaces$ and $V_2' \neq V_2$.
For any $V_2$, one of $\detalloci[1](A, V_2)$ and $\detalloci[1](B, V_2)$ gives an equal positive value to both $A$ and $B$. Let $V_1^*$ be the type that induces this equally valued allocation. (For example, if $V_2 = B$, then $\detalloci[1](A, B) = [2, 0, 2]$ and $\detalloci[1](B, B) = [2, 2, 0]$. Both $A$ and $B$ have the same value for $\detalloci[1](B, B)$ and so type $B$ would be $V_1^*$.)
Observe that, for the allocation $\detalloci[1](V_1, V_2')$, $V_1$ has positive value $\langle V_1, \detalloci[1](V_1, V_2') \rangle$, and no other type has higher value for it. Therefore, type $\typei[1]$ has value at most $\langle | 1 | member_58 |
\frac{2}{3} V_1 + \frac{1}{3} D, \detalloci[1](V_1, V_2') \rangle$ for the allocation. In order for $\typei[1]$ to have no incentive to misreport $V_1^*$, we must have $$\begin{gathered}
\langle \frac{2}{3} V_1 + \frac{1}{3} D, \detalloci[1](V_1, V_2') \rangle - \payi[1](\typei[1], V_2)
\\
\geq \langle \frac{2}{3} V_1^* + \frac{1}{3} D, \detalloci[1](V_1^*, V_2) \rangle - \payi[1](V_1^*, V_2)
\label{eq:t1-V1}\end{gathered}$$
On the other hand, in order for type $V_1$ not to have incentive for deviating to $\typei[1]$, we have $$\begin{gathered}
\langle V_1, \detalloci[1](V_1, V_2) \rangle - \payi[1](V_1, V_2)
\\
= \langle V_1^*, \detalloci[1](V_1^*, V_2) \rangle - \payi[1](V_1^*, V_2)
\\
\geq \langle V_1, \detalloci[1](V_1, V_2') \rangle - \payi[1](\typei[1], V_2);
\label{eq:V1-t1}
\end{gathered}$$ where for the equality we used the fact that the value obtained by reporting truthfully is the same for every type in $\{A, B, C\}$ given a fixed type of the opponent, and that $\payi[1](V_1, V_2) = \payi[1](V_1^{*}, V_2)$ by Lemma \[lem:equal-pay\].
Similarly, in order for type $D$ not to have incentive for deviating to $\typei[1]$, we have $$\begin{gathered}
\langle D, \detalloci[1](D, V_2) \rangle - \payi[1](D, V_2)
\\
= \langle D, \detalloci[1](V_1^*, V_2) \rangle - \payi[1](V_1^*, V_2)
\\
\geq \langle D, \detalloci[1](V_1, V_2') \rangle - \payi[1](\typei[1], V_2),
\label{eq:D-t1}\end{gathered}$$ where for the equality we used the fact that | 1 | member_58 |
type $D$ has the same value for all allocations given a fixed type of the opponent, and that $\payi[1](D, V_2) = \payi[1](V_1, V_2)$ by Lemma \[lem:equal-pay\]. Crucially, \[eq:V1-t1\] and \[eq:D-t1\] cannot both be tight, because by construction, for any $V_2 \neq V_2'$, $$\begin{gathered}
\langle V_1, \detalloci[1](V_1, V_2) - \detalloci[1](V_1, V_2') \rangle \\
= \frac{3}{2} \langle D, \detalloci[1](D, V_2) - \detalloci[1](V_1, V_2') \rangle \neq 0. \end{gathered}$$ Therefore, $\frac{2}{3} \cdot$ \[eq:V1-t1\] $+ \frac{1}{3} \cdot$ \[eq:D-t1\] gives $$\begin{gathered}
\langle \frac{2}{3} V_1^* + \frac{1}{3} D, \detalloci[1](V_1^*, V_2) \rangle - \payi[1](V_1^*, V_2)
\\
> \langle \frac{2}{3} V_1 + \frac{1}{3} D, \detalloci[1](V_1, V_2') \rangle - \payi[1](\typei[1], V_2),\end{gathered}$$ which contradicts \[eq:t1-V1\].
By the same reasoning as for Lemma \[lem:equal-pay\], the following lemma follows from Lemma \[lem:fixed-menu\].
\[lem:general-equal-pay\] If $\extallocs$ is a deterministic DSIC extension of $\detallocs$, implementable by payment rule $\pays$, then for any $\typei[1]$ in the interior of $\operatorname{Conv}(\dettypespace)$ and any $V_2 \in \dettypespace$, $\payi[1](\typei[1], V_2) = \payi[1](A, V_2)$.
We are now ready to prove Theorem \[thm:det\].
Suppose $\extallocs$ is a deterministic DSIC extension of $\detallocs$. We derive a contradiction by showing that $\extallocs$ must violate weak monotonicity.
Consider $\typei[1] = \tfrac 1 3 A + \tfrac 1 3 B + \tfrac 1 3 D$, and suppose agent $2$'s type is $A$. Since $\typei[1]$ could report any type in $\dettypespace$, she has as options $\detalloci[1](A, A)$, $\detalloci[1](B, A)$, $\detalloci[1](C, A)$ and $\detalloci[1](D, A)$, all at the same price by Lemma \[lem:general-equal-pay\]. By Lemma \[lem:fixed-menu\], these are also all the allocations she could possibly get.
Since $[1, 1, 0]$ is the only allocation for which both types $A$ and $B$ have positive value, it is $\typei[1]$’s preferred allocation, i.e., $\extalloci[1](\typei[1], A)$ must be $[1, 1, 0]$. This in turn implies $\extalloci[2](\typei[1], A) = [1, 1, 0]$. (Recall that ${\mathscr{F}}$ is not a product space, and the only allocation in which agent $1$ gets $[1, 1, 0]$ is $\detallocs(A, A) = [1, 1, 0, 1, 1, 0]$.)
Similarly, one can show $\extalloci[1](\typei[1], D) = [4, 4, 0]$, which implies $\extalloci[2](\typei[1], D) = [2, 2, 0]$ or $[4, 4, 0]$. But in either case, weak monotonicity is violated for agent $2$’s types $A$ and $D$. For example, if $\extalloci[2](\typei[1], D) = [2, 2, 0]$, we have $$\begin{aligned}
\langle A, [1, 1, 0] \rangle + \langle D, [2, 2, 0] \rangle < \langle A, [2, 2, 0] \rangle + \langle D, [1, 1, 0] \rangle.\end{aligned}$$ Therefore, no deterministic DSIC extension of $\detallocs$ is | 1 | member_58 |
possible.
Non-existence of randomized extensions
--------------------------------------
We need a more convoluted construction and a more careful argument to prove the impossibility of extensions that are possibly randomized. We build on $\dettypespaces$ and $\detallocs$ to construct $\randtypespaces$ and $\randallocs$ and prove Theorem \[thm:rand\].
We first raise types in $\dettypespace$ to a space of seven dimensions. Define $A' = [1, 0, 0, 0, 0, 0, 0]$, $B' = [0, 1, 0, 0, 0, 0, 0]$, $C' = [0, 0, 1, 0, 0, 0, 0]$ and $D' = \frac 1 3 (A' + B' + C')$. For ease of notation, we define a mapping $\det: \{A', B', C', D'\} \to \dettypespace$, with $\det(A') = A, \det(B') = B, \det(C') = C$ and $\det(D') = D$. We also introduce four new types, $E' = [0, 0, 0, 1, 0, 0, 0]$, $F'=[0,0,0,0,1,0,0]$, $G'=[0,0,0,0,0,1,0]$ and $H'=[0,0,0,0,0,0,1]$. Define $\randtypespace = \{A', B', C', D', E', F', G', H'\}$, and $\randtypespaces = \randtypespace \times \randtypespace$.
We now define $\randallocs$, which is again symmetric, in the sense that $\randalloci[1](V_1, V_2) = \randalloci[2](V_2, V_1)$ for every $V_1, V_2 \in \randtypespace$. We therefore only describe $\randalloci[1]$. When both agents report types in $\{A', B', C', D'\}$, the first three coordinates of | 1 | member_58 |
each agent’s allocation are given by $\detallocs$ when fed by the corresponding types in $\dettypespaces$, and the remaining coordinates are filled in according to the opponent’s report. More specifically, $$\begin{gathered}
\forall V_1 \in \{A',B',C',D'\}, \\
\randalloci[1](V_1, A')=[\detalloci[1](\det(V_1), A), 0, 100, 100,100],\\
\randalloci[1](V_1, B')=[\detalloci[1](\det(V_1), B),100 , 0, 100, 100],\\
\randalloci[1](V_1, C')=[\detalloci[1](\det(V_1), C),100, 100, 0, 100], \\
\randalloci[1](V_1, D')=[\detalloci[1](\det(V_1), D), 100, 100, 100,0]\end{gathered}$$ For the other types, we have $$\begin{gathered}
\forall V_1 \in \{E', F', G', H'\}, \forall V_2 \in \{A', B', C', D'\}, \\
\randalloci[1](V_1, V_2) = \randalloci[1](C', V_2).\\
\forall V_2 \in \{E', F', G', H'\}, \forall V_1 \in \randtypespace, \\
\randalloci[1](V_1, V_2) = [0, 0, 0, 100, 100, 100, 100].\end{gathered}$$
Note that $\randallocs$ itself is deterministic. The difficulty we need to overcome in this section is that the *extension* of $\randallocs$ may be randomized, and we must show that any extension to $\operatorname{Conv}(\randtypespaces)$ cannot be DSIC.
The set of feasible allocations is ${\mathscr{F}}$ $= $ $\{\randallocs(V_1, V_2)\}_{V_1, V_2 \in \randtypespace}$. Again we emphasize that the set of feasible allocations is *not* a product space.
We first show that a subset of the payments are still “locked” as they were in the deterministic setting.
\[lem:randequal-pay\] For any allocation rule $\extallocs$ that | 1 | member_58 |
---
abstract: |
We detected a ring-like distribution of far-infrared emission in the direction of the center of the Virgo cluster. We studied this feature in the FIR, radio, and optical domains, and deduced that the dust within the feature reddens the galaxies in the direction of the Virgo cluster but does not affect stars within the Milky Way. This is likely to be a dusty feature in the foreground of the Virgo cluster, presumably in the galactic halo. The HI distribution follows the morphology of the FIR emission and shows peculiar kinematic behavior. We propose that a highly supersonic past collision between an HI cloud and the Galactic HI formed a shock that heated the interface gas to soft X-ray temperatures. HI remnants from the projectile and from the shocked Galactic HI rain down onto the disk as intermediate velocity gas.
Our finding emphasizes that extragalactic astronomy must consider the possibility of extinction by dust at high Galactic latitude and far from the Galactic plane, which may show structure on one-degree and smaller scales. This is particularly important for studies of the Virgo cluster, for example in the determination of the Hubble constant from Cepheids in cluster galaxies.
author:
| 1 | member_59 |
- 'Noah Brosch & Elchanan Almoznino'
- 'Bogdan Wszolek & Konrad Rudnicki'
title: The Nature of a Dusty Ring in Virgo
---
Introduction
============
The nature of non-luminous matter that is not part of detected and catalogued galaxies remains unsolved by modern astrophysics. As mentioned in a recent thesis, low surface brightness (LSB) objects may prove to be the "icebergs" of the extragalactic world (de Blok 1997). Some searches for non-luminous matter have been successful: the detection of a giant HI ring around the small group of galaxies in Leo centered on M96 (Schneider 1983), extended HI emission in the M81 group (Lo & Sargent 1979), HI companions to dwarf galaxies (for $\sim$25% of the cases: Taylor 1996), and a large neutral hydrogen cloud in the southern outskirts of the Virgo cluster (HI 1225+01: Giovanelli & Haynes 1989).
Along with HI clouds, a few large LSB galaxies have been identified: Malin-1 (Bothun 1987), F568-6 (Bothun 1990), and 1226+0105 (Sprayberry 1993). Their typical star formation rates are $\sim$0.1 M$_{\odot}$/yr and the metallicities are $\sim$1/3 solar. The HI rotation curves, measured by de Blok (1997) and by Pickering (1997), indicate that their gaseous component is dynamically significant at all
radii and that the galaxies are fully dark-matter dominated; their detected baryonic component is less than 4% of the total mass. This last conclusion is valid at least as long as we do not accept any of the more exotic theories of gravitation. The LSB galaxies lack bulges, bars, and nuclear activity, as well as CO or IR emission (i.e., they contain no molecules or dust).
There have also been a few intriguing reports of presumably intergalactic dust clouds. A cloud with 0.5-1.2 mag of extinction was identified in Microscopium by Hoffmeister (1962). Three other similar objects were listed by Rudnicki (1986); they extinguish background objects by 0.57 to 1.2 mag. In all reports the main point of contention was the actual distance to the cloud, which could put it in extragalactic space but could also locate it in the halo of the Milky Way (MW). Sometimes, the argument for an extragalactic location was based on a comparison of the properties of objects whose distance could be estimated and which were located behind the cloud with those of similar objects clearly not within the cloud limits (RR Lyrae stars; Murawski 1983).
The extragalactic nature is only fairly confidently established for the Abadi-Edmunds | 1 | member_59 |
cloud at $\sim$3 Mpc (Abadi & Edmunds 1978). HI 21 cm line emission was detected from this object, whereas it was not detected from the other clouds. For some of the others, however, far-infrared (FIR) emission was detected and could be identified (on morphological and positional criteria) with the obscuring clouds. FIR and HI emission were clearly detected in the case of the Okroy cloud (Wszolek 1988a, 1989). FIR emission was only marginally detected from the Rudnicki-Baranowska cloud (Wszolek 1988b). This indicates that the physical conditions in this kind of object are far from uniform. More such examples must be identified and their properties examined.
It is possible that the phenomenon of intergalactic hydrogen clouds could be related to the high-velocity cloud (HVC) complexes. These are HI structures whose radial velocities deviate by several hundred km s$^{-1}$ from the conventional galactic rotation. A recent review of HVCs is by Wakker & van Woerden (1997). Their Table 2 lists a few cloud complexes at distances $\geq$25 kpc; some of these may not belong at all to the MW. IRAS searches for FIR emission of HVCs were negative (Wakker & Boulanger 1986), indicating that either the HVCs are dust-free or that their dust grains are
much cooler than could be detected with IRAS. In this context we also mention the proposition by Blitz (1999) that the HVCs make up the missing mass by being essentially dark halos with low velocity dispersions.
We report here results from a study of a diffuse ring-like FIR feature at high galactic latitude, which we interpret as “local”, not extragalactic, despite first indications to the contrary. The region toward which this feature is located is the center of the Virgo cluster of galaxies. This part of the sky has been studied in exquisite detail, yet new studies always detect interesting features. For example, Katsiyannis (1998) produced a very deep image of the central regions of the cluster from a combination of 13 deep Kodak TechPan films obtained with the UK Schmidt telescope. The image shows large variations in the brightness of the intra-cluster medium, with the brightest regions north of the cluster center. M87 is fairly central in the region of enhanced brightness, close to the upper left corner of the “very high contrast image” in their Fig. 6. Previous deep imaging of the central VC region (Weil 1997) revealed a diffuse extension of (presumably stellar) material extending $\sim$100 kpc | 1 | member_59 |
to the SE of M87. Intergalactic red giant stars were apparently discovered near M87 by Ferguson (1998). It is therefore relevant to search for, and to try and explain, any extended feature one may detect in the direction of the center of the cluster. In this context, we mention the study of Haikala (1995) who examined the UV emission detected in the direction of a dust globule close to the North Galactic Pole, slightly north of the Virgo cluster (VC).
Any material that could produce extinction needs to be accounted for. To the best of our knowledge, nobody has attempted to study the obscuration and FIR emission by ISM or IGM in the direction of a rich, nearby cluster of galaxies. This is particularly important for the VC, which serves as one of the keystones in the distance ladder leading up to the determination of the Hubble constant (van den Bergh 1996). The HST Key Project on the Extragalactic Distance Scale, where the required accuracy of the determination of H$_0$ is 10%, could be affected significantly by unaccounted-for extinction. Until now, seven galaxies within 10$^{\circ}$ of the Virgo center have been observed for Cepheids in this context (Macri 1999).
The | 1 | member_59 |
plan of the paper is as follows: we first describe the FIR observations, which revealed the feature, and present confirmatory evidence of its reality. We then attempt to derive additional properties of the feature, which has an approximate ring shape, using data in the optical and radio domains. We show that the dust in the feature does not seem to affect the stars in the Milky Way but that it apparently reddens galaxies in the VC and beyond. The full data set is discussed in the last section of the paper, in which we also derive some properties of the dust grains in the feature.
Observational data
==================
COBE/DIRBE
----------
Far infrared (FIR) observations from the COBE satellite, specifically with the DIRBE instrument, reveal non-uniform FIR emission from the center of the VC. The DIRBE instrument mapped the entire sky at ten wavelength bands from 1.25 to 240$\mu$m and operated from November 1989 to December 1993 (cryo-cooling was available only for ten months, restricting the availability of the FIR channels). An important feature of DIRBE was that the measurements were performed against an internal calibrator source, with proper accounting for instrumental offsets and interplanetary FIR emission. For the present analysis | 1 | member_59 |
we used the Annual Average Sky Maps (AASM: Hauser 1997), which provide a single, ten-month averaged intensity value per pixel in each of the DIRBE bands. Note that the zodiacal light contribution was not subtracted from the DIRBE counts, because we do not expect the zodiacal contribution to the FIR bands to be significant, nor to show features on the angular scales relevant here.
We have conducted a number of studies of galaxies in the Virgo cluster, analyzing various photometric indices for entire objects as well as for localized regions in each galaxy (Almoznino & Brosch 1998, Heller 1998). The possibility that these programs could be affected by foreground dust motivated our selection of the Virgo Cluster as the initial target for a combined interpretation of FIR and other spectral bands.
We detected a ring-like structure of FIR emission in COBE/DIRBE maps of the VC, centered approximately on M87. The ring center lies at (1950) 12$^h$31$^m$, +13$^{\circ}$ (l=285$^{\circ}$.8, b=75$^{\circ}$) and its diameter is $\sim4^{\circ}$. The width of the FIR emission in the rim of the ring is $\sim1^{\circ}$. The detection was made originally on the COBE/DIRBE maps, but the existence of the feature was established also on IRAS maps (see below). The M87 galaxy (l$\approx282^{\circ}$.5, b$\approx+74^{\circ}$.4) is normally taken as the center of the Virgo Cluster, and one could imagine scenarios by which some sort of FIR-emitting matter could be distributed around it. For this reason, we decided to follow up the FIR detection of the feature, which we call here “the Virgo Ring” (VR), and investigate it further.
The detection was made on the AASM, which have noise levels of 3 10$^{-3}$ MJy sr$^{-1}$ at 100$\mu$m, 0.6 MJy sr$^{-1}$ at 140$\mu$m, and 0.3 MJy sr$^{-1}$ at 240$\mu$m (Kashlinsky 1999). The ring is visible even by superficial inspection of these COBE/DIRBE gray scale maps. No traces of the ring can be seen on 60$\mu$m or shorter wavelength maps. To obtain detailed insight into the structure of the VR we produced isophotal maps at $\lambda$=100 and 240$\mu$m using the original 0$^{\circ}$.3 square pixels, which are shown as isophote plots in Figure 1. The 100$\mu$m map shows a region of depressed FIR flux where F$_{100}\approx$8.2 MJy sr$^{-1}$. This is surrounded by regions of enhanced FIR emission, which reach F$_{100}\approx$10 MJy sr$^{-1}$. The 240$\mu$m map indicates that the region of reduced FIR emission has F$_{240}\approx$3.7 MJy sr$^{-1}$ | 1 | member_59 |
while the surrounding regions have F$_{240}\approx$5 MJy sr$^{-1}$. It is clear that (a) the DIRBE data indicate a region of low FIR emission surrounded by enhanced emission, and (b) the feature is real, because it appears on more than one DIRBE map. The lowest values of the FIR flux originate presumably from the zodiacal light that was not subtracted from the AASMs and from the cosmic FIR background. As both these components are much smoother than the feature we describe here, there is no need to model them in detail.
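The quoted fluxes imply ring-minus-center excesses of $\Delta$F$_{100}\approx$1.8 MJy sr$^{-1}$ and $\Delta$F$_{240}\approx$1.3 MJy sr$^{-1}$. As a rough illustration of the kind of grain-property estimate deferred to the last section, the sketch below solves for the color temperature of optically thin dust with an assumed $\nu^{2}$ emissivity; the code and the choice of $\beta$ are illustrative and not part of the original analysis.

```python
import numpy as np
from scipy.optimize import brentq

h, k, c = 6.626e-34, 1.381e-23, 2.998e8  # SI units

def planck_shape(nu, T):
    """Frequency-dependent part of the Planck function (B_nu up to constants)."""
    return nu**3 / np.expm1(h * nu / (k * T))

def model_ratio(T, beta=2.0):
    """Ratio F(240um)/F(100um) for optically thin dust with nu^beta emissivity."""
    nu100, nu240 = c / 100e-6, c / 240e-6
    return (nu240 / nu100) ** beta * planck_shape(nu240, T) / planck_shape(nu100, T)

# Ring-minus-center excesses quoted in the text (MJy/sr).
observed_ratio = 1.3 / 1.8

# Solve for the temperature reproducing the observed FIR color.
T_dust = brentq(lambda T: model_ratio(T) - observed_ratio, 5.0, 100.0)
print(f"color temperature ~ {T_dust:.0f} K for beta = 2")
```

For these numbers the solution is close to 20 K, in the range typical of high-latitude Galactic cirrus, though the result depends on the assumed emissivity index.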
IRAS
----
The peculiar FIR features detected by COBE/DIRBE are confirmed by IRAS measurements. The IRAS mission mapped the sky in four wavelength bands from January 1983 to November 1983. The primary goal of the IRAS survey was the detection of point sources, but a catalog of extended sources has also been produced, as well as sky brightness images in each of the four bands with 2’ pixels and 4’-6’ resolution (Beichman 1988).
IRAS 60 and 100$\mu$m Extended Emission Data in the 16$^{\circ}.5\times16^{\circ}.5$ square fields no. 83 and 84 were used to confirm the existence of the ring and to exclude the possibility that the feature is an artefact of the COBE/DIRBE instrument. We created maps in these two spectral bands with a 4'$\times$4' beam. The VR is clearly visible on the 100$\mu$m map shown in Fig. 1. A similar 100$\mu$m map based on IRAS observations, in which this feature is also visible, was already presented by Leggett (1987) as their Plate 2. The enhanced IRAS resolution relative to COBE/DIRBE allows a good morphological evaluation of the FIR feature. In addition to the north-westerly extension of the FIR emission along the IRAS scan direction, one sees an arc-like distribution of emission, which could be interpreted as forming an elliptical ring. Note that the feature is visible only on the 100$\mu$m map (shown in Figure 1) and is not seen on the 60$\mu$m map, nor on those at even shorter wavelengths (not shown here).
Although the low resolution COBE/DIRBE maps seem to indicate that the FIR emission is arranged in a ring, with low FIR at the center and high emission on its perimeter, the higher resolution IRAS maps show that this is not the case. The FIR emission is distributed in an open configuration, with a region of low emission centered on $\sim12^h30^m$, +13$^{\circ}$.2. The FIR emission could best be described as | 1 | member_59 |
a fork, or a two-arc shape limited to $\alpha$=185$^{\circ}-189^{\circ}$. The eastern side of the feature shows a small region of enhanced FIR emission centered on $\alpha$=185$^{\circ}$ and $\delta$=13$^{\circ}$.5 that stands out over its surroundings and to which we refer as the “main blob” (MB).
Optical information: stars
--------------------------
The dust revealed by the FIR observations may (a) extinguish and (b) redden stars behind it. The first effect is a consequence of the “total extinction” property, whereas the second is the result of “wavelength-selective extinction”. The relative importance of the two effects is linked through the parameter $R=\frac{A_V}{E(B-V)}$, which is determined to first order by the size of the dust grains.
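As a purely illustrative example (the canonical diffuse-ISM value $R\approx3.1$ is assumed here, not measured in this study), the two effects are linked by

$$A_V = R\,E(B-V) \approx 3.1\times0.1\ \mathrm{mag} \approx 0.31\ \mathrm{mag}$$

for a reddening of E(B–V)=0.1 mag; grain growth increases $R$ and thus weakens the reddening signature for a given amount of total extinction.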
We tested two assumptions: one of extinction within the Milky Way (MW), which would affect some of the stars but not others; and a second, that the VR is extragalactic and located between the MW and the VC. In the second case it would affect the VC galaxies, but none of the MW stars.
To test the possibility that the dust is “local”, one requires a large number of stars with magnitudes and colors. These were extracted from the USNO-A2.0 catalog, which includes blue and red magnitudes for each star. The USNO-A2.0 catalog contains $>$5 10$^8$ objects ($\sim$12,750 per square degree) and is based on scans of the Palomar Sky Survey (PSS) plates produced with the Precision Measuring Machine (PMM). The catalog is an improvement over version 1.0 in both astrometric accuracy and photometric precision. The photometric accuracy is probably not better than $\sim$0.15 mag, but the depth of the catalog is considerable, as it reaches 20-22 mag (color-dependent). It can, therefore, serve as a source of stellar objects with which one can test the assumption of foreground extinction.
We extracted objects in a number of $1^{\circ}\times1^{\circ}$ regions from the USNO-A2.0 catalog. The extraction locations are listed in Table 1 and correspond either to FIR-bright regions (where we expect a higher density of extinguishing dust) or to FIR-faint regions (which should be essentially transparent). We produced Wolf diagrams for each location, and show these in Figure 2. A Wolf diagram plots the cumulative star counts against limiting magnitude; the signature of total extinction in such a plot is a step-like deviation of the cumulative counts, toward fainter magnitudes, from the pattern set by the brighter (and, on average, closer) stars. The diagrams do not show such a step-like trend for regions in the direction of stronger FIR emission when compared with the behavior of the cumulative distribution in regions with lower FIR emission.
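A minimal sketch of how such a Wolf diagram can be constructed from a catalog extraction is shown below. The input file name and single-column format are hypothetical placeholders for a USNO-A2.0 extraction; only numpy and matplotlib are assumed.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical input: red (PSS) magnitudes of all objects extracted
# from a 1x1 degree field (one value per line).
mags = np.loadtxt("field_red_mags.txt")

# Cumulative star counts N(<m) versus the magnitude limit m.
limits = np.arange(10.0, 21.0, 0.25)
counts = np.array([(mags <= m).sum() for m in limits])

# Wolf diagram: log N(<m) against m. A foreground absorbing layer of
# total extinction A shifts the faint end rightward by A magnitudes
# relative to the bright-end extrapolation.
plt.semilogy(limits, np.maximum(counts, 1), drawstyle="steps-post")
plt.xlabel("magnitude limit m")
plt.ylabel("N(<m) per square degree")
plt.show()
```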
It is also possible to compare the measured behavior of the cumulative star counts with that predicted in the absence of localized extinction effects, using a model of the stellar distribution in the Galaxy for the same Milky Way locations as sampled here. A very successful and intuitively simple stellar distribution model was produced by Bahcall & Soneira (1984) and is available on-line[^1]. We calculated predicted star counts for the locations of the data extracted from the USNO-A2.0 catalog, using the version of the model retrieved in December 1998. The locations are listed in Table 1. The comparisons between predicted and actual cumulative star counts, shown in Figure 3, reveal no significant deviations from the predicted behavior.
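One way to quantify such agreement between model and measured counts is a simple chi-square comparison in magnitude bins. The sketch below uses synthetic Poisson data in place of the real USNO-A2.0 and Bahcall & Soneira arrays, so all the numbers are placeholders rather than results.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Synthetic stand-ins: model counts per 0.5 mag bin, and "observed"
# counts drawn as Poisson realizations of the model (so, by
# construction, no extinction step is present).
model = np.array([5, 12, 30, 70, 150, 310, 600, 1100], dtype=float)
observed = rng.poisson(model).astype(float)

# Pearson chi-square of the observed counts against the model.
stat = ((observed - model) ** 2 / model).sum()
pval = chi2.sf(stat, df=len(model))
print(f"chi2 = {stat:.1f} for {len(model)} bins, p = {pval:.2f}")
```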
The exercises shown in Figs. 2 and 3 indicate that the stellar distributions are not influenced by the material producing the FIR emission. The conclusion is, therefore, that this material is either extremely nearby, so that all the stars are affected in the same manner, or that | 1 | member_59 |
it is very distant, beyond the more distant stars listed in the USNO-A2.0 catalog.
Optical information: galaxies
-----------------------------
If the dust observed in the FIR does not affect stars in our galaxy, it may be located far from the MW and could affect only objects seen behind it. Testing the assumption of a dust cloud distant from the MW requires a sample of background objects with relatively high surface density, as well as brightness and color information. In the Virgo region, the “standard” extragalactic catalog has for a number of years been the Binggeli (1985) Virgo Cluster Catalog (VCC). The VCC covers $\sim$140 square degrees and contains 2096 galaxies. The surface density of galaxies is, therefore, $\sim$15 galaxies/square degree, on average. While this may appear sufficient, the photometry is not adequate because the galaxy magnitudes in the VCC are eye estimates and may carry significant errors. In addition, no colors are available for most VCC galaxies. We therefore decided to rely on a more recent galaxy compilation, which reaches deeper in brightness and is thus denser than the VCC, has better photometry, and contains color information for the objects.
Currie & Young (1998, hereafter VPC) produced an extensive three-color photometric | 1 | member_59 |
catalog of galaxies in the central regions of the VC. The catalog is based on COSMOS scans of one U plate, two B$_J$ plates, and one R$_C$ plate, all obtained with the UK Schmidt telescope. The plates were photometrically calibrated and objects were extracted automatically, with stars and galaxies separated by an automatic algorithm. The VPC provides an unbiased survey of galaxies in the region of interest for the present study, reaching B$_J\approx$19 mag; it is thus somewhat shallower than, but comparable in depth to, the stellar sample from the USNO-A2.0 catalog. The VPC covers 23 square degrees centered on (1950) 12:26 +13:08. The average galaxy surface density is therefore 49 galaxies/square degree, considerably higher than that of the VCC.
We attempted to detect total extinction effects on the VPC galaxies by limiting the analysis to regions with high FIR emission and comparing these with similar analyses in the direction of regions with lower FIR emission. Four parallelogram-shaped fields were selected, marked A, B, C, and D in Figure 4. Fields C and D are used to determine the nature of the galaxy population in the general region of the VR. Field A could also be used for this purpose, but we caution that a background cluster of galaxies (Abell 1552 at z$\approx$0.084) is located in this field, and thus region A may not be representative. This galaxy cluster is presumably part of a background sheet-like complex, which also includes Abell 1526 at a very similar redshift. Field A was selected to offer insight into how the presence of background galaxies disturbs the results. The enhancements of the galaxy background may distort the Wolf diagrams of galaxies (Figure 5), and indeed some FIR enhancements could lie in the direction of these galaxy clusters. However, searches for dust in clusters of galaxies have so far been negative (Maoz 1995). Thus, we may tentatively discount the FIR enhancements in the direction of background clusters of galaxies as chance superpositions. Field B is considered not to be affected by absorption/extinction and lies in the direction of the low FIR emission of the VR.
Attempts to detect the presence of dust through a “total extinction” effect, which would modify the cumulative galaxy counts between the different regions, were not successful. The differences were not significant and indicate that if dust is present, it causes at most a small amount of total extinction: A$_B\leq$0.5 mag. We therefore checked for the presence of color-dependent extinction by studying the distribution of the (U–R$_C$) color index in one-square-degree areas over the central part of the Virgo cluster. The data used for this test, and an extensive description of the method and results, are given in the Appendix to this paper. Here we emphasize that the results show that the galaxies in the direction of the part of the VR with the lowest FIR emission appear slightly bluer than those in the direction of the two regions with higher FIR emission. The difference is significant at the $\geq$95% level. Interpreted as dust extinction, this difference in average (U–R$_C$) color index indicates a possible wavelength-dependent extinction of $\Delta$(U–R$_C$)$\simeq$0.3 mag between areas with high FIR emission and areas with less dust, corresponding to a total extinction A$_V\simeq$0.33 mag for a typical Milky Way extinction law (the applicability of which was not verified here).
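The conversion from the color signal to total extinction can be made explicit. With approximate band coefficients $A_U/A_V\approx1.6$ and $A_{R_C}/A_V\approx0.7$ for a standard $R_V=3.1$ law (rounded values, assumed here only to show the arithmetic),

$$\Delta(U-R_C)=\left(\frac{A_U}{A_V}-\frac{A_{R_C}}{A_V}\right)A_V\approx 0.9\,A_V ,$$

so that $\Delta(U-R_C)\simeq0.3$ mag corresponds to $A_V\simeq0.3/0.9\simeq0.33$ mag, as quoted above.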
X-ray and radio information
---------------------------
Here we show that X-ray observations of the region indicate a two-component makeup for the hot gas, and that the morphology and kinematics of the HI are peculiar. Böhringer (1995) mapped the X-ray emission from the immediate vicinity of M87; this is the region of interest of the present study. Their findings show the presence of thermal X-ray emission from gas cooler than the intracluster medium. A ROSAT map of the general region, larger than the one analyzed in the 1995 paper, was presented by Böhringer (1994) and shows a ridge of X-ray emission which approximately coincides with the FIR emission ridge to the west of M87. They mention, in particular, the sharp drop in X-ray intensity on the western side of M87. Böhringer (1994) subtracted a model distribution of the X-ray emission of M87 from the ROSAT map and derived a residual map (their Fig. 2), which shows the background cluster A1552 at 12:30+11:30 and a long filament elongated $\sim$north-south at $\alpha\approx$12$^h$30$^m$, from $\delta\approx$+15$^{\circ}$ to +6$^{\circ}$. This filament curves around M87 on its westerly side and seems to follow the contours of the 100$\mu$m emission.
It is tempting to speculate on a possible link between the X-ray and FIR emission presented above, but we caution that this link may not be real. One possible factor affecting the morphology of the X-ray emitting gas is the amount of foreground HI, which modifies mainly the low-energy end of the X-ray spectrum. Shadows in the X-ray background caused by foreground HI clouds have been detected, mainly in soft X-rays, by Egger & Aschenbach (1995). However, the feature detected in the ROSAT maps by Böhringer (1994) is seen in the hard energy band (0.4–2.4 keV), and is thus difficult to attribute to gas absorption.
EUVE observations of the VC center (Lieu 1996) show the presence of gas at $\sim$0.5 10$^6$ K near M87. As follows from their analysis, this matter forms an additional component of the intra-cluster material (ICM) in Virgo and cannot be the same hot gas responsible for the X-ray emission detected by ROSAT. In order to confirm the existence of this second ICM component of the VC, Lieu (1996) performed HI 21 cm observations with the 43-m Green Bank telescope (angular resolution 21'). The region surveyed was centered on M87, had an extent of 2$^{\circ}\times1^{\circ}.6$, and the grid of HI measurements was spaced every 8'$\simeq$1/3 of a resolution element.
A comparison of the HI map of Lieu (1996) with the FIR distributions (see Figure 1) demonstrates that the FIR emission follows the total HI column density. Although Lieu (1996) do not mention the velocity range over which the Green Bank observations were performed, we assumed these to be at $\sim$0 km s$^{-1}$ because they are supposedly of “Galactic HI” origin. While some VC galaxies do have negative heliocentric velocities (cf. Binggeli 1985), they mostly concentrate at 1,000–2,000 km s$^{-1}$. For this reason, we consider it likely that the HI detected by Lieu (1996) does indeed belong to the MW and, by inference, so does the material producing the FIR emission. We note at this point that the center of the low-N(HI) region, at 12:28+12:45 (1950) and only $\sim$half a degree away from M87, has N(HI)$\simeq$1.8 10$^{20}$ atoms cm$^{-2}$. This is coincident with the low FIR emission region. The ridges with the higher N(HI) values correspond to enhanced FIR emission regions.
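As a rough consistency check, not part of the arguments above, the canonical Galactic gas-to-dust ratio N(HI)/E(B–V)$\approx$5 10$^{21}$ atoms cm$^{-2}$ mag$^{-1}$ would imply, toward the low-N(HI) center,

$$E(B-V)\approx\frac{1.8\times10^{20}}{5\times10^{21}}\approx 0.04\ \mathrm{mag},$$

i.e., $A_V\approx0.1$ mag for $R\approx3.1$. This assumes Milky Way grain properties, which need not hold for the material discussed here.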
We produced N(HI) plots for the region using data from the Leiden-Dwingeloo HI survey (Hartmann & Burton 1997, LDS) in order to confirm the HI distribution measured by Lieu (1996). The LDS was conducted with the 25-m radio telescope at Dwingeloo; the data we used cover the velocity range –459$<v_{lsr}<$+415 km s$^{-1}$ with a resolution of 1.03 km s$^{-1}$. The telescope has a 36' half-power beam and the survey was performed with 0$^{\circ}$.5 spacings. We used the file TOTAL\_HI.FIT from the CD-ROM supplied with the printed atlas to extract the proper sky region. The data were transformed from Galactic to equatorial coordinates, accounting for the change of scale from one side of the image to the other; this was done by dividing each pixel value by its $\cos(b)$, to yield consistent units over the field. The HI total column density from the LDS is shown in Figure 6, together with the IRAS 100$\mu$m map, and confirms the general impression from the Lieu (1996) map. The HI distribution has a region of lower N(HI) at the center of the VR and ridges of higher HI emission on both sides of the VR.
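A minimal sketch of this scaling step is given below, with a synthetic array standing in for the TOTAL\_HI.FIT data (the real pixel values and grid limits come from the FITS file); the coordinate conversion at the end assumes astropy is available.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Synthetic stand-in for the LDS N(HI) map on a Cartesian (l, b) grid
# with the survey's 0.5 degree spacing; values are illustrative only.
l_deg = np.arange(270.0, 300.0, 0.5)
b_deg = np.arange(60.0, 90.0, 0.5)               # stops short of b=90 (cos b = 0)
nhi = np.full((b_deg.size, l_deg.size), 2.0e20)  # atoms cm^-2

# Divide each pixel by cos(b) so that pixel values refer to a
# consistent scale across the field, as described in the text.
nhi_scaled = nhi / np.cos(np.deg2rad(b_deg))[:, None]

# Galactic -> equatorial conversion for one pixel center (VR center).
center = SkyCoord(l=283.0 * u.deg, b=75.0 * u.deg, frame="galactic")
print(center.fk5)
```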
We also produced position-velocity (PV) plots using the channel data from the LDS, at three galactic longitudes: l=276.0, the longitude of the HI peak which coincides with the main FIR emission blob in area A of Fig. 6 (the more intense of the FIR peaks); l=283.0, the center of the VR; and l=290.5, the second highest FIR peak. The PV plots are shown in Figure 7 and indicate that at the position of the VR there is a significant disturbance of the HI, with a strong extension to negative velocities appearing in the PV plots of the high-FIR region. The sheet-like HI distribution, which links HI at low latitudes with gas near the Galactic Pole and has a slightly negative LSR velocity, appears disturbed at b$\approx$75$^{\circ}$.
The velocity plot through the peak emission at l=276.0 at this latitude (Figure 8) shows three peaks separated by $\sim$20 km s$^{-1}$. The strongest has the most negative velocity, approximately –30 km s$^{-1}$, and a FWHP of $\sim$11 km s$^{-1}$. The weakest peak at this location is near +4 km s$^{-1}$ (LSR). The PV in the low-FIR region at the center of the VR (l=283$^{\circ}$, b$\approx$75$^{\circ}$) shows a single strong peak at $\sim$–7 km s$^{-1}$ (LSR), with a FWHP of 12.5 km s$^{-1}$ and a low shoulder extending to more negative velocities, down to the velocity of the strong peak at the location of the main blob (–30 km s$^{-1}$). The third PV, at (l=290$^{\circ}$.5, b$\approx$75$^{\circ}$), is narrow, with a FWHP of $\sim$5 km s$^{-1}$, and is centered at –7 km s$^{-1}$ (LSR).
Discussion
==========
We identified a ring-like feature of FIR emission at high galactic latitude, which is distant from | 1 | member_59 |
the main body of the Galaxy and extinguishes light from galaxies in the central part of the Virgo cluster (VC). There is no way to establish a distance to the extinguishing cloud with the data we presented above, except to note that it is probably $>$1 kpc.
A nearby dust feature, observed by Haikala (1995) in the far-UV, has been located at $\sim$120 pc using the distribution of E$_{b-y}$ color excesses. This dust cloudlet produces a visual extinction A$_V\leq$0.4 mag and is located at (l=251.1, b=+73.3). This location is very similar to that of the VR and may indicate either that our distance evaluation is wrong, or that the location technique applied to the Haikala feature did not use a sufficient number of more distant stars.
Indications that the dust cloud cannot be a nearby feature originate mainly from its lack of influence on the distribution of stars. Supporting evidence comes from the reddening study of Knude (1996). He used uvbyH$\beta$ measurements of A3-G0 stars with B$\leq$11.5 mag and $\vert$b$\vert>70^{\circ}$ to determine the distribution of extinction. His results for E$_{b-y}$, broken down by galactic latitude and by longitude quadrant, are of particular interest. The area of interest for our study is located between the 3rd and 4th quadrants at b$\approx75^{\circ}$; the reddening to this region is small, E$_{b-y}\leq0.017$, which translates into A$_B\leq$0.095. The stars studied by Knude (1996) are closer than 1.5 kpc (for main-sequence A stars brighter than 11.5 mag); thus the color-dependent extinction of the VC galaxies we detected, which is equivalent to A$_B\approx$0.4 mag, should be produced by material more distant than 1.5 kpc.
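For reference, the Strömgren excess converts to broad-band quantities via the standard relation E$_{b-y}\approx$0.74 E(B–V), so that

$$A_B\approx(R+1)\,E(B-V)\approx 4.1\times\frac{0.017}{0.74}\approx 0.09\ \mathrm{mag}$$

for $R\approx3.1$, which reproduces the A$_B\leq$0.095 limit quoted above.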
If the cloud were in the VC itself, its physical size would be $\sim$1.5 Mpc, very large indeed! The issue of possible diffuse dust in clusters of galaxies has been studied by Ferguson (1993). He concluded, from the lack of a difference between cluster and field galaxies in the correlation of the Mg$_2$ index and (B–V), that dust is not present in the Virgo cluster (upper limit E(B–V)$<$0.06 mag). A similar conclusion for a large number of Abell clusters, based on the (V–I) color indices of radio quasars seen in their background, was reached by Maoz (1995).
Not accounting for foreground dust may adversely affect some key observations. Our finding confirms the supposition of Zonn (1957) and Zonn & Stodolkiewicz (1958) that, because of the