---
abstract: 'The pivot algorithm is the most efficient known method for sampling polymer configurations for self-avoiding walks and related models. Here we introduce two recent improvements to an efficient binary tree implementation of the pivot algorithm: an extension to an off-lattice model, and a parallel implementation.'
address: 'Department of Mathematics, Swinburne University of Technology, P.O. Box 218, Hawthorn, Victoria 3122, Australia'
author:
- Nathan Clisby and Dac Thanh Chuong Ho
title: 'Off-lattice and parallel implementations of the pivot algorithm'
---
Introduction {#sec:intro}
============
Self-avoiding walks are non-intersecting paths on lattices such as the two-dimensional square lattice or the three-dimensional simple cubic lattice. Due to universality, they exactly capture the essential physics of the excluded-volume effect for polymers in the good-solvent limit, and as such can be used to study features such as the value of the Flory exponent $\nu$ which relates the geometric size of a walk to the number of monomers in the chain.
The pivot algorithm is the most efficient known method for sampling self-avoiding walks of fixed length. It is a Markov chain Monte Carlo method, which was invented by Lal [@Lal1969MonteCarlocomputer], but first studied in depth by Madras and Sokal [@Madras1988PivotAlgorithmHighly], who also invented an efficient hash table implementation. Recent improvements to the implementation of the pivot algorithm [@Kennedy2002fasterimplementationpivot; @Clisby2010AccurateEstimateCritical; @Clisby2010Efficientimplementationpivot] have dramatically improved computational efficiency to the point where it is possible to rapidly sample polymer configurations with up to 1 billion monomers [@Clisby2018MonteCarlo4dSAWs].
In this paper, we will describe two recent improvements in algorithms to sample self-avoiding walks, focusing in particular on the pivot algorithm. In Sec. \[sec:offlattice\] we describe an off-lattice implementation of the SAW-tree data structure [@Clisby2010Efficientimplementationpivot]. In Sec. \[sec:parallel\] we describe a parallel implementation of the pivot algorithm which improves the sampling rate for very long walks. Finally, we have a brief discussion about prospects for further progress and conclude in Sec. \[sec:conclusion\].
Off-lattice implementation {#sec:offlattice}
==========================
The SAW-tree data structure [@Clisby2010Efficientimplementationpivot] is a binary tree that efficiently encodes information about the self-avoiding walk in the nodes of the tree. In particular, the leaves of the tree consist of individual monomers, while the internal nodes store aggregate information about all of the monomers below that node within the tree, as well as “symmetry” information which encodes transformations that must be applied to sub-walks before they are concatenated together. The aggregate information that must be stored includes the extent of the sub-walk in the form of a “bounding volume”, which is taken to be a rectangle for square-lattice walks and a rectangular prism for simple-cubic-lattice walks. For lattice self-avoiding walks, the symmetry elements are rotations and reflections that leave the lattice invariant. See [@Clisby2010Efficientimplementationpivot] for a full description of the implementation.
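As a rough illustration, the contents of a SAW-tree node for a square-lattice walk might be sketched as follows (this is our own minimal sketch with hypothetical field names, not the data layout of [@Clisby2010Efficientimplementationpivot]):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Rect = Tuple[float, float, float, float]  # (xmin, ymin, xmax, ymax)

@dataclass
class SawTreeNode:
    """One node of a SAW-tree for a square-lattice walk (illustrative only)."""
    n_monomers: int                  # number of monomers in this sub-walk
    end_to_end: Tuple[int, int]      # displacement across the sub-walk
    bbox: Rect                       # bounding volume containing the sub-walk
    # Lattice symmetry applied to the right sub-walk before concatenation;
    # identity by default.
    symmetry: Tuple[Tuple[int, int], Tuple[int, int]] = ((1, 0), (0, 1))
    left: Optional["SawTreeNode"] = None    # leaves have no children
    right: Optional["SawTreeNode"] = None

def merge_bbox(a: Rect, b: Rect) -> Rect:
    """Smallest axis-aligned rectangle containing both bounding rectangles."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))
```

An internal node's aggregate fields can be recomputed from its two children in constant time, e.g. its bounding rectangle is the merge of the children's rectangles; this is what allows updates and intersection tests to traverse only a logarithmic number of nodes.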
Although lattice self-avoiding walks capture the universal behaviour of polymers in the good-solvent limit, there are strong arguments for why off-lattice models of polymers may have advantages under certain circumstances. Firstly, they provide an opportunity to empirically model more realistic interactions for polymers, and thus not only to reproduce universal features but also to make precise experimental predictions. Secondly, under some circumstances the lattice may have a non-negligible effect; for example, when trying to understand the nature of the globule transition, the restriction to the lattice may significantly influence the nature of the transition. Finally, while lattices have discrete symmetry groups, the symmetry group corresponding to reflections and rotations of ${{\mathbb R}}^d$ is the continuous orthogonal group $O(d)$. This continuous group allows more freedom for performing pivot moves, and it is conceivable that this additional freedom may enhance sampling efficiency under some circumstances.
We implement the SAW-tree for the bead-necklace, or tangent-hard-sphere, model, which consists of a fully flexible chain of hard spheres that just touch. A typical configuration for this model in ${{\mathbb R}}^2$ is shown in Fig. \[fig:ths\].
*\[Figure \[fig:ths\]: a typical configuration of a tangent-hard-sphere chain in ${{\mathbb R}}^2$; the coordinate data for the beads is omitted here.\]*
We will now describe the key features of our implementation, and will present evidence that the off-lattice SAW-tree implementation of the pivot algorithm has $O(\log N)$ performance in line with the performance of the original lattice SAW-tree implementation. The description will not be self-contained, and the interested reader is referred to [@Clisby2010Efficientimplementationpivot] for relevant details.
The orthogonal group $O(2)$ is used as the symmetry group for ${{\mathbb R}}^2$, and similarly $O(3)$ is used for ${{\mathbb R}}^3$. The orthogonal group includes rotations as the subgroups $SO(2)$ and $SO(3)$ respectively, but also includes reflection moves.
Symmetry group elements are sampled uniformly at random so as to preserve the Haar measure [@Stewart1980EfficientGenerationOfRandomOrthogonalMatrices] on the group. This automatically ensures that the Markov chain satisfies the detailed-balance condition, and so must be sampling configurations with uniform weights.
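For ${{\mathbb R}}^2$ this sampling step is particularly simple. The following sketch (our own illustration, not the production code) draws a Haar-uniform element of $O(2)$ by taking a uniformly random rotation angle and composing with a reflection half of the time:

```python
import math
import random

def random_O2():
    """Sample a 2x2 orthogonal matrix Haar-uniformly from O(2)."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    c, s = math.cos(theta), math.sin(theta)
    if random.random() < 0.5:
        return ((c, -s), (s, c))   # rotation, determinant +1
    return ((c, s), (s, -c))       # reflection, determinant -1
```

For $O(3)$, Stewart's approach [@Stewart1980EfficientGenerationOfRandomOrthogonalMatrices] of orthogonalising a matrix of independent standard Gaussian entries (a QR decomposition with sign-corrected diagonal) produces Haar-distributed elements.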
As for ergodicity, we feel that it is extremely likely that the algorithm is ergodic. For lattice models the pivot algorithm has been proved to be ergodic; this was first done for ${{\mathbb Z}}^2$ and ${{\mathbb Z}}^3$ in the seminal paper of Madras and Sokal [@Madras1988PivotAlgorithmHighly]. Interestingly, the inclusion of reflections seems to be necessary for ergodicity for lattice models. In the continuum, it is our view that the additional freedom afforded compared to the lattice should mean that the pivot algorithm is ergodic in this case, too. We do not have sufficient insight into the problem to know whether the extra freedom would allow one to have an ergodic algorithm with only rotations (and not reflections). Some theoretical work has been done previously on the ergodicity of pivot moves for continuous models [@Plunkett2016OffLatticeSAWPivotAlgorithmVariant], but this is not directly relevant here as the proof relied on double-pivot moves.
The key decision for the SAW-tree implementation for the bead-necklace model is the choice of *bounding volume* to be used. The bounding volume is a shape which is stored in nodes in the SAW-tree, such that it is guaranteed that the entire sub-chain which is represented by the node is completely contained within the bounding volume. The use of a bounding volume is necessary for the rapid detection of self-intersections when a pivot move is attempted.
The natural choice of the bounding volume for ${{\mathbb Z}}^2$ is the rectangle, and for ${{\mathbb Z}}^3$ the natural choice is the rectangular prism. This is because these shapes snugly fit the sub-chains that they contain (in the sense that the sub-chains must touch each boundary or face of the shape), and the shapes are preserved under lattice symmetry operations.
The natural shape for the bounding volume for the bead-necklace model for ${{\mathbb R}}^2$ would seem to be the circle, and similarly for ${{\mathbb R}}^3$ the natural choice would be the sphere. This is because these are the only shapes that are invariant under the action of $O(2)$ and $O(3)$ respectively.
One of the operations that must be performed with bounding volumes [@Clisby2010Efficientimplementationpivot] is the merge operation, which involves combining two bounding volumes (which contain sub-chains) to create a bounding volume that contains both of the original bounding volumes (and hence contains both sub-chains). In contrast to the situation for lattice models, the bounding volumes which result from the merge operation do not necessarily form a snug fit for the polymer sub-chains. This is illustrated in Fig. \[fig:circle\] for an example in ${{\mathbb R}}^2$ where the snugly fitting bounding circles for two sub-chains are merged together so that they contain the concatenated walk. The concatenated walk *does not* touch the boundary of the larger circle.
*\[Figure \[fig:circle\]: two snugly fitting bounding circles for sub-chains, together with the larger circle obtained by merging them; the concatenated walk does not touch the boundary of the merged circle.\]*
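The merge of two bounding circles illustrated in Fig. \[fig:circle\] can be sketched as follows (an illustrative implementation under our own naming, assuming Python 3.8+ for `math.dist`):

```python
import math

def merge_circles(c1, r1, c2, r2):
    """Smallest circle containing the circles (c1, r1) and (c2, r2)."""
    d = math.dist(c1, c2)
    if d + r2 <= r1:                  # second circle already inside the first
        return c1, r1
    if d + r1 <= r2:                  # first circle already inside the second
        return c2, r2
    r = 0.5 * (d + r1 + r2)           # diameter spans the two extreme points
    t = (r - r1) / d                  # centre lies on the segment c1 -> c2
    c = (c1[0] + t * (c2[0] - c1[0]), c1[1] + t * (c2[1] - c1[1]))
    return c, r
```

Even when each input circle fits its sub-chain snugly, the merged circle generally does not, since its radius is determined by the extreme points of the input circles rather than by the chain itself.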
*A priori*, we had no expectation about whether the lack of snug fit for the bounding volumes would prove to be a significant problem. We considered it possible that the error from the fit would grow rapidly as one moved up the SAW-tree, and this would have worsened the performance of the intersection testing algorithm. But, we found that in fact this was not a problem at all. We estimated the mean ratio of the diameter of the bounding volume to the square root of the mean value of the squared end-to-end distance $\langle R_E^2 \rangle^{1/2}$. We found that as the length of the chains increased the ratio was approaching a constant for both ${{\mathbb R}}^2$ and ${{\mathbb R}}^3$, indicating that the error was becoming saturated. For chain lengths of $N=10^6$ this ratio was only 1.45 for ${{\mathbb R}}^2$, and 1.71 for ${{\mathbb R}}^3$. Thus, in the average case this suggests that the lack of a snug fit only results in a constant factor error in the diameter of the bounding volume for the off-lattice implementation. This means that the behaviour of the lattice and off-lattice implementations should be essentially the same, up to a constant factor.
We evaluated the mean CPU time per pivot move for a range of polymer lengths, for lattice and off-lattice SAW-tree implementations in two and three dimensions on Dell PowerEdge FC630 machines with Intel Xeon E5-2680 CPUs, and plot the results of these computer experiments in Figs \[fig:cpud2\] and \[fig:cpud3\].
We found that the time per pivot move attempt was somewhat worse for the off-lattice implementation as compared to the lattice implementation, which was to be expected due to the increased number of operations required for computations involving the symmetry elements and coordinate vectors. But, in absolute terms the performance is still impressive, and for polymers with $10^7$ monomers pivot attempts are performed in mean CPU time of less than 6$\mu$s for ${{\mathbb R}}^2$, and in less than 40$\mu$s for ${{\mathbb R}}^3$. We clearly observe $O(\log N)$ behaviour in each case, which is strong evidence that the off-lattice implementation behaves in fundamentally the same way as the original lattice implementation of the SAW-tree.
![CPU time per pivot move attempt for the bead-necklace model in ${{\mathbb R}}^2$, in comparison to SAWs in ${{\mathbb Z}}^2$, plotted against the number of monomers $N$.\[fig:cpud2\]](cpu_d2-crop){width="0.5\paperwidth"}
![CPU time per pivot move attempt for the bead-necklace model in ${{\mathbb R}}^3$, in comparison to SAWs in ${{\mathbb Z}}^3$, plotted against the number of monomers $N$.\[fig:cpud3\]](cpu_d3-crop){width="0.5\paperwidth"}
Parallel implementation of the pivot algorithm {#sec:parallel}
==============================================
The SAW-tree implementation of the pivot algorithm [@Clisby2010Efficientimplementationpivot] is remarkably efficient, but it suffers from one significant drawback: the intersection testing and SAW-tree update procedures are inherently serial operations. This makes it difficult to take advantage of additional cores to improve the rate at which polymer configurations are sampled. To some extent this issue is mitigated by the fact that for numbers of monomers $N$ up to the order of tens of millions, or even 100 million, it is possible to run simulations in parallel on multicore machines and still obtain results in a reasonable clock time.
But, in the regime where truly large $N$, of the order of $10^8$–$10^9$, demands a large amount of memory, memory constraints prevent all cores from being used simultaneously on the Dell PowerEdge FC630 machines with Intel Xeon E5-2680 CPUs on which our computer experiments are run[^1]. Under these circumstances most cores must be left idle while data is being collected.
Here we will briefly sketch a method to improve the sampling rate by utilising additional cores in exactly this difficult regime.
The key insight is that as the number of monomers increases, the probability of a pivot move being successful decays as a power law of the form $N^{-p}$, with $p \approx 0.19$ for ${{\mathbb Z}}^2$, and $p \approx
0.11$ for ${{\mathbb Z}}^3$. For $N = 10^9$ on ${{\mathbb Z}}^2$, the probability of a pivot move being successful is 0.019, which means that on average roughly 50 unsuccessful pivot attempts are made for each success.
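These figures can be sanity-checked by modelling the acceptance probability as a pure power law with unit amplitude (an approximation we adopt only for illustration; the true amplitude differs from 1):

```python
def acceptance(N, p):
    """Model the pivot acceptance probability as N**(-p), with the
    amplitude taken to be 1 purely for illustration."""
    return N ** (-p)

prob = acceptance(10**9, 0.19)      # square lattice at N = 10^9
attempts_per_success = 1.0 / prob   # roughly 51 attempts per success
```

This gives a probability of roughly 0.0195 for $N=10^9$ on ${{\mathbb Z}}^2$, i.e. about 51 attempts per success, consistent with the quoted values.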
Given that most proposed pivot moves in this regime fail, and so do not result in any update being made for the self-avoiding walk, it is possible to perform many pivot attempts in parallel without this effort being wasted.
For example, imagine that we are sampling SAWs of $10^9$ steps via the pivot algorithm, and we may test for success or failure of up to ten pivot moves simultaneously. Note that a move consists of a proposed monomer location to act as the centre of the pivot move, and a proposed symmetry operation. Suppose for the first batch of ten proposed moves $\{M_1, M_2, \cdots, M_{10}\}$, that each of these moves were unsuccessful. Then, we can move on to another batch, and none of the work performed by any of the threads was wasted. Suppose for the second batch $\{M_{11}, M_{12}, \cdots,
M_{20}\}$ that the first 6 moves $M_{11},\cdots,M_{16}$ are unsuccessful, but $M_{17}$ is successful. Then we need to perform the update associated with the move $M_{17}$ which must happen as a serial operation performed by a single thread. It does not matter whether $M_{18}, M_{19}, M_{20}$ were successful or not: these tests will need to be performed again in case the update has altered the result of the test. The next batch will then consist of ten proposed moves $\{M_{18}, M_{19}, \cdots, M_{27}\}$.
The tests for success or failure will occur for each thread regardless of the outcome of the tests performed by other threads. But, provided the probability of multiple successful moves occurring in a batch is low, then most of this work will not be wasted. The lower the probability of success, the greater the potential for speed up to occur by exploiting parallelism.
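The batching scheme above can be sketched as follows (a serial simulation of the control flow only; in the real implementation each test in a batch runs on its own thread, and all names here are ours):

```python
def run_batches(moves, batch_size, is_success, apply_update):
    """Process proposed pivot moves in speculative batches.

    Every move in a batch is tested (the parallelisable step); only the
    first successful move is committed, and the moves after it are
    retested in the next batch, because the committed update may have
    changed the outcome of their tests.
    """
    i = 0
    while i < len(moves):
        batch = moves[i:i + batch_size]
        results = [is_success(m) for m in batch]  # parallel in practice
        if any(results):
            k = results.index(True)
            apply_update(batch[k])      # serial SAW-tree update
            i += k + 1                  # retest everything after batch[k]
        else:
            i += len(batch)             # whole batch failed: no wasted work
```

Running this on the worked example above (batches of 10, with only move $M_{17}$ successful) tests the batches $\{M_1,\ldots,M_{10}\}$, $\{M_{11},\ldots,M_{20}\}$, $\{M_{18},\ldots,M_{27}\}$, matching the description in the text.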
We have implemented this idea in a prototype C program with OpenMP being used for managing the parallel pivot attempts. The SAW-tree is held in shared memory where all threads can access it for performing intersection tests. When a pivot move is found to be successful, then the update is performed by a single thread while all other threads remain idle.
We performed computer experiments to test this implementation on the aforementioned FC630 machines for SAWs of various lengths on the square lattice. We utilised 24 threads, with batches (or chunks) of 48 pivot attempts which meant that each thread made two attempted pivot moves on average. We collated the calendar time per pivot attempt in $\mu$s in Table \[tab:parallel\]. The value $t_1$ is the mean CPU time for a single thread, while $t_{24}$ is the mean CPU time for the 24 threads running in parallel. We see that as $N$ increases the probability of a move being successful decreases, and the relative performance of the parallel implementation to the serial implementation improves. For $N = 10^9$ there is roughly a four-fold improvement in performance.
Although it is suitable as a proof-of-concept, the implementation developed thus far is only a prototype, and more work remains to be done to improve its performance. In particular, it should be possible to re-use some information from intersection tests even when these moves are scheduled to occur after a move that is found to be successful. For example, if a move is found to cause a self-intersection between monomers labelled $l$ and $m$ along the chain, and the prior successful move involved a pivot site outside of the interval from $l$ to $m$, then the earlier update would have no effect on this self-intersection. Nonetheless, even in its current state the performance gain is sufficient to make the method worthwhile for use in the large-$N$, memory-limited regime.
[rlllll]{} $N$ & $\Pr(\text{success})$ & $1/\Pr(\text{success})$& $t_1$ ($\mu$s) & $t_{24}$ ($\mu$s) & $t_1/t_{24}$\
------------------------------------------------------------------------
$10^6$& 0.068& 15 & 1.58 & 1.35 & 1.17\
$10^7$& 0.044& 23 & 2.31 & 1.07 & 2.16\
$10^8$& 0.029& 34 & 2.90 & 0.903 & 3.21\
------------------------------------------------------------------------
$10^9$& 0.019& 53 & 3.16 & 0.805 & 3.93\
Discussion and conclusion {#sec:conclusion}
=========================
Schnabel and Janke [@Schnabel2019] have very recently implemented a binary tree data structure which is similar to the SAW-tree for the bead-necklace model, as well as a model for which the Lennard-Jones interaction is implemented. The implementation for the bead-necklace model appears to have roughly the same computational efficiency as the implementation sketched here. The efficient implementation for the Lennard-Jones polymer model is very interesting, and a significant advance on the state of the art. It will be interesting to see if further progress in this direction can be made, for example in the evaluation of Coulomb interactions which would be necessary for efficient simulation of polyelectrolytes.
Full details of the off-lattice SAW-tree implementation of the pivot algorithm will be presented elsewhere in the future.
More work needs to be done to test and improve the parallel implementation of the pivot algorithm. In the future, it will allow for improved simulations of very long SAWs, yielding significant speed-ups for walks of hundreds of millions or even one billion steps, especially on the square lattice.
References {#references .unnumbered}
==========
[1]{} url \#1[[\#1]{}]{}urlprefix\[2\]\[\][[\#2](#2)]{} Lal M 1969 [*Mol. Phys.*]{} [**17**]{} 57–64
Madras N and Sokal A D 1988 [*J. Stat. Phys.*]{} [**50**]{} 109–186
Kennedy T 2002 [*J. Stat. Phys.*]{} [**106**]{} 407–429
Clisby N 2010 [*Phys. Rev. Lett.*]{} [**104**]{} 055702
Clisby N 2010 [*J. Stat. Phys.*]{} [**140**]{} 349–392
Clisby N 2018 [*J. Stat. Phys.*]{} [**172**]{} 477–492
Stewart G W 1980 [*SIAM Journal on Numerical Analysis*]{} [**17**]{} 403–409
Plunkett L and Chapman K 2016 [*J. Phys. A: Math. Theor.*]{} [**49**]{} 135203
Schnabel S and Janke W (*Preprint* )
Acknowledgements {#acknowledgements .unnumbered}
================
Thanks to Stefan Schnabel for communicating results regarding an alternative efficient off-lattice implementation of the pivot algorithm prior to publication. N.C. gratefully acknowledges support from the Australian Research Council under the Future Fellowship scheme (project number FT130100972).
[^1]: There are 24 cores, and total memory available is 128GB.
---
abstract: 'We prove that any metric space $X$ homeomorphic to $\mathbb{R}^2$ with locally finite Hausdorff 2-measure satisfies a reciprocal lower bound on modulus of curve families associated to a quadrilateral. More precisely, let $Q \subset X$ be a topological quadrilateral with boundary edges (in cyclic order) denoted by $\zeta_1, \zeta_2, \zeta_3, \zeta_4$ and let $\Gamma(\zeta_i, \zeta_j; Q)$ denote the family of curves in $Q$ connecting $\zeta_i$ and $\zeta_j$; then $\operatorname{mod}\Gamma(\zeta_1, \zeta_3; Q) \operatorname{mod}\Gamma(\zeta_2, \zeta_4; Q) \geq 1/\kappa$ for $\kappa = 2000^2\cdot (4/\pi)^2$. This answers a question in [@Raj:16] concerning minimal hypotheses under which a metric space admits a quasiconformal parametrization by a domain in $\mathbb{R}^2$.'
address: 'Department of Mathematics and Statistics, University of Jyväskylä, P.O. Box 35 (MaD), FI-40014, University of Jyväskylä, Finland.'
author:
- Kai Rajala
- Matthew Romney
bibliography:
- 'ReciprocalLowerBoundBiblio.bib'
title: Reciprocal lower bound on modulus of curve families in metric surfaces
---
[^1]
Introduction
============
The classical uniformization theorem states that any simply connected Riemann surface can be mapped onto either the Euclidean plane $\mathbb{R}^2$, the sphere $\mathbb{S}^2$, or the unit disk $\mathbb{D}$ by a conformal mapping. For obtaining similar results in the setting of metric spaces, the class of conformal mappings is too restrictive and it is natural to consider instead some type of quasiconformal mapping. One such class is [*quasisymmetric mappings*]{}, and a large body of recent literature is dedicated to quasisymmetric uniformization of metric spaces. We mention specifically papers by Semmes [@Sem:96b] and Bonk–Kleiner [@BonkKle:02] as important references.
Another approach is to use the so-called [*geometric definition*]{} of quasiconformal mappings, based on the notion of modulus of a curve family. In the recent paper [@Raj:16], the first-named author proves a version of the uniformization theorem for metric spaces homeomorphic to $\mathbb{R}^2$ with locally finite Hausdorff 2-measure. In the present paper, we call such spaces [*metric surfaces*]{}.
In [@Raj:16] a condition on metric surfaces called [*reciprocality*]{} (see Definition \[defi:reciprocality\] below) is introduced and shown to be necessary and sufficient for the existence of a quasiconformal parametrization by a domain in $\mathbb{R}^2$. We refer the reader to the introduction of [@Raj:16] for a detailed overview of the problem and additional references to the literature.
In this paper, we show that one part of the definition of reciprocality is satisfied by all metric surfaces and therefore is unnecessary. This result gives a positive answer to Question 17.5 from [@Raj:16].
We first recall the relevant definitions and establish some notation. Let $(X,d,\mu)$ be a metric measure space. For a family $\Gamma$ of curves in $X$, the [*$p$-modulus*]{} of $\Gamma$ is defined as $$\operatorname{mod}_p \Gamma = \inf \int_X \rho^p\,d\mu ,$$ where the infimum is taken over all Borel functions $\rho: X \rightarrow [0,\infty]$ with the property that $\int_{\gamma} \rho\,ds \geq 1$ for all locally rectifiable curves $\gamma \in \Gamma$. Such a function $\rho$ is called [*admissible*]{}. If the exponent $p$ is understood, a homeomorphism $f: (X,d,\mu) \rightarrow (Y,d',\nu)$ between metric measure spaces is [*quasiconformal*]{} if there exists $K \geq 1$ such that $$K^{-1} \operatorname{mod}_p \Gamma \leq \operatorname{mod}_p f(\Gamma) \leq K \operatorname{mod}_p \Gamma$$ for all curve families $\Gamma$ in $X$. In this paper, we always take $p=2$ and assume that a metric space $(X,d)$ is equipped with the Hausdorff 2-measure $\mathcal{H}^2$, and we write $\operatorname{mod}\Gamma$ in place of $\operatorname{mod}_2 \Gamma$.
Throughout this paper, we assume that $(X,d)$ is a metric surface as defined above. A [*quadrilateral*]{} in $X$ is a subset $Q \subset X$ homeomorphic to $[0,1]^2$ with four designated non-overlapping boundary arcs, denoted in cyclic order by $\zeta_1$, $\zeta_2$, $\zeta_3$, $\zeta_4$, which are the images of $[0,1] \times\{0\}$, $\{1\} \times [0,1]$, $[0,1] \times \{1\}$ and $\{0\} \times [0,1]$, respectively, under the parametrizing homeomorphism from $[0,1]^2$. We write $\Gamma_1(Q)$ to denote the family $\Gamma(\zeta_1,\zeta_3; Q)$ of curves in $Q$ connecting $\zeta_1$ and $\zeta_3$, and $\Gamma_2(Q)$ to denote the family $\Gamma(\zeta_2,\zeta_4; Q)$ of curves in $Q$ connecting $\zeta_2$ and $\zeta_4$. More generally, for disjoint closed sets $E,F$ contained in the set $G \subset X$, the notation $\Gamma(E,F;G)$ is used to denote the family of curves in $G$ which intersect both $E$ and $F$.
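As a standard worked example (not taken from this paper), let $Q = [0,a] \times [0,b] \subset \mathbb{R}^2$ be a Euclidean rectangle with $\zeta_1 = [0,a] \times \{0\}$ and $\zeta_3 = [0,a] \times \{b\}$. Every curve in $\Gamma_1(Q)$ has length at least $b$, so $\rho \equiv 1/b$ is admissible, and the classical length–area argument shows that this choice is extremal. Hence $$\operatorname{mod}\Gamma_1(Q) = \int_Q \frac{1}{b^2}\,d\mathcal{H}^2 = \frac{ab}{b^2} = \frac{a}{b}, \qquad \operatorname{mod}\Gamma_2(Q) = \frac{b}{a}, \qquad \operatorname{mod}\Gamma_1(Q)\operatorname{mod}\Gamma_2(Q) = 1,$$ so planar rectangles satisfy the reciprocality conditions with $\kappa = 1$.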
\[defi:reciprocality\] The metric surface $(X, d)$ is [*reciprocal*]{} if there exists $\kappa \geq 1$ such that for all quadrilaterals $Q$ in $X$, $$\label{equ:reciprocality(1)}
\operatorname{mod}\Gamma_1(Q) \operatorname{mod}\Gamma_2(Q) \leq \kappa$$ and $$\label{equ:reciprocality(2)}
\operatorname{mod}\Gamma_1(Q) \operatorname{mod}\Gamma_2(Q) \geq 1/\kappa,$$ and for all $x \in X$ and $R>0$ such that $X \setminus B(x,R) \neq \emptyset$, $$\label{equ:reciprocality(3)}
\lim_{r \rightarrow 0} \operatorname{mod}\Gamma(B(x,r), X \setminus B(x,R); B(x,R)) = 0.$$
We then have the following result.
\[thm:uniformization\] There exists a domain $\Omega \subset \mathbb{R}^2$ and a quasiconformal mapping $f: (X,d) \rightarrow \Omega$ if and only if $X$ is reciprocal.
The necessity of each condition in Definition \[defi:reciprocality\] is immediate; standard computations show that $\mathbb{R}^2$ is reciprocal. The actual content of Theorem \[thm:uniformization\] is that these conditions are sufficient to construct “by hand” a mapping that can then be shown to be quasiconformal. However, the question of whether a weaker set of assumptions might still be sufficient to construct such a quasiconformal mapping is not fully settled in [@Raj:16].
It is not difficult to construct examples of metric surfaces for which conditions \[equ:reciprocality(1)\] and \[equ:reciprocality(3)\] fail. For instance, the quotient space $\mathbb{R}^2/ \sim$, where $x \sim y$ if $x=y$ or if both $x$ and $y$ belong to the closed unit disc, has a natural metric for which both conditions fail. On the other hand, it was conjectured in [@Raj:16] (Question 17.5) that in fact condition \[equ:reciprocality(2)\] holds for all $(X,d)$. The main result of this paper shows that this is indeed the case.
\[thm:main\] Let $(X,d)$ be a metric space homeomorphic to $\mathbb{R}^2$ with locally finite Hausdorff 2-measure. There exists a constant $\kappa\geq 1$, independent of $X$, such that $\operatorname{mod}\Gamma_1(Q) \operatorname{mod}\Gamma_2(Q) \geq 1/\kappa$ for all quadrilaterals $Q \subset X$.
As a consequence of Theorem \[thm:main\], condition \[equ:reciprocality(2)\] in Definition \[defi:reciprocality\] is unnecessary. Our proof as written gives a value of $\kappa = 2000^2\cdot (4/\pi)^2$, though optimizing each step would improve this to $\kappa = 216^2\cdot (4/\pi)^2$. It is a corollary of Theorem 1.5 in [@Raj:16], as improved in [@Rom:17], that if $X$ is reciprocal (and hence $X$ admits a quasiconformal parametrization), then Theorem \[thm:main\] holds with $\kappa = (4/\pi)^2$. For this reason, it is natural to conjecture that the best possible $\kappa$ for the general case is also $(4/\pi)^2$, though our techniques fall far short of this.
In Proposition 15.8 of [@Raj:16], Theorem \[thm:main\] (with a larger value of $\kappa$) is proved under the assumption that $X$ satisfies the mass upper bound $\mathcal{H}^2(B(x,r)) \leq Cr^2$ for some $C>0$ independent of $x$ and $r$. Our proof follows a similar outline; the difficulty is to avoid using the upper bound.
The basic approach is to construct an “energy-minimizing” or “harmonic” function $u: Q \rightarrow [0,\infty)$ which satisfies the boundary constraints $u|\zeta_1 = 0$ and $u|\zeta_3 = 1$. Working only from the assumptions at hand, one can establish the relevant properties of $u$. The main property needed to prove Theorem \[thm:main\] is that a version of the coarea inequality holds for $u$. For the case when $X$ satisfies the mass upper bound $\mathcal{H}^2(B(x,r)) \leq Cr^2$, this is found in Proposition 15.7 of [@Raj:16]. The coarea inequality implies that, from the level sets of $u$, one may extract a large family of rectifiable curves contained in $\Gamma_2(Q)$. Since $u$ is defined by means of the curve family $\Gamma_1(Q)$, this provides the necessary link between $\Gamma_1(Q)$ and $\Gamma_2(Q)$. Roughly speaking, if there are few curves in $\Gamma_1(Q)$, as quantified by modulus, then the corresponding curves in $\Gamma_2(Q)$ must be short, which implies that the modulus of $\Gamma_2(Q)$ is large.

The organization of the paper is the following. Section \[sec:preliminaries\] contains some basic notation and background, including an overview of the construction of the harmonic function $u$ described in the previous paragraph. In Section \[sec:level\_sets\], we prove several properties of the level sets of $u$ which are required for the proof of Theorem \[thm:main\]. This section expands on the material in Section 4 of [@Raj:16]. Section \[sec:lower\_bound\] contains the main technical portion of our paper: the coarea inequality for $u$ described previously, valid for all metric surfaces, as well as the proof of Theorem \[thm:main\]. Section \[sec:continuity\_u\] contains a final auxiliary result, namely that the harmonic function $u$ is continuous in general. The continuity of $u$ had previously been proved as Theorem 5.1 of [@Raj:16] using the reciprocality conditions \[equ:reciprocality(1)\] and \[equ:reciprocality(3)\].
Preliminaries {#sec:preliminaries}
=============
In this section, we give a review of notation and auxiliary results from [@Raj:16] that will be needed. For the remainder of this paper, we let $X$ be a metric surface and $Q$ denote a fixed quadrilateral in $X$. We write $\Gamma_1$ for $\Gamma_1(Q)$. We assume throughout this paper that all curves are non-constant.
For $k \in \{1,2\}$ and $\varepsilon > 0$, the [*$k$-dimensional Hausdorff $\varepsilon$-content*]{} of a set $E \subset X$, denoted by $\mathcal{H}_\varepsilon^k(E)$, is defined as $$\mathcal{H}_\varepsilon^k(E) = \inf \left\{ \sum_{j=1}^\infty a_k \operatorname{diam}(A_j)^k: E \subset \bigcup_{j=1}^\infty A_j, \operatorname{diam}A_j < \varepsilon \right\},$$ with normalizing constants $a_1 = 1$ and $a_2 = \pi/4$. The [*Hausdorff $k$-measure*]{} of $E$ is defined as $\mathcal{H}^k(E) = \lim_{\varepsilon \rightarrow 0} \mathcal{H}_\varepsilon^k(E)$.
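To see why $a_2 = \pi/4$ is the natural normalization (a standard remark, not part of the argument): in the Euclidean plane this choice makes $\mathcal{H}^2$ agree with Lebesgue measure.

```latex
% Covering the disk B(x,r) \subset \mathbb{R}^2 by itself (for 2r < \varepsilon):
\mathcal{H}^2_\varepsilon\bigl(B(x,r)\bigr)
  \leq \frac{\pi}{4}\bigl(\operatorname{diam} B(x,r)\bigr)^2
  = \frac{\pi}{4}(2r)^2 = \pi r^2,
% the Euclidean area of the disk; the isodiametric inequality yields the
% matching lower bound as \varepsilon \to 0.
```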
We proceed with an overview of the construction of the harmonic function $u$ corresponding to the curve family $\Gamma_1$, as given in Section 4 of [@Raj:16]. By a standard argument using Mazur’s lemma, there exists a sequence of admissible functions $(\rho_k)$ for $\Gamma_1$ that converges strongly in $L^2$ to a function $\rho \in L^2(Q)$ satisfying $\int_Q \rho^2\,d\mathcal{H}^2 = \operatorname{mod}\Gamma_1$. By Fuglede’s lemma, $$\label{equ:fuglede}
\int_\gamma \rho_k\, ds \rightarrow \int_{\gamma} \rho\, ds < \infty$$ for all curves $\gamma$ in $Q$ except for a family of modulus zero. In particular, this implies that $\rho$ is weakly admissible for $\Gamma_1$ (that is, admissible after removing from $\Gamma_1$ a subfamily of modulus zero). We extend the definition of $\rho$ to the entire space $X$ by setting $\rho(x) = 0$ for all $x \in X \setminus Q$.
Let $\Gamma_0$ be the family of curves in $Q$ with a subcurve on which the convergence \[equ:fuglede\] does not hold. Note that $\operatorname{mod}\Gamma_0 = 0$. We define the function $u$ as follows. Let $x \in Q$. If there exists a curve $\gamma \in \Gamma_1 \setminus \Gamma_0$ whose image contains $x$, then define $$\label{equ:u_definition}
u(x) = \inf \int_{\gamma_x} \rho\,ds,$$ where the infimum is taken over all such curves $\gamma$ and over all subcurves $\gamma_x$ of $\gamma$ joining $\zeta_1$ and $x$. Otherwise, define $u(x)$ by $$u(x) = \liminf_{y \in E, y \rightarrow x} u(y),$$ where $E$ is the set of those $y \in Q$ such that $u(y)$ is defined by . Lemma 4.1 of [@Raj:16] shows that $u$ is well-defined in $Q$.
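For orientation, consider the model case (our illustration; it is not used in the sequel) $X = \mathbb{R}^2$, $Q = [0,1]^2$, $\zeta_1 = \{0\} \times [0,1]$ and $\zeta_3 = \{1\} \times [0,1]$. Then $\rho \equiv 1$ on $Q$ is the minimizer, $\operatorname{mod}\Gamma_1 = 1$, and the definition above recovers the classical harmonic coordinate:

```latex
u(x_1,x_2) = \inf_{\gamma_x} \int_{\gamma_x} 1 \, ds = x_1,
  \qquad (x_1,x_2) \in [0,1]^2,
% since every curve from \zeta_1 to the point (x_1,x_2) has length at
% least x_1, and the horizontal segment attains this length.  Note that
% u|\zeta_1 = 0 and u|\zeta_3 = 1, as required.
```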
We recall Lemma 4.3 of [@Raj:16], which states that $\rho$ is a weak upper gradient of $u$. More precisely, $$\label{equ:upper_gradient}
|u(x) - u(y)| \leq \int_{\gamma} \rho\,ds$$ for all curves $\gamma$ in $Q$ with $\gamma \notin \Gamma_0$. In particular, $u$ is absolutely continuous along any curve $\gamma \notin \Gamma_0$. We also recall Lemma 4.5 of [@Raj:16], where it is shown that $0 \leq u(x) \leq 1$ for all $x \in Q$. It follows from the definition \[equ:u\_definition\] that if $x \in \zeta_3$ lies in the image of a curve $\gamma \in \Gamma_1 \setminus \Gamma_0$, then $u(x) \geq 1$ and thus $u(x) = 1$.
As final points of notation, for a set $A \subset Q$, let $\operatorname*{osc}_{A} u = \sup_{x,y \in A} |u(x) - u(y)|$. Let $|\gamma|$ denote the image of the curve $\gamma$ in $Q$.
To study the harmonic function $u$, there are three auxiliary results which are employed repeatedly in [@Raj:16] and which we state here for easy reference. The first concerns the existence of rectifiable curves and can be found as Proposition 15.1 of [@Sem:96c].
\[prop:existence\_paths\] Let $x,y \in X$ be given, $x \neq y$. Suppose that $E \subset X$ is a continuum with $\mathcal{H}^1(E) < \infty$ and $x, y \in E$. Then there is an $L>0$, $L \leq \mathcal{H}^1(E)$, and an injective 1-Lipschitz mapping $\gamma\colon [0,L] \rightarrow X$ such that $\gamma(t) \in E$ for all $t$, $\gamma(0) = x$, $\gamma(L) = y$ and $\mathcal{H}^1(\gamma(F)) = \mathcal{H}^1(F)$ for all measurable sets $F \subset [0,L]$.
The next is the standard coarea inequality for Lipschitz functions on metric spaces, found in [@AmbTil:04 Proposition 3.1.5].
\[prop:coarea\] Let $A \subset X$ be Borel measurable. If $m\colon X \rightarrow \mathbb{R}$ is $L$-Lipschitz and $g\colon A \rightarrow [0, \infty]$ is Borel measurable, then $$\int_{\mathbb{R}} \int_{A \cap m^{-1}(t)} g(s)\, d\mathcal{H}^1(s)\, dt \leq \frac{4L}{\pi} \int_A g(x)\, d\mathcal{H}^2(x).$$
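As a sanity check on the constant (our remark), let $X = \mathbb{R}^2$ and $m(x) = x_1$, which is $1$-Lipschitz. Fubini's theorem gives the classical coarea identity, so Proposition \[prop:coarea\] holds with room to spare:

```latex
\int_{\mathbb{R}} \int_{A \cap \{x_1 = t\}} g \, d\mathcal{H}^1 \, dt
  = \int_A g \, d\mathcal{H}^2
  \leq \frac{4}{\pi} \int_A g \, d\mathcal{H}^2 .
% The factor 4/\pi > 1 is the price paid for general metric spaces,
% where slicing by level sets can be less efficient than in the plane.
```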
We also need a topological lemma, cf. [@Moo:62 IV Theorem 26].
\[lemm:separating\_continuum\] Let $A,B \subset Q$ be non-empty sets, and let $K \subset Q$ be a compact set such that $A$ and $B$ belong to different components of $Q \setminus K$. Then there is a continuum $F \subset K$ such that $A$ and $B$ belong to different components of $Q \setminus F$. Moreover, if $\mathcal{H}^1(K) < \infty$ and the component of $Q \setminus K$ containing $A$ is contained in the interior of $Q$, then $F$ may be taken to be the image of an injective Lipschitz mapping $\gamma: \mathbb{S}^1 \rightarrow K$.
Level sets of $u$ {#sec:level_sets}
=================
In this section, we prove a number of topological properties for the level sets of the harmonic function $u$, or, more precisely, for the closure of these level sets. This section can be viewed as an extension of Section 4 in [@Raj:16], which also studies those properties of $u$ which can be proved without any use of the reciprocality conditions.
The primary technical difficulty we must deal with is that, without assuming the reciprocality conditions, we do not know [*a priori*]{} that the function $u$ is continuous. However, it is shown in Lemma 4.6 of [@Raj:16] that $u$ satisfies a maximum and a minimum principle. To state it, we use the following notation. For an open set $\Omega \subset X$, or a relatively open set $\Omega \subset Q$, let $$\partial_* \Omega = (\partial \Omega \cap Q) \cup (\overline{\Omega} \cap (\zeta_1 \cup \zeta_3)) .$$ Then we have the following.
\[lemm:maximum\_principle\] Let $\Omega \subset X$ be open. Then $\sup_{x \in \Omega \cap Q} u(x) \leq \sup_{y \in \partial_*\Omega} u(y)$ and $\inf_{x \in \Omega \cap Q} u(x) \geq \inf_{y \in \partial_*\Omega} u(y)$.
Lemma \[lemm:maximum\_principle\] allows us to establish topological properties for the closures of sets of the form $u^{-1}([s,t])$.
\[prop:connect\] For all $s, t \in [0,1]$, $s \leq t$, the set $\overline{u^{-1}([s,t])}$ is connected and intersects both $\zeta_2$ and $\zeta_4$.
Let $E = \overline{u^{-1}([s,t])}$. To prove the first claim, suppose that $E$ is not connected. Then there is an open set $U \subset X$ such that $$\label{sussa}
U \cap E \neq \emptyset, \quad (Q \setminus U) \cap E \neq \emptyset, \quad \partial U \cap E = \emptyset.$$ Let $E_1 = U \cap E$ and $E_2 = (Q \setminus U) \cap E$. By passing to a subset if needed, we may assume that $E_1$ and $E_2$ are each contained within a single component of $U$ and $Q \setminus \overline{U}$, respectively. We fix $\varepsilon >0$ such that $\operatorname{dist}(\partial U,E) > \varepsilon$. By Proposition \[prop:coarea\] applied to $h(x)=\operatorname{dist}(\partial U,x)$, there is $0<p<\varepsilon$ such that $\mathcal{H}^1(h^{-1}(p))< \infty$ and every rectifiable curve $\gamma$ for which $|\gamma| \subset h^{-1}(p)$ lies outside the exceptional set $\Gamma_0$. By \[sussa\] and our choice of $p$, the sets $E_1$ and $E_2$ belong to different components of $Q \setminus h^{-1}(p)$. Lemma \[lemm:separating\_continuum\] then shows that $h^{-1}(p)$ has a connected subset $F \subset Q$ such that $E_1$ and $E_2$ belong to different components of $Q \setminus F$. Notice that, for every rectifiable curve $\gamma$ with $|\gamma| \subset F$, $u||\gamma|$ is continuous and either $u(x) < s$ or $u(x)>t$ for all $x \in |\gamma|$. We divide the rest of the proof into cases.
\[pring\] Suppose there is an open set $G \subset X$ such that $\partial G \subset F$ and $E_j \subset G$ for $j=1$ or $j=2$. By Lemma \[lemm:maximum\_principle\] there are $x_0,x_1 \in G$ such that $u(x_0)\leq s$ and $u(x_1)\geq t$. Moreover, by Proposition \[prop:existence\_paths\] there is a rectifiable curve $\gamma$ joining $x_0$ and $x_1$ in $F$. Since $u||\gamma|$ is continuous, we conclude that $u(x) \in [s,t]$ for some $x \in |\gamma|$, and hence $x \in E$. This is a contradiction, since $E \cap F = \emptyset$.
Suppose next that the set $G$ in Case \[pring\] does not exist. We then find a subcontinuum $F'$ of $F$ with the following properties: $F' \cap \partial Q$ consists of two distinct points $x_0$ and $x_1$, and $E_1$ and $E_2$ belong to different components, say $\Omega_1$ and $\Omega_2$, of $X \setminus (\partial Q \cup F')$. By Proposition \[prop:existence\_paths\] we may moreover assume that $F'=|\gamma|$, where $\gamma:[0,1]\to Q$ is simple and rectifiable, and $\gamma(0)=x_0$, $\gamma(1)=x_1$.
Suppose that both $x_0$ and $x_1$ belong to $\zeta_j$ for some $j=1,\ldots,4$. Then $\partial \Omega_k \subset |\gamma| \cup \zeta_j$ for $k=1$ or $k=2$. As in Case \[pring\], Lemma \[lemm:maximum\_principle\] and the continuity of $u||\gamma|$ show that there exists $x \in |\gamma|$ such that $u(x) \in [s,t]$. This contradicts the construction of $\gamma$. A similar argument can be applied when $x_0 \in \zeta_i$ and $x_1 \in \zeta_j$, where either $i \in \{1,3\}$ and $j \in \{2,4\}$, or $j \in \{1,3\}$ and $i \in \{2,4\}$.
Suppose that $x_0 \in \zeta_1$ and $x_1 \in \zeta_3$. Then, since $\gamma \notin \Gamma_0$, the construction of $u$ shows that $u||\gamma|$ takes all values between $0$ and $1$. In particular, $u(x) \in [s,t]$ for some $x \in |\gamma|$. This contradicts the fact that $|\gamma| \cap E = \emptyset$. The argument remains valid if the roles of $x_0$ and $x_1$ are reversed.
Suppose that $x_0 \in \zeta_2$ and $x_1 \in \zeta_4$. Without loss of generality, we may assume that $\Omega_1$ is the component containing $\zeta_1$. It then follows from Lemma \[lemm:maximum\_principle\] that $u(x) \geq s$ for some $x \in |\gamma|$. Moreover, since $u||\gamma|$ is continuous and $|\gamma| \cap E = \emptyset$, it follows that in fact $u(x) >t$ for every $x \in |\gamma|$. Similarly, applying Lemma \[lemm:maximum\_principle\] to $\Omega_2$ shows that $u(x) < s$ for every $x \in |\gamma|$. This is a contradiction. The argument remains valid if the roles of $x_0$ and $x_1$ are reversed.
We conclude that the set $E$ is connected. It remains to show that $E$ intersects both $\zeta_2$ and $\zeta_4$. Suppose towards contradiction that this is not the case. We may assume without loss of generality that $E$ does not intersect $\zeta_4$. Proposition \[prop:coarea\] applied to $g(x)=\operatorname{dist}(\zeta_4,x)$ shows that there exists a small $p>0$ such that $\mathcal{H}^1(g^{-1}(p))< \infty$. Moreover, by Lemma \[lemm:separating\_continuum\] there is a continuum $F \subset g^{-1}(p)$ joining $\zeta_1$ and $\zeta_3$ in $Q \setminus E$. Proposition \[prop:existence\_paths\] gives a simple curve $\gamma$ such that $|\gamma| \subset F$ also joins $\zeta_1$ and $\zeta_3$. As before, we may assume that $\gamma \notin \Gamma_0$ so that $u||\gamma|$ takes all values between $0$ and $1$. This is a contradiction since $|\gamma| \cap E = \emptyset$. The proof is complete.
Next, we give a generalization of Lemma 15.6 in [@Raj:16], with a corrected constant. The proof is essentially the same as the corresponding proof in [@Raj:16].
\[lemm:oscillation\] Let $x \in Q$ and $r \in (0, r_0)$, where $r_0 = \min\{\operatorname{diam}\zeta_1, \operatorname{diam}\zeta_3\}/4$. Then $$\label{equ:oscillation_bound}
r \mathcal{H}^1(u(B(x,r) \cap Q))\leq \frac{4}{\pi} \int_{B(x,2r)} \rho\, d\mathcal{H}^2.$$ Moreover, if $U(x,r)$ is the $x$-component of $B(x,r) \cap Q$, then $$\label{equ:oscillation_bound2}
r \operatorname*{osc}_{U(x,r)} u \leq \frac{4}{\pi} \int_{B(x,2r)} \rho\, d\mathcal{H}^2.$$
By applying Proposition \[prop:coarea\] to the function $d(\cdot,x)$ and arguing as in the first paragraph of the proof of Proposition \[prop:connect\], we see that for almost every $s \in (r,2r)$, the sphere $S(x,s)$ satisfies $\mathcal{H}^1(S(x,s)) < \infty$ and has the property that $\eta \notin \Gamma_0$ for every curve $\eta$ with $|\eta| \subset S(x,s) \cap Q$. Fix such an $s \in (r,2r)$.
Then $B(x,s) \cap Q$ consists of countably many relatively open components $V_j$. By Lemma \[lemm:separating\_continuum\], for such a component $V_j$ there is a simple curve $\gamma_j$ with $|\gamma_j| \subset S(x,s)$ that separates $Q$ into the relative components $U_j$ and $Q \setminus \overline{U}_j$, where $V_j \subset U_j$. Observe that either $\gamma_j$ is a closed curve, or the two endpoints of $\gamma_j$ are contained in $\partial Q$.
Since $B(x,r) \cap Q \subset \bigcup_j U_j$, we have $$\mathcal{H}^1(u(B(x,r) \cap Q)) \leq \sum_j \operatorname{diam}u(U_j).$$ By the maximum principle Lemma \[lemm:maximum\_principle\], $$\operatorname{diam}u(U_j) \leq \sup_{y,z \in \partial_* U_j} |u(y) - u(z)|.$$ By our assumption that $r \leq \min\{ \operatorname{diam}\zeta_1, \operatorname{diam}\zeta_3\}/4$, it follows that if $\zeta_1 \cap \partial_* U_j \neq \emptyset$, then there exists a point $z_1 \in |\gamma_j| \cap \zeta_1$. Indeed, if $y \in \zeta_1 \cap \partial_* U_j$, then $d(y,x) \leq 2r$. But by assumption, there exists $z \in \zeta_1$ such that $d(y,z) > 4r$. The triangle inequality gives $d(z,x) > 2r$, and in particular $z \notin \overline{U}_j$. Since $\gamma_j$ separates $Q$, we conclude there is a point $z_1 \in |\gamma_j| \cap \zeta_1$. In this case it follows that $0 = \inf_{z \in \partial_* U_j} u(z) = u(z_1) = \min_{z \in |\gamma_j|} u(z)$. On the other hand, if $\zeta_1 \cap \partial_* U_j = \emptyset$, then by Lemma \[lemm:maximum\_principle\] we again have $\inf_{z \in \partial_* U_j} u(z) = \min_{z \in |\gamma_j|} u(z)$.
The same argument shows that if $\zeta_3 \cap \partial_* U_j \neq \emptyset$, then there exists $y_1 \in |\gamma_j| \cap \zeta_3$ such that $1 = \sup_{y \in \partial_* U_j} u(y) = u(y_1) = \max_{y \in |\gamma_j|} u(y)$. In general, we likewise have $\sup_{y \in \partial_* U_j} u(y) = \max_{y \in |\gamma_j|} u(y)$. This establishes the equality $$\sup_{y,z \in \partial_* U_j} |u(y) - u(z)| = \max_{y,z \in |\gamma_j|} |u(y) - u(z)| .$$ By the upper gradient inequality \[equ:upper\_gradient\], $$\max_{y,z \in |\gamma_j|} |u(y) - u(z)| \leq \int_{\gamma_j} \rho\,d\mathcal{H}^1.$$ Finally, combining the estimates gives $$\mathcal{H}^1(u(B(x,r) \cap Q)) \leq \sum_j \operatorname{diam}u(U_j) \leq \sum_j \int_{\gamma_j} \rho\, d\mathcal{H}^1 \leq \int_{S(x,s)} \rho\, d\mathcal{H}^1.$$ Observe that this estimate is independent of our choice of $s$. Inequality \[equ:oscillation\_bound\] then follows from integrating over $s$ from $r$ to $2r$ and applying Proposition \[prop:coarea\].
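Spelling out that last step: for almost every $s \in (r,2r)$ the display above bounds $\mathcal{H}^1(u(B(x,r) \cap Q))$ by $\int_{S(x,s)} \rho\, d\mathcal{H}^1$, so integrating in $s$ and applying Proposition \[prop:coarea\] to the $1$-Lipschitz function $d(\cdot,x)$ on $B(x,2r)$ gives

```latex
r\,\mathcal{H}^1\bigl(u(B(x,r) \cap Q)\bigr)
  \leq \int_r^{2r} \int_{S(x,s)} \rho \, d\mathcal{H}^1 \, ds
  \leq \frac{4}{\pi} \int_{B(x,2r)} \rho \, d\mathcal{H}^2 .
```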
The same argument also verifies inequality \[equ:oscillation\_bound2\], since for each choice of $s \in (r,2r)$ it holds that $\operatorname*{osc}_{U(x,r)} u = \operatorname{diam}u(U(x,r)) \leq \sum_j \operatorname{diam}u(U_j)$.
Without assuming the reciprocality conditions, it is not clear that the function $u$ is continuous. Nevertheless, Lemma \[lemm:oscillation\] implies a certain amount of continuity for $u$, as we show in the following corollary.
\[cor:continuity\] The function $u$ is continuous at $\mathcal{H}^2$-almost every $x \in Q$.
Inequality \[equ:oscillation\_bound2\] implies that $$\limsup_{r \rightarrow 0} \operatorname*{osc}_{U(x,r)} u \leq \limsup_{r \rightarrow 0} \frac{4r}{\pi}\cdot \frac{1}{r^2} \int_{B(x,2r)} \rho\, d\mathcal{H}^2$$ for all $x \in Q \setminus \partial Q$. Here, $U(x,r)$ is as in Lemma \[lemm:oscillation\]. From basic properties of pointwise densities of measures (see [@Fed:69 Sec. 2.10.19(3)]), the integrability of $\rho$ and local finiteness of $\mathcal{H}^2$ imply that $$\limsup_{r \rightarrow 0} \frac{1}{r^2} \int_{B(x,2r)} \rho\, d\mathcal{H}^2 < \infty$$ for $\mathcal{H}^2$-almost every $x \in Q$. The result follows by combining the estimates.
Reciprocal lower bound {#sec:lower_bound}
======================
This section is devoted to a proof of Theorem \[thm:main\]. We first state and prove the coarea inequality mentioned above which constitutes the main technical contribution of this paper. This corresponds to Proposition 15.7 in [@Raj:16], where a similar result is proved under the assumption that $X$ has the mass upper bound $\mathcal{H}^2(B(x,r)) \leq Cr^2$. The proof of Proposition \[prop:coarea\_u\], like Proposition 15.7 in [@Raj:16], is based on standard arguments such as that in [@AmbTil:04 Prop. 3.1.5].
\[prop:coarea\_u\] Let $u$ and $\rho$ be as above. For all Borel functions $g: Q \rightarrow [0,\infty]$, $$\int_{[0,1]}^* \int_{\overline{u^{-1}(t)}} g\,d\mathcal{H}^1\,dt \leq 2000 \int_Q g\rho\, d\mathcal{H}^2.$$ Here $\int^*_A a(t)\, dt $ is the upper Lebesgue integral of $a$ over $A$ (see [@Fed:69 Sec. 2.4.2]).
It suffices to consider the case where $g$ is a characteristic function, that is, $g = \chi_E$ for some Borel set $E \subset Q$. Moreover, we may assume that $E$ is open in $Q$. Indeed, for a Borel set $E$ we find open sets $U_j \supset E$, $U_{j+1} \subset U_j$, such that $\mathcal{H}^2(U_j) \to \mathcal{H}^2(E)$. Assuming the proposition for $g=\chi_{U_j}$, we have $$\begin{aligned}
\int_{[0,1]}^* \int_{\overline{u^{-1}(t)}} \chi_E \,d\mathcal{H}^1\,dt &\leq &\int_{[0,1]}^* \int_{\overline{u^{-1}(t)}} \chi_{U_j} \,d\mathcal{H}^1\,dt \leq 2000 \int_Q \chi_{U_j}\rho\, d\mathcal{H}^2 \\
&\longrightarrow& 2000 \int_Q \chi_{E}\rho\, d\mathcal{H}^2.
\end{aligned}$$ So we want to show that $$\label{equ:set_E}
\int_{[0,1]}^* \mathcal{H}^1(\overline{u^{-1}(t)} \cap E) \,dt \leq 2000 \int_E \rho\,d\mathcal{H}^2$$ whenever $E$ is open in $Q$. The proof is divided into two steps, the first dealing with the subset of “good” points of $E$ and the second dealing with the subset of “bad” points. Throughout this proof, all metric balls are considered as subsets of $Q$.
Consider the set $$G = \left\{x \in E: \forall \varepsilon>0, \exists r<\varepsilon, \int_{B(x,10r)} \rho\,d\mathcal{H}^2 \leq 200 \int_{B(x,r)} \rho \,d\mathcal{H}^2\right\}.$$ Fix $\varepsilon>0$. We apply the basic covering theorem ([@Hei:01 Thm. 1.2]) to choose a countable collection of pairwise disjoint balls $B_j = B(x_j,r_j)$ such that $x_j \in G$ and $10r_j \leq \min\{\varepsilon, d(x_j, Q\setminus E)\}$ for each $j$, the collection $\{5B_j\}$ covers $G$, and $$\int_{10B_j} \rho\,d\mathcal{H}^2 \leq 200 \int_{B_j} \rho \,d\mathcal{H}^2$$ for each $j$. We also require that $20r_j < \min\{\operatorname{diam}\zeta_1, \operatorname{diam}\zeta_3\}$ for our application of Lemma \[lemm:oscillation\]. We have $$\sum_j \int_{10B_j} \rho\, d\mathcal{H}^2 \leq \sum_j 200\int_{B_j} \rho\, d\mathcal{H}^2 \leq 200\int_{E} \rho\,d\mathcal{H}^2,$$ where the last inequality follows since by our choice the balls $B_j$ are pairwise disjoint subsets of the open set $E$. For each $j$ fix a measurable set $A_j \supset u(5B_j)$ such that $\mathcal{H}^1(A_j)=\mathcal{H}^1(u(5B_j))$. Moreover, define $g_\varepsilon: [0,1] \rightarrow \mathbb{R}$ by $$g_\varepsilon(t) = \sum_j r_j \chi_{A_j}(t).$$ Integrating and applying Lemma \[lemm:oscillation\] gives $$\int_0^1 g_\varepsilon(t)\,dt = \sum_j r_j \mathcal{H}^1(u(5B_j)) \leq \frac{4}{\pi}\sum_j \int_{10B_j} \rho\, d\mathcal{H}^2.$$ We observe that if $x \in \overline{u^{-1}(t)} \cap G$ for a given $t \in [0,1]$, with $j_x$ such that $x \in 5B_{j_x}$, then of necessity $t \in u(5B_{j_x})$. Hence $\mathcal{H}_\varepsilon^1(\overline{u^{-1}(t)} \cap G) \leq 10g_\varepsilon(t)$, by the definition of Hausdorff $\varepsilon$-content. Letting $\varepsilon \rightarrow 0$ and applying Fatou’s lemma gives $$\int_{[0,1]}^* \mathcal{H}^1(\overline{u^{-1}(t)} \cap G)\, dt \leq 10 \int_0^1 \liminf_{\varepsilon \to 0} g_\varepsilon(t)\, dt \\
\leq 10 \liminf_{\varepsilon \rightarrow 0} \int_0^1 g_\varepsilon(t) \, dt.$$ Combining estimates, we obtain $$\int_{[0,1]}^*\mathcal{H}^1(\overline{u^{-1}(t)} \cap G)\, dt \leq \frac{4 \cdot 2000}{\pi} \int_E \rho\,d\mathcal{H}^2.$$
We turn our attention next to the set $F = E \setminus G$. We claim that $$\label{equ:bad_points}
\int_{[0,1]}^* \mathcal{H}^1(\overline{u^{-1}(t)} \cap F)\,dt = 0.$$ By the definition of $F$, for all $x \in F$ there exists $\varepsilon_x = 10^{-k_x}$ (for some integer $k_x \geq 1$) such that $$\label{equ:bad_iteration}
\int_{B(x,10^{-j})} \rho\,d\mathcal{H}^2 \leq 200^{-1} \int_{B(x,10^{-j+1})} \rho\,d\mathcal{H}^2 \leq \cdots \leq 200^{-(j-k_x)} \int_{B(x,\varepsilon_x)} \rho\,d\mathcal{H}^2$$ for all $j \geq k_x$. For all $k \in \mathbb{N}$, let $F_k = \{x \in F: k_x \leq k\}$. Observe that $F = \bigcup_k F_k$.
Now, fix $k \in \mathbb{N}$ and let $j \geq k$. By definition of the (spherical) Hausdorff measure, there exists a countable collection of balls $B_m=B(x_m, r_m)$ which cover $F_k$, such that $x_m \in F_k$, $r_m \leq \min\{10^{-j}, d(x_m,Q \setminus E),\operatorname{diam}\zeta_1/4, \operatorname{diam}\zeta_3/4\}$, and $\sum 4r_m^2 \leq 4\mathcal{H}^2(F_k)+4/j$. For the last requirement, recall that the spherical Hausdorff 2-measure is at most 4 times the usual Hausdorff 2-measure. For each $m$, let $j_m$ be the largest integer such that $2r_m \leq 10^{-j_m}$. Observe that $10^{-j_m} < 20r_m \leq 20 \cdot 10^{-j}$ and hence that $j_m \geq j-1$.
From Lemma \[lemm:oscillation\] and \[equ:bad\_iteration\] we deduce $$\begin{aligned}
r_m \mathcal{H}^1(u(B_m))& \leq \frac{4}{\pi} \int_{2B_m} \rho\,d\mathcal{H}^2 \leq \frac{4}{\pi} \int_{B(x_m,10^{-j_m})}\rho\,d\mathcal{H}^2 \\
& \leq \frac{4}{\pi} \cdot \frac{1}{200} \int_{B(x_m,10^{-j_m+1})}\rho\,d\mathcal{H}^2 \\
& \leq \cdots \leq \frac{4}{\pi} \cdot \frac{1}{200^{j_m-k}} \int_{B(x_m,10^{-k})} \rho\, d\mathcal{H}^2.
\end{aligned}$$ In particular, $$\label{eq:oscillation}
r_m \mathcal{H}^1(u(B_m)) \leq \frac{4}{\pi} \cdot \frac{200^k}{200^{j_m}} \int_Q \rho\,d\mathcal{H}^2.$$
Similar to the first step of the proof, for each $m$ fix a measurable $A_m \supset u(B_m)$ such that $\mathcal{H}^1(A_m)=\mathcal{H}^1(u(B_m))$ and define $g_j(t) = \sum_m r_m\chi_{A_m}(t)$. Then, as before, the definition of $\mathcal{H}_{1/j}^1$ gives $$\label{nakki}
\mathcal{H}_{1/j}^1(\overline{u^{-1}(t)} \cap F_k) \leq 2g_j(t)$$ for all $t \in [0,1]$. Integrating gives $$\int_0^1 g_j(t)\,dt \leq \sum_m r_m\mathcal{H}^1(u(B_m)).$$ Applying \[eq:oscillation\] and using the relationships $1 < 20\cdot 10^{j_m} r_m$ and $j_m \geq j-1$ gives $$\begin{aligned}
\sum_m r_m\mathcal{H}^1(u(B_m)) & \leq \sum_m \frac{3200}{\pi}\cdot 200^k r_m^2 \left( \frac{100}{200} \right)^{j_m} \int_Q \rho\,d\mathcal{H}^2 \\
& \leq \frac{3200}{\pi}\cdot 200^k \left( \frac{100}{200} \right)^{j} \left(\int_Q \rho\,d\mathcal{H}^2 \right) \sum_m r_m^2 \\
& \leq \frac{3200}{\pi}\cdot 200^k \left( \frac{100}{200} \right)^{j} \left(\int_Q \rho\,d\mathcal{H}^2 \right)\left(\mathcal{H}^2(F_k)+1/j \right) .
\end{aligned}$$ From this we obtain $$\lim_{j \to \infty} \int_0^1 g_j(t) \, dt \leq \lim_{j \to \infty} \frac{3200}{\pi}\cdot 200^k \cdot2^{-j} \left(\int_Q \rho\,d\mathcal{H}^2 \right)\left(\mathcal{H}^2(F_k)+1/j \right)=0.$$ Combining this with Fatou’s lemma and \[nakki\] shows that $\mathcal{H}^1(\overline{u^{-1}(t)} \cap F_k)=0$ for almost every $t$. Since this is true for all $k$, \[equ:bad\_points\] follows.
With Proposition \[prop:coarea\_u\] in hand, the proof of Theorem \[thm:main\] is now simple.
First, observe from Proposition \[prop:coarea\_u\] that $\mathcal{H}^1(\overline{u^{-1}(t)}) < \infty$ for almost every $t \in [0,1]$. Also, as shown in Proposition \[prop:connect\], $\overline{u^{-1}(t)}$ is connected for all $t$ and connects $\zeta_2$ and $\zeta_4$. By Proposition \[prop:existence\_paths\], for almost every $t \in [0,1]$, $\overline{u^{-1}(t)}$ contains a simple rectifiable curve $\gamma_t$ joining $\zeta_2$ and $\zeta_4$ in $Q$. Let $g: Q \rightarrow [0,\infty]$ be an admissible function for $\Gamma_2$. Then $$\label{nakka}
1 \leq \int_{\gamma_t} g \, ds \leq \int_{\overline{u^{-1}(t)}} g\,d\mathcal{H}^1$$ for almost every $0 \leq t \leq 1$. Combining \[nakka\] with Proposition \[prop:coarea\_u\] yields $$1 \leq \int^*_{[0,1]}\int_{\overline{u^{-1}(t)}} g\,d\mathcal{H}^1\,dt \leq \frac{4 \cdot 2000}{\pi} \int_Q g\rho\, d\mathcal{H}^2.$$ By Hölder’s inequality, $$\int_Q g\rho\,d\mathcal{H}^2 \leq \left( \int_Q g^2\,d\mathcal{H}^2 \right)^{1/2} \left( \int_Q \rho^2\, d\mathcal{H}^2 \right)^{1/2} = \left( \int_Q g^2\,d\mathcal{H}^2 \right)^{1/2} (\operatorname{mod}\Gamma_1)^{1/2}.$$ Infimizing over all admissible $g$, we obtain $$\frac{1}{2000^2\cdot (4/\pi)^2} \leq \operatorname{mod}\Gamma_1 \cdot \operatorname{mod}\Gamma_2.$$
We can improve the value of $\kappa$ as follows. For $\delta>0$, a version of the basic covering theorem yields a family of balls $B_j$ with the property that $\{(3+\delta)B_j\}$ covers $G$, instead of $\{5B_j\}$. In the definition of the set $G$ in Proposition \[prop:coarea\_u\], we may then use $B(x,2(3+\delta)r)$ in place of $B(x,10r)$. We also replace the constant 200 with $4(3+\delta)^2 + \delta$. Following the remainder of the proof and letting $\delta \rightarrow 0$ yields the final value of $\kappa = 216^2\cdot (4/\pi)^2$.
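For the reader's convenience, here is the bookkeeping behind the constants (our summary of the argument above). In Proposition \[prop:coarea\_u\], the constant $2000$ is the product of the content factor $10$ (from $\operatorname{diam}(5B_j) \leq 10 r_j$) and the doubling constant $200$; the refined covering replaces these factors by $2(3+\delta)$ and $4(3+\delta)^2+\delta$, respectively:

```latex
10 \cdot 200 = 2000
  \quad\longrightarrow\quad
\lim_{\delta \to 0} 2(3+\delta)\bigl(4(3+\delta)^2+\delta\bigr)
  = 6 \cdot 36 = 216 ,
% so the bound 1 \leq (4 \cdot 2000/\pi) \int_Q g\rho\, d\mathcal{H}^2
% improves to 1 \leq (4 \cdot 216/\pi) \int_Q g\rho\, d\mathcal{H}^2,
% giving \kappa = 216^2 \cdot (4/\pi)^2.
```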
Continuity of $u$ {#sec:continuity_u}
=================
In this section, we strengthen Corollary \[cor:continuity\] by showing that the harmonic function $u$ is continuous on the entire set $Q$. In Theorem 5.1 of [@Raj:16], the continuity of $u$ is proved employing one of the reciprocality conditions. In contrast, we do not assume any of the reciprocality conditions in this section.
First, we need a technical fact. This is proved using Proposition 3.1 in [@Raj:16] (which is a re-statement of Proposition 15.1 in [@Sem:96c]) and an induction and limiting argument.
\[prop:curve\_parametrization\] Let $X$ be a metric space and $E \subset X$ a continuum with $\mathcal{H}^1(E) < \infty$. For all $x, y \in E$, there is a 1-Lipschitz curve $\gamma: [0, 2\mathcal{H}^1(E)] \rightarrow E$ such that $|\gamma| = E$, $\gamma(0) = x$, $\gamma(2\mathcal{H}^1(E)) = y$, and $\gamma^{-1}(z)$ contains at most two points for $\mathcal{H}^1$-almost every $z \in E$.
For this proof, we will let $D$ denote the length metric on $E$ induced by $d$. We write $D_{zw}$ in place of $D(z,w)$. Observe that $D_{zw} < \infty$ for all $z,w \in E$ by Proposition 3.1 in [@Raj:16]. Also, for $z,w \in E$, we use $\gamma_{zw}$ to denote some fixed choice of injective 1-Lipschitz curve in $E$ from $z$ to $w$ whose length attains $D_{zw}$; the existence of at least one such curve is guaranteed by the Hopf-Rinow theorem. Let $L = 2\mathcal{H}^1(E)$.
We will inductively define a sequence of curves $\gamma_j: [0, L] \rightarrow E$. We define first $\gamma_1$ by $$\gamma_1(t) = \left\{ \begin{array}{ll} \gamma_{xy}(t) & 0 \leq t \leq D_{xy} \\ y & D_{xy} \leq t \leq L \end{array} \right. .$$
For the inductive step, assume that $\gamma_j$ has been defined for some $j \in \mathbb{N}$. If $|\gamma_j| = E$, then stop and take $\gamma = \gamma_j$. Otherwise, define $\gamma_{j+1}$ as follows. Let $z_j$ be a point in $E$ maximizing $D$-distance from $|\gamma_j|$. Such a point exists by the compactness of $E$. Let $\gamma_{w_jz_j}$ be a shortest curve from $|\gamma_j|$ to $z_j$, with initial point $w_j \in |\gamma_j|$. Let $t_j$ denote the smallest point in $[0, L]$ for which $\gamma_j(t_j) = w_j$. Define now $\gamma_{j+1}$ by $$\gamma_{j+1}(t) = \left\{ \begin{array}{ll} \gamma_j(t) & 0 \leq t \leq t_j \\ \gamma_{w_jz_j}(t-t_j) & t_j \leq t \leq t_j + D_{w_jz_j} \\ \gamma_{w_jz_j}(t_j+2D_{w_jz_j} - t) & t_j + D_{w_jz_j} \leq t \leq t_j + 2D_{w_jz_j} \\ \gamma_j(t-2D_{w_jz_j}) & t_j + 2D_{w_jz_j} \leq t \leq \ell(\gamma_j) + 2D_{w_jz_j} \\ y & \ell(\gamma_j) + 2D_{w_jz_j} \leq t \leq L \end{array} \right. .$$
Observe that the curve $\gamma_j$ has multiplicity at most 2, except possibly at the points $w_j$. Thus $\ell(\gamma_j) + 2D_{w_jz_j} \leq D_{xy} + \sum_{k=1}^{j} 2 D_{w_kz_k} < 2\mathcal{H}^1(|\gamma_j|) \leq L$. Hence the curve $\gamma_{j+1}$ is well-defined.
We also note that $D(\gamma_{j+1}(t),\gamma_j(t)) \leq 2D_{w_jz_j}$ for all $t \in [0,L]$ and $j \in \mathbb{N}$, and thus the curves $\gamma_j$ converge pointwise to a curve $\gamma: [0, L] \rightarrow E$. By construction, the curve $\gamma$ has multiplicity at most 2, except possibly on the countable set $\{w_j\}$. To see that $|\gamma| = E$, suppose there exists $z \in E \setminus |\gamma|$. But then $D(z,|\gamma|) > 0$. In particular, there exists $j \in \mathbb{N}$ with $D(w_j,z_j) < D(z, |\gamma_j|)$, contradicting the maximality of the choice of $z_j$.
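The inductive construction above is essentially an algorithm: repeatedly attach the farthest not-yet-covered point by a shortest path, traversing the detour there and back. The following sketch (ours, for illustration only) runs the same procedure on a finite metric tree, with graph distance standing in for the length metric $D$:

```python
# Sketch of the doubling-back construction of Proposition
# [prop:curve_parametrization] on a finite graph (illustration only; the
# names and data structures are ours).  `adj` maps each vertex to the set
# of its neighbours; vertices play the role of points of E.
from collections import deque

def shortest_path(adj, sources, target):
    """Multi-source BFS; returns a vertex path from some source to target."""
    prev = {s: None for s in sources}
    queue = deque(sources)
    while queue:
        v = queue.popleft()
        if v == target:
            path = []
            while v is not None:
                path.append(v)
                v = prev[v]
            return path[::-1]
        for w in adj[v]:
            if w not in prev:
                prev[w] = v
                queue.append(w)
    raise ValueError("target not reachable")

def surjective_curve(adj, x, y):
    """Curve from x to y whose image is the whole vertex set."""
    curve = shortest_path(adj, {x}, y)            # the initial curve gamma_1
    while set(curve) != set(adj):
        # z_j: a point maximizing distance from the current image
        z = max((v for v in adj if v not in set(curve)),
                key=lambda v: len(shortest_path(adj, set(curve), v)))
        detour = shortest_path(adj, set(curve), z)   # from w_j out to z_j
        i = curve.index(detour[0])                   # smallest t with gamma(t) = w_j
        # splice in the detour, traversed there and back
        curve = curve[:i] + detour + detour[-2::-1] + curve[i + 1:]
    return curve
```

On a star-shaped graph the attachment vertex is revisited once per detour, which matches the caveat that multiplicity $2$ may fail at the points $w_j$.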
We proceed now to the main result of this section.
\[thm:continuity\] The function $u$ is continuous in $Q$.
For all $t \in [0,1]$ such that $\mathcal{H}^1(\overline{u^{-1}(t)}) < \infty$, let $\gamma_t$ denote a curve connecting $\zeta_2$ to $\zeta_4$ whose image is $\overline{u^{-1}(t)}$ satisfying the conclusions of Proposition \[prop:curve\_parametrization\]. By Lemma 4.3 in [@Raj:16], $u$ is continuous on each $\gamma_t$ except on a curve family of modulus zero. Observe that $$\int_{\gamma_t} g\,ds \leq 2\int_{\overline{u^{-1}(t)}}g\, d\mathcal{H}^1$$ for each $t$ such that $\gamma_t$ is defined, for any Borel function $g:Q \rightarrow [0, \infty]$. From this fact and the coarea inequality Proposition \[prop:coarea\_u\], it follows that $u$ is continuous on $\gamma_t$ for every $t \in E$, where $E \subset [0,1]$ has full measure.
Suppose for contradiction that $u$ is not continuous at the point $x \in Q$. Let $s_1 = \liminf_{y \rightarrow x} u(y)$ and $s_2 = \limsup_{y \rightarrow x} u(y)$; then $0 \leq s_1 < s_2 \leq 1$. Take $\varepsilon$ satisfying $0 < \varepsilon < (s_2-s_1)/2$. Then $x \in A_1 \cap A_2$, where $A_1= \overline{u^{-1}([s_1-\varepsilon, s_1+\varepsilon])}$ and $A_2 = \overline{u^{-1}([s_2-\varepsilon,s_2+\varepsilon])}$. Pick $t_1,t_2 \in (s_1 + \varepsilon, s_2 - \varepsilon) \cap E$ with $t_1 < t_2$. Observe that $Q \setminus |\gamma_{t_1}|$ consists of two disjoint relatively open sets $U_1, U_2 \subset Q$, where each component of $U_1$ intersects $\zeta_1$ and each component of $U_2$ intersects $\zeta_3$. Lemma \[lemm:maximum\_principle\] implies that $A_1 \subset \overline{U}_1$ and that $A_2 \subset \overline{U}_2$. This shows that $x \in \overline{U}_1 \cap \overline{U}_2$ and hence that $x \in |\gamma_{t_1}|$. Since $u^{-1}(t_1)$ is a dense subset of $|\gamma_{t_1}|$, we see that $u(x) = t_1$. However, the same argument shows that $u(x) = t_2$, giving a contradiction.
[**Acknowledgement.**]{} We are grateful to Toni Ikonen, Atte Lohvansuu, Dimitrios Ntalampekos, Martti Rasimus and the referee for their comments and corrections.
[^1]: The first author was supported by the Academy of Finland, project number 308659. The second author was partially supported by the Academy of Finland grant 288501 and by the ERC Starting Grant 713998 GeoMeG. Primary 30L10, Secondary 30C65, 28A75.
---
abstract: 'As of today, abuse is a pressing issue for participants and administrators of Online Social Networks (OSN). Abuse in Twitter can stem from arguments intended to influence the outcome of a political election, the use of bots to automatically spread misinformation, and, generally speaking, activities that [*deny*]{}, [*disrupt*]{}, [*degrade*]{} or [*deceive*]{} other participants and/or the network. Given the difficulty in finding and accessing a large enough sample of abuse ground truth from the Twitter platform, we built and deployed a custom crawler that we use to judiciously collect a new dataset from the Twitter platform with the aim of characterizing the nature of abusive users, a.k.a. abusive “birds”, in the wild. We provide a comprehensive set of features based on users’ attributes, as well as social-graph metadata. The former includes metadata about the account itself, while the latter is computed from the social graph between the sender and the receiver of each message. Attribute-based features are useful to characterize users’ accounts in OSN, while graph-based features can reveal the dynamics of information dissemination across the network. In particular, we derive the Jaccard index as a key feature to reveal the benign or malicious nature of directed messages in Twitter. To the best of our knowledge, we are the first to propose such a similarity metric to characterize abuse in Twitter.'
author:
-
-
-
title: 'Trollslayer: Crowdsourcing and Characterization of Abusive Birds in Twitter'
---
Introduction
============
Users of OSN are exposed to abuse by other participants, who typically send their victims harmful messages designed to [*deny*]{}, [*disrupt*]{}, [*degrade*]{} or [*deceive*]{}, among other aims, as reported in the leaked JTRIG methods for online cyberwarfare [@JTRIGs]. In Twitter, these practices have a non-negligible impact on the manipulation of political elections [@Ferrara2015], the fluctuation of stock markets [@Bollen2011], and even the promotion of terrorism [@twitter-suspension]. In the current turmoil of fake news and hate speech, we require a global definition of "abuse", and we find the JTRIG definition above broad enough to cover all types of abuse observed in OSN as of today. Secondly, to identify abuse the Twitter platform often relies on participants reporting such incidents. This is also the case in other OSN such as Facebook, as suggested by the large number of false positives encountered by [@boshmafbots2011] in the Facebook Immune System [@immune]. In addition, Twitter suspending abusive participants can be seen as censorship, as it effectively limits the free speech of Internet users. Finally, users' privacy is an increasing concern in large OSN. Privacy often clashes with efforts to reduce abuse in these platforms [@FrenchCourt], because even disclosing metadata that holds individuals accountable in such cases violates the fundamental right to privacy according to the Universal Declaration of Human Rights [@UN]. In the same vein, and back to the Twitter platform, we observe a constant trading away of individuals' privacy to grant governments access to private metadata. This endangers citizens' well-being and puts them in the spotlight for law enforcement to charge them with criminal offenses, even when no serious offense has been committed [@caution:2012].
The main contribution of this paper is a large-scale study of the dynamics of abuse in a popular online social micro-blogging media platform, Twitter. For that, we collect a dataset where we annotate a subset of the messages received by potential victims of abuse in order to characterize and assess the prevalence of such malicious messages and participants. Also, we find it revealing to understand how humans agree or not in what represents abuse during the crowd sourcing. In summary, the aim of the study is to answer the following research questions (RQ):
**RQ.1:** Can we obtain relevant abuse ground truth from a large OSN such as Twitter using BFS (Breadth-First-Search) sampling for data collection and crowd-sourcing for data annotation? We show statistics about the collected dataset and the annotated dataset respectively.
**RQ.2:** Does it make sense to characterize abuse from a victim’s point of view? We provide a list of user attributes (local) and graph-based (global) features that can characterize abusive behavior.
**RQ.3:** What are the dynamics of abusive behavior? Does it appear as an isolated incident or is it somehow organized? We show that the source of several messages comes from an automated social media scheduling platform that redirects Twitter users to a doubtful site about a fund-raising campaign for a charity (considered as deceive in the abuse definition we employ).
Victim-Centric Methodology
==========================
In order to collect data from Twitter we adapt the usual BFS for crawling social media and start from a sufficiently representative number of accounts for our measurement, which we call the victims' seed set. The first half of the accounts are likely victims, chosen independently of any sign or trace of abuse in their public Twitter timeline in order to account for randomness in the measurements. The second half is selected based on their public timeline containing traces or likelihood of abuse, namely potential victims of abuse. Therefore, we define the seed set as made up of potential victims and likely victims. We then bootstrap our crawler, following the recursive procedure in Algorithm \[algo:bfs\], which collects messages directed towards each of the seeds. If a message is directed towards or mentions two or more victims, we consider it several times for the same message sender but with different destinations. We also collect the subscription and subscriber accounts of sender and receiver in the Twitter social graph, namely follower and followee relationships.
Data model {#datamodel}
----------
Consider a seed set of nodes forming a graph $\mathcal{G}_s$=$(\mathcal{V}_s, \mathcal{E}_s)$ that contains the nodes in the seed set (victims) and their potential perpetrators as the two entities defining the edge relationships in $\mathcal{E}_s$. Given that $\mathcal{G}_s$ is a directed graph made of vertices $(\mathcal{V}_s)$ and edges $(\mathcal{E}_s)$, each edge defining a connection or a message sent between a pair of nodes $(u,v)$, we derive two specialized directed graphs with their corresponding relationships: messaging and social follow in the network.
Firstly, let $\mathcal{G}_f$=$(\mathcal{V}_f, \mathcal{E}_f)$ be a directed graph of social relationships where the vertices $\mathcal{V}_f$ represent users and a set of directed edges $\mathcal{E}_f$ representing subscriptions:
$$\mathcal{E}_f \coloneqq \{ (u, v) \mid u \textrm{ publicly follows } v\}$$
Secondly, let $\mathcal{G}_m$=$(\mathcal{V}_m, \mathcal{E}_m)$ be a directed messaging multi-graph with a set of users as vertices $\mathcal{V}_m$, and a set of directed edges representing messages sent by user $u$ mentioning user $v$:
$$\mathcal{E}_m \coloneqq \{ (u, v) \mid u \textrm{ messages } v\ \textrm{with a public mention} \} $$
$\mathcal{E}_m$ models the tweets that are shown to users with or without explicit subscription by the recipient to the sender. Thus, these messages represent a vector for abusive behavior.
To bootstrap our crawler, we start with the mentioned *seed set* and run an adapted, recursive *bounded breadth-first-search* (bBFS) procedure on the Twitter input seeds, up to a maximum depth [*maxdepth*]{} passed as a parameter. In Algorithm \[algo:bfs\] we summarize the operational mode of [*bBFS*]{}.
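The traversal in Algorithm \[algo:bfs\] can be sketched as follows. This is a minimal offline illustration, not the crawler itself: `fetch_followers` is a hypothetical stand-in for the Twitter API call, injected so the bounded traversal can be shown without network access; `maxdepth` and `maxfollowers` mirror the parameters described above.

```python
from collections import deque

def bbfs(seeds, fetch_followers, maxdepth=2, maxfollowers=10000):
    """Bounded BFS over the follower graph, honoring bBFS's
    maxdepth and maxfollowers boundaries."""
    visited = set(seeds)
    queue = deque((s, 0) for s in seeds)
    edges = []  # (follower, followee) pairs collected into G_f
    while queue:
        user, depth = queue.popleft()
        if depth >= maxdepth:
            continue  # never expand beyond second-degree followers
        followers = fetch_followers(user)
        if len(followers) > maxfollowers:
            continue  # skip overly popular nodes
        for f in followers:
            edges.append((f, user))
            if f not in visited:
                visited.add(f)
                queue.append((f, depth + 1))
    return edges
```

In the real crawler the same loop would additionally collect account and timeline metadata for each visited node.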
Boundaries of the data crawl
----------------------------
The configuration of the crawler controls where the crawl starts and puts restrictions on where it should stop. The first such restriction during the graph traversal is that incoming edges, a.k.a. followers in Twitter, are only collected when their number does not exceed an upper bound on node popularity given by the chosen [*maxfollowers*]{}. Secondly, followers must lie within a maximum depth we call [*maxdepth*]{} in order for their related metadata in the graph to be collected.
For each node meeting the above constraints, we also collect user account metadata as well as the metadata of their public timeline of messages in Twitter; then we crawl the followers of nodes at depth 1, next at depth 2 (followers of followers), and so on as set by the aforementioned parameter. In our dataset, we never go further than second-degree followers to collect relationships among users in the social graph crawled.
Data annotation {#gt}
---------------
To annotate abuse we developed an in-house crowd-sourcing platform, [*Trollslayer*]{} [^1], where we enlisted ourselves and various colleagues to assist with the tedious effort of annotating abuse. We then enlarged our annotations with the support of a commercial crowd-sourcing platform named [*Crowdflower*]{}, where we spent around \$30 in credit using a student "data for everyone" pack. In the crowd-sourcing process we account for scores collected from 156 crowd workers in [*Crowdflower*]{} and 7 trusted crowd workers in [*Trollslayer*]{}, 163 crowd workers overall. The two platforms display the same tweets and the same guidelines to the crowd workers that annotate messages. Therefore, we are able to compute global scores from both platforms on the same tweets, ending up with at least 3 annotations per tweet in the worst case.
Dataset
=======
We have judiciously collected a dataset from Twitter in order to characterize abuse on the platform. Using crowd workers we obtain abuse ground truth. Next we extract relatively simple features from the collected dataset. Given that the features are largely based on data that is available in the proximity of the potential victim, we aim to characterize the distribution of abuse in an online micro-blogging platform from the point of view of the victim. This also avoids the large-scale data mining that can only be effectively performed by the micro-blogging service providers themselves.
Statistics {#sub:dataset-stats}
----------
Table \[table:crawl\] shows statistics about the dataset collected, such as the number of tweets directed toward the list of victims in our seed set. In total, we account for 1648 tweets directed to our seed set at depth 1. We then show the same statistics organized by *depth* in the recursive crawl performed to obtain the dataset. Note that for the purpose of the statistical analysis of the dataset and the findings presented here, we only take into consideration nodes for which the social graph has been fully collected. Due to the Twitter Terms and Conditions (TTC), we plan to make public only the identifiers of the annotated messages, but not the rest of the information associated with the message or graph, nor private information that identifies the crowd workers.
[max width=]{}
-------------------------------------------------------- ------ ------ ----- -----
$\mathcal{E}_s \in \mathcal{G}_s$ directed to seed set – –
$\mathcal{E}_m \in \mathcal{G}_m$
\# with mentions 567
\# with mentions & retweets 113 0
\# with mentions & replies 1183 1026 292 284
\# $\mathcal{E}_f \in \mathcal{G}_f$ 0
-------------------------------------------------------- ------ ------ ----- -----
: Basic statistics of the data crawled[]{data-label="table:crawl"}
### Ground Truth {#sub:agreement}
Following the voting scheme we explain here, we aggregate the votes received for each tweet into a consensus score. We take a pessimistic approach to ensure that a single vote is not decisive in the evaluation of a tweet as abusive (e.g., unlike in Brexit affairs). That is, if the aggregated score is between -1 and 1 the message is considered [*undecided*]{}. The sum of scores renders a tweet [*abusive*]{} in the ground truth when it is $>1$ and [*acceptable*]{} when it is $<-1$. The final annotated dataset is comprised of labeled messages, out of which are marked as acceptable and as abusive and undecided.
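The pessimistic aggregation can be written down directly. This is our own sketch, under the convention that each crowd vote is encoded as +1 for abusive, -1 for acceptable and 0 for undecided:

```python
def aggregate(votes):
    """Consensus label from crowd votes (+1 abusive, -1 acceptable,
    0 undecided).  Sums in [-1, 1] stay undecided, so no single vote
    can tip a tweet into the abusive class."""
    score = sum(votes)
    if score > 1:
        return 'abusive'
    if score < -1:
        return 'acceptable'
    return 'undecided'
```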
![Agreement in ground truth by platform[]{data-label="fig:hb-scores"}](fig-new-converted/score_abuse_acceptable_boxplots-eps-converted-to){width="\columnwidth"}
Figure \[fig:hb-scores\] shows the result of crowdsourcing the abuse annotation when asking crowd workers to mark messages as either abusive, acceptable or undecided. Agreement is high in both platforms, even for abusive messages, though as expected lower than for acceptable ones due to perfect disagreement on a number of tweets such as the ones we show in Table \[table:disagreement\]. There are tweets with perfect disagreement in Trollslayer out of around annotated, in Crowdflower out of , and in the aggregate out of mentioned above, accounting for the aggregated voting of all annotations from both platforms. Generally speaking, we see an upper bound of about 3.75% disagreement for Crowdflower, 2% in Trollslayer and a lower bound of 1.3% among both, which highlights the importance of employing a minimal set of trusted crowd workers in the annotations (as we did with Trollslayer).
### Agreement
To ensure agreement among crowd workers is valid, we calculate the inter-assessor agreement score of Randolph's multi-rater kappa [@randolph2005free] among the crowd workers with common tweets annotated. Similarly to Cohen's kappa or Fleiss' kappa, Randolph's kappa is a descriptive statistic used to measure the nominal inter-rater agreement between two or more raters in collaborative science experiments. We choose Randolph's kappa over the others following Brennan and Prediger's suggestion from 1981 of using free-marginal kappa when crowd workers can assign a free number of cases to each category being evaluated (e.g., [*abusive*]{}, [*acceptable*]{}) and fixed-marginal kappa otherwise [@brennan1981coefficient]. Our case has different crowd workers assigning a different number of annotations to each class or category, which satisfies Randolph's kappa requirement.
Note that in contrast to simple agreement scores, descriptive statistics consider agreement on all three possibilities, [*abusive*]{}, [*acceptable*]{} and [*undecided*]{}, thus providing a more pessimistic measure of agreement among crowd workers. There are a number of such descriptive statistics [@Warrens2010], e.g. Light's kappa and Hubert's kappa, which are multi-rater versions of Cohen's kappa. Fleiss' kappa is a multi-rater extension of Scott's pi, whereas Randolph's kappa generalizes Bennett's $S$ to multiple raters.
Given this setting, values of kappa can range from -1.0 to 1.0, with -1.0 meaning complete disagreement below chance, 0.0 meaning agreement equal to chance, and 1.0 indicating perfect agreement above chance. According to Randolph, a kappa above 0.60 usually indicates very good inter-rater agreement. Across all annotations we obtain an overall agreement of 0.73 and a Randolph's free-marginal kappa of 0.59, which is close to the recommended value (0.60).
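For reference, the free-marginal statistic is $\kappa_{free} = (P_o - 1/k)/(1 - 1/k)$, with $P_o$ the observed proportion of agreeing rater pairs and $k$ the number of categories ($k=3$ here). A minimal sketch of the computation (our own helper, taking per-tweet category counts as input):

```python
def randolph_kappa(ratings, k=3):
    """Randolph's free-marginal multi-rater kappa.

    `ratings` holds one list of category counts per item, e.g.
    [2, 1, 0] = two raters said abusive, one acceptable, none undecided.
    """
    agree_pairs, all_pairs = 0, 0
    for counts in ratings:
        n = sum(counts)
        agree_pairs += sum(c * (c - 1) for c in counts)  # agreeing rater pairs
        all_pairs += n * (n - 1)                         # all ordered rater pairs
    po = agree_pairs / all_pairs
    return (po - 1.0 / k) / (1.0 - 1.0 / k)
```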
[|c|c|]{} Time & Text\
2015-11-26 20:51:49 &\
2015-11-23 20:41:52 &\
2015-11-29 11:59:25 &\
We inspect some of the annotations manually and discover that some scores aggregate to undecided rather than abusive because crowd workers annotated several of these tweets serially as undecided. That shows the cognitive difficulty of annotating abuse, or the tedious nature we mentioned before (despite having rewarded the crowd workers in both platforms). On the other hand, we noticed that it is easy for crowd workers to spot offensive messages containing [*hate speech*]{} or similar (which is indeed abuse, but only a subset according to the [*JTRIG*]{} definition) but not so for deceitful messages or content.
Characterization of Abuse
=========================
This section shows that our method can indeed capture all types of abusive behavior in Twitter and that, while humans still have a hard time identifying deceitful activity as abuse, our latest findings suggest using network-level features to identify some abuse automatically instead.
Incidents
---------
We find several cases of perfect disagreement among crowd workers, see Table \[table:disagreement\]; in others, some of the actual abusive "birds" are just too difficult for humans to spot given a single tweet, but become more apparent if we inspect an exhaustive list of similar messages from the potential perpetrator's timeline, as shown in Table \[table:deceitful\]. In that case the abusive "bird" repeatedly mentions the same users through the special character "@" that Twitter provides for directing public messages to other participants. Besides, he repeatedly adds a link to a doubtful fund-raising campaign.
[|c|c|c|c|]{} Time & Text & Mentions & Hashtags\
2015-12-11 23:16:25 & & &\
2015-12-11 23:16:27 & & &\
We investigate the owner of the Twitter public profile `@jrbny`: titled “Food Service 4 Rochester Schools”, which is also related to a presumed founder `@JohnLester` and both belonging to “Global Social Entrepreneurship”.
Firstly, we look into the JSON data of the tweet and check the value of the field [*source*]{} in the Twitter API, confirming that it points to "https://unfollowers.com", which in turn redirects to "https://statusbrew.com/", a commercial site to engage online audiences through social media campaigns. This confirms our suspicions about the nature of the profile and its use for a public fundraising campaign. After a quick inspection of the products offered by this social media campaign management site, we see that the site indeed offers an option to automatically "schedule content" for publishing tweets online. In summary, this Twitter account is controlled by humans but uses an automatic scheduling service to post tweets and presumably follow/unfollow other accounts in the hope of obtaining financial donations through an online website. Secondly, expanding the shortened URL linked in tweets such as the ones from Table \[table:deceitful\], we find that the user is indeed redirected to a donation website [^2] of this organization. The site is hosted in Ontario and belongs to the Autonomous System AS62679, namely *Shopify, Inc.*, which reportedly serves several domains distributing malware. We also acknowledge the difficulty of automating the crowdsourcing and characterization of the [*deceive*]{} type of abuse. Finally, in order to highlight the effect of automated campaign management tools in the above case, we crawled the same profile again on 2016-01-10 at 23:02:59: the account had only 16690 followers compared to the current 36531 as of January 2017, showing a successful use of semi-automated agents on Twitter for fund-raising activities.
Features of Abusive Behavior
----------------------------
In order to characterize abuse we extract and build a set of novel features, categorized as [*Attribute*]{}- or [*Graph*]{}-based, which measure abuse in terms of [*Message*]{}, [*User*]{}, [*Social*]{} and [*Similarity*]{} metadata. We apply Extraction, Transformation and Loading (ETL) to the raw data in order to obtain the inputs to each of the features in those subcategories. The most readily available properties are extracted from the *tweet* itself. We also capture a number of raw inputs in the tweet that identify the features for a particular *user*. The next, and more complex, subset of features involves [*Social*]{} graph metadata, which also enables the computation of the novel [*Similarity*]{} feature subset, namely the Jaccard index ($\mathcal{J}$). Table \[table:features\] summarizes the complete set of features we have developed to evaluate abusive behavior in Twitter.
[max width=0.8]{}
Metadata Feature Description
-- ---------- ------------------------------------------------------------- ----------------------------------------------------------------------------------
\# mentions mentions count in tweet
\# hashtags hashtag count in the tweet
\# retweets times a message has been reposted
is\_retweet (true/false) message is a repost
is\_reply (true/false) message is a reply
sensitive message links to external URL
\#badwords number of swear words from Google [@googlebadwords]
               $\nicefrac{\text{\# replies}}{\text{\# tweets of user}}$      fraction of replies to tweets
verified (true/false) sender account is verified by Twitter
\# favorites \# tweets marked as favorites by sender
age of user account days since account creation
\# lists number of lists of sender
               $\nicefrac{\text{\# messages}}{\text{age of user}}$           tweets per day
               $\nicefrac{\text{\# mentions}}{\text{age of user}}$           mentions per day
               $\nicefrac{\text{\# mentions}}{\text{\# tweets of user}}$     ratio of mentions to tweets
account recent check if account age is $<=$ 30 days
\# subscriptions$^s$ followee count from public feed of sender
\# subscribers$^s$ follower count to public feed of sender
$\nicefrac{\text{\# subscribers}}{\text{age}}$ ratio of subscribers count to age of sender
$\nicefrac{\text{\# subscriptions}}{\text{age}}$ ratio of subscriptions count to age of sender
$\nicefrac{\# \text{subscriptions}}{\# \text{subscribers}}$ ratio of subscriptions count to subscribers of sender
$\nicefrac{\# \text{subscribers}}{\# \text{subscriptions}}$ ratio of subscribers count to subscriptions of sender
reciprocity true if bi-directional relationship among sender and receiver in $\mathcal{G}_f$
$\mathcal{J}$ (subscriptions$^s$, subscriptions$^r$) $\mathcal{J}$ of sender & receiver subscriptions
$\mathcal{J}$ (subscribers$^s$, subscribers$^r$) $\mathcal{J}$ of sender & receiver subscribers
$\mathcal{J}$ (subscriptions$^s$, subscribers$^r$) $\mathcal{J}$ of subscriptions of sender & subscribers of receiver
$\mathcal{J}$ (subscribers$^s$, subscriptions$^r$) $\mathcal{J}$ of subscribers of sender & subscriptions of receiver
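The four $\mathcal{J}$-based features at the bottom of the table reduce to set operations on the collected follower/followee id sets. A toy sketch (ours; the `graph` mapping from a user to its (subscriptions, subscribers) sets is a hypothetical stand-in for the crawled social-graph data):

```python
def jaccard(a, b):
    """Jaccard index of two id sets; 0 by convention when both are empty."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def similarity_features(sender, receiver, graph):
    """Compute the four J features for a (sender, receiver) message edge."""
    s_out, s_in = graph[sender]    # subscriptions, subscribers of sender
    r_out, r_in = graph[receiver]  # subscriptions, subscribers of receiver
    return {
        'J(subscriptions_s, subscriptions_r)': jaccard(s_out, r_out),
        'J(subscribers_s, subscribers_r)': jaccard(s_in, r_in),
        'J(subscriptions_s, subscribers_r)': jaccard(s_out, r_in),
        'J(subscribers_s, subscriptions_r)': jaccard(s_in, r_out),
    }
```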
To visualize the data distribution of the most relevant features from Table \[table:features\] in detail we show the complementary cumulative distribution function (CCDF), which gives for each value $x$ on the x axis the probability $P(X \geq x)$ on the y axis that the feature takes a value of at least $x$. We use the CCDF in log-log scale to be able to pack a large range of values within the axes of the plot.
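A minimal sketch of the empirical CCDF behind these plots (our own helper; in practice the resulting $(x, P(X \geq x))$ pairs would then be drawn in log-log scale, e.g. with matplotlib's `loglog`):

```python
def ccdf(values):
    """Empirical CCDF: for each distinct x, P(X >= x) = #{v >= x} / n."""
    n = len(values)
    return [(x, sum(1 for v in values if v >= x) / n)
            for x in sorted(set(values))]
```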
We compare the characteristic distributions of abusive and acceptable content in our annotated dataset; in the CCDF figures, the dotted line represents abusive content while the continuous one represents acceptable content.
For the [*Attribute*]{}-based features, the most significant gap between acceptable and abusive lies in the [*Message*]{} category, in particular the number of replies that a sender has authored, meaning that abusive "birds" reply more often and seek controversy as part of their public speech in Twitter. This makes sense from a "trolling" perspective if we consider that the definition of a troll is a user that posts controversial, divisive and at times inflammatory content. Secondly, and contrary to what we expected, we observe that humans agree on abuse when there are fewer receivers or mentioned users, so agreed-upon abuse is less likely to be directed at multiple victims. Otherwise, Table \[table:disagreement\] shows that no agreement is reached with multiple targets when a message addresses users as a group, since it cannot be construed as a personal attack on the potential victim. We see this as an indication of perpetrators disguising messages to their victims in order to decrease the visibility of their abusive behavior.
Finally, the distribution of the "badwords" feature shows that at least one "badword" exists in many of the tweets annotated as abusive by our crowd workers, with a light-tailed distribution assigning smaller probabilities to larger numbers of "badwords". This confirms, firstly, that human crowd workers are notably good at flagging abusive content when it is related to the language itself and, secondly, that the abusive messages flagged by humans did not contain many "badwords". That is also confirmed by the fact that "badwords" have a negligible value in the distribution of acceptable messages for this feature. Concerning hashtags, on the contrary, the CCDF mostly shows acceptable messages, indicating that messages from our ground truth flagged as abusive barely contain any hashtags.
We observe that some of the similarity features in the [*graph-based*]{} category exhibit a distinguishable pattern between acceptable and abusive messages. In particular, this is the case for [*mutual subscribers*]{} and [*mutual subscriptions*]{}, where the feature is calculated using [*Social*]{} graph metadata from a pair of users, namely sender and receiver. The most interesting CCDF is perhaps that of [*mutual subscriptions*]{}, Figure \[fig:ccdf-followees-followees\], in which there is a significant initial gap between the social graph of acceptable and abusive messages in the log probability ($P(X>x)$) on the y axis for nearly two-thirds of the distribution. Note that here the maximum value of the axis runs from zero to $10^0$ given that we compute similarity using Jaccard. Considering that we did not present crowd workers with information about the social graph, it is quite surprising that some of these graph-based features show such a characteristic pattern.
Related Work
============
This section covers works similar to ours, grouped into the categories of the following subsections.
Graph-based
-----------
To characterize abuse without considering the content of the communication, graph-based techniques have been proven useful for detecting and combating dishonest behavior [@Ortega2013] and cyberbullying [@Galan-Garcia2014], as well as to detect fake accounts in OSN [@Cao2012]. However, they suffer from the fact that real-world social graphs do not always conform to the key assumptions made about the system. Thus, it is not easy to prevent attackers from infiltrating the OSN or micro-blogging platform in order to deceive others into befriending them. Consequently, these Sybil accounts can still create the illusion of being strongly connected to a cluster of legitimate user accounts, which in turn would render such graph-based Sybil defenses useless. On the other hand and yet in the context of OSN, graph-based Sybil defenses can benefit from supervised machine learning techniques that consider a wider range of metadata as input into the feature set in order to predict potential victims of abuse [@boshmaf2015thwarting]. Facebook Immune System (FIS) uses information from user activity logs to automatically detect and act upon suspicious behaviors in the OSN. Such automated or semi-automated methods are not perfect. In relation to the FIS, [@boshmafbots2011] found that only about 20% of the deceitful profiles they deployed were actually detected, which shows that such methods result in a significant number of false negatives.
Victim-centric
--------------
The data collection in [@garcia2016discouraging] was partially inspired by the idea of analyzing the victims of abuse to eventually aid individual victims in the prevention and prediction of abusive incidents in online forums and micro-blogging sites such as Twitter. One observation from previous research [@boshmaf2015integro] that we have embedded into some of our features is that abusive users can only befriend a fraction of real accounts. In the case of Twitter, that would mean having bidirectional links with legitimate users. We capture that intuition during data collection by scraping in real time the messages containing mentions of other users ([@user]{}), and thus we are able to extract features such as the ratio of follows sent/received, mutual subscribers/subscriptions, etc.
Natural Language Processing and text based
------------------------------------------
Firstly, previous datasets in this area are either not yet released or too immature to verify their applicability as an abuse ground-truth gold standard. The authors of [@nobata2016abusive] claim to outperform deep learning techniques in detecting hate speech, derogatory language and profanity. They compare their results with a previous dataset from [@Djuric:2015] and assess the accuracy of detecting abusive language with distributional semantic features, finding that accuracy largely depends upon the evolution of the content that abusers post on the platform, otherwise requiring the model to be retrained.
Finally, it is worth mentioning that we do not include sentiment analysis inputs in our feature set as [@slangsd] did, simply because we are interested in complex types of abuse that require more than just textual content analysis. Additionally, we have noticed that while some words or expressions may seem abusive at first (e.g., vulgar language), they are not when the conversation takes place between participants that know each other well or are mutually connected in the social graph (e.g., family relatives).
Other datasets
--------------
Following the above classifications, we compile a number of previous works [@de2010does; @cha2010measuring; @kwak2010twitter; @gabielkov2012] that collected a large portion of the Twitter graph for its characterization, but not with abusive behavior in mind. Note that some of these datasets could provide some utility through their social graph for characterizing abusive behaviour, but they are either anonymized or we were not able to get access to them. Naturally, social-graph metadata is not available due to restrictions imposed by the Twitter Terms and Conditions (TTC) on data publishing. We also find the Impermium dataset, from a public Kaggle competition [@impermium-dataset], which provides the text of a number of tweets and labels classifying such messages as insults or not. This can be useful for textual analysis of abuse (only for non-subtle insults), supported by NLP-based techniques, but it does not contain any social-graph metadata of the kind we use in our characterization of abuse. Besides, as the tweet identifiers in the Impermium dataset are anonymized, it is not possible to reproduce the data collection.
Conclusion
==========
We concluded that identifying abuse is a hard cognitive task for crowd workers and that it requires specific guidelines to support them. It is also necessary to provide a platform such as the one we created, or questionnaires, asking crowd workers to flag a tweet as abusive if it falls within any of the categories of the guidelines, in our case the 4 D's of JTRIG: [*deny*]{}, [*disrupt*]{}, [*degrade*]{}, [*deceive*]{}. A crowd worker provides a non-binary input value from [*acceptable*]{}, [*abusive*]{}, [*undecided*]{} to annotate tweets from $\mathcal{E}_m$; the latter option is important because, even with relatively clear guidelines, crowd workers are often unsure whether a particular tweet is abusive. To further compensate for this uncertainty, each tweet has been annotated multiple times by independent crowd workers (at least 3). We highlight the reasons for the disagreement we encountered by listing a few tweets in Table \[table:disagreement\]. Table \[table:deceitful\] contains metadata from a user that consistently tweets from a third-party tweet-scheduling service.
Additionally, using the set of features presented here one could provide semi-automated abuse detection to help humans act as judges of abuse. Filtering "badwords" is not enough to judge a user as abusive, so in order to provide better context to human crowd workers one could imagine coupling the score of attribute-based features with graph-based features that capture the implicit nature of the relationships between senders and receivers of the content, thus flagging messages or users as abusive "birds" (or not) in Twitter. This would also make abuse annotation a less tedious and self-damaging task for the human crowd workers reading abusive content.
Acknowledgments {#acknowledgments .unnumbered}
===============
Thank you to the anonymous Trollslayer crowd workers.
[^1]: <https://github.com/algarecu/trollslayer>
[^2]: Campaign site: [www.pureheartsinternational.com](www.pureheartsinternational.com)
|
---
abstract: 'Preliminary results are presented from a simple, single-antenna experiment designed to measure the all-sky radio spectrum between 100 and 200 MHz. The system used an internal comparison-switching scheme to reduce non-smooth instrumental contaminants in the measured spectrum to 75 mK. From the observations, we place an initial upper limit of $450$ mK on the relative brightness temperature of the redshifted 21 cm contribution to the spectrum due to neutral hydrogen in the intergalactic medium (IGM) during the epoch of reionization, assuming a rapid transition to a fully ionized IGM at a redshift of 8. With refinement, this technique should be able to distinguish between slow and fast reionization scenarios. To constrain the duration of reionization to $\Delta z>2$, the systematic residuals in the measured spectrum must be reduced to 3 mK.'
author:
- 'Judd D. Bowman, Alan E. E. Rogers, and Jacqueline N. Hewitt'
title: Toward Empirical Constraints on the Global Redshifted 21 cm Brightness Temperature during the Epoch of Reionization
---
Introduction
============
The transition period at the end of the cosmic “Dark Ages” is known as the epoch of reionization (EOR). During this epoch, radiation from the very first luminous sources—early stars, galaxies, and quasars—succeeded in ionizing the neutral hydrogen gas that had filled the intergalactic medium (IGM) since the recombination event following the Big Bang. Reionization marks a significant shift in the evolution of the Universe. For the first time, gravitationally-collapsed objects exerted substantial feedback on their environments through electromagnetic radiation, initiating processes that have dominated the evolution of the visible baryonic Universe ever since. The epoch of reionization, therefore, can be considered a dividing line when the relatively simple evolution of the early Universe gave way to more complicated and more interconnected processes. Although the Dark Ages are known to end when the first luminous sources ionized the neutral hydrogen in the IGM, precisely when this transition occurred remains uncertain.
The best existing constraints on the timing of the reionization epoch come from two sources: the cosmic microwave background (CMB) anisotropy and absorption features in the spectra of high-redshift quasars. The amplitude of the observed temperature anisotropy in the CMB is affected by Thomson scattering due to electrons along the line of sight between the surface of last scattering and the detector, and thus, it is sensitive to the ionization history of the IGM through the electron column density. In addition, if there is sufficient optical depth to CMB photons due to free electrons in the IGM after reionization, some of the angular anisotropy in the unpolarized intensity can be converted to polarized anisotropy. This produces a peak in the polarization power spectrum at the angular scale size equivalent to the horizon at reionization with an amplitude proportional to the optical depth [@1997ApJ...488....1Z]. Measurements by the WMAP satellite of these effects indicate that the redshift of reionization is $z_r\approx11\pm4$ [@2007ApJS..170..377S], assuming an instantaneous transition.
Lyman-$\alpha$ absorption by neutral hydrogen is visible in the spectra of many high-redshift quasars and, thus, offers the second currently feasible probe of the ionization history of the IGM. Continuum emission from quasars is redshifted as it travels through the expanding Universe to the observer. Neutral hydrogen along the line of sight creates absorption features in the continuum at wavelengths corresponding to the local rest-frame wavelength of the Lyman-$\alpha$ line. Whereas CMB measurements place an integrated constraint on reionization, quasar absorption line studies are capable of probing the ionization history in detail along the sight-lines. There is a significant limitation to this approach, however. The Lyman-$\alpha$ absorption saturates at very low fractions of neutral hydrogen (of order $x_{HI} \approx 10^{-4}$). Nevertheless, results from these studies have been quite successful and show that, while the IGM is highly ionized below $z\lesssim6$ (with typical $x_{HI}\lesssim10^{-5}$), a significant amount of neutral hydrogen is present above, although precisely how much remains unclear [@2001ApJ...560L...5D; @2001AJ....122.2850B; @2002AJ....123.1247F; @2003AJ....125.1649F; @2004Natur.427..815W; @2006AJ....132..117F].
The existing CMB and quasar absorption measurements are somewhat contradictory. Prior to these studies, the reionization epoch was assumed generally to be quite brief, with the transition from an IGM filled with fully neutral hydrogen to an IGM filled with highly ionized hydrogen occurring very rapidly. These results, however, open the possibility that the ionization history of the IGM may be more complicated than previously believed [@2003ApJ...595....1H; @2003ApJ...591...12C; @2003MNRAS.344..607S; @2004ApJ...604..484M].
Direct observations of the 21 cm (1420 MHz) hyperfine transition line of neutral hydrogen in the IGM during the reionization epoch would resolve the existing uncertainties and reveal the evolving properties of the IGM. The redshifted 21 cm signal should appear as a faint, diffuse background in radio frequencies below $\nu<200$ MHz for redshifts above $z>6$ (according to $\nu=1420/[1+z]$ MHz). For diffuse gas in the high-redshift ($z\approx10$) IGM, the expected unpolarized differential brightness temperature of the redshifted 21 cm line relative to the pervasive CMB is readily calculable from basic principles and is given by [@2004ApJ...608..622Z their § 2] $$\begin{array}{rl}
\label{eqn_intro_temp} \delta T_{21}(\vec{\theta}, z) \approx~&
23~(1+\delta)~x_{HI} \left ( 1 - \frac{T_\gamma}{T_S} \right ) \\
& \times \left ( \frac{\Omega_b~h^2}{0.02} \right ) \left [ \left (
\frac{0.15}{\Omega_m~h^2} \right ) \left ( \frac{1+z}{10} \right )
\right ]^{1/2} \mbox{mK},
\end{array}$$ where $\delta(\vec{\theta},z)$ is the local matter over-density, $x_{HI}(\vec{\theta},z)$ is the neutral fraction of hydrogen in the IGM, $T_\gamma(z) = 2.73~(1+z)$ K is the temperature of CMB at the redshift of interest, $T_S(\vec{\theta},z)$ is the spin temperature that describes the relative population of the ground and excited states of the hyperfine transition, and $\Omega_b$ is the baryon density relative to the critical density, $\Omega_m$ is the total matter density, and $h$ specifies the Hubble constant according to $H_0=100~ h$ km s$^{-1}$ Mpc$^{-1}$. From Equation \[eqn\_intro\_temp\], we see that perturbations in the local density, spin temperature, and neutral fraction of hydrogen in the IGM would all be revealed as fluctuations in the brightness temperature of the observed redshifted 21 cm line.
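As a quick numerical illustration (our own sketch, not code from the experiment), Equation \[eqn\_intro\_temp\] can be evaluated directly. The default cosmological parameters below are the fiducial values quoted later in the text ($\Omega_m=0.3$, $\Omega_b=0.04$, $h=0.7$), and the function name is ours:

```python
import math

def delta_t21_mk(z, x_hi=1.0, delta=0.0, t_s=float("inf"),
                 omega_b=0.04, omega_m=0.3, h=0.7):
    """Differential 21 cm brightness temperature (mK), per Equation 1."""
    t_gamma = 2.73 * (1.0 + z)  # CMB temperature at redshift z
    return (23.0 * (1.0 + delta) * x_hi * (1.0 - t_gamma / t_s)
            * (omega_b * h**2 / 0.02)
            * math.sqrt((0.15 / (omega_m * h**2)) * (1.0 + z) / 10.0))

# A mean-density, fully neutral IGM with T_S >> T_gamma at z = 9
print(round(delta_t21_mk(9.0), 1))  # ~23 mK, near the saturated emission level
```

For $T_S \gg T_\gamma$ the CMB term drops out and the result sits near the saturated emission level of a few tens of mK, as discussed in the text.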
The differential brightness temperature of the redshifted 21 cm line is very sensitive to the spin temperature. When the spin temperature is greater than the CMB temperature, the line is visible in emission. For $T_S \gg T_\gamma$, the magnitude of the emission saturates to a maximum (redshift-dependent) brightness temperature that is about 25 to 35 mK for a mean-density, fully neutral IGM between redshifts 6 and 15, assuming a $\Lambda$CDM cosmology with $\Omega_m=0.3$, $\Omega_\Lambda=0.7$, $\Omega_b=0.04$, and $h=0.7$. At the other extreme, when the spin temperature is very small and $T_S \ll T_\gamma$, the line is visible in absorption against the CMB with a potentially very large (and negative) relative brightness temperature.
A number of factors are involved in predicting the typical differential brightness temperature of the redshifted 21 cm line as a function of redshift. In particular, the spin temperature must be treated in detail, including collisional coupling between the spin and kinetic temperatures of the gas, absorption of CMB photons, and heating by ultra-violet radiation from the first luminous sources. We direct the reader to @2006PhR...433..181F for a good introduction to the topic. Several efforts to predict the evolution of the differential brightness temperature of the redshifted 21 cm line have yielded predictions that are generally consistent in overall behavior, but vary highly in specific details. These models tend to agree that, for a finite period at sufficiently high redshifts ($z\gtrsim20$), the hyperfine line should be seen in absorption against the CMB, with relative brightness temperatures of up to $|\delta T_b|\lesssim100$ mK. This is because the IGM initially cools more rapidly than the CMB following recombination [@1994ApJ...427...25S; @1997ApJ...475..429M]. During this period, fluctuations in the differential brightness temperature of the redshifted 21 cm background should track the underlying baryonic matter density perturbations. Eventually, however, the models indicate that the radiation from the first generations of luminous sources will elevate the spin temperature of neutral hydrogen in the IGM above the CMB temperature, and the redshifted 21 cm line should be detected in emission with relative brightness temperatures up to the expected maximum values (of order $25$ mK). Finally, during the reionization epoch, the neutral hydrogen becomes ionized, leaving little or no gas to produce the emission, and the apparent differential brightness temperature of the redshifted 21 cm line falls to zero as reionization progresses. As the gas is ionized, a unique pattern should be imprinted in the redshifted 21 cm signal that reflects the processes responsible for the ionizing photons and that evolves with redshift as reionization progresses [@1997ApJ...475..429M; @2000ApJ...528..597T; @2003ApJ...596....1C; @2004ApJ...608..622Z; @2004ApJ...613...16F]. The details of the specific timing, duration, and magnitude of these features remain highly variable between theoretical models, due largely to uncertainties about the properties of the first luminous sources.
Measuring the brightness temperature of the redshifted 21 cm background could yield information about both the global and the local properties of the IGM. Determining the average brightness temperature over a large solid angle as a function of redshift would eliminate any dependence on local density and temperature perturbations and constrain the evolution of the product $\overline{x_{HI}(1-T_\gamma/T_S)}$, where we use the bar to denote a spatial average. During the reionization epoch, it is, in general, believed to be a good approximation to assume that $T_S\gg T_\gamma$ and, therefore, that the brightness temperature is proportional directly to $\bar{x}_{HI}$. Global constraints on the brightness temperature of the redshifted 21 cm line during the EOR, therefore, would directly constrain the neutral fraction of hydrogen in the IGM. Such constraints would provide a basic foundation for understanding the astrophysics of reionization by setting bounds on the duration of the epoch, as well as identifying unique features in the ionization history (for example if reionization occurred in two phases or all at once). They would also yield improvements in estimates of the optical depth to CMB photons and, thus, would help to break existing degeneracies in CMB measurements between the optical depth and properties of the primordial matter density power spectrum [@2006PhRvD..74l3507T].
![ \[f\_edges\_photos\] EDGES deployed at Mileura Station in Western Australia. The left panel shows the full antenna and ground screen in the foreground and the analog-to-digital conversion and data acquisition module in the background. The right panel is a close-up view of the amplifier and switching module connected directly to the antenna (through the balun).](f1_color.eps){width="20pc"}
For these reasons, several efforts are underway to make precise measurements of the radio spectrum below $\nu<200$ MHz ($z>6$). In this paper, we report on the initial results of the Experiment to Detect the Global EOR Signature (EDGES). In § \[s\_edges\_method\], we describe the specific approach used for EDGES to address the issue of separating the redshifted 21 cm signal from the foreground emission. We then give an overview of the EDGES system in § \[s\_edges\_system\], followed by the results of the first observing campaign with the system in § \[s\_edges\_results\], along with a discussion of the implications for future single-antenna measurements.
Method {#s_edges_method}
======
In principle, the global brightness temperature measurement is much less complicated to perform than the detection of local perturbations in the redshifted 21 cm background (which will be attempted in the near future by the Mileura Widefield Array \[MWA\], Low Frequency Array \[LOFAR\], Giant Metrewave Radio Telescope \[GMRT\], and Twenty-one Centimeter Array \[21CMA\]). Since the desired signal for the global measurement is the mean brightness temperature due to redshifted 21 cm emission (or absorption) over the entire sky, there is no need for high angular resolution or imaging. A single antenna tuned to the appropriate frequencies could reach the required sensitivity ($\sim25$ mK) within only one hour of integration time, assuming a reasonable spectral resolution of $\sim1$ MHz (equivalent to $\Delta
z\approx0.1$ at $z\approx8$). There is a fundamental complication with such an experiment, however, arising from the global nature of the signal. Since the expected redshifted 21 cm emission fills the entire sky, there is no ability to perform comparison switching between the target field and a blank field. The problem this causes is two-fold. First, it is difficult to separate the contribution to the measured spectrum due to the redshifted signal from that of any other all-sky emission, including Galactic synchrotron and free-free radiation, the integrated contribution of extragalactic continuum sources, or the CMB. Second, for similar reasons, it is difficult to avoid confusing any systematic effects in the measured spectrum due to the instrument or environment with received signal from the sky. The severity of these problems is exacerbated in single-antenna measurements by the intensity of the Galactic synchrotron emission. Unlike interferometric observations, a single antenna is sensitive to the large-scale emission from the Galaxy, providing a 200 to 10,000 K foreground in the measured spectrum.
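The sensitivity estimate above follows from the ideal radiometer equation, $\Delta T = T_{sys}/\sqrt{b\,\tau}$ (a standard result; the 1500 K system temperature used below is an assumed mid-range value within the 200 to 10,000 K foreground span, not a number from the text):

```python
import math

def radiometer_rms(t_sys_k, bandwidth_hz, tau_s):
    """Ideal radiometer equation: thermal noise (K) of a total-power measurement."""
    return t_sys_k / math.sqrt(bandwidth_hz * tau_s)

# 1 MHz channel, 1 h of integration, assumed mid-range T_sys of 1500 K
print(radiometer_rms(1500.0, 1e6, 3600.0))  # 0.025 K = 25 mK
```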
Determining the $\sim25$ mK redshifted 21 cm contribution to the radio spectrum requires separating the signal from the foreground spectrum at better than 1 part in 10,000. This can be accomplished by taking advantage of the differences between the spectra of the Galactic and extragalactic foregrounds and the anticipated redshifted 21 cm contribution. Galactic synchrotron emission is the dominant component of the astrophysical foregrounds below $\nu<200$ MHz, accounting for all but approximately 30 to 70 K of the foregrounds at 178 MHz [@1967MNRAS.136..219B]. Its spectrum is very nearly a power-law given, in temperature units, by $T_{gal}(\nu) \sim \nu^{-\beta}$, where $\beta\approx2.5$ is the spectral index. The spectral index is generally constant over the frequencies of interest ($50 \lesssim \nu
\lesssim 200$ MHz), although it is known to flatten with decreasing frequency due to self-absorption. The intensity of the synchrotron emission and the exact value of the spectral index depend on Galactic coordinate. The amplitude varies over an order of magnitude, between about $200 < T_{gal} < 10,000$ K at 150 MHz (peaking toward the Galactic center and along the Galactic plane), while the spectral index has small variations of order $\sigma_\beta\approx0.1$ dependent largely on Galactic latitude, with the steepest regions occurring at high Galactic latitudes. Free-free emission in the Galaxy and discrete Galactic and extragalactic continuum sources also have spectra that can be reasonably described by power-laws. The integrated flux from extragalactic continuum sources is generally isotropic on large scales and accounts for the majority of the remaining power in the low-frequency radio spectrum, with free-free emission making up only about 1% of the total power. The combined spectrum due to the astrophysical foregrounds is smooth and remains similar to a power-law profile.
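A minimal sketch of this power-law foreground model (the 250 K normalization at 150 MHz is one illustrative value from the quoted 200 to 10,000 K range, matching the value used in Figure \[f\_edges\_model\]):

```python
def t_foreground_k(nu_mhz, t150_k=250.0, beta=2.5):
    """Power-law sky model T_gal ~ nu^-beta, normalized at 150 MHz.
    t150_k and beta are illustrative defaults, not fitted values."""
    return t150_k * (nu_mhz / 150.0) ** (-beta)

print(round(t_foreground_k(100.0)))  # substantially brighter at low frequency
print(round(t_foreground_k(200.0)))
```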
On the other hand, the apparent differential brightness temperature of the redshifted 21 cm background transitions from $T_{21}=0$ mK at very high redshift to $T_{21}\approx-100$ mK before heating of the IGM by the first luminous sources, then climbs to $T_{21}\approx25$ mK at the beginning of the reionization epoch before falling back to $T_{21}\approx0$ mK at the end of reionization. As a result, the global mean redshifted 21 cm spectrum may contain up to three relatively sharp features between $50\lesssim\nu\lesssim200$ MHz that would not be represented well by a power-law profile. For the large solid angles of a single antenna beam, the mean redshifted 21 cm signal should vary little from one location to another on the sky. @2006MNRAS.371..867F and @2004ApJ...608..611G have calculated example global mean redshifted 21 cm spectra for various assumptions of stellar formation histories.
The specific approach employed with EDGES to exploit these expected differences in spectral characteristics in order to overcome the difficulty in separating the foreground and signal contributions in the measured spectrum is to limit the scope of the experiment to test for discontinuous features in the spectrum, since these would necessarily be due to the rapid transitions in the redshifted 21 cm brightness temperature and not the spectrally smooth foregrounds. In particular, the frequency response of the system is designed to test for fast reionization only (and not the transitions that might arise at higher redshifts from cooling and heating of the IGM). In the extreme case that the transition from a fully neutral to a fully ionized IGM was virtually instantaneous, such that $\dot{\bar{x}}_{HI}(z_r)\rightarrow\infty$, where $z_r$ is the redshift of reionization, the contribution to the global spectrum at the frequencies corresponding to the reionization epoch would approach a step function. A sharp feature resembling a step function that is superimposed on the smooth power-law-like foreground spectrum should be relatively easy to identify. And if reionization were to progress more slowly, producing a smooth transition that spanned a large range of redshifts and many tens of MHz, a simple model could be used to set limits on the maximum rate of the transition.
In principle, a variety of such models could be devised to use in tests for the presence of a step feature in the radio spectrum due to a rapid reionization. A simple low-order polynomial fit to the measured spectrum would reveal such a discontinuous feature in the residual spectrum after subtracting the fit and, thus, would be able to determine the redshift range of a rapid reionization. Figure \[f\_edges\_model\] illustrates this approach by plotting a model (described in Section \[s\_edges\_limits\]) of the redshifted 21 cm contribution to the measured spectrum along with the residuals after a seventh-order polynomial fit is removed from a simulated sky spectrum. This is the method used for the preliminary EDGES measurements. An advantage of this approach for global reionization experiments is that, given sufficient sensitivity, even a null result could still constrain $\dot{\bar{x}}_{HI}(z)$ and, thereby, distinguish between slow and fast reionization scenarios.
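The polynomial-fit test can be sketched as follows. This is a simplified stand-in for the actual analysis, using an idealized 25 mK step at $z_r=8$, a seventh-order fit over 130 to 190 MHz as in Figure \[f\_edges\_model\], and our own grid spacing:

```python
import numpy as np

nu = np.linspace(130.0, 190.0, 481)           # MHz, 125 kHz spacing
t_fg = 250.0 * (nu / 150.0) ** -2.5           # smooth foreground (K)
nu_r = 1420.0 / (1.0 + 8.0)                   # ~157.8 MHz for z_r = 8
step = np.where(nu < nu_r, 0.025, 0.0)        # 25 mK step from rapid reionization

x = (nu - 160.0) / 30.0                       # rescale for a well-conditioned fit

def poly_residual(t):
    """Residual after removing a seventh-order polynomial fit."""
    coeffs = np.polyfit(x, t, 7)
    return t - np.polyval(coeffs, x)

rms_smooth = np.sqrt(np.mean(poly_residual(t_fg) ** 2))
rms_step = np.sqrt(np.mean(poly_residual(t_fg + step) ** 2))
print(rms_smooth, rms_step)
```

The smooth foreground fits down to a tiny residual, while the step leaves a residual well above it; that excess is the signature the experiment tests for.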
Experiment Design {#s_edges_system}
=================
By focusing (at least initially) on confirming or ruling out a fast reionization scenario, the design of the EDGES system is able to be relatively simple. The primary need is to reduce any instrumental or systematic contributions to the measured power spectrum that vary rapidly with frequency, since these could be confused with a sharp feature in the spectrum due to a fast reionization of the IGM. Such contributions could be due to terrestrial transmitters, reflections of receiver noise or sky noise from nearby objects, undesirable resonances within the electronics or the radio-frequency interference (RFI) enclosures, or spurious signals introduced by the digital sampling system. In this section, we provide an overview of the experimental design and setup, highlighting aspects that are relevant to reducing the effects of both the external and internal sources of systematic errors. Additional details on the analysis of systematic contributions and the hardware design can be found in the EDGES memorandum series[^1].
![ \[f\_edges\_model\] Example of redshifted 21 cm contribution (solid) to $T_{sky}$ based on the model described in § \[s\_edges\_limits\] with $\Delta T_{21}=25$ mK, $z_r = 8$, and $\Delta z = 0.6$. The residuals are also shown for a seventh-order polynomial fit to a simulated spectrum between 130 and 190 MHz with (dash) and without (dot) the redshifted 21 cm contribution. The foreground contribution was modeled for the plot using $\beta=2.5$, and $T_{gal}(150$ MHz$) = 250$ K.](f2.eps){width="20pc"}
Site Selection
--------------
Some of the contributions to the systematic uncertainty listed above can be addressed by careful selection of the observing site. Avoiding terrestrial transmitters (primarily from FM radio and television stations) is the most serious problem. Even at distances of hundreds or thousands of kilometers, tropospheric ducting and scattering (troposcatter), sporadic E propagation in the ionosphere, and reflections from meteors are all capable of transferring a significant amount of power from Earth-based transmitters. The background produced by the integrated effect of many distant transmitters may have significant spectral structure above the expected redshifted 21 cm level. For example, a single, 100 kW FM radio station at 300 km from the observing site could produce up to a 100 K effective temperature in a 1 MHz channel due to troposcatter, or 100 mK due to meteor reflections. Fortunately, these mechanisms of atmospheric propagation exhibit diurnal or transient behavior (as is the case for sporadic E propagation, tropospheric ducting, and meteor reflections) or require specific geometries for peak efficiency (as is the case for troposcatter), making sensitive measurements possible at remote sites at least some of the time.
Another concern is that local objects in the environment will scatter both external noise and receiver noise, which will then be picked up by the system and correlate with the original noise, causing sinusoidal ripples in the measured spectrum. We have estimated the magnitude of the reflections of the Galactic foreground from objects like trees and mountains on the horizon where the antenna gain is reduced by a factor of 20 dB or more. As long as objects subtend solid angles under about 100 deg$^2$, the spectrum will only be affected by a few parts per million (*ppm*). We have also considered the magnitude of noise originating from the receiver that will be returned by a nearby scatterer. Even if we assume that this noise is perfectly correlated with the internal receiver noise, it will only produce ripples in the spectrum at the level of a few *ppm* provided that the object, like a tree subtending a few deg$^2$, is more than $\sim100$ m away or a larger object, subtending $\sim100$ deg$^2$ is more than $\sim1$ km away.
Reflections of signals from compact radio sources may also be correlated. In this case the scatterer and the receiving antenna act like an adding interferometer to produce ripples in the spectrum. However, these effects are extremely small, since a 1 Jy source results in under 1 mK of antenna temperature for the dipole-based EDGES system and the reflected signal is much smaller still. The ground reflection has been eliminated by placing the antenna on the ground. A brief discussion of the impact of these effects in radio astronomy measurements can also be found in @rohlfs_wilson.
Hardware Configuration {#s_edges_config}
----------------------
Following a careful choice of the deployment site for the experiment, the remaining sources of systematic uncertainty result from the hardware design of the system. The EDGES system consists of three primary modules: 1) an antenna, 2) an amplifier and comparison switching module, and 3) an analog-to-digital conversion and storage unit. The antenna, shown in Figure \[f\_edges\_photos\], is a “fat” dipole-based design derived from the four-point antenna of [@fourpoint1; @fourpoint2]. The design was chosen for its simplicity and its relatively broad frequency response that spans approximately an octave. The response of the antenna was tuned to 100 to 200 MHz by careful selection of the dipole dimensions. In order to eliminate reflections from the ground and to reduce gain toward the horizon, the antenna is placed over a conducting mesh that rests directly on the ground. The mesh is constructed from thin, perforated metal sheets to reduce weight and is shaped to match an octagonal support structure below the ground screen. The diameter of this ground screen is approximately 2 m.
Although the antenna is constructed with perpendicular dipoles capable of receiving dual linear polarizations, only one polarization of the crossed-dipole is sampled by the receiver in order to reduce the cost of the system. This is acceptable since the spatially averaged all-sky spectrum is expected to have essentially no polarized component. The Galactic foreground does exhibit strong polarization in certain regions, such as the “fan region” around $\ell\approx140^\circ$, $b\approx8^\circ$, which has an extended polarized component of about 3 K [@1973MNRAS.163..147W]. Such a region could produce a ripple in the measured spectrum from a single linear polarization as the polarization angle rotates with frequency. Under the worst circumstances, if such a region were located at the peak of the EDGES beam, the magnitude of the ripple could reach $\sim50$ mK. Away from the Galactic plane, however, where EDGES observations are generally targeted in order to reduce the system temperature, it is more likely that the effects of polarization would be at least an order of magnitude lower. Furthermore, if the rotation measure (RM) is of order 10 rad m$^{-2}$, then the polarized component could be averaged out over $\sim1$ MHz. Nevertheless, in future versions of EDGES, both ports of the antenna will be sampled in order to check for polarization effects and other systematic effects that result from the non-uniformity of Galactic radiation.
A dipole antenna is naturally a balanced electrical system. To convert from the balanced antenna leads to the unbalanced receiver system (in which one lead is grounded), a short coaxial cable enclosed in a clamp-on split ferrite core with a high impedance is used as a common-mode choke balun[^2] and is connected directly to the terminals of the antenna with the central conductor fastened to one element and the braided shielding to the other.
The amplifier module consists of two stages that are contained in separate aluminum enclosures to reduce coupling between the low-noise amplifiers. Each stage provides 33 dB of gain for a total of 66 dB. Bandpass filtering of the signal is also performed in the second stage, and the resulting half-power bandwidth spans approximately 50 to 330 MHz. The amplifier chain can be connected through a voltage controlled three-position switch to one of three inputs: the antenna, an ambient load, or an ambient load plus a calibration noise source. Switching between the ambient load and the antenna provides a comparison to subtract spurious instrumental signals in the measured sky spectrum.
Impedance mismatch between the antenna and the amplifiers causes reflections of the sky noise within the electrical path of the instrument that produce an undesirable sinusoidal ripple in the measured spectrum due to the frequency-dependence of the phase of the reflections at the input to the amplifier. To reduce the effects of these reflections in EOR measurements, the input to the amplifier chain is connected directly to the balun on the antenna (with no intermediate transmission cable), as shown in Figure \[f\_edges\_photos\]. While absolute calibration is limited in this configuration by the effect of the unknown phases of the reflections on the measured spectrum, the compact size of the antenna and the small signal path delays result in a smooth spectral response.
The amplifier module is connected to the analog-to-digital conversion module by three low-loss coaxial transmission cables. The cables provide power, switching control, and signal transmission, respectively. Common-mode current on these cables (i.e. current that is on the outer surface of the shielding in the coaxial cable, or current that is unidirectional on both the central conductor and inner surface of the shielding) is also capable of producing reflections and additional sinusoidal ripples in the measured spectrum. The ferrite core balun used between the antenna and amplifiers allows common-mode current of approximately 10% of the differential mode. Although most of this current is transferred to the ground screen by direct contact between the amplifier module casing and the ground screen, some current persists and leaks through the casing of the amplifier module and onto the shielding of the three cables connecting the amplification module to the analog-to-digital conversion module. To reduce this current to less than 0.005% of the differential mode current, additional clamp-on ferrite cores are placed every meter on the transmission cables.
Finally, the analog-to-digital conversion is accomplished with an Acqiris AC240[^3] 8-bit digitizer with maximum dynamic range of 48 dB (although, in practice, the effective dynamic range was substantially lower due to coupling between the digital output of the converter and its input). The AC240 uses an embedded field programmable gate array (FPGA) to perform onboard Fast Fourier Transform (FFT) and integration of the power spectrum in real time. The spectrometer is clocked at 1 GS/s and the Fourier transform processes 16,384 channels, giving a bandwidth of 500 MHz and a raw spectral resolution of about 30 kHz. The broadband spectrometer employs previously developed FPGA code, and a Blackman-Harris window function is used to improve the isolation between neighboring frequency channels at the expense of reducing the effective spectral resolution to 122 kHz. The unit is contained on a CompactPCI card connected to a host computer. The digitizer and host computer, along with a power transformer and interface circuitry for controlling the amplifier module with the serial port of the computer, are enclosed in an aluminum box to prevent self interference.
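The quoted spectrometer figures follow directly from the sampling parameters (a simple arithmetic consistency check, not project code):

```python
sample_rate_hz = 1e9          # 1 GS/s clock
n_channels = 16384            # FFT output channels

bandwidth_hz = sample_rate_hz / 2.0            # Nyquist bandwidth: 500 MHz
raw_resolution_hz = bandwidth_hz / n_channels  # ~30.5 kHz per raw channel
print(bandwidth_hz, raw_resolution_hz)

# The Blackman-Harris window broadens the effective resolution to about
# four raw channels, i.e. the 122 kHz quoted in the text.
print(4.0 * raw_resolution_hz)
```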
The Measured Spectrum {#s_data_acquisition}
---------------------
![ \[f\_edges\_spectrum\] Integrated spectrum used for upper limit analysis of reionization signal. The sky temperature, $T_{sky}$, is an estimate based on modeled values for cable losses and no correction for antenna reflections. The spectrum represents the best 10% of the data from observations over two nights. It is selected by discarding individual observation cycles (see § \[s\_data\_acquisition\]) containing periods of particularly intense radio frequency interference. A total of approximately 1.5 h of integration is included (3.75 h including the ambient load and calibrator noise source measurements in each cycle). The black curve shows the spectrum after de-weighting the interferers (shown in gray) present in the retained observations.](f3.eps){width="22pc"}
The measured spectra from each of the three switch positions can be combined to produce a calibrated estimate of the true sky spectrum. The three spectra are given by $$\begin{array}{ccl}
p_0 & = & g ~ (T_L + T_R) ~ (1 + n_0) \\
p_1 & = & g ~ (T_L + T_R + T_{cal}) ~ (1 + n_1) \\
p_2 & = & g ~ (T_A + T_R) ~ (1 + n_2)
\end{array}$$ where the explicit frequency dependence of each term has been dropped and $p_0$ is the spectrum for the ambient load, $p_1$ is the spectrum for the ambient load plus calibration noise, and $p_2$ is the spectrum for the antenna. In this terminology, $g$ is the gain, $T_L$ is the ambient load temperature, $T_R$ is the receiver noise temperature, $T_{cal}$ is the calibration noise temperature, and $T_A$ is the antenna temperature. Thermal uncertainty in the measurements is explicitly included in the Gaussian random variables $n_0$, $n_1$, and $n_2$, the magnitudes of which are given by $n_i =
(\epsilon~b~\tau_i)^{-1/2}$, where $\epsilon=0.5$ is the efficiency for the Blackman-Harris window function (which could be improved to 0.93 by processing two sets of overlapping windows), $b =
122\times10^3$ Hz is the resolution bandwidth, and $\tau_i$ is the integration time in seconds (for each switch position, $i$). Temporarily setting the noise terms to zero, $n_i\rightarrow0$, and solving for the antenna temperature yields $$T_A = T_{cal}\frac{p_2 - p_0}{p_1 - p_0} + T_L.\label{eqn_ta}$$ In practice, the impedance match between the antenna and receiver is not perfect and some of the incident sky noise may be reflected back out of the system. This produces deviations between the derived sky temperature found using Equation \[eqn\_ta\] and the true sky spectrum. Independent measurements of the impedance mismatch can be used to correct these deviations by applying a frequency-dependent multiplicative factor to $T_A$ that is proportional to the inverse of the reflection coefficient. For the EDGES system, this correction was measured by two methods: we used a network analyzer in the laboratory to determine the impedance of the antenna, and we reconfigured the system in the field with a long cable inserted between the antenna and amplifier module so that reflections between the two elements were visible in the measured spectrum and could be used to calibrate the reflection coefficient. In both sets of measurements, the corrections were found to be small (of order 1%) and smooth (able to be fit by a low-order polynomial in frequency) over the band of interest. For the remainder of this paper, we will ignore this correction since its effects are easily absorbed by the polynomial fit technique used to constrain the redshifted 21 cm contribution to the spectrum.
Adding the noise terms back in and solving in the limit that $T_{cal}
\gg (T_L \approx T_A) > T_R$ results in an estimate of the thermal uncertainty per frequency channel of approximately $$\Delta T_{A,rms} \approx \sqrt{ n^2_0 (T_L + T_R)^2 + n^2_1 (T_L)^2 +
n_2^2(T_A+T_R)^2}.$$ For optimal efficiency, the three terms contributing to $\Delta
T_{A,rms}$ should be comparable in magnitude. Substituting $T_L=300$ K, $T_A=250$ K and $T_R=20$ K, we find that the terms are comparable as long as approximately equal time is spent in each switch position. In addition, a 1 hour integration in each switch position (3 hours total) will result in a thermal uncertainty in the estimate of the antenna temperature of $\Delta T_{A,rms} \approx
35$ mK within each 122 kHz frequency channel.
To acquire a series of estimates of the true sky spectrum using this technique, software on the host computer cycles the amplifier module between the three switch positions and triggers the digitizer to acquire, Fourier transform, and accumulate data for a predefined duration at each of the switch positions. The integration durations per switch position are $\tau_{\{0,1,2\}}=\{10, 5, 10\}$ seconds for the ambient load, ambient load plus calibration noise source, and antenna, respectively, giving a duty cycle of about 40% on the antenna. This loop is repeated approximately every 25 seconds for the duration of the observations and the resulting measurements are recorded to disk.
Initial Results {#s_edges_results}
===============
![ \[f\_edges\_residuals\] Residuals after subtraction of seventh-order polynomial fit to measured spectrum shown in Figure \[f\_edges\_spectrum\]. The gray line is the raw spectrum with 122 kHz resolution. The black line is after smoothing to 2.5 MHz resolution to reduce the thermal noise to below the systematic noise. The *rms* of the smoothed fluctuations is approximately 75 mK (see Figure \[f\_edges\_rms\_vs\_time\]).](f4.eps){width="22pc"}
The EDGES system was deployed at the radio-quiet Mileura Station in Western Australia from 29 November through 8 December 2006. These dates were chosen such that the Galactic center would be below the horizon during most of the night, keeping the system temperature as low as possible for the measurements. The system was located approximately 100 m from the nearest buildings in a clearing with no nearby objects and no obstructions above $\sim5^{\circ}$ on the horizon, and the antenna was aligned in an approximately north-south/east-west configuration. The system was operated on 8 consecutive nights during the deployment, with 5 of the nights dedicated to EOR observing. In total, over 30 h of relevant drift scans were obtained, but strong, intermittent interference from satellites complicated the measurements and only approximately 8 h of high-quality observations were retained as the primary data set. Although the satellite interferers that complicated the measurements were narrow-band and, in many cases, were easily removed through excision of the affected spectral channels, the limited dynamic range of the EDGES system resulted in clipping of the analog-to-digital converter and corruption of the full band during especially strong transmissions. This required all channels to be omitted from the data set in those instances. In particular, it was found that the low Earth orbiting satellites of Orbcomm (transmitting between approximately 136 and 138 MHz), as well as satellite beacons (at 150 MHz) from discarded spacecraft were particularly troublesome. The Orbcomm activity was somewhat variable and usually decreased during the night. The typical duration of a pass was approximately 15 minutes, during which time the power in the satellite signal could easily reach an order of magnitude greater than the integrated sky noise over the band. 
While previous observations at the site with prototype MWA equipment [@2007AJ....133.1505B] have demonstrated (in a subset of the full target band) that it is possible to reach the sensitivities required for EDGES despite the satellites and other sources of interference, improvements to the EDGES digitizing system, such as an upgrade of the analog-to-digital converter to 10 or 12 bits, would certainly help to alleviate the difficulties encountered during this observing campaign and increase the usable fraction of measurements.
From the primary data set remaining after transient RFI exclusion, a stringent filter was applied to select the best 1.5 h of sky-time when transient interference signals were weakest. The final cut of data included measurements from multiple nights and spanned a range of local apparent sidereal time (LST) between 0 and 5 h. The sky temperature at 150 MHz derived from the system during this period was found to have a minimum of $\sim240$ K at about 3 h LST and a maximum of $\sim280$ K at 5 h LST. The integrated spectrum generated from these measurements is shown in Figure \[f\_edges\_spectrum\]. Frequency channels containing RFI were identified in the integrated spectrum by an algorithm that employs a sliding local second-order polynomial fit and iteratively removes channels with large errors until the fit converges. The affected channels were then weighted to zero in subsequent analysis steps. To look for small deviations from the smooth foreground spectrum, a seventh-order polynomial was fit to the measured spectrum between 130 and 190 MHz (where the impedance match between the antenna and receiver was nearly ideal) and subtracted.
The residual deviations in the measured sky spectrum after the polynomial fit and subtraction are shown in Figure \[f\_edges\_residuals\]. The *rms* level of systematic contributions to the measured spectrum was found to be $\Delta
T_{rms} \approx 75$ mK, a factor of $\sim3$ larger than the maximum expected redshifted 21 cm feature that would result from a rapid reionization. Although it is not obvious in Figure \[f\_edges\_residuals\], the variations in the residuals are due to instrumental contributions and not thermal noise. The large variations between 163 and 170 MHz are due to the 166 MHz PCI-bus clock of the AC240 and computer, while the gap centered at approximately 137 MHz is due to RFI excision of the Orbcomm satellite transmissions over a region spanning more than 2.5 MHz. Analysis of the dependence of $\Delta T_{rms}$ on integration duration is shown in Figure \[f\_edges\_rms\_vs\_time\] and illustrates that the *rms* of the residuals follows a thermal profile $\sim(b
\tau)^{-1/2}$ initially and then saturates to a constant value. After smoothing to 2.5 MHz resolution ($\Delta z\approx0.2$), the instrumentally dominated 75 mK threshold is reached in approximately 20 minutes (1200 s) of integration on the sky (50 minutes of total integration in all three switch positions). Reordering the individual 25-second observation cycles used in the full integration does not change the behavior in Figure \[f\_edges\_rms\_vs\_time\], and longer integrations (up to approximately 3 h of sky time), using observation cycles with more intense interference, continued to decrease the thermal noise, but leave the spurious signals and systematic effects unchanged.
![ \[f\_edges\_rms\_vs\_time\] Characteristic amplitude of the residuals to the polynomial fit as a function of integration time on the sky. The *rms* follows a thermal $(b \tau)^{-1/2}$ dependency until saturating at a constant 75 mK noise level due to the instrumental errors introduced into the measured spectrum. The dotted lines are guides for the eye showing a $(b \tau)^{-1/2}$ profile and a constant 75 mK contribution.](f5.eps){width="22pc"}
Limits on Reionization History {#s_edges_limits}
------------------------------
Although the sensitivity level of the initial observations with the EDGES system was limited by instrumental effects in the measured spectrum at a level greater than the expected maximum contribution due to redshifted 21 cm emission, weak constraints can still be placed on the reionization history. In addition, it is possible to make a quantitative assessment of how much improvement must be made before significant constraints are possible, as well as to characterize the best-case outcome of future efforts using similar approaches. To begin, we introduce a model for the sky spectrum such that $$\label{eqn_sky_temp} T_{sky}(\nu) = T_{gal}(\nu) + T_{cmb} +
T_{21}(\nu)$$ where $T_{gal}$ represents the contribution of all the foregrounds (and is dominated by the Galactic synchrotron radiation), $T_{cmb}=2.73$ K is the CMB contribution, and $T_{21}$ is the specific form for the frequency-dependence of the redshifted 21 cm emission during the transition from the fully neutral to fully ionized IGM. This model neglects any directional or temporal variation in $T_{sky}$ and, therefore, implicitly assumes an angular average over the antenna beam and a time average over the drift scan measurements performed for the experiment. Since $T_{cmb}$ and $T_{21}$ are taken to be constant over the sky, only the $T_{gal}$ contribution is affected by this simplification. This does not impact the result, however, as long as the foreground emission varies slowly on the sky and the antenna pattern changes slowly with frequency—conditions that are presumed to be met in the high Galactic latitude region sampled by the dipole-based EDGES system. As a test of this assumption, we calculated the residuals after the polynomial fit for a bright source with flux comparable to Cas A (1400 Jy at 100 MHz) and spectral index $\beta=2.77$ at various positions in the antenna beam using simulated beam patterns to determine the frequency-dependence. We found, in all cases, less than a $\sim50$ $\mu$K residual.
During the reionization epoch, we define $T_{21}$ to be given by $$T_{21}(z) = \Delta T_{21} \frac{1}{2} \left \{ 1 + \cos \left [ \frac{
\pi (z_r - z + \Delta z / 2)} { \Delta z} \right ] \right \},$$ where $\Delta T_{21}$ is constant and is the maximum amplitude of the redshifted 21 cm contribution, $z_r$ is the redshift when $\bar{x}_{HI}(z_r)=0.5$, $\Delta z$ is the total duration of the reionization epoch, and we use $\nu = 1420 / (1+z)$ MHz to convert back to frequency units. Before the reionization epoch ($z>z_r+\Delta
z/2$), $T_{21} \equiv \Delta T_{21}$, while after reionization ($z<z_r-\Delta z/2$), $T_{21}\equiv0$. The exact form of the transition used for $T_{21}$ has little influence on the outcome of the constraints as long as it is reasonably smooth. Figure \[f\_edges\_model\] illustrates the modelled redshifted 21 cm spectrum. The free parameters in the model are $z_r$, $\Delta z$, and $\Delta T_{21}$.
For the EDGES best-response frequency range, a center redshift around $z_r=8$ allows the largest range of $\Delta z$ to be explored. By simulating the combined sky spectrum, $T_{sky}$, for a range of the two remaining free parameters, we can determine the *rms* of the residuals that would remain following the polynomial fit used in the EDGES data analysis. Comparing the *rms* of the residuals in the models to the 75 mK *rms* of the initial measurements gives a good estimate of the region of parameter space ruled out so far. Figure \[f\_edges\_constraint\] illustrates the results of this process. The line defining the ruled-out region is computed by finding the locus of parameters that make the *rms* residuals in the model equal to 75 mK. While a more statistically robust analysis is clearly possible, little benefit would be gained for the initial measurements presented here due to the severe systematic effects present in the spectrum.
From Figure \[f\_edges\_constraint\], it is clear that the initial results constrain only a small portion of parameter space that is well outside the expected region for both the intensity of the redshifted 21 cm signal and the duration of reionization. The best constraint, in the case of a nearly instantaneous reionization, is that the redshifted 21 cm contribution to the spectrum is not greater than about $\Delta T_{21} \lesssim 450$ mK before the transition. Reducing the systematic contributions in the measured spectrum by more than an order of magnitude to $\Delta T_{rms}<7.5$ mK would begin to allow meaningful constraints, while an improvement of a factor of 25 to $\Delta T_{rms} \approx 3$ mK would be able to rule out a significant portion of the viable parameter range and constrain $\Delta z > 2$. In principle, such an improvement is possible with minor modifications to the EDGES system. Reaching a systematic uncertainty below $\sim3$ mK, however, is likely to be infeasible without a redesign of the experimental approach because errors in the polynomial fit to the overall power-law-like shape of the sky spectrum, $T_{sky}(\nu)$, are the dominant source of uncertainty below that level in the current approach. The sharp cut-off at $\Delta z\approx2$ in parameter space is the result of using a seventh-order polynomial to fit a 60 MHz bandwidth, thus yielding a maximum residual scale size of order 10 MHz, which corresponds to $\Delta z\approx2$ at $z\approx8$. If the same polynomial could be reasonably fit to a larger bandwidth (or a lower-order polynomial fit to the existing bandwidth), then $\Delta z$ could be probed to larger values.
![ \[f\_edges\_constraint\] Constraints placed by EDGES on the redshifted 21 cm contribution to the sky spectrum. The dark-gray region at the top-left is the portion of the parameter space ruled out by the initial EDGES results with $\Delta T_{rms}=75$ mK (solid line). The dashed line labelled $\Delta T_{rms}=7.5$ mK and the dotted line labelled $\Delta T_{rms}=3$ mK indicate the constraints that could be placed on reionization if the experimental systematics were lowered to the respective values. The light-gray region along the bottom is the general range of parameters believed to be viable. The redshifted 21 cm contribution to the spectrum is modelled according to the description in § \[s\_edges\_limits\] with $z_r =
8$.](f6.eps){width="20pc"}
Conclusion
==========
In principle, useful measurements of the redshifted 21 cm background can be carried out with a small radio telescope. These measurements would be fundamental to understanding the evolution of the IGM and the EOR. In particular, the global evolution of the mean spin temperature and mean ionization fraction of neutral hydrogen in the high redshift IGM could be constrained by very compact instruments employing individual radio antennas. We have reported preliminary results to probe the reionization epoch based on this approach from the first observing campaign with the EDGES system. These observations were limited by systematic effects that were an order of magnitude larger than the anticipated signal and, thus, ruled out only an already unlikely range of parameter space for the differential amplitude of the redshifted 21 cm brightness temperature and for the duration of reionization. Nevertheless, the results of this experiment indicate the viability of the simple global spectrum approach.
Building on the experiences of these initial efforts, modifications to the EDGES system are underway to reduce the residual systematic contribution in the measured spectrum and to expand the frequency coverage of the system down to 50 MHz or lower in order to place constraints on the anticipated transition of the hyperfine line from absorption to emission as the IGM warms before the EOR. Constraining the redshift and intensity of this feature would be very valuable for understanding the heating history of the IGM and, since the transition has the potential to produce a step-like feature in the redshifted 21 cm spectrum with a magnitude over 100 mK (up to a factor of 4 larger than the amplitude of the step during the reionization epoch), it may be easier to identify than the transition from reionization—although the sky noise temperature due to the Galactic synchrotron foreground increases significantly at the lower frequencies, as well. Through these and other global spectrum efforts, the first contribution to cosmic reionization science from measurements of the redshifted 21 cm background will hopefully be achieved in the near future.
This work was supported by the Massachusetts Institute of Technology, School of Science, and by the NSF through grant AST-0457585.
[^1]: http://www.haystack.mit.edu/ast/arrays/Edges/
[^2]: This balun provides a 1:1 impedance transition and operates on the same principle as the quarter wavelength sleeve balun described by @KrausAntennas [page 742]. The ferrite provides a high impedance over a wide frequency range to reduce the common-mode currents, whereas the sleeve balun provides a high impedance over only a limited frequency range close to the quarter wavelength resonance.
[^3]: http://www.acqiris.com/products/analyzers/cpci-signal-analyzers/ac240-platform.html