---
abstract: 'We discuss the stability properties of an autonomous system in loop quantum cosmology. The system is described by a self-interacting scalar field $\phi$ with positive potential $V$, coupled with a barotropic fluid in the Universe. With $\Gamma=VV''''/V''^2$ considered as a function of $\lambda=V''/V$, the autonomous system is extended from three dimensions to four dimensions. We find that the dynamical behaviors of a subset, but not all, of the fixed points are independent of the form of the potential. Considering the higher-order derivatives of the potential, we obtain an infinite-dimensional autonomous system which can describe the dynamical behavior of a scalar field with a more general potential. We find that there is just one scalar-field-dominated scaling solution in the loop quantum cosmology scenario.'
author:
- Kui Xiao
- 'Jian-Yang Zhu'
title: Stability analysis of an autonomous system in loop quantum cosmology
---
Introduction
============
The scalar field plays an important role in modern cosmology. Indeed, scalar-field cosmological models are of great importance in the study of the early Universe, especially in the investigation of inflation. The dynamical properties of scalar fields also make an interesting research topic for modern cosmological studies [@Copeland-IJMPD; @Coley-Book]. The dynamical behavior of a scalar field
coupled with a barotropic fluid in a spatially flat Friedmann-Robertson-Walker universe has been studied by many authors (see [@Copeland-IJMPD; @Copeland-scalar; @Leon-Phase], and the first section of [@Coley-Book]).
The phase-plane analysis of the cosmological autonomous system is a useful method for studying the dynamical behavior of a scalar field. One usually considers the dynamical behavior of a scalar field with an exponential potential in classical cosmology [@Copeland-exponential; @Hao-PRD1; @Hao-PRD2] or modified cosmology [@Samart-Phantom; @Li-O(N)]. And, if one considers the dynamical behavior of a scalar field coupled with a barotropic fluid, the exponential potential is also the first choice [@Ferreira-scaling; @Hoogen-scaling; @Billyard-interaction; @Fu-phantom]. For an exponential potential $V$, the variable $\Gamma=VV''/V'^2$ equals 1 and $\lambda=V'/V$ is a constant. Then the autonomous system is always two dimensional in classical cosmology [@Copeland-exponential], and three dimensional in loop quantum cosmology (LQC) [@Samart-Phantom]. Although one can also consider a more complex case with $\lambda$ being a dynamically changing quantity [@Copeland-IJMPD; @Macorra-lambda; @Nunes-lambda], the fixed point is not a real one, and this method is not exact. Recently, Zhou *et al* [@Zhou-plb; @Fang-CQG] introduced a new method by which one can make $\Gamma$ a general function of $\lambda$. Then the
autonomous system is extended from two dimensions to three dimensions in classical cosmology. They found that this method can help investigate many quintessence models with different potentials. The goal of this paper is to extend this method for studying the dynamical behavior of a scalar field with a general potential coupled with a barotropic fluid in LQC.
LQC [@Bojowald-Living; @Ashtekar-overview] is a canonical quantization of homogeneous spacetime based on the techniques used in loop quantum gravity (LQG) [@Rovelli-Book; @Thiemann-Book]. Owing to the homogeneity and isotropy of the spacetime, the phase space of LQC is simpler than that of LQG. For example, the connection is determined by a single parameter $c$ and the triad is determined by $p$. Recently, it has been shown that the loop quantum effects can be very well described by an effective modified Friedmann dynamics. Two corrections of the effective LQC are usually considered: the inverse volume correction and the holonomy correction. These modifications lead to many interesting results: the big bang can be replaced by the big bounce [@Ashtekar], the singularity can be avoided [@Singh], inflation can be more likely to occur (e.g., see [@Bojowald; @Germani-inflation; @Copeland-superinflation; @Ashtekar-inflation; @Corichi-measure]), and more. But the inverse volume
modification suffers from gauge dependence which cannot be cured and thus yields unphysical effects. In the effective LQC based on the holonomy modification, the Friedmann equation acquires an additional $-\frac{\kappa}{3}\frac{\rho^2}{\rho_c}$ term, in which $\kappa=8\pi G$, on the right-hand side of the standard Friedmann equation [@Ashtekar-improve]. Since this correction comes with a negative sign, the Hubble parameter $H$, and hence $\dot{a}$, will vanish when $\rho=\rho_c$, and the quantum bounce occurs. Moreover, for a universe with a large scale factor, the inverse volume modification to the Friedmann equation can be neglected and only the holonomy modification is important.
Based on the holonomy modification, the dynamical behavior of dark energy has recently been investigated by many authors [@Samart-Phantom; @Samart-dy; @Xiao-dynamical]. The attractor behavior of the scalar field in LQC has also been studied [@Copeland-superinflation; @Lidsey-attractor]. It was found that the dynamical properties of dark-energy models in LQC are significantly different from those in classical cosmology. In this paper, we examine the background dynamics of LQC dominated by a scalar field with a general positive potential coupled with a barotropic fluid. By considering $\Gamma$ as a function of $\lambda$, we investigate scalar fields with different potentials. Since the Friedmann equation has been modified by the
quantum effect, the dynamical system will be very different from the one in classical cosmology; e.g., the number of dimensions of the autonomous system changes to four in LQC. It must be pointed out that this method cannot be used to describe the dynamical behavior of a scalar field with an arbitrary potential. To overcome this problem, therefore, we should consider an infinite-dimensional autonomous system.
The paper is organized as follows. In Sec. \[sec2\], we present the basic equations and the four dimensional dynamical system, and in Sec. \[sec3\], we discuss the properties of this system. In Sec. \[sec4\], we discuss the autonomous system in greater detail, as well as an infinite-dimensional autonomous system. We conclude the paper in the last section. The Appendix contains the analysis of the dynamical properties of one of the fixed points, $P_3$.
Basic equations {#sec2}
===============
We focus on the flat Friedmann-Robertson-Walker cosmology. The modified Friedmann equation in the effective LQC with holonomy correction can be written as [@Ashtekar-improve] $$\begin{aligned}
H^2=\frac{1}{3}\rho\left(1-\frac{\rho}{\rho_c}\right),\label{Fri}\end{aligned}$$ in which $\rho$ is the total energy density and the natural unit $\kappa=8\pi G=1$ is adopted for simplicity. We consider a self-interacting scalar field $\phi$ with a positive potential $V(\phi)$ coupled with a barotropic fluid.
Then the total energy density can be written as $\rho=\rho_\phi+\rho_\gamma$, with the energy density of the scalar field $\rho_\phi=\frac12\dot{\phi}^2+V(\phi)$ and the energy density of the barotropic fluid $\rho_\gamma$. We consider the energy momentum of each component to be covariantly conserved. Then one has $$\begin{aligned}
&&\ddot{\phi}+3H\dot{\phi}+V'=0,\label{ddotphi}\\
&&\dot{\rho}_\gamma+3\gamma H\rho_\gamma=0, \label{dotrg}\end{aligned}$$ where $\gamma$ is an adiabatic index and satisfies $p_\gamma=(\gamma-1)\rho_\gamma$, with $p_\gamma$ being the pressure of the barotropic fluid, and the prime denotes differentiation with respect to the field $\phi$. Differentiating Eq. (\[Fri\]) and using Eqs. (\[ddotphi\]) and (\[dotrg\]), one can obtain $$\begin{aligned}
\dot{H}=-\frac12\left(\dot{\phi}^2+\gamma\rho_\gamma\right)\left[1-\frac{2(\rho_\gamma+\rho_\phi)}{\rho_c}\right].
\label{Fri4}\end{aligned}$$
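These equations can already be solved numerically as they stand. The following minimal sketch (ours, not part of the original paper) integrates them in cosmic time in the units $\kappa=1$, assuming a quadratic potential $V=\frac12 m^2\phi^2$ and purely illustrative parameter values; it starts in a contracting phase and exhibits the quantum bounce at $\rho=\rho_c$, where $H$ passes through zero.

```python
# Minimal numerical sketch (not part of the original paper): integrate the
# effective LQC background equations (units kappa = 1), assuming a quadratic
# potential V = (1/2) m^2 phi^2 and purely illustrative parameter values.
import numpy as np
from scipy.integrate import solve_ivp

m, gamma, rho_c = 1e-2, 1.0, 1.0                  # illustrative choices
V  = lambda phi: 0.5 * m**2 * phi**2
dV = lambda phi: m**2 * phi

def rhs(t, u):
    phi, dphi, rho_g, H = u
    rho    = 0.5 * dphi**2 + V(phi) + rho_g       # total energy density
    ddphi  = -3.0 * H * dphi - dV(phi)            # Eq. (ddotphi)
    drho_g = -3.0 * gamma * H * rho_g             # Eq. (dotrg)
    dH     = -0.5 * (dphi**2 + gamma * rho_g) * (1.0 - 2.0 * rho / rho_c)  # Eq. (Fri4)
    return [dphi, ddphi, drho_g, dH]

# Initial data satisfying Eq. (Fri), with H < 0 so the evolution starts in a
# contracting phase and runs into the bounce at rho = rho_c.
phi0, dphi0, rho_g0 = 1.0, 0.0, 0.1
rho0 = 0.5 * dphi0**2 + V(phi0) + rho_g0
H0   = -np.sqrt(rho0 / 3.0 * (1.0 - rho0 / rho_c))

sol = solve_ivp(rhs, (0.0, 200.0), [phi0, dphi0, rho_g0, H0],
                rtol=1e-10, atol=1e-12)
H = sol.y[3]
print("bounce occurred (H changed sign):", bool(np.any(H[:-1] * H[1:] < 0)))
```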
Equations (\[Fri\])-(\[dotrg\]) and (\[ddotphi\])-(\[Fri4\]) characterize a closed system which can determine the cosmic behavior. To analyze the dynamical behavior of the Universe, one can further introduce the following variables [@Copeland-exponential; @Samart-Phantom]: $$\begin{aligned}
x\equiv\frac{\dot{\phi}}{\sqrt{6}H},\quad
y\equiv\frac{\sqrt{V}}{\sqrt{3}H},\quad
z\equiv\frac{\rho}{\rho_c},\quad \lambda\equiv\frac{V'}{V},
\label{new-v}\end{aligned}$$ where the $z$ term is a special variable in LQC \[see Eq. (\[Fri\])\]. In the LQC scenario, the total energy density $\rho$ should be less than or equal to the critical energy density $\rho_c$, and thus $0\leq z\leq 1$. Notice that, in the classical region, $z=0$ for $\rho\ll\rho_c$. Using these new variables, one can obtain $$\begin{aligned}
&&\frac{\rho_\gamma}{3H^2}=\frac{1}{1-z}-x^2-y^2,\label{rg-xyz}\\
&&\frac{\dot{H}}{H^2}=-\left[3x^2+\frac{3\gamma}{2}\left(\frac{1}{1-z}-x^2-y^2\right)\right]\left(1-2z\right)\nonumber\\
\label{dH-HH}.\end{aligned}$$
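Both relations can be checked symbolically from the definitions (\[new-v\]) together with the modified Friedmann equation (\[Fri\]). A minimal sympy sketch of this check (ours, not part of the original paper):

```python
# Minimal sympy check (not part of the original paper) of Eqs. (rg-xyz) and
# (dH-HH): substitute the definitions (new-v) and eliminate rho_c via Eq. (Fri).
import sympy as sp

H, phidot, V, rho_g, gamma = sp.symbols('H phidot V rho_g gamma', positive=True)
rho   = sp.Rational(1, 2) * phidot**2 + V + rho_g   # total energy density
rho_c = rho / (1 - 3 * H**2 / rho)                  # solve Eq. (Fri) for rho_c

x = phidot / (sp.sqrt(6) * H)
y = sp.sqrt(V) / (sp.sqrt(3) * H)
z = rho / rho_c
Hdot = -sp.Rational(1, 2) * (phidot**2 + gamma * rho_g) * (1 - 2 * rho / rho_c)

B = 3 * x**2 + sp.Rational(3, 2) * gamma * (1 / (1 - z) - x**2 - y**2)
print(sp.simplify(rho_g / (3 * H**2) - (1 / (1 - z) - x**2 - y**2)))  # 0 -> Eq. (rg-xyz)
print(sp.simplify(Hdot / H**2 + B * (1 - 2 * z)))                     # 0 -> Eq. (dH-HH)
```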
Using the new variables (\[new-v\]), and considering Eqs. (\[rg-xyz\]) and (\[dH-HH\]), one can
rewrite Eqs. (\[Fri\])-(\[dotrg\]) in the following forms: $$\begin{aligned}
\frac{dx}{dN} &=&-3x-\frac{\sqrt{6}}{2}\lambda y^2+x\left[3x^2+\frac{3\gamma}{2}\left(\frac{1}{1-z}-x^2-y^2\right)\right]\nonumber\\
&&\times(1-2z),\label{x'}\\
\frac{dy}{dN}&=&\frac{\sqrt{6}}{2}\lambda x
y+y\left[3x^2+\frac{3\gamma}{2}
\left(\frac{1}{1-z}-x^2-y^2\right)\right]\nonumber\\
&&\times(1-2z),\label{y'}\\
\frac{dz}{dN} &=&-3\gamma z-3z\left(1-z\right)\left(2x^2-\gamma x^2-\gamma y^2\right),\label{z'}\\
\frac{d\lambda}{dN} &=&\sqrt{6}\lambda^2
x\left(\Gamma-1\right),\label{l}\end{aligned}$$ where $N=\ln a$ and $$\begin{aligned}
\Gamma\equiv \frac{VV''}{V'^2}.\label{Gamma}\end{aligned}$$ Note that the potential $V(\phi)$ is positive in this paper, but one can also discuss a negative potential. Just as [@Heard-negative] has shown, the negative scalar potential could slow down the growth of the scale factor and cause the Universe to be in a collapsing phase. The dynamical behavior of the scalar field with the positive and negative potential in brane cosmology has been discussed by [@Copeland-scalar]. In this paper we are concerned only with an expanding universe, and both the Hubble parameter and the potential are positive.
Differentiating $\lambda$ with respect to the scalar field $\phi$, we obtain the relationship between $\lambda$ and $\Gamma$, $$\begin{aligned}
\frac{d\lambda^{-1}}{d\phi}=1-\Gamma. \label{l-G}\end{aligned}$$ If we only consider a special case of the potential, like an exponential potential [@Copeland-exponential; @Hao-PRD1; @Hao-PRD2; @Samart-Phantom; @Li-O(N); @Ferreira-scaling; @Hoogen-scaling; @Billyard-interaction; @Fu-phantom], then $\lambda$ and $\Gamma$ are both constants. In this case, the four-dimensional dynamical system, Eqs. (\[x’\])-(\[l\]), reduces to a three-dimensional one, since $\lambda$ is a constant. (In the classical dynamical system, the $z$ term does not exist,
and then the dynamical system is reduced from three dimensions to two dimensions.) The cost of this simplification is that the potential of the field is restricted. Recently, Zhou *et al* [@Zhou-plb; @Fang-CQG] considered the potential parameter $\Gamma$ as a function of another potential parameter $\lambda$, which enables one to study the fixed points for a large number of potentials. We will follow this method in this section and the sections that follow to discuss the dynamical behavior of the scalar field in the LQC scenario, and we have $$\begin{aligned}
\Gamma(\lambda)=f(\lambda)+1.\label{G-l}\end{aligned}$$ In this case, Eq. (\[G-l\]) can cover many scalar potentials.
For completeness’ sake, we briefly review the discussion of [@Fang-CQG] in the following. From Eq. (\[l-G\]), one can obtain $$\begin{aligned}
\frac{d\lambda}{\lambda f(\lambda)}=\frac{dV}{V}.\label{l-f-V}\end{aligned}$$ Integrating this equation to obtain $\lambda=\lambda(V)$, one has the following differential equation for the potential: $$\begin{aligned}
\frac{dV}{V\lambda(V)}=d\phi.\label{l-V}\end{aligned}$$ Then, Eqs. (\[l-f-V\]) and (\[l-V\]) provide a route for obtaining the potential $V=V(\phi)$. If we consider a concrete form of the potential (e.g., an exponential potential), the dynamical system is specialized (e.g., it is reduced to three dimensions for the exponential potential, since $d\lambda/dN=0$). These specialized dynamical systems are too restrictive if one hopes to distinguish the fixed points that
are the common properties of the scalar field from those that are just related to special potentials [@Fang-CQG]. If we consider a more general $\lambda$, then we can get the more general stability properties of the scalar field in the LQC scenario. We will continue the discussion of this topic in Sec. \[sec4\]. In this case, Eq. (\[l\]) becomes $$\begin{aligned}
\frac{d\lambda}{dN}&=&\sqrt{6}\lambda^2 xf(\lambda).\label{l'}\end{aligned}$$ Hereafter, Eqs. (\[x’\])-(\[z’\]) along with Eq. (\[l’\]) describe a well-defined dynamical system. We will discuss the stability of this system in the following section.
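Before turning to the stability analysis, it may help to see how Eqs. (\[x’\])-(\[z’\]) and (\[l’\]) can be evolved in practice. The sketch below (ours, not part of the original paper) integrates the four-dimensional system for the illustrative potential $V=V_0\cosh(\xi\phi)$, for which $\Gamma=\xi^2/\lambda^2$ and hence $\lambda^2 f(\lambda)=\xi^2-\lambda^2$; the parameter values are arbitrary.

```python
# Minimal sketch (not part of the original paper): evolve the 4D autonomous
# system (x, y, z, lambda) in N = ln a.  For V = V0*cosh(xi*phi) one has
# Gamma = xi^2/lambda^2, so lambda^2 * f(lambda) = xi^2 - lambda^2.
import numpy as np
from scipy.integrate import solve_ivp

gamma, xi = 1.0, 1.0                         # dust background; illustrative xi

def rhs(N, u):
    x, y, z, lam = u
    B = 3 * x**2 + 1.5 * gamma * (1.0 / (1.0 - z) - x**2 - y**2)
    dx   = -3 * x - np.sqrt(6) / 2 * lam * y**2 + x * B * (1 - 2 * z)
    dy   = np.sqrt(6) / 2 * lam * x * y + y * B * (1 - 2 * z)
    dz   = -3 * gamma * z - 3 * z * (1 - z) * ((2 - gamma) * x**2 - gamma * y**2)
    dlam = np.sqrt(6) * x * (xi**2 - lam**2)          # = sqrt(6)*lam^2*x*f(lam)
    return [dx, dy, dz, dlam]

u0  = [0.01, 0.05, 1e-4, 0.9]                # illustrative initial data, z << 1
sol = solve_ivp(rhs, (0.0, 60.0), u0, rtol=1e-9, atol=1e-12)
print("final (x, y, z, lambda):", sol.y[:, -1])
print("final Omega_phi = x^2 + y^2:", sol.y[0, -1]**2 + sol.y[1, -1]**2)
```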
Properties of the autonomous system {#sec3}
===================================
Obviously, the terms on the right-hand side of Eqs. (\[x’\])-(\[z’\]) and (\[l’\]) only depend on $x,y,z,\lambda$, but not on $N$ or other variables. Such a dynamical system is usually called an autonomous system. For simplicity, we define $
\frac{dx}{dN}=F_1(x,y,z,\lambda)\equiv F_1,
\frac{dy}{dN}=F_2(x,y,z,\lambda)\equiv F_2,
\frac{dz}{dN}=F_3(x,y,z,\lambda)\equiv F_3$, and $\frac{d\lambda}{dN}=F_4(x,y,z,\lambda)\equiv F_4.$ The fixed points $(x_c,y_c,z_c,\lambda_c)$ satisfy $F_i=0, i=1,2,3,4$. From Eq. (\[l’\]), it is straightforward to see that $x=0, \lambda=0$ or $f(\lambda)=0$ can make $F_4(x,y,z,\lambda)=0$. Also, we must consider $\lambda^2f(\lambda)=0$. Just as [@Fang-CQG] argued, it is possible that $\lambda^2 f(\lambda)\neq0$ and $\frac{d\lambda}{dN}\neq0$ when $\lambda=0$. Thus the necessary condition for the existence of the fixed points with $x\neq 0$ is $\lambda^2f(\lambda)=0$. Taking into account these
factors, we can easily obtain all the fixed points of the autonomous system described by Eqs. (\[x’\])-(\[z’\]) and (\[l’\]), and they are shown in Table **I**.
| Fixed point | $x_c$ | $y_c$ | $z_c$ | $\lambda_c$ | Eigenvalues | Stability |
|---|---|---|---|---|---|---|
| $P_1$ | $0$ | $0$ | $0$ | $0$ | $\mathbf{M}^T=(0,-3\gamma,\frac32\gamma,-3+\frac32\gamma)$ | U, for all $\gamma$ |
| $P_2$ | $0$ | $0$ | $0$ | $\lambda_*$ | $\mathbf{M}^T=(0,\frac32\gamma,-3\gamma,-3+\frac32\gamma)$ | U, for all $\gamma$ |
| $P_3$ | $0$ | $1$ | $0$ | $0$ | $\mathbf{M}^T=(-3,-3\gamma,0,0)$ | S, for $\gamma=1$, $f_1(0)\geq0$; U, for $\gamma=\frac43$, if $f_1(0)\neq 0$; S, for $\gamma=\frac43$, if $f_1(0)=0$ |
| $P_4$ | $1$ | $0$ | $0$ | $0$ | $\mathbf{M}^T=(0,-6,0,6-3\gamma)$ | U, for all $\gamma$ |
| $P_5$ | $-1$ | $0$ | $0$ | $0$ | $\mathbf{M}^T=(0,-6,0,6-3\gamma)$ | U, for all $\gamma$ |
| $P_6$ | $0$ | $0$ | $0$ | $\lambda_a$ | $\mathbf{M}^T=(0,\frac32\gamma,-3\gamma,-3+\frac32\gamma)$ | U, for all $\gamma$ |
| $P_7$ | $1$ | $0$ | $0$ | $\lambda_a$ | $\mathbf{M}^T=\left(-6,6-3\gamma,\frac12\sqrt{6}\lambda_a+3,\sqrt{6}\lambda_a A\right)$ | U, for all $\gamma$ |
| $P_8$ | $-1$ | $0$ | $0$ | $\lambda_a$ | $\mathbf{M}^T=\left(-6,6-3\gamma,-\frac12\sqrt{6}\lambda_a+3,-\sqrt{6}\lambda_a A\right)$ | U, for all $\gamma$ |
| $P_9$ | $-\frac{\sqrt{6}}{6}\lambda_a$ | $\sqrt{1-\frac{\lambda^2_a}{6}}$ | $0$ | $\lambda_a$ | $\mathbf{M}^T=\left(-\lambda_a^2,-3+\frac12\lambda_a^2,\lambda_a^2-3\gamma,-\lambda^3_a-f_1(\lambda_a)\right)$ | S, for $f_1(\lambda_a)>\lambda_a$ and $\lambda_a<3\gamma$; U, for $f_1(\lambda_a)<\lambda_a$ and/or $\lambda_a>3\gamma$ |
| $P_{10}$ | $-\sqrt{\frac32}\frac{\gamma}{\lambda_a}$ | $\sqrt{\frac{3}{2\lambda_a^2}\gamma(2-\gamma)}$ | $0$ | $\lambda_a$ | See Eq. (\[p10\]) | U, for all $\gamma$ |
: The stability analysis of an autonomous system in LQC. The system is described by a self-interacting scalar field $\phi$ with positive potential $V$ coupled with a barotropic fluid $\rho_\gamma$. Explanation of the symbols used in this table: $P_{i}$
denotes the fixed points located in the four-dimensional phase space, which are labeled by the coordinates $(x_c,y_c,z_c,\lambda_c)$. $\lambda_*$ means that $\lambda$ can be any value. $\lambda_a$ is the value that makes $f(\lambda)=0$. $\mathbf{M}^T$ denotes the transpose of the column of eigenvalues at the fixed point. $f_1(\Lambda)=\left.\frac{df(\lambda)}{d\lambda}\right|_{\lambda=\Lambda}$ with $\Lambda=0, \lambda_a$. $A=\left[2f(\lambda_a)+\lambda_a\left(
\left.\frac{df(\lambda)}{d\lambda}\right|_{\lambda_a}\right)\right]$. U stands for unstable, and S stands for stable.
The properties of each fixed point are determined by the eigenvalues of the Jacobi matrix $$\begin{aligned}
\mathcal{M}= \left. \begin{pmatrix}
\frac{\partial F_1}{\partial x}&\frac{\partial F_1}{\partial y}&\frac{\partial F_1}{\partial z}&\frac{\partial F_1}{\partial \lambda}\\
\frac{\partial F_2}{\partial x}&\frac{\partial F_2}{\partial y}&\frac{\partial F_2}{\partial z}&\frac{\partial F_2}{\partial \lambda}\\
\frac{\partial F_3}{\partial x}&\frac{\partial F_3}{\partial y}&\frac{\partial F_3}{\partial z}&\frac{\partial F_3}{\partial \lambda}\\
\frac{\partial F_4}{\partial x}&\frac{\partial F_4}{\partial
y}&\frac{\partial F_4}{\partial z}&\frac{\partial F_4}{\partial
\lambda}
\end{pmatrix}
\right|_{(x_c,y_c,z_c,\lambda_c)}.\end{aligned}$$ According to Lyapunov’s linearization method, the stability of a linearized system is determined by the eigenvalues of the matrix $\mathcal{M}$ (see Chapter 3 of [@Slotine-book]). If all of the eigenvalues are strictly in the left-half complex plane, then the autonomous system is stable. If at least one eigenvalue is strictly in the right-half complex plane, then the system is unstable. If all of the eigenvalues are in the left-half complex plane, but at least one of them is on the $i\omega$ axis, then
one cannot conclude anything definite about the stability from the linear approximation. By examining the eigenvalues of the matrix $\mathcal{M}$ for each fixed point shown in Table **I**, we find that points $P_{1,2,4-8,10}$ are unstable and point $P_{9}$ is stable only under some conditions. We cannot determine the stability properties of $P_{3}$ from the eigenvalues, and we will give the full analysis of $P_3$ in the Appendix.
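The eigenvalues quoted in Table **I** can be reproduced mechanically. The following minimal sympy sketch (ours, not part of the original paper) constructs the Jacobi matrix of Eqs. (\[x’\])-(\[z’\]) and (\[l’\]) with $f(\lambda)$ left unspecified and evaluates it at $P_1$ and $P_2$ as examples:

```python
# Minimal sympy sketch (not part of the original paper): Jacobi matrix of the
# autonomous system with f(lambda) left unspecified, evaluated at P_1 and P_2.
import sympy as sp

x, y, z, lam, gamma = sp.symbols('x y z lambda gamma')
lam_star = sp.Symbol('lambda_s')      # stands for the arbitrary value lambda_*
f = sp.Function('f')

B  = 3*x**2 + sp.Rational(3, 2)*gamma*(1/(1 - z) - x**2 - y**2)
F1 = -3*x - sp.sqrt(6)/2*lam*y**2 + x*B*(1 - 2*z)
F2 = sp.sqrt(6)/2*lam*x*y + y*B*(1 - 2*z)
F3 = -3*gamma*z - 3*z*(1 - z)*((2 - gamma)*x**2 - gamma*y**2)
F4 = sp.sqrt(6)*lam**2*x*f(lam)

J = sp.Matrix([F1, F2, F3, F4]).jacobian([x, y, z, lam])

for name, pt in [('P1', (0, 0, 0, 0)), ('P2', (0, 0, 0, lam_star))]:
    Jc = J.subs(list(zip((x, y, z, lam), pt)))
    print(name, Jc.eigenvals())       # compare with the M^T rows of Table I
```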
Some remarks on Table **I**:
1. Apparently, points $P_2$ and $P_6$ have the same eigenvalues, and the difference between these two points is just on the value of $\lambda$. As noted in the caption of Table **I**, $\lambda_*$ means that $\lambda$ can be any value, and $\lambda_a$ is just the value that makes $f(\lambda)=0$. Obviously, $\lambda_a$ is just a special value of $\lambda_*$, and point $P_6$ is a special case of point $P_2$. $P_6$ is connected with $f(\lambda)$, but $P_2$ is not. From now on, we do not consider separately the special case of $P_6$ when we discuss the property of $P_2$. Hence the value of $\lambda_a$ is contained in our discussion of $\lambda_*$.
2. It is straightforward to check that, if $x_c=\lambda_c=0$, $y_c$ can be any value $y_*$ greater than or
equal to $1$. But, if $y_*>1$, then $0<z_c=1-1/y_*^2<1$, and this means that the fixed point is located in the quantum-dominated region. Although the stability of this point in the quantum region may depend on $f(\lambda)$, it is not necessary to analyze its dynamical properties, since it does not have any physical meaning. The reason is the following: if this point were stable, the Universe would not evolve into the one we observe today; if it is unstable, it will always be unstable. We will therefore just focus on point $P_3$, which stays in the classical region. Then $y_c=y_*=1$, $z_c=1-1/y_*^2=0$, i.e., we are in the classical cosmology region, $\rho\ll\rho_c$.
3. Since the adiabatic index $\gamma$ satisfies $0<\gamma< 2$ (in particular, $\gamma=\frac43$ for radiation and $\gamma=1$ for dust), none of the terms that contain $\gamma$ changes sign. A more general range for $\gamma$ is $0\leq \gamma\leq 2$ [@Billyard-scaling]. If $\gamma=0$ or $\gamma=2$, the eigenvalues corresponding to points $P_{1,2,4,5,9}$ will have some zero elements and some negative ones. To analyze the stability of these points, we would need to resort to other, more complex methods, just as we do in the Appendix for the dynamical properties of point $P_3$. In this paper, we just consider a barotropic fluid which includes radiation and
dust, and $\gamma\neq 0,2$. Notice that if one considers $\gamma=0$, the barotropic fluid describes the dark energy. This is an interesting topic, but will not be considered here for the sake of simplicity.
4. $-\sqrt{6}<\lambda_a<\sqrt{6},\lambda_a\neq 0$ should hold for point $P_9$, hence $-3+\frac12\lambda^2_a<0$.
5. $\lambda_a>0$ should hold, since $y_c>0$ for point $P_{10}$. The eigenvalues of this point are $$\begin{aligned}
\label{p10}
\mathbf{M}=\begin{pmatrix}
-3\gamma\\-3\lambda_a\gamma f_1(\lambda_a)\\-\frac32+\frac34\gamma+\frac{3}{4\lambda_a}\sqrt{(2-\gamma)(\lambda^2_a(2-\gamma)+8\gamma+24\gamma^2)}\\
-\frac32+\frac34\gamma-\frac{3}{4\lambda_a}\sqrt{(2-\gamma)(\lambda^2_a(2-\gamma)+8\gamma+24\gamma^2)}
\end{pmatrix}.\nonumber\\\end{aligned}$$ Since we just consider $0<\gamma<2$ in this paper, it is easy to check that $(2-\gamma)(\lambda^2_a(2-\gamma)+8\gamma+24\gamma^2)>0$ is always satisfied. This point is therefore unstable whether $f_1(\lambda_a)=\left.\frac{df(\lambda)}{d\lambda}\right|_{\lambda=\lambda_a}$ is negative or positive, since $-\frac32+\frac34\gamma+\frac{3}{4\lambda_a}\sqrt{(2-\gamma)(\lambda^2_a(2-\gamma)+8\gamma+24\gamma^2)}$ is always positive (a quick numerical check of this sign is sketched below).
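```python
# Quick numerical check (not part of the original paper) that the third entry
# of Eq. (p10) is positive throughout 0 < gamma < 2, lambda_a > 0.
import numpy as np

g, la = np.meshgrid(np.linspace(0.05, 1.95, 200), np.linspace(0.05, 10.0, 200))
third = (-1.5 + 0.75 * g
         + 3.0 / (4.0 * la) * np.sqrt((2 - g) * (la**2 * (2 - g) + 8 * g + 24 * g**2)))
print("always positive on the sampled grid:", bool(np.all(third > 0)))
```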
Based on Table **I** and the related remarks above, we have the following conclusions:
1. Points $P_{1,2}$: The related critical values, eigenvalues and stability properties do not depend on the specific form of the potential, since $\lambda_c=0$ or $\lambda$ can be any value $\lambda_*$.
2. Point $P_3$: The related stability properties depend on $f_1(0)=\left.\frac{df(\lambda)}{d\lambda}\right|_{\lambda=0}$.
3. Points $P_{4,5}$: The related eigenvalues and stability properties do not depend on the form of the potential, but the critical values of these points should satisfy $\lambda^2 f(\lambda)=0$ since $x_c\neq 0$.
4. Point $P_6$: It is a special case of $P_2$, but $f(\lambda_a)=0$ should be
satisfied.
5. Points $P_{7,8}$: Same as $P_6$, they would not exist if $f(\lambda_a)\neq 0$.
6. Points $P_{9,10}$: $f(\lambda_a)=0$ should hold. The fixed values and the eigenvalues of these two points depend on $f_1(\lambda_a)=
\left.\frac{df(\lambda)}{d\lambda}\right|_{\lambda=\lambda_a}$.
Thus, only points $P_{1,2}$ are independent of $f(\lambda)$.
Comparing the fixed points in LQC and the ones in classical cosmology (see the Table **I** of [@Fang-CQG]), we can see that, even though the values of the coordinates $(x_{c},y_{c},\lambda_{c})$ are the same, the stability properties are very different. This is reasonable, because the quantum modification is considered, and the autonomous system in the LQC scenario is very different from the one in the classical scenario, e.g., the autonomous system is four dimensional in LQC but three dimensional in the classical scenario. Notice that all of the fixed points lie in the classical regions, and therefore the coordinates of fixed points remain the same from classical to LQC, which we also pointed out in an earlier paper [@Xiao-dynamical].
Now we focus on the late time attractors: point $P_{3}$ under the conditions of $\gamma=1, f_1(0)\geq 0$ and $\gamma=4/3, f_1(0)=0$, and point $P_{9}$ under the conditions of $\lambda_a^2<6,
f_1(\lambda_a)>\lambda_a,\lambda_a<3\gamma$. Obviously, these points are scalar-field dominated, since ${\rho_\gamma}=3H^2\left[1/(1-z_c)-x_c^2-y_c^2\right]=0$. For point $P_{3}$,
the effective adiabatic index $\gamma_\phi=(\rho_\phi+p_\phi)/\rho_\phi=0$, which means that the scalar field behaves as an effective cosmological constant. For point $P_{9}$, $\gamma_\phi=2x_c^2/(x_c^2+y_c^2)=\lambda^2_a/3$. This describes a scaling solution in which, as the Universe evolves, the kinetic energy and the potential energy of the scalar field scale together. We can also see that no barotropic fluid is coupled to this scalar-field-dominated scaling solution. This is different from the dynamical behavior of a scalar field with the exponential potential $V=V_0
\exp(-\lambda\kappa\phi)$ in classical cosmology [@Copeland-exponential; @Hao-PRD1; @Hao-PRD2; @Samart-Phantom; @Li-O(N); @Ferreira-scaling; @Hoogen-scaling; @Billyard-interaction; @Fu-phantom], and also from the properties of the scalar field in brane cosmology [@Copeland-scalar], in which $\lambda=\rm{const.}$ (notice that the definition of $\lambda$ in [@Copeland-scalar] is different from the one in this paper) and $\Gamma$ is a function of $L(\rho(a))$ and $|V|$. In those models, the Universe may enter a stage dominated by a scalar field coupled with a fluid when $\lambda,\gamma$ satisfy some conditions [@Copeland-exponential; @Copeland-scalar].
We have discussed the dynamical behavior of the scalar field by considering $\Gamma$ as a function of $\lambda$ in this and the preceding sections. But $\Gamma$ cannot always be treated as a function of $\lambda$. We need to consider a more general autonomous system, which we
will introduce in the next section.
Further discussion of the autonomous system {#sec4}
===========================================
The dynamical behavior of the scalar field has been discussed by many authors (e.g., see [@Copeland-IJMPD; @Coley-Book; @Copeland-exponential; @Hao-PRD1; @Hao-PRD2; @Samart-Phantom; @Li-O(N); @Ferreira-scaling; @Hoogen-scaling; @Billyard-interaction; @Fu-phantom]). If one wants to get the potentials that yield the cosmological scaling solutions beyond the exponential potential, one can add a $\frac{d\phi}{dN}$ term into the autonomous system [@Nunes-scaling]. All of these methods deal with special cases of the dynamical behavior of scalar fields in backgrounds of some specific forms. By considering $\Gamma$ as a function of $\lambda$, one can treat potentials of more general forms and get the common fixed points of the general potential, as shown in [@Zhou-plb; @Fang-CQG] and in the two preceding sections. However, as is discussed in [@Fang-CQG], sometimes $\Gamma$ is not a function of $\lambda$, and then the dynamical behaviors of the scalar fields discussed above are still not general in the strict sense. For a more general discussion, we must consider the higher-order derivatives of the potential. We define $$\begin{aligned}
&&{}^{(1)}\Gamma=\frac{VV_3}{V'^2}, \quad {}^{(2)}\Gamma
=\frac{VV_4}{V'^2},\quad{}^{(3)}\Gamma=\frac{VV_5}{V'^2},\nonumber\\
&&\cdots\quad{}^{(n)}\Gamma=\frac{VV_{n+2}}{V'^2},\quad \cdots\end{aligned}$$ in which $V_{n}=\frac{d^n V}{d\phi^n},n=3,4,5,\cdots$. Then we can get $$\begin{aligned}
\frac{d\Gamma}{d N}
&=&\sqrt{6}x\left[\Gamma\lambda+{}^{(1)}\Gamma-2\lambda\Gamma^2\right],\label{G'}\\
\frac{d\left({}^{(1)}\Gamma\right)}{dN}
&=&\sqrt{6}x\left[{}^{(1)}\Gamma\lambda +{}^{(2)}\Gamma
-2\lambda\Gamma\left({}^{(1)}\Gamma\right)\right],\label{G3'}\\
\frac{d\left({}^{(2)}\Gamma\right)}{dN} &=&\sqrt{6}x\left[
{}^{(2)}\Gamma\lambda+{}^{(3)}\Gamma
-2 \lambda\Gamma\left({}^{(2)}\Gamma\right)\right],\label{G4'}\\
\frac{d\left({}^{(3)}\Gamma\right)}{dN}
&=&\sqrt{6}x\left[{}^{(3)}\Gamma\lambda+{}^{(4)}\Gamma
-2\lambda\Gamma\left({}^{(3)}\Gamma\right)\right],\label{G5'}\\
&& \cdots\cdots\nonumber\\
\frac{d\left({}^{(n)}\Gamma\right)}{dN}
&=&\sqrt{6}x\left[{}^{(n)}\Gamma\lambda+{}^{(n+1)}\Gamma
-2\lambda\Gamma \left({}^{(n)}\Gamma\right)\right],\label{Gn'}\\
&& \cdots\cdots\nonumber\end{aligned}$$ To discuss the dynamical behavior of a scalar field with a more general potential, e.g., when neither $\lambda$ nor $\Gamma$ is constant, we need to consider a dynamical system described by Eqs. (\[x’\])-(\[l\]) coupled with Eqs. (\[G’\])-(\[Gn’\]). It is easy to see that this dynamical system is also an autonomous one. We can discuss the values of the fixed points of this autonomous system. Considering Eq. (\[l\]), we can see that the fixed points should satisfy $x_c=0$, $\lambda_c=0$, or $\Gamma_c=1$. Then, we can get the fixed points of this infinite-dimensional autonomous system.
1. If $x_c=0$, considering Eqs. (\[x’\])-(\[z’\]), one can get $(y_c,z_c,\lambda_c)=(0,0,0)$ or $(y_c,z_c,\lambda_c)=(0,0,\lambda_*)$, and $\Gamma_c,{}^{(n)}\Gamma_c$ can be any values.
2. If $\lambda_c=0$, considering Eqs. (\[x’\])-(\[z’\]), one can see that the fixed points of $(x,y,z)$ are $(x_c,y_c,z_c)=(0,y_*,1-1/y_*^2)$ and $(x_c,y_c,z_c)=(\pm1,0,0)$. If $x_c=0$, $\Gamma_c$ and ${}^{(n)}\Gamma_c$ can be any values, and if $x_c=\pm 1$, ${}^{(n)}\Gamma_c=0$.
3. If $\Gamma_c=1$, considering Eqs. (\[x’\])-(\[z’\]), one can get that the fixed points of $(x,y,z,\lambda)$ are $(x_c,y_c,z_c,\lambda_c)=(0,0,0,\lambda_*)$ and $(x_c,y_c,z_c,\lambda_c)=(\pm 1,0,0,\lambda_*)$. And ${}^{(n)}\Gamma_c$ should satisfy ${}^{(n)}\Gamma_c=\lambda_*^n$. There are other fixed points, which will be discussed below.
Based on the above analysis and Table **I**, one
can find that points $P_{1-10}$ are just special cases of the fixed points of an infinite-dimensional autonomous system. Considering the definition of $\Gamma$ (see Eq. (\[Gamma\])), the simplest potential is an exponential potential when $\Gamma_c=1$. The properties of these fixed points have been discussed by many authors [@Copeland-exponential; @Hao-PRD1; @Hao-PRD2; @Samart-Phantom; @Li-O(N); @Ferreira-scaling; @Hoogen-scaling; @Billyard-interaction; @Fu-phantom]. If $x_c=0$ and $y_c=0$, this corresponds to a fluid-dominated universe, which we do not consider here. If $x_c=\pm 1$, $\Gamma_c=0$ and ${}^{(n)}\Gamma_c=0$, we do not need to consider the $\Gamma$ and the $^{(n)}\Gamma$ terms. Then the stability properties of these points are the same as those of points $P_{4,5}$ in Table **I**, and they are unstable points. The last case is $(x_c,y_c,z_c,\lambda_c)=(0,y_*,1-1/y_*^2,0)$, for which $\Gamma,{}^{(n)}\Gamma$ can be any value. To analyze the dynamical properties of this autonomous system, we need to consider the ${}^{(n)}\Gamma_c$ terms. We will get an infinite series. In order to solve this infinite series, we must truncate it by setting a sufficiently high-order ${}^{(M)}\Gamma$ to be a constant, for a positive integer $M$, so that $d\left({}^{(M)}\Gamma\right)/dN=0$. Thus we can get an $(M+4)$-dimensional autonomous system. One example is the quadratic potential $V=\frac12m^2\phi^2$ with some positive constant $m$, which gives a five-dimensional autonomous system,
and another example is the polynomial (concave) potential $V=M^{4-n}\phi^n$ [@Lindle-potential], which gives an $(n+3)$-dimensional autonomous system. Following the method we used in the two preceding sections, we can get the dynamical behavior of such finite-dimensional systems.
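As an explicit illustration of the truncation (ours, not part of the original paper): for the quadratic potential every derivative $V_k$ with $k\geq 3$ vanishes, so all ${}^{(n)}\Gamma$ are identically zero and the hierarchy closes with the five variables $(x,y,z,\lambda,\Gamma)$.

```python
# Minimal sympy sketch (not part of the original paper): for V = (1/2) m^2 phi^2
# every derivative V_k with k >= 3 vanishes, so all ^(n)Gamma are identically
# zero and the hierarchy closes with the five variables (x, y, z, lambda, Gamma).
import sympy as sp

phi, m = sp.symbols('phi m', positive=True)
V = sp.Rational(1, 2) * m**2 * phi**2

lam   = sp.simplify(sp.diff(V, phi) / V)                          # lambda = V'/V
Gamma = sp.simplify(V * sp.diff(V, phi, 2) / sp.diff(V, phi)**2)  # V V''/V'^2
nGam  = [sp.simplify(V * sp.diff(V, phi, n + 2) / sp.diff(V, phi)**2)
         for n in range(1, 5)]                                    # ^(1)Gamma ... ^(4)Gamma

print("lambda =", lam)                # 2/phi
print("Gamma  =", Gamma)              # 1/2
print("^(n)Gamma, n = 1..4:", nGam)   # all zero -> truncation at M = 1
```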
In the remainder of this section, we discuss whether this autonomous system has a scaling solution.
If $x_c=0$, then $\Gamma_c\neq 0,{}^{(n)}\Gamma_c\neq 0$, and the stability of the fixed points may depend on the truncation. As an example, if we choose ${}^{(2)}\Gamma=0$, then we can get a six-dimensional autonomous system. The eigenvalues for the fixed point $(x_c,y_c,z_c,\lambda_c,\Gamma_c,{}^{(1)}\Gamma_c)=(0,0,0,\lambda_b,\Gamma_*,{}^{(1)}\Gamma_*)$, where $\lambda_b=0$ or $\lambda_b=\lambda_*$, are $$\begin{aligned}
\mathbf{M}^T=(0,0,0,\frac32\gamma,-3\gamma,-3+\frac32\gamma).\nonumber\end{aligned}$$ Obviously, this is an unstable point, and it has no scaling solution. The eigenvalues for the fixed point $(x_c,y_c,z_c,\lambda_c,\Gamma_c,{}^{(1)}\Gamma_c)=(0,1,0,0,\Gamma_*,{}^{(1)}\Gamma_*)$ are $$\begin{aligned}
\mathbf{M}^T=(0,0,0,0,-3\gamma,-3-3\gamma).\nonumber\end{aligned}$$ According to the center manifold theorem (see Chapter 8 of [@Khalil-non], or [@DynamicalReduction]), there are two nonzero eigenvalues, and we need to reduce the dynamical system to two dimensions to get the stability properties of the autonomous system. This point may have a scaling solution, but establishing this requires a more complex mathematical method. We discuss this problem in another paper [@Xiao-scaling].
We now discuss the last case. If $\Gamma_c=1$, we can consider an exponential potential. Then the autonomous system is reduced
to three dimensions. It is easy to check that the values $(x_{ec},y_{ec},z_{ec})$ of the fixed points are just the values $(x_c,y_c,z_c)$ of points $P_{6-10}$ in Table **I**. We focus on the two special fixed points: $$\begin{aligned}
&& F_1: (x_{ec},y_{ec},z_{ec})=(-\lambda/\sqrt{6}, \sqrt{1-\lambda^2/6}, 0),\nonumber\\
&&F_2: (x_{ec},y_{ec},z_{ec})=(-\sqrt{3/2}\gamma/\lambda,
\sqrt{3\gamma(2-\gamma)/(2\lambda^2)}, 0).\nonumber\end{aligned}$$ Using Lyapunov’s linearization method, we can find that $F_2$ is unstable and $F_1$ is stable if $\lambda<3\gamma$. It is easy to check that $\rho_\gamma=3H^2[1/(1-z_{ec})-x_{ec}^2-y_{ec}^2]=0$ when $(x_{ec},y_{ec},z_{ec})=(-\lambda/\sqrt{6}, \sqrt{1-\lambda^2/6},
0)$. From the above analysis, we find that the scalar-field-dominated scaling solution is the only one when the autonomous system is described by a self-interacting scalar field coupled with a barotropic fluid in the LQC scenario.
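The approach to the scalar-field-dominated point $F_1$ can also be seen numerically. A minimal sketch (ours, not part of the original paper) of the reduced three-dimensional system for an exponential potential, with the illustrative values $\lambda=1$, $\gamma=1$:

```python
# Minimal sketch (not part of the original paper): the reduced 3D system for an
# exponential potential (lambda = const), illustrating the approach to
# F1 = (-lambda/sqrt(6), sqrt(1 - lambda^2/6), 0).
import numpy as np
from scipy.integrate import solve_ivp

lam, gamma = 1.0, 1.0                        # illustrative values

def rhs(N, u):
    x, y, z = u
    B = 3 * x**2 + 1.5 * gamma * (1.0 / (1.0 - z) - x**2 - y**2)
    dx = -3 * x - np.sqrt(6) / 2 * lam * y**2 + x * B * (1 - 2 * z)
    dy = np.sqrt(6) / 2 * lam * x * y + y * B * (1 - 2 * z)
    dz = -3 * gamma * z - 3 * z * (1 - z) * ((2 - gamma) * x**2 - gamma * y**2)
    return [dx, dy, dz]

sol = solve_ivp(rhs, (0.0, 40.0), [0.01, 0.05, 1e-4], rtol=1e-10, atol=1e-12)
print("final (x, y, z):", sol.y[:, -1])
print("F1 prediction   :", (-lam / np.sqrt(6), np.sqrt(1 - lam**2 / 6), 0.0))
```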
Conclusions {#sec5}
===========
The aim of this paper is two-fold. We discuss the dynamical behavior of the scalar field in the LQC scenario following [@Fang-CQG; @Zhou-plb]. To further analyze the dynamical properties of a scalar field with a more general potential, we introduce an infinite-dimensional autonomous system.
To discuss the dynamical properties of the scalar field in the LQC scenario, we take $\Gamma$ as a function of $\lambda$, and extend the autonomous system from three dimensions to four dimensions. We find that this extended autonomous system has more fixed points than the three-dimensional one
does. And we find that for some fixed points, the function $f(\lambda)$ affects either their values, e.g., for points $P_{4-10}$, or their stability properties, e.g., for points $P_{3,9}$. In other words, the dynamical properties of these points depend on the specific form of the potential. But some other fixed points, e.g., points $P_{1,2}$, are independent of the potential. The properties of these fixed points are shared by all scalar fields. We also find that there are two late-time attractors, but the Universe is scalar-field dominated since $\rho_\gamma=0$ at these late-time attractors.
The method developed by [@Fang-CQG; @Zhou-plb] can describe the dynamical behavior of the scalar field with a potential of a more general form than, for example, an exponential potential [@Copeland-exponential; @Hao-PRD1; @Hao-PRD2; @Samart-Phantom; @Li-O(N); @Ferreira-scaling; @Hoogen-scaling; @Billyard-interaction; @Fu-phantom]. But it is not all-encompassing. If one wants to discuss the dynamical properties of a scalar field with an arbitrary potential, one needs to consider the higher-order derivatives of the potential $V(\phi)$. Hence the dynamical system extends from four dimensions to infinite dimensions. This infinite-dimensional dynamical system is still autonomous, but it is impossible to get all of its dynamical behavior unless one considers $\Gamma_c=1$, which just gives an exponential potential.
If one wants to study as much as possible of the dynamical properties of this infinite-dimensional autonomous system, one has to consider a truncation that sets ${}^{(M)}\Gamma=\rm{Const.}$, with $M$ above a certain positive integer. Then the infinite-dimensional system can be reduced to $(M+4)$ dimensions. And we find that there is just the scalar-field-dominated scaling solution for this autonomous system. We only present the basic properties of this infinite-dimensional autonomous system in this paper, and will continue the discussion in [@Xiao-scaling].
We only obtain scalar-field-dominated scaling solutions, whether we consider $\Gamma$ as a function of $\lambda$ or consider the higher-order derivatives of the potential. This conclusion is very different from that for the autonomous system described just by a scalar field with an exponential potential [@Samart-Phantom]. This is an interesting problem that awaits further analysis.
K. Xiao thanks Professor X. Li for his help with the center manifold theorem. This work was supported by the National Natural Science Foundation of China, under Grant No. 10875012 and the Fundamental Research Funds for the Central Universities.
The stability properties of the Point $P_3$
===========================================
In Sec. \[sec3\], we point out that it is impossible to get the stability properties
of the fixed point if at least one of the eigenvalues of $\mathcal{M}$ is on the $i\omega$ axis with the rest being in the left-half complex plane. The fixed point $P_3$ is exactly such a point. In this appendix, we use the center manifold theorem (see Chapter 8 of [@Khalil-non], or [@DynamicalReduction]) to get the condition for stability of $P_3$. The coordinates of $P_3$ are $(0,1,0,0)$ and the eigenvalues are $(-3,-3\gamma,0,0)$. First, we translate $P_3$ to $P'_3$ $(x_c=0,\bar{y}_c=y-1=0,z_c=0, \lambda_c=0)$. In this case, Eqs. (\[x’\])-(\[z’\]) and (\[l’\]) become $$\begin{aligned}
\frac{dx}{dN}&=&-3\,x-\frac12\,\sqrt {6}\lambda\, \left( \bar{y}+1
\right) ^{2}+x \left[ 3\,{x}^
{2}+\frac32\gamma\, \left((1+z)\right.\right.\nonumber\\
&&\left.\left.-{x}^{2}-\left( \bar{y}+1 \right) ^{2} \right) \right]\left( 1-2z\right)\label{X'},\\
\frac{d \bar{y}}{dN}&=&\frac12\,\sqrt {6}\lambda\,x \left( \bar{y}+1
\right) + \left( \bar{y}+1 \right)
\left[3\,{x}^{2}+\frac32\gamma\, \left((1+z)\right.\right.\nonumber\\
&&\left.\left.-{x}^{2}-\left( \bar{y}+1 \right) ^{2} \right)
\right] \left( 1-2z
\right)\label{Y'}, \\
\frac{dz}{dN}&=&-3\gamma z-3z\left(1-z\right)\left[2x^2-\gamma
x^2-\gamma
(\bar{y}+1)^2\right],\label{Z'}\\
\frac{d\lambda}{dN}&=& \sqrt
{6}{\lambda}^{2}\left(f(0)+f_1(0)\lambda\right) x \label{L'},\end{aligned}$$ where we have considered that the related variables $(x,\bar{y},z,\lambda)$ are small around point $(x_c,\bar{y}_c,z_c,\lambda_c)=\left(0,0, 0,0\right)$. Therefore the following Taylor series $$\begin{aligned}
&&\frac{1}{1-z}=1+{z}+\cdots,\nonumber\\
&&f(\lambda)=f(0)+f_1(0)\lambda+\cdots,\nonumber\end{aligned}$$ can be used, where $f_1(0)=\frac{df(\lambda)}{d\lambda}\left.\right|_{\lambda=0}$.
We can get the Jacobi matrix $\mathcal{M'}$ of the dynamical system Eqs. (\[X’\])-(\[L’\]) as $$\begin{aligned}
\mathcal{M'}= \begin{pmatrix}
-3& 0& 0&-\frac{\sqrt{6}} 2\\
0&-3\gamma& \frac32\gamma & 0\\
0&0& 0&0\\
0&0&0&0
\end{pmatrix}.\end{aligned}$$ It is easy to get the
eigenvalues and eigenvectors of $\mathcal{M}'$. Let $\mathcal{A}$ denote the column of eigenvalues, and $\mathcal{S}$ denote the matrix whose columns are the corresponding eigenvectors; then we have $$\begin{aligned}
\mathcal{A}=\begin{pmatrix} -3\\ -3\gamma\\ 0\\ 0
\end{pmatrix},\quad
\mathcal{S}=\begin{pmatrix}
1& 0& -\frac{\sqrt{6}}{6}& 0\\
0& 1& 0 & \frac{1}{2}\\
0 & 0&0& 1\\
0 & 0& 1 & 0
\end{pmatrix}.\end{aligned}$$ With the help of $\mathcal{S}$, we can transform $\mathcal{M}'$ into a block diagonal matrix $$\begin{aligned}
\mathcal{S}^{-1} \mathcal{M}' \mathcal{S}=\begin{pmatrix}
-3 & 0&0&0\\
0& -3\gamma& 0& 0\\
0&0&0&0\\
0&0&0&0
\end{pmatrix}=\begin{pmatrix}
\mathcal{A}_1 &0\\
0& \mathcal{A}_2
\end{pmatrix},\end{aligned}$$ where all eigenvalues of $\mathcal{A}_1$ have negative real parts, and all eigenvalues of $\mathcal{A}_2$ have zero real parts.
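This block diagonalization is straightforward to verify; a minimal sympy check (ours, not part of the original paper):

```python
# Minimal sympy check (not part of the original paper) that S^{-1} M' S is
# block diagonal, with A1 = diag(-3, -3*gamma) and A2 = 0.
import sympy as sp

gamma = sp.symbols('gamma')
M = sp.Matrix([[-3, 0, 0, -sp.sqrt(6)/2],
               [0, -3*gamma, sp.Rational(3, 2)*gamma, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
S = sp.Matrix([[1, 0, -sp.sqrt(6)/6, 0],
               [0, 1, 0, sp.Rational(1, 2)],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

print(sp.simplify(S.inv() * M * S))   # expect diag(-3, -3*gamma, 0, 0)
```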
Now we change the variables to be $$\begin{aligned}
\begin{pmatrix}
X\\Y\\Z\\ \bar{\lambda}
\end{pmatrix}=
\mathcal{S}^{-1}\begin{pmatrix}
x\\ \bar{y}\\ {z}\\ \lambda
\end{pmatrix}=
\begin{pmatrix}
x+\frac{\sqrt{6}}{6}\lambda\\ \bar{y}-\frac12z\\ \lambda \\z
\end{pmatrix}.\end{aligned}$$ Then, we can rewrite the autonomous system in the form of the new variables: $$\begin{aligned}
\begin{pmatrix}\label{A9}
\frac{dX}{dN}\\ \frac{dY}{dN}\\ \frac{dZ}{dN}\\ \frac{d\bar{\lambda}}{dN}
\end{pmatrix}=
\begin{pmatrix}
-3 & 0&0&\\
0&-3\gamma&0&0\\
0&0&0&0\\
0&0&0&0
\end{pmatrix}
\begin{pmatrix}
X\\Y\\Z\\ \bar{\lambda}
\end{pmatrix}+
\begin{pmatrix}
G_1\\G_2\\G_3\\G_4
\end{pmatrix},\nonumber\\\end{aligned}$$ where $G_i=G_i(X,Y,Z,\bar{\lambda}),(i=1,2,3,4)$ are functions of $X,Y,Z$, and $\bar{\lambda}$. It is easy to get $G_i$ by substituting the transformations $x=X-\frac{\sqrt{6}}{6}Z,
\bar{y}=Y+\frac12\bar{\lambda}, z=\bar{\lambda}, \lambda=Z$ into the R.H.S. of Eqs. (\[X’\])-(\[L’\]).
According to the center
---
abstract: 'Spatial heterogeneity in the elastic properties of soft random solids is examined via vulcanization theory. The spatial heterogeneity in the *structure* of soft random solids is a result of the fluctuations locked-in at their synthesis, which also brings heterogeneity in their *elastic properties*. Vulcanization theory studies semi-microscopic models of random-solid-forming systems, and applies replica field theory to deal with their quenched disorder and thermal fluctuations. The elastic deformations of soft random solids are argued to be described by the Goldstone sector of fluctuations contained in vulcanization theory, associated with a subtle form of spontaneous symmetry breaking connected with the liquid-to-random-solid transition. The resulting free energy of this Goldstone sector can be reinterpreted as arising from a phenomenological description of an elastic medium with quenched disorder. Through this comparison, we arrive at the statistics of the quenched disorder of the elasticity of soft random solids, in terms of residual stress and Lamé-coefficient fields. In particular, there are large residual stresses in the equilibrium reference state, and the disorder correlators involving the residual stress are found to be long-ranged and governed by a universal parameter that also gives the mean shear modulus.'
author:
- Xiaoming Mao
- 'Paul
M. Goldbart'
- Xiangjun Xing
- Annette Zippelius
title: Soft random solids and their heterogeneous elasticity
---
Introduction {#SEC:Intr}
============
Random solids, such as chemical gels, rubber, glasses and amorphous silica, are characterized by their [*structural*]{} heterogeneity, which results from the randomness locked-in at the time they are synthesized. The mean positions of the constituent particles exhibit no long-range order, and every particle inhabits a unique spatial environment. The [*elasticity*]{} of random solids also inherits heterogeneity from this locked-in randomness. For example, the Lamé coefficients and the residual stress vary from point to point throughout the elastic medium. The central goal of this paper is to develop a statistical characterization of random elastic media, via the mean values of the Lamé coefficients and the residual stress as well as the two-point spatial correlations amongst the fluctuations of these quantities, which we shall call the disorder correlators. These mean values and correlations are to be thought of as averages taken over realizations of the sample fabrication for a given set of fabrication parameters. We expect these characteristic quantities to coincide with the volume-averages of their single-sample counterparts.
Our focus will be on [*soft*]{} random solids. These are network media that
include chemical gels [@Addad1996], which are formed by the permanent random chemical bonding of small molecules, as well as rubber [@Treloar1975], which is formed via the introduction of permanent random chemical cross-links between nearby monomers in melts or solutions of flexible long-chain polymers. Soft random solids are characterized by their *entropic elasticity*. These are media in which the shear modulus originates in the strong thermal fluctuations of the configurations of the constituent particles and is much smaller than the bulk modulus, which is energetic in nature and originates in the excluded-volume interactions between the particles. The concept of entropic elasticity forms the basis of the classical theory of rubber elasticity, developed long ago by Kuhn, Flory, Wall, Treloar and others (see Ref. [@Treloar1975]).
As we discuss soft random solids we shall take chemical gels as our prototype media. When the density of the introduced links exceeds the percolation threshold, an infinite cluster of linked molecules forms, spanning the system, and the network acquires a thermodynamic rigidity with respect to shear deformations [^1]. This event is often called the gelation transition or the vulcanization transition [@Goldbart1996].
The [*geometrical*]{} or [*architectural*]{} aspects of the gelation/vulcanization transition can be well captured by the
theory of percolation [@Stauffer1994]. However, to study the [*elasticity*]{} that emerges at the gelation/vulcanization transition, and especially its heterogeneity, one needs a theory that incorporates not only the geometrical aspects, but also the equilibrium thermal fluctuations of the particle positions and the strong, qualitative changes that they undergo at the gelation/vulcanization transition. In the setting of rubber elasticity, although the classical theory is successful in explaining the entropic nature of the shear rigidity of rubber, it is essentially based on a single-chain picture and, as such, is incapable of describing the consequences of the long scale random structure of rubbery media, e.g., the random spatial variations in their local elastic parameters and the resulting nonaffinity of their local strain-response to macroscopic applied stresses.
The general problem of heterogeneous elasticity and nonaffine deformations has been studied in the setting of flexible polymer networks [@Rubinstein1997; @Glatting1995; @Holzl1997; @Svaneborg2005], and also semi-flexible polymer networks [@Head2003; @Heussinger2007], glasses [@Wittmer2002], and granular materials [@Utter2008]. Particularly noteworthy is the recent investigation by DiDonna and Lubensky of the general relationship between the spatial correlations of the nonaffine deformations and those of the underlying quenched random elastic parameters [@DiDonna2005].
The mission of the present work is to develop | 1 | member_17 |
a statistical characterization of the heterogeneous elasticity of soft random solids by starting from a semi-microscopic model and applying a body of techniques that we shall call vulcanization theory to it. In particular, we aim to obtain the mean values and disorder correlators of elastic parameters, such as the Lamé coefficients and the residual stress, in terms of the parameters of the semi-microscopic model, such as the density of cross-links, the excluded-volume interactions, etc. One of our key findings is that the disorder correlator of the residual stress is long ranged, as are all cross-disorder correlators between the residual stress and the Lamé coefficients. We also find that these disorder correlators are controlled by a universal scale parameter—independent of the microscopic details—that, moreover, controls the scale of the mean shear modulus. In addition, we characterize the nonaffininity of the deformations in terms of these parameters.
The strategy we adopt for accomplishing our goals involves a handshaking between two different analytical schemes. [^2] The first scheme follows a well-trodden path. We begin with a semi-microscopic model, the Randomly Linked Particle Model (RLPM) [@Mao2007; @Ulrich2006; @Broderix2002; @Mao2005], involving particle coordinates and quenched random interactions between them that represent the randomness that is
locked-in at the instant of cross-linking. In order to account for this quenched randomness, as well as the thermal fluctuations in particle positions which are the origin of the entropic elasticity, we adopt the framework of vulcanization theory [@Goldbart1996]. This framework includes the use of the replica method to eliminate the quenched randomness, followed by a Hubbard-Stratonovich transformation to construct a field-theoretic representation in terms of an order parameter field—in this case, the random solidification order parameter. We analyze this representation at the stationary-point level of approximation, and then focus on the gapless excitations around the stationary point—the Goldstone fluctuations—observing that these excitations can be parameterized in terms of a set of replicated shear deformation fields [@Mao2007; @Ulrich2006; @Goldbart2004]. The second prong of our approach is less conventional. It begins with our introduction of a phenomenological model free energy of an elastic continuum, characterized by a nonlocal kernel of quenched random attractions between mass-points. To obtain statistical information about this quenched random kernel, we use the replica method to eliminate the randomness, and obtain a pure model of replicas of the deformation field with couplings controlled by the disorder-moments of the kernel; these disorder-moments are then treated as unknown quantities
to be determined. This pure model has precisely the same structure as the Goldstone theory mentioned in the previous paragraph. Thus, by comparing the two models we can learn the moments of the quenched random kernel from the (already-computed) coupling functions of the Goldstone model. Then we analyze a particular realization of the phenomenological model having a fixed value of the quenched randomness. We observe that the natural reference state (i.e., the state of vanishing displacement field) of this model is not in fact an equilibrium state for any given realization of disorder, due to the random attractive interactions embodied by the kernel. We analyze how these attractions compete with the near-incompressibility of the medium to determine the displacement to the new, equilibrium configuration, which we shall term the relaxed state. (This process can be understood in the setting of a hypothetical, instant process of preparing a sample of rubber: the cross-links introduce attractions and random stresses, and the system then undergoes relaxation, including global contraction and local deformation.) We then explore shear deformations around this relaxed state, pass to the local limit, and arrive at the standard form of continuum elasticity theory, expressed in terms of the strain around
the relaxed state, but with coefficients that are explicit functions of the quenched random kernel. Thus, using the information about the statistics of the quenched random kernel obtained via the comparison with the RLPM, we are able to infer statistical information about the elastic properties of the random elastic medium in the relaxed state, which is of experimental relevance.
Why is it legitimate to identify the Goldstone theory arising from the microscopic model with the replica theory of the phenomenological model? The reason is that, within the schemes that we have chosen to analyze them, both models describe shear deformations not of the equilibrium state of the system but, rather, of the system immediately after cross-linking has been done but before any cross-linking-induced relaxation has been allowed to occur. The equivalence between these two schemes is not based solely on the equality of free energies of the two models; it is also based on the identity of the physical meaning of the (replicated) deformation fields in the two theories, and thus the way that these fields couple to externally applied forces. Both schemes are descriptions of the elasticity of soft random solids displaying heterogeneous elastic properties, one from a
semi-microscopic viewpoint, the other invoking phenomenological parameters. Thus, the equivalence of the two schemes provides the values of the phenomenological parameters as functions of the semi-microscopic parameters.
Before concluding this introduction, let us emphasize that this work is a first attempt to [*derive*]{} the elastic heterogeneities of vulcanized matter. To date, the prevalent strategy in studies of disordered systems is to assume a particular structure for the quenched disorder on phenomenological grounds (such as Gaussian, short ranged, etc.) and explore the consequences. Assumptions about disorder structure are usually based on symmetry arguments and also the preference for simplicity, but otherwise lack theoretical substantiation. It is one of the main advantages of vulcanization theory that it can [*predict*]{} some generic properties of the disordered structure in vulcanized matter, as is shown in the present work, which can be used to support and sharpen the assumptions underlying more phenomenological theories.
The classical theory of rubber elasticity, which was shown to be derivable from the saddle-point approximation of vulcanization theory, is known to fail to describe rubber elasticity in the intermediate and large deformation regimes [@Treloar1975]. While a recent study [@Xing2007] shows that long wave-length thermal elastic fluctuations account qualitatively for this
failure, on general grounds, we expect that elastic heterogeneities should play an equally important role. It would therefore be interesting to explore how the elastic heterogeneities discovered in the present work modify the macroscopic elasticity of rubbery materials. Such a program is left for a future work.
The outline of this paper is as follows. In Section \[SEC:RLPM\], we analyze the semi-microscopic RLPM using the tools of vulcanization theory. Specifically, we use the replica method [@Mezard1987] to study a model network consisting of randomly linked particles, which exhibits a continuous phase transition from the liquid state to the random solid state, paying particular attention to the Goldstone fluctuations of the random solid state, which, as we have mentioned above, are related to the elastic shear deformations of the random solid state. In Section \[SEC:Phen\], we propose a nonlocal phenomenological model of a random elastic medium, and subsequently derive it from the RLPM, by identifying that this phenomenological model is the low-energy theory (i.e., it captures the Goldstone fluctuations) of the RLPM in the random solid state. Through this correspondence we learn about the statistics of the quenched random nonlocal kernel. In Section \[SEC:relaxation\], we study the relaxation of
the phenomenological model to a stable state for any fixed randomness (i.e., any realization of disorder), due to random stresses and attractive interactions. We re-expand the free energy about this relaxed state to obtain the true elastic theory. This relaxed reference state is, however, still randomly stressed [@Alexander1998]; nevertheless, the stress in this state—the so-called *residual stress*—does satisfy the condition of mechanical equilibrium, viz., $\partial_i \Stre_{ij}(x)=0$. In its local limit, the proposed phenomenological model reproduces a version of Lagrangian continuum elasticity theory that features random Lamé coefficients and residual stresses. In this section, we also use the phenomenological model to explore the related issue of elastic heterogeneity, viz., the nonaffine way in which the medium responds to external stress. In Section \[SEC:Heterogeneity\], we arrive at predictions for the statistics of the quenched random elastic parameters that feature in the phenomenological model in the relaxed state, along with the statistics of nonaffine deformations. Thus we provide a first principles account of the heterogeneous elasticity of soft random solids. We conclude, in Section \[SEC:ConcDisc\], with a brief summary and discussion of our results.
Semi-microscopic approach: The randomly linked particle model {#SEC:RLPM}
=============================================================
Randomly linked particle model {#SEC:RLPMBasics}
------------------------------
The Randomly Linked Particle | 1 | member_17 |
Model (RLPM) consists of $\PartNumb$ particles in a volume $\Volu$ in $\Dime$ dimensions. In order to study elasticity, including bulk deformations, $\Volu$ is allowed to fluctuate under a given pressure $\Pres$. The positions of the particles in this fluctuating volume are denoted by $\{\PartPosi_j\}_{j=1}^{\PartNumb}$. The particles in the RLPM interact via two types of interactions: a repulsive interaction $\VExc$ between all pairs of particles (either direct or mediated via a solvent); and an attractive interaction $\VLink$ between the pairs of particles that are chosen at random to be linked. We take the latter to be a soft link (as opposed to the usual hard constraint of vulcanization theory). Thus, the Hamiltonian can be written as $$\begin{aligned}
\label{EQ:HRLPM}
H_{\RealDiso} =
\sum_{1\le i<j\le \PartNumb}
\VExc (\PartPosi_i-\PartPosi_j)+
\sum_{e=1}^{\LinkNumb}
\VLink \big(\vert \PartPosi_{i_e}-\PartPosi_{j_e}\vert\big) .\end{aligned}$$ The label $e$, which runs from $1$ to the total number of links $M$, indexes the links in a given realization of the quenched disorder, and specifies them via the quenched random variables $M$ and $\{i_{e},j_{e}\}_{e=1}^{M}$.
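As a concrete illustration (ours, not part of the original development), the following minimal Python sketch evaluates the Hamiltonian of Eq. (\[EQ:HRLPM\]) for one realization of the links; the narrow-Gaussian regularization of the delta-function repulsion and all parameter values are assumptions made purely for this example.

```python
import numpy as np

def rlpm_energy(positions, links, nu, a, sigma, kT=1.0):
    """Energy of one disorder realization of the RLPM, cf. Eq. (EQ:HRLPM).

    positions : (N, d) array of particle coordinates c_i
    links     : list of (i_e, j_e) index pairs (the quenched disorder chi)
    nu        : excluded-volume strength
    a         : Gaussian-chain link length scale
    sigma     : width of the narrow Gaussian used here to regularize the
                delta-function repulsion (an assumption of this sketch)
    """
    N, d = positions.shape
    diff = positions[:, None, :] - positions[None, :, :]
    r2 = np.sum(diff**2, axis=-1)
    # regularized delta-function repulsion, summed over distinct pairs i < j
    gauss = np.exp(-r2 / (2 * sigma**2)) / (2 * np.pi * sigma**2) ** (d / 2)
    repulsion = nu * np.sum(np.triu(gauss, k=1))
    # harmonic (zero rest-length spring) attraction over the linked pairs
    ii, jj = np.array(links).T
    attraction = kT * np.sum(r2[ii, jj]) / (2 * a**2)
    return repulsion + attraction

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(100, 3))                  # 100 particles in d = 3
lnk = [tuple(rng.choice(100, 2, replace=False)) for _ in range(80)]
print(rlpm_energy(pos, lnk, nu=1.0, a=1.0, sigma=0.3))
```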
We take $\VExc$ to be a strong, short-ranged repulsion, which serves to penalize density fluctuations and thus render the system nearly incompressible, as is appropriate for regular or polymeric liquids. As we shall describe in | 1 | member_17 |
Section \[SEC:FTMF\], we address the interactions in Eq. (\[EQ:HRLPM\]) by eliminating the particle coordinates in favor of collective fields, which have the form of joint densities of the replicas of the particles. This continuum approach enables us to focus on the physics of random particle localization, particularly at lengthscales that are relevant for such localization, which (except when the density of links is extremely high) are long compared with the ranges of $\VExc$ and $\VLink$. With a focus on these longer lengthscales in mind, we see that it is adequate to replace the repulsive interaction $\VExc(\PartPosi)$ by the model Dirac delta-function excluded-volume interaction $\ExclVolu\,\delta(\PartPosi)$, characterized by the strength $\ExclVolu$ [@Gennes1979; @Deam1976; @Doi1986]. This procedure amounts to making a gradient expansion in real space (or, equivalently, a wave vector expansion in Fourier space) of $\VExc$ and retaining only the zeroth-order term; it gives for the strength $\ExclVolu$ the value ${\int}d\PartPosi\,\VExc(\PartPosi)$. Terms of higher order in the gradient expansion would have a non-negligible impact on the suppression of density fluctuations only at lengthscales comparable to or shorter than the range of $\VExc$, and fluctuation modes at such lengthscales are not the ones driven via random linking to the instability associated with random | 1 | member_17 |
localization (and thus are not modes in need of stabilization via $\VExc$). We remark that the approximate interaction $\ExclVolu\,\delta(\PartPosi)$ is not, in practice, singular, and is instead regularized via a high wave vector cut-off.
At our coarse-grained level of description, the particles of the RLPM can be identified with polymers or small molecules, and the soft links can be identified with molecular chains that bind the molecules to one another. The potential for the soft links can be modeled as Gaussian chains $$\begin{aligned}
\label{EQ:VGC}
\VLink^{(\textrm{GC})}(\vert r \vert) = \frac{\BoltCons T \vert r \vert^2}{2\LinkScal^2} ,\end{aligned}$$ i.e., a harmonic attraction, or a zero rest-length spring, of lengthscale $\LinkScal$ between the two particles. In making this coarse-graining one is assuming that microscopic details (e.g., the precise locations of the cross-links on the polymers, the internal conformational degrees of freedom of the polymers, and the effects of entanglement) do not play significant roles for the long-wavelength physics. In part, these assumptions are justified by studying more detailed models, in which the conformational degrees of freedom of the polymers are retained [@Goldbart1996]. However, we should point out that the precise form of $\VLink$ is not important for long-wavelength physics and, hence, for the elastic properties | 1 | member_17 |
that we are aiming to investigate \[cf. the discussion at the end of Sec. \[SEC:FTMF\]\].
From the discussion above, the RLPM is a convenient minimal model of soft random solids, inasmuch as it adequately captures the necessary long-wavelength physics. It can be regarded as either a model of a chemical gel, or as a caricature of vulcanized rubber or other soft random solid. The RLPM can be viewed as a simplified version of vulcanization theory [@Castillo1994; @Goldbart2004], with microscopic details, such as polymer chain conformations, being ignored. Nevertheless, it is able to reproduce the same universality class as vulcanization theory at the liquid-to-random-solid transition. For the study of elasticity, we shall consider length-scales on which the system is a well-defined solid (i.e., scales longer than the localization length, as we shall see later in this paper). Both of these scales are much larger than the characteristic linear dimension of an individual polymer. The RLPM is a model very much in the spirit of lattice percolation, except that it naturally allows for particle motion as well as particle connectivity, and is therefore suitable for the study of continuum elasticity and other issues associated with the (thermal or deformational) motion of the | 1 | member_17 |
constituent entities.
Equation (\[EQ:HRLPM\]) is a Hamiltonian for a given realization of quenched disorder $\RealDiso \equiv \{ i_e, j_e\}_{e=1}^{\LinkNumb}$, which describes the particular random instance of the linking of the particles. These links are the quenched disorder of the system, which are specified at synthesis and do not change with thermal fluctuations. This is because there is a wide separation between the timescale for the linked-particle system to reach thermal equilibrium and the much longer timescale required for the links themselves to break. Therefore, we treat the links as permanent. Later, we shall apply the replica technique [@Mezard1987] to average over these permanent random links.
Replica statistical mechanics of the RLPM {#SEC:RLPMReplica}
-----------------------------------------
For a given volume and a given realization of disorder $\RealDiso$ we can write the partition function $Z_{\RealDiso}$ for the RLPM as $$\begin{aligned}
\label{EQ:partition}
Z_{\RealDiso} (\Volu)
&\equiv& \int _{\Volu} \prod_{i=1}^{\PartNumb} d \PartPosi_i
\exp\Big(-\frac{H_{\RealDiso}}{{\BoltCons}T}\Big) \nonumber\\
&=& \PartitionLiquid(V) \Bigg\ltha \prod _{e=1}^{\LinkNumb}
\LinkPote\big( \vert \PartPosi_{i_e} - \PartPosi_{j_e} \vert \big)
\Bigg\rtha_{1}^{\HamiExcl},\end{aligned}$$ where $\HamiExcl \equiv \frac{\ExclVolu }{2}\sum_{i,j=1}^{\PartNumb} \delta (\PartPosi_i-\PartPosi_j)$ is the excluded-volume interaction part of the Hamiltonian, and $\PartitionLiquid(V) \equiv \int _{\Volu} \prod_{i=1}^{\PartNumb} d \PartPosi_i \exp\big(-\HamiExcl/{\BoltCons}T\big)$ is the partition function of the liquid in the absence of any links. The issue of | 1 | member_17 |
the Gibbs factorial factor, which is normally introduced to compensate for the overcounting of identical configurations, is a genuinely subtle one in the context of random solids (for a discussion, see Ref. [@Goldbart1996]). However, our focus will be on observables such as the order parameter rather than on free energies, and thus the omission of the Gibbs factor is of no consequence. The factor $$\begin{aligned}
\label{EQ:DeltReal}
\LinkPote\big( \vert \PartPosi_{i_e} - \PartPosi_{j_e} \vert \big)
\equiv e^{ -\frac{\vert \PartPosi_{i_e} - \PartPosi_{j_e} \vert^2}
{2 \LinkScal^2}}\end{aligned}$$ is associated with the link-induced attractive interaction term in the Hamiltonian. The average $\ltha \cdots \rtha_{1}^{\HamiExcl}$, taken with respect to a Boltzmann weight involving the excluded-volume interaction Hamiltonian $\HamiExcl$, is defined as $$\begin{aligned}
\ltha \cdots \rtha_{1}^{\HamiExcl} \equiv
\frac{1}{\PartitionLiquid(V)}\int _{\Volu} \prod_{i=1}^{\PartNumb} d \PartPosi_i \,
e^{-\frac{\HamiExcl}{{\BoltCons}T}}\ldots\, .\end{aligned}$$ The corresponding Helmholtz free energy is then given by $$\begin{aligned}
\HelmFreeEner _{\RealDiso} (\Volu) \equiv -\BoltCons T \ln Z_{\RealDiso} (\Volu).\end{aligned}$$
To perform the average of the free energy over the quenched disorder, we shall need to choose a probability distribution that assigns a sensible statistical weight $\DEDist (\{ i_e, j_e\}_{e=1}^{\LinkNumb})$ to each possible realization of the total number $\LinkNumb$ and location $\{i_e,j_e\}_{e=1}^{\LinkNumb}$ of the links. Following an elegant strategy due to Deam and Edwards [@Deam1976], | 1 | member_17 |
we assume a version of the normalized link distribution as follows: $$\begin{aligned}
\DEDist(\RealDiso)
= \frac{
\Big(\frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0}\Big)^{\LinkNumb}
Z_{\RealDiso}(\Volu_0)}
{\LinkNumb ! Z_1},\end{aligned}$$ where $\LinkDens$ is a parameter that controls the mean total number of links. We assume that the preparation state (i.e., the state in which the links are going to be introduced) is in a given volume $\Volu_0$. The $Z_{\RealDiso}(\Volu_0)$ factor is actually the partition function, as given in Eq. (\[EQ:partition\]), and can be regarded as probing the equilibrium correlations of the underlying unlinked liquid. The factor $\LinkPote_0=\big(2\pi \LinkScal^2 \big)^{d/2}$ is actually the $p=0$ value of the Fourier transform of the $\LinkPote$ function defined in Eq. (\[EQ:DeltReal\]), and we shall see later that these factors ensure that the (mean-field) critical point occurs at $\LinkDens_C =1$. The normalization factor $Z_1$ is defined to be $\sum_{\RealDiso} \big(\frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0}\big)^{\LinkNumb} Z_{\RealDiso}(\Volu_0)/ \LinkNumb !$. The calculation for $Z_1$ is straightforward, and is given in Appendix \[APP:DisoAver\].
The Deam-Edwards distribution can be understood as arising from a realistic vulcanization process in which the links are introduced simultaneously and instantaneously into the liquid state in equilibrium. Specifically, it incorporates the notion that all pairs of particles that happen (at some particular instant) to be nearby | 1 | member_17 |
are, with a certain probability controlled by the link density parameter $\LinkDens$, linked. Thus, the correlations of the link distribution reflect the correlations of the unlinked liquid, and it follows that realizations of links only acquire an appreciable statistical weight if they are compatible with some reasonably probable configuration of the unlinked liquid.
The factor $\big(\frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0}\big)^{\LinkNumb} /\LinkNumb !$ in the Deam-Edwards distribution introduces a Poissonian character to the total number $\LinkNumb$ of links. These links are envisioned to be the product of a Poisson chemical linking process. The factor $Z_{\RealDiso}(\Volu_0)$ ensures that the probability of a given random realization of links is proportional to the statistical weight for finding, in the unlinked liquid state, the to-be-linked pairs co-located to within the shape function $\exp \big(- \vert \PartPosi_{i_e}-\PartPosi_{j_e}\vert ^2 / 2\LinkScal^2\big)$.
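To make this construction concrete, here is a minimal sketch (ours, not the authors') of how a link realization with Deam-Edwards-like statistics could be drawn in a simulation, given a single equilibrated snapshot of the unlinked liquid: the number of links is Poissonian, with mean chosen so that the mean number of links per particle is $\LinkDens/2$ (as discussed in the next paragraph), and pairs are selected with probability proportional to the Gaussian shape function.

```python
import numpy as np

def sample_links(positions, mu2, a, rng):
    """Draw one realization of the quenched links, in the spirit of the
    Deam-Edwards distribution: a Poissonian number of links with mean
    mu2*N/2, each joining a pair (i, j) chosen with probability proportional
    to exp(-|c_i - c_j|^2 / 2 a^2) in one equilibrated liquid snapshot.
    This is a sketch, not an exact sampler of the full distribution."""
    N = len(positions)
    M = rng.poisson(0.5 * mu2 * N)                 # total number of links
    diff = positions[:, None, :] - positions[None, :, :]
    w = np.exp(-np.sum(diff**2, axis=-1) / (2 * a**2))
    iu = np.triu_indices(N, k=1)                   # distinct pairs i < j
    p = w[iu] / w[iu].sum()
    picked = rng.choice(len(p), size=M, replace=True, p=p)
    return [(int(iu[0][k]), int(iu[1][k])) for k in picked]

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(200, 3))
links = sample_links(pos, mu2=2.5, a=1.0, rng=rng)
print(len(links), "links; mean coordination ~", 2 * len(links) / 200)
```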
As a result of the Deam-Edwards distribution, the mean number of links per particle is given by $\lda \LinkNumb \rda/\PartNumb = \LinkDens/2$. Thus, $\LinkDens = 2 \lda \LinkNumb \rda/N$ is the *mean coordination number*, i.e., the average number of particles to which a certain particle is connected, [[the factor of $2$ results from the fact that each link is shared by | 1 | member_17 |
two particles]{}]{}. For a detailed discussion of the Deam-Edwards distribution, see Ref. [@Deam1976; @Broderix2002]. By using this distribution of the quenched disorder, we can perform the disorder average of the Helmholtz free energy via the replica technique, thus obtaining $$\begin{aligned}
\label{EQ:HFReplica}
\lda \HelmFreeEner \rda
&\equiv& \sum_{\RealDiso} \DEDist(\RealDiso)
\HelmFreeEner_{\RealDiso} (\Volu) \nonumber\\
&=& -\BoltCons T \sum_{\RealDiso} \DEDist(\RealDiso) \ln Z_{\RealDiso} (\Volu)
\nonumber\\
&=& -\BoltCons T \lim _{n \to 0} \sum_{\RealDiso} \DEDist(\RealDiso)
\frac{Z_{\RealDiso}(\Volu)^{n}-1}{n}.\end{aligned}$$ We now insert the Deam-Edwards distribution to get $$\begin{aligned}
\label{EQ:HFReplicaDE}
\lda \HelmFreeEner \rda
&=& -\BoltCons T \lim _{n \to 0} \sum_{\RealDiso}
\frac{
\Big(\frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0}\Big)^{\LinkNumb}
Z_{\RealDiso}(\Volu_0)}{\LinkNumb ! Z_1} \nonumber\\
&&\quad\times \frac{Z_{\RealDiso}(\Volu)^{n}-1}{n}.\end{aligned}$$ This disorder-averaged free energy differs from the form traditionally obtained via the replica technique, in that there is an extra replica $Z_{\RealDiso}(\Volu_0)$, which originates in the Deam-Edwards distribution. We shall call this extra replica the $0^{\textrm{th}}$ replica, and note that it represents the *preparation state* of the system. [^3] The summation over the realizations of the quenched disorder $\RealDiso$ can be performed, following the calculation in Appendix \[APP:DisoAver\]; thus we arrive at the form $$\begin{aligned}
\label{EQ:HFEDisoAver}
\lda \HelmFreeEner \rda
&=& -\BoltCons T \lim _{n \to 0} \frac{1}{n}\Big( \frac{Z_{1+n}}{Z_1}-1 \Big),\end{aligned}$$ which can also be expressed as $$\begin{aligned}
\label{EQ:HFEDisoAverDeri}
\lda \HelmFreeEner \rda | 1 | member_17 |
= -\BoltCons T \lim _{n \to 0}
\frac{\partial}{\partial n} \ln Z_{1+n} \, ,\end{aligned}$$ where
$$\begin{aligned}
\label{EQ:ReplPart}
Z_{1+n} &\equiv& \sum_{\RealDiso}
\frac{\Big(\frac{\LinkDens \Volu_0}{2\PartNumb \LinkPote_0}\Big)^{\LinkNumb}}
{ \LinkNumb !} Z_{\RealDiso}(\Volu_0) Z_{\RealDiso}(\Volu)^n \nonumber\\
&=& \PartitionLiquid(\Volu_0)\PartitionLiquid(\Volu)^n \lthal \exp \Big(\frac{\LinkDens \Volu_0}
{2\PartNumb \LinkPote_0} \sum_{i\ne j}^{\PartNumb} \ReplProd
\LinkPote \big(
\vert \PartPosi_{i}\REPa - \PartPosi_{j}\REPa \vert
\big)\Big)\rthal_{1+n}^{\HamiExcl} .\end{aligned}$$
Notice that, here, the preparation state (i.e., $0^{\textrm{th}}$ replica) has a fixed volume $\Volu_0$ because, for convenience, we have assumed that the linking process was undertaken instantaneously in a liquid state of fixed volume and thus the pressure is fluctuating, whereas the measurement states (replicas $1$ through $n$) are put in a fixed-pressure $\Pres$ environment, the volume $\Volu$ of which is allowed to fluctuate. In the latter parts of the paper we shall set the pressure $\Pres$ to be the average pressure measured in the preparation state at volume $\Volu_0$. In particular, for a given volume of the liquid state in which the links are made, the average pressure is given by $$\begin{aligned}
\Pres = - \frac{\partial \FLiquid (\Volu_0)}{\partial \Volu_0}
\Big\vert_{T} \, ,\end{aligned}$$ where we have introduced the Helmholtz free energy of the unlinked liquid $\FLiquid (\Volu_0) \equiv - \BoltCons T \ln \PartitionLiquid (\Volu_0)$. We suppose that the excluded-volume interactions are so | 1 | member_17 |
strong that the density fluctuations are suppressed, and the density of the unlinked liquid is just $\PartNumb/\Volu_0$. [^4] Thus, the mean-field value of Helmholtz free energy in the unlinked liquid state is $$\begin{aligned}
\label{EQ:FLiquid}
\FLiquid (\Volu_0) &=& -\PartNumb \BoltCons T \ln \Volu_0
+\frac{\ExclVolu \PartNumb^2}{2\Volu_0}.\end{aligned}$$ Therefore, the mean pressure in the unlinked liquid state is given by $$\begin{aligned}
\label{EQ:AverPres}
\Pres = \frac{\PartNumb \BoltCons T}{\Volu_0}
+\frac{\ExclVolu \PartNumb^2}{2\Volu_0^2},\end{aligned}$$ from which we can identify, by the standard way in which the second virial coefficient $B_2$ appears in the virial expansion of the equation of state for a fluid, that $B_2$ for the unlinked liquid is $\ExclVolu/(2\BoltCons T)$. Thus, without having actually performed a cluster expansion, we see that the Dirac delta-function interaction with coefficient $\ExclVolu$ indeed leads to a virial expansion with a suitable excluded volume, viz., $\ExclVolu/(\BoltCons T)$. As mentioned above, we shall apply this pressure in the measurement states (described by the $1^{\textrm{st}}$ through $n^{\textrm{th}}$ replicas), and let their volumes $\Volu$ fluctuate, in order to obtain an elastic free energy that can describe volume variations. In particular, by choosing the pressure $p$ to be exactly the mean pressure of the liquid state, we shall obtain an elastic free energy that takes the *state right
after linking*, which has the same volume $\Volu_0$ as the liquid state, as the elastic reference state. This issue of the state right after linking and the elastic reference state will be discussed in detail in Section \[SEC:PhenFE\].
In light of this construction of the pressure ensemble, we are able to learn about the bulk modulus of the system and to characterize the volume changes caused by linking, a process that has the effect of eliminating translational degrees of freedom.
To establish an appropriate statistical mechanics for the fixed-pressure ensemble, we shall make the following Legendre transformation of the Helmholtz free energy, which leads to the Gibbs free energy $\GibbFreeEner (\Pres,T)$:
\[EQ:LegeTran\] $$\begin{aligned}
\Pres &=& -\frac{\partial \HelmFreeEner(\Volu,T)}
{\partial \Volu}\big\vert_{T} \label{EQ:LegeTran1} \, , \\
\GibbFreeEner (\Pres,T) &=& \HelmFreeEner(\Volu,T) + \Pres \Volu \, . \label{EQ:LegeTran2}\end{aligned}$$
In Eq. (\[EQ:LegeTran2\]) the volume $\Volu$ takes the value (in terms of $\Pres$) that satisfies Eq. (\[EQ:LegeTran1\]), i.e., the volume that minimizes the Gibbs free energy at a given pressure $p$.
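As a quick numerical check of Eqs. (\[EQ:FLiquid\]), (\[EQ:AverPres\]) and (\[EQ:LegeTran\]), the following sketch (with illustrative parameter values and $\BoltCons T = 1$; not part of the paper's calculation) verifies that the mean pressure is $-\partial\FLiquid/\partial\Volu_0$ and that the Legendre-transformed Gibbs free energy at that pressure is minimized at $\Volu = \Volu_0$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

N, nu, V0 = 1.0e4, 0.1, 1.0e4        # illustrative values, k_B T = 1

def F_liquid(V):                      # Eq. (EQ:FLiquid)
    return -N * np.log(V) + nu * N**2 / (2 * V)

p_exact = N / V0 + nu * N**2 / (2 * V0**2)                       # Eq. (EQ:AverPres)
p_num = -(F_liquid(1.001 * V0) - F_liquid(0.999 * V0)) / (0.002 * V0)
print(p_exact, p_num)                 # agree to the accuracy of the stencil

# Gibbs free energy G(p) = min_V [F(V) + p V], Eqs. (EQ:LegeTran);
# at p = p_exact the minimizing volume is the preparation volume V0
res = minimize_scalar(lambda V: F_liquid(V) + p_exact * V,
                      bounds=(0.1 * V0, 10 * V0), method="bounded")
print(res.x / V0)                     # ~1
```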
In the following sections, we shall first calculate the disorder-average of the Helmholtz free energy, and then make this Legendre transformation to obtain the disorder-averaged Gibbs free energy. This will allow us to explore the elasticity | 1 | member_17 |
of the RLPM in detail.
Field-theoretic description of the RLPM {#SEC:FTMF}
---------------------------------------
We shall use field-theoretic methods to analyze the disorder-averaged free energy $\lda\HelmFreeEner\rda$ and, more specifically, the replicated partition function $Z_{1+n}$. To do this, we introduce a joint probability distribution for the particle density in the replicated space, i.e., the replicated density function $$\begin{aligned}
\label{EQ:DensReal}
\DensFunc(\REPX)\equiv \frac{1}{\PartNumb} \sum_{i=1}^{\PartNumb}
\ReplProd \delta^{(d)}(x\REPa-\PartPosi_i\REPa) ,\end{aligned}$$ where $\REPX \equiv (x^{0},x^{1},\ldots,x^{n})$ is a short-hand for the $(1+n)$-replicated position $\Dime$-vector. For convenience, we introduce a complete orthonormal basis set in replica space $\{\BASE\REPa\}_{\alpha=0}^{n}$, in terms of which a vector $\REPX$ can be expressed as $$\begin{aligned}
\REPX=\ReplSum x\REPa \BASE\REPa .\end{aligned}$$ Note that the components $x\REPa$ are themselves $\Dime$-vectors. With this notation, the density function of a single replica $\alpha$ is given by $\DensFunc_{p\REPa\BASE\REPa}$, which is the Fourier transform of the $\DensFunc(\REPX)$ field, $\DensFunc_{\REPP}$, with momentum nonzero only in replica $\alpha$, corresponding to integrating over the normalized densities in other replicas in real space.
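For readers who prefer something executable, the Fourier components of the replicated density of Eq. (\[EQ:DensReal\]) are easy to evaluate numerically; the sketch below (with an assumed sign convention for the Fourier transform) also illustrates that a wavevector supported in a single replica probes the ordinary one-replica density.

```python
import numpy as np

def replicated_density_fourier(configs, p_hat):
    """Fourier component of the replicated density field, Eq. (EQ:DensReal):
    configs has shape (1+n, N, d), one configuration c_i^alpha per replica,
    and p_hat has shape (1+n, d).  The sign convention is an assumption here."""
    # phase_i = sum_alpha p^alpha . c_i^alpha, one value per particle
    phase = np.einsum('ad,and->n', p_hat, configs)
    return np.exp(-1j * phase).mean()

rng = np.random.default_rng(2)
cfg = rng.uniform(0.0, 10.0, size=(3, 500, 2))   # 1+n = 3 replicas, N = 500, d = 2
p_hat = np.zeros((3, 2))
p_hat[1] = [2 * np.pi / 10.0, 0.0]               # momentum only in replica alpha = 1
print(abs(replicated_density_fourier(cfg, p_hat)))   # small: density is nearly uniform
```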
The replicated partition function (\[EQ:ReplPart\]) can be written as a functional of the replicated density function $\DensFunc$ in momentum space as $$\begin{aligned}
\label{EQ:ZQ}
Z_{1+n} = \int_{\Volu_0} \prod_{i=1}^{\PartNumb} d\PartPosi_{i}^{0}
\int_{\Volu} \ReplProd \prod_{j=1}^{\PartNumb} d\PartPosi_{j}\REPa
\, e^{-\frac{H_{\DensFunc}\lbrack \DensFunc_{\REPP}\rbrack}{\BoltCons T}} ,\end{aligned}$$ with $$\begin{aligned}
\label{EQ:HQ}
H_{\DensFunc}\lbrack \DensFunc_{\REPP}\rbrack
&\equiv& -\frac{\PartNumb | 1 | member_17 |
\LinkDens \BoltCons T}{2 \Volu^n \LinkPote_0}
\sum_{\REPP}\DensFunc_{\REPP}\DensFunc_{-\REPP}\LinkPoteRepl_{\REPP}\nonumber\\
&&+\frac{\ExclVolu \PartNumb^2}{2\Volu_0} \sum_{p}\DensFunc_{p\BASE^{0}}
\DensFunc_{-p\BASE^{0}} \nonumber\\
&&+\frac{\ExclVolu \PartNumb^2}{2\Volu} \sum_{p} \ReplSumOne
\DensFunc_{p\BASE\REPa} \DensFunc_{-p\BASE\REPa} ,\end{aligned}$$ where the factor $$\begin{aligned}
\LinkPoteRepl_{\REPP} = \big(\LinkPote_0\big)^{1+n}
e^{-\LinkScal^2 \vert \REPP \vert^2/2}\end{aligned}$$ is the replicated version of the Fourier transform of the function $\LinkPote(x)$, defined in Eq. (\[EQ:DeltReal\]). The summation $\sum_{\REPP}$ denotes a summation over all momentum $\Dime$-vectors $p\REPa$, one for each replica, with $\REPP$ taking the values $\REPP=\ReplSum p\REPa \BASE\REPa$. The Cartesian components of the $p\REPa$ take the values $2\pi m/L$, where $L$ is the linear size of the system and $m$ is any integer. Similarly, summations $\sum_{p}$ over $\Dime$-vectors $p$ include components having the values $2\pi m/L$. The first term on the right-hand side of Eq. (\[EQ:HQ\]) arises from the attractive, link-originating, interaction part \[see Eq. (\[EQ:ReplPart\])\]; the next two terms represent the excluded-volume interaction in $\HamiExcl$, for the $0^{\textrm{th}}$ replica and for replicas $1$ through $n$, respectively.
The excluded-volume interaction is taken to be very strong, and thus the density fluctuations in any single replica are heavily suppressed. This means that $\DensFunc_{p\REPa\BASE\REPa}$ are very small for all $p\REPa\ne 0$, and $\DensFunc_{p\REPa\BASE\REPa}\vert_{p\REPa=0}=1$, corresponding to a nearly homogeneous particle density. To manage this issue, we separate the replicated space into a | 1 | member_17 |
---
abstract: 'We present resolved *Herschel* images of a circumbinary debris disk in the 99 Herculis system. The primary is a late F-type star. The binary orbit is well characterised and we conclude that the disk is misaligned with the binary plane. Two different models can explain the observed structure. The first model is a ring of polar orbits that move in a plane perpendicular to the binary pericenter direction. We favour this interpretation because it includes the effect of secular perturbations and the disk can survive for Gyr timescales. The second model is a misaligned ring. Because there is an ambiguity in the orientation of the ring, which could be reflected in the sky plane, this ring either has near-polar orbits similar to the first model, or has a 30 degree misalignment. The misaligned ring, interpreted as the result of a recent collision, is shown to be implausible from constraints on the collisional and dynamical evolution. Because disk+star systems with separations similar to 99 Herculis should form coplanar, possible formation scenarios involve either a close stellar encounter or binary exchange in the presence of circumstellar and/or circumbinary disks. Discovery and characterisation of systems like 99 Herculis will help understand | 1 | member_18 |
processes that result in planetary system misalignment around both single and multiple stars.'
author:
- |
G. M. Kennedy[^1]$^1$, M. C. Wyatt$^1$, B. Sibthorpe$^2$, G. Duchêne$^{3,4}$, P. Kalas$^4$, B. C. Matthews$^{5,6}$, J. S. Greaves$^7$, K. Y. L. Su$^8$, M. P. Fitzgerald$^{9,10}$\
$^1$ Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK\
$^2$ UK Astronomy Technology Center, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ, UK\
$^3$ Department of Astronomy, University of California, B-20 Hearst Field Annex, Berkeley, CA 94720-3411, USA\
$^4$ Laboratoire d’Astrophysique, Observatoire de Grenoble, Université J. Fourier, CNRS, France\
$^5$ Herzberg Institute of Astrophysics, National Research Council Canada, 5071 West Saanich Road., Victoria, BC, Canada, V9E 2E7, Canada\
$^6$ University of Victoria, Finnerty Road, Victoria, BC, V8W 3P6, Canada\
$^7$ School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews, Fife KY16 9SS, UK\
$^8$ Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721, USA\
$^9$ Institute of Geophysics and Planetary Physics, Lawrence Livermore National Laboratory, L-413, 7000 East Avenue, Livermore, CA 94550, USA\
$^{10}$ Department of Physics and Astronomy, UCLA, Los Angeles, CA 90095-1547, USA
title: '99 Herculis: Host to a Circumbinary Polar-ring Debris Disk'
---
circumstellar matter — | 1 | member_18 |
stars: individual: 99 Herculis, HD 165908, HIP 88745, GJ704AB
Introduction {#s:intro}
============
The *Herschel* Key Program DEBRIS (Dust Emission via a Bias-free Reconnaissance in the Infrared/Submillimeter) has observed a large sample of nearby stars to discover and characterise extrasolar analogues to the Solar System’s asteroid and Kuiper belts, collectively known as “debris disks.” The 3.5m *Herschel* mirror diameter provides 6-7” resolution at 70-100$\mu$m, and as a consequence our survey has resolved many disks around stars in the Solar neighbourhood for the first time.[^2]
Here we present resolved images of the 99 Herculis circumbinary disk. This system is particularly interesting because unlike most debris disk+binary systems, the binary orbit is well characterised. The combination of a known orbit and resolved disk means we can compare their (different) inclinations and consider circumbinary particle dynamics and formation scenarios.
This system is a first step toward building on the binary debris disk study of @2007ApJ...658.1289T. Their *Spitzer* study found that debris disks are as common in binary systems as in single systems, but tend not to have separations in the 3-30AU range. However, only some of their systems had detections at multiple wavelengths to constrain the disk location and none were | 1 | member_18 |
resolved, making the true dust location uncertain. Resolved systems such as 99 Her remove this ambiguity, and provide crucial information on the disk location, stability and dynamics.
This paper is laid out as follows. We first consider the stellar and orbital properties of the 99 Her system, including the possibility of a third component. Then we consider the *Herschel* image data and several different models that can explain it. Finally, we discuss the implications of these models for the formation of the system.
99 Herculis {#s:stprop}
===========
The binary 99 Herculis (HD 165908, HIP 88745, GJ 704AB, ADS 11077) contains the 37$^{\rm
th}$ closest F star primary within the volume limited Unbiased Nearby Stars sample [@2010MNRAS.403.1089P]. The Catalogue of Components of Doubles and Multiple systems [CCDM J18071+3034, @2002yCat.1274....0D] lists three components, but using Hipparcos proper motions @2010MNRAS.403.1089P find that the 93” distant C component is not bound to the system. The binary pair has been known since 1859, and consists of an F7V primary orbited by a K4V secondary. The primary is known to be metal poor with \[Fe/H\] $\approx -0.4$ [e.g. @1996yCat..33140191G; @2000MNRAS.316..514A; @2007PASJ...59..335T] and has an age consistent with the main-sequence .
Binary Configuration {#ss:binary}
--------------------
Parameter                    Symbol (unit)             Value     Uncertainty
---------------------------- ------------------------- --------- -------------
Semi-major axis              a (”)                     1.06      $0.02$
Eccentricity                 e                         0.766     $0.004$
Inclination                  i ($^\circ$)              39        $2$
Ascending node               $\Omega$ ($^\circ$)       41        $2$
Longitude of pericenter      $\omega$ ($^\circ$)       116       $2$
Date of pericenter passage   T (yr)                    1997.62   $0.05$
Period                       P (yr)                    56.3      $0.1$
Total mass                   $M_{\rm tot} (M_\odot)$   1.4       $0.1$
: 99 Her orbital elements, system mass and 1$\sigma$ uncertainties. The ascending node $\Omega$ is measured anti-clockwise from North. The longitude of pericenter is measured anti-clockwise from the ascending node, and projected onto the sky plane has a position angle of 163$^\circ$ (i.e. is slightly different to 41+116 because the orbit is inclined).[]{data-label="tab:elem"}
![99 Her binary orbit as seen on the sky, with the line of nodes and pericenter indicated. North is up and East is left. The stellar orbits are shown over one (anti-clockwise) orbital period with black dots. Grey dots (primary is the larger grey dot) show the positions at the PACS observation epoch. Black dot sizes are scaled in an arbitrary way such that larger dots are closer to Earth. The arrows indicate the direction of motion and the scale bar indicates the binary semi-major axis of 1.06” (16.5AU).[]{data-label="fig:sys"}](systop.eps){width="50.00000%"}
To interpret the | 1 | member_18 |
*Herschel* observations requires an understanding of the binary configuration, which we address first. Typically, the important orbital elements in fitting binary orbits are the semi-major axis, eccentricity, and period, which yield physical characteristics of the system (if the distance is known). Regular observations of 99 Her date back to 1859 and the binary has completed nearly three revolutions since being discovered. Additional observations since the previous orbital derivation , have allowed us to derive an updated orbit. Aside from a change of 180$^\circ$ in the ascending node [based on spectroscopic data @2006ApJS..162..207A], the orbital parameters have changed little; the main purpose of re-deriving the orbit is to quantify the uncertainties.
The orbit was derived by fitting position angles (PAs) and separations ($\rho$) from the Washington Double Star catalogue [@2011yCat....102026M].[^3] We included three additional observations; a *Hubble Space Telescope* (HST) *Imaging Spectrograph* (STIS) acquisition image [epoch 2000.84, $PA=264 \pm
2^\circ$, $\rho=0.54 \pm 0.02$”, @2004ApJ...606..306B], an adaptive optics image taken using the Lick Observatory Shane 3m telescope with the IRCAL near-IR camera as part of an ongoing search for faint companions of stars in the UNS sample (epoch 2009.41, $PA=309 \pm
2^\circ$, $\rho=1.12 \pm 0.02$”), and a Keck II NIRC2 L’ speckle image taken | 1 | member_18 |
to look for a third companion (see §\[s:third\], epoch 2011.57, $PA=317 \pm 1^\circ$, $\rho=1.20
\pm 0.014$”). For visual data we set uncertainties of 7$^\circ$ to PAs and 0.5” to separations, for *Hipparcos* data we used 5$^\circ$ and 0.1”, and for speckle observations without quoted uncertainties we used 2$^\circ$ and 0.04”. The resulting orbital elements, shown in Table \[tab:elem\], vary only slightly from those derived by .[^4] The best fit yields $\chi^2 = 190$ with 399 degrees of freedom. The fit is therefore reasonable, and most likely the $\chi^2$ per degrees of freedom is less than unity because the uncertainties assigned to the visual data are too conservative. If anything, the uncertainties quoted in Table \[tab:elem\] are therefore overestimated. However, visual data can have unknown systematic uncertainties due to individual observers and their equipment so we do not modify them [@2001AJ....122.3472H].
These data also allow us to derive a total system mass of 1.4$M_\odot$, where we have used a distance of 15.64pc [@2008yCat.1311....0F]. While the total mass is well constrained by visual observations, the mass ratio must be derived from either the differential luminosity of each star or radial velocities. We use the spectroscopic mass function derived by @2006ApJS..162..207A, yielding a | 1 | member_18 |
mass ratio of 0.49, which has an uncertainty of about 10%. The mass ratio from the differential luminosity is 0.58, with a much larger (20%) uncertainty . Using the spectroscopic result, the primary (GJ 704A) has a mass of 0.94$M_\odot$, while the secondary (GJ 704B) is 0.46$M_\odot$.
The system configuration is shown in Figure \[fig:sys\] and is inclined such that the primary is currently closer to Earth than the secondary. The position of the B component relative to A on the date it was observed by *Herschel* in late April 2010 was PA=314$^\circ$ at an observed separation of 1.15” (22.6AU when deprojected), indicated by grey circles in the Figure.
A Third Component? {#s:third}
------------------
While the STIS images clearly resolve the binary, there is a possible third component with $PA \approx 284^\circ$ and $\rho \approx 0.27"$ that is about 2.4 times as faint as the B component. @2008AN....329...54S also report a third component (epoch 2005.8) at $PA \approx 50^\circ$ and $\rho \approx 0.228"$ (no magnitude is given). However, while they detected the secondary again in mid-2007, they do not report any detection of the putative tertiary [@2010AN....331..286S]. The detected positions are shown as star symbols in sky coordinates in Figure | 1 | member_18 |
\[fig:pm\], which shows the motion of the 99 Her system. The system proper motion is $\mu_\alpha \cos \delta = -110.32$mas yr$^{-1}$, $\mu_\delta = 110.08$mas yr$^{-1}$ [@2007ASSL..350.....V], and accounts for the motion of the primary assuming the orbit derived in , which is very similar to ours. The small proper motion uncertainty means STIS and @2008AN....329...54S cannot have seen the same object if it is fixed on the sky. There is no clear sign of a third component in the residuals from fitting the orbit of the secondary.
![Motion of the 99 Her system (filled dots) in sky coordinates at three epochs. The epochs including the putative third component are enclosed in boxes. The arrow shows the direction of the system center of mass movement and the distance travelled in 5 years, and the grey lines show the path traced out by each star. Star symbols show the position of the third object observed in the STIS data in 2000 (dashed box) and by @2008AN....329...54S in 2005 (dotted box).[]{data-label="fig:pm"}](pm.eps){width="50.00000%"}
![Keck/NIRC2 adaptive optics image of 99 Her at 3.8$\mu$m, cropped to about 1.5” around the primary. North is up and East is left. The saturated A component is at the center of | 1 | member_18 |
the frame.[]{data-label="fig:keck"}](her99b_fig_v2.eps){width="45.00000%"}
To try and resolve this issue we obtained an adaptive optics image of 99 Her at L’ (3.8 $\mu$m) using the NIRC2 camera at Keck II on July 27, 2011, shown in Figure \[fig:keck\]. We adopted the narrow camera (10 mas/pixel) and used a five-point dither pattern with three images obtained at each position consisting of 25 coadds of 0.181 seconds integration. The cumulative integration time for the final co-registered and coadded image is 67.875 seconds. The core of the A component point-spread-function is highly saturated, which degrades the achievable astrometry. We estimate the position of 99 Her A by determining the geometric center of the first diffraction ring. The position of 99 Her B is taken from the centroid of the unsaturated core. The PA and separation are quoted above.
There is no detection of the putative 99 Her C within 1.6” of the primary in the combined image if it is only a factor 2.4 fainter than the B component, because it would appear 20 times brighter than the brightest speckles. However, if it were closer to the primary than 0.2” it would currently be too close to detect. If the object was fixed on the | 1 | member_18 |
sky near either the 2000 or 2005 locations, it would have been detected in the individual pointings of the five-point dither since each NIRC2 pointing has a field of view of $10''\times10''$. To be outside the field of view and still bound to the primary, the tertiary must have an apocenter larger than about 75AU (5”). An object in such an orbit would have a period of at least 200 years, so could not have been detected near the star in 2005 and be outside the NIRC2 field of view in 2011.
The non-detections by @2010AN....331..286S and NIRC2 make the existence of the tertiary suspicious. It is implausible that the object was too close to, or behind the star both in 2007 and 2011, because at a semi-major axis of 0.23” (3.5AU) from the primary (similar to the projected separation) the orbital period is 7 years. Therefore, the object would be on opposite sides of the primary, and the two detections already rule out an edge-on orbit. Even assuming a circular orbit, such an object is unlikely to be dynamically stable, given the high eccentricity and small pericenter distance (4.1AU) of the known companion. A tertiary at this separation would | 1 | member_18 |
be subject to both short term perturbations and possible close encounters. If the mutual inclination were high enough, it would also be subject to Kozai cycles due to the secondary that could result in a high eccentricity and further affect the orbital stability.
While it may be worthy of further detection attempts, the existence of this component appears doubtful and we do not consider it further.
IR and Sub-mm Data {#s:data}
==================
Observations {#s:obs}
------------
![image](pacs70.eps){width="33.00000%"} ![image](pacs100.eps){width="33.00000%"} ![image](pacsall160.eps){width="33.00000%"}
*Herschel* Photodetector Array Camera & Spectrometer (PACS) data at 100 and 160$\mu$m were taken in April 2010 during routine DEBRIS observations. Subsequently, a Spectral & Photometric Imaging Receiver (SPIRE) observation was triggered by the large PACS excess indicating the presence of a debris disk and a likely sub-mm detection. The disk was detected, but not resolved with SPIRE at 250 and 350$\mu$m. A 70$\mu$m PACS image was later obtained to better resolve the disk. Because every PACS observation includes the 160$\mu$m band, we have two images at this wavelength, which are combined to produce a single higher S/N image. All observations were taken in the standard scan-map modes for our survey; mini scan-maps for PACS data and small maps for SPIRE. Data were
reduced using a near-standard pipeline with the Herschel Interactive Processing Environment [HIPE Version 7.0, @2010ASPC..434..139O]. We decrease the noise slightly by including some data taken as the telescope is accelerating and decelerating at the start and end of each scan leg.
The high level of redundancy provided by PACS scan maps means that the pixel size used to generate maps can be smaller than the natural scale of 3.2”/pix at 70 and 100$\mu$m and 6.4”/pix at 160$\mu$m via an implementation of the “drizzle” method [@2002PASP..114..144F]. Our maps are generated at 1”/pix at 70 and 100$\mu$m and 2”/pix at 160$\mu$m. The benefit of better image sampling comes at the cost of correlated noise [@2002PASP..114..144F], which we discuss below.
In addition to correlated noise, two characteristics of the PACS instrument combine to make interpretation of the data challenging. The PACS beam has significant power at large angular scales; about 10% of the energy lies beyond 1 arcminute and the beam extends to about 17 arcminutes (1000 arcsec). While this extent is not a problem in itself, it becomes problematic because PACS data are subject to fairly strong $1/f$ (low frequency) noise and must be high-pass filtered. The result is that a source | 1 | member_18 |
will have a flux that is 10-20% too low because the “wings” of the source were filtered out. While this problem can be circumvented with aperture photometry using the appropriate aperture corrections derived from the full beam extent, the uncorrected apertures typically used for extended sources will result in underestimates of the source flux.[^5]
Here, we correct the fluxes measured in apertures for 99 Her based on a comparison between PSF fitted and aperture corrected measurement of bright point sources in the DEBRIS survey with predictions from their stellar models [based on the calibration of @2008AJ....135.2245R]. These upward corrections are $16 \pm 5\%$, $19 \pm 5\%$, and $21 \pm 5\%$ at 70, 100, and 160$\mu$m respectively. These factors depend somewhat on the specifics of the data reduction, so are *not* universal. This method assumes that the correction for 99 Her is the same as for a point source, which is reasonable because the scale at which flux is lost due to filtering the large beam is much larger than the source extent. The corrected PACS measurement is consistent with MIPS 70$\mu$m, so we do not investigate this issue further.
The beam extent and filtering is also important for resolved modelling | 1 | member_18 |
because the stellar photospheric contribution to the image is decreased. Therefore, in generating a synthetic star+disk image to compare with a raw PACS observation, the stellar photospheric flux should be decreased by the appropriate factor noted above. Alternatively, the PACS image could be scaled up by the appropriate factor and the true photospheric flux used.
Table \[tab:obs\] shows the measured star+disk flux density in each *Herschel* image. Uncertainties for PACS are derived empirically by measuring the standard deviation of the same sized apertures placed at random image locations with similar integration time to the center (i.e. regions with a similar noise level).
The SPIRE observations of 99 Her are unresolved. The disk is detected with reasonable S/N at 250$\mu$m, marginally detected at 350$\mu$m, and not detected at 500$\mu$m. Fluxes are extracted with PSF fitting to minimise the contribution of background objects. Because all three bands are observed simultaneously (i.e. a single pointing), the PSF fitting implementation fits all three bands at once. A detection in at least one band means that all fluxes (or upper limits) are derived at the same sky position.
Additional IR data exist for 99 Her, taken with the Multiband Imaging Photometer for *Spitzer* [MIPS, @2004ApJS..154...25R]. | 1 | member_18 |
Only the star was detected at 24$\mu$m ($270.3 \pm 0.1$mJy), but this observation provides confirmation of the 99 Her stellar position in the PACS images relative to a background object 1.8 arcmin away to the SE ($PA=120^\circ$) that is visible at 24, 70, and 100$\mu$m. The presence of an excess at 70$\mu$m ($98 \pm 5$mJy compared to the photospheric value of 30mJy) was in fact reported by @2010ApJ...710L..26K. They did not note either the circumbinary nature or that the disk may be marginally resolved by MIPS at 70$\mu$m. Because our study focuses on the spatial structure, we use the higher resolution PACS data at 70$\mu$m, but include the MIPS data to model the SED.
Basic image analysis {#s:basic}
--------------------
Figure \[fig:pacs\] shows the *Herschel* PACS data. Compared to the beam size, the disk is clearly resolved at all three wavelengths. At 160$\mu$m the peak is offset about 5” East relative to both the 70 and 100$\mu$m images. However, the disk is still visible at 160$\mu$m as the lower contours match the 70 and 100$\mu$m images well. The 160$\mu$m peak is only 2-3$\sigma$ more significant than these contours. While such variations are possible due to noise, in this case the offset | 1 | member_18 |
is the same in both 160$\mu$m images, so could be real. The fact that the peak extends over several pixels is not evidence that it is real, because the pixels in these maps are correlated (see below). If real, this component of the disk or background object cannot be very bright at SPIRE wavelengths because the measured fluxes appear consistent with a blackbody fit to the disk (see §\[s:sed\]). Based on an analysis of all DEBRIS maps (that have a constant depth), the chance of a 3$\sigma$ or brighter background source appearing within 10” of 99 Her at 160$\mu$m is about 5% (Thureau et al in prep). Given that the 160$\mu$m offset is only a 2-3$\sigma$ effect (i.e. could be a 2-3$\sigma$ background source superimposed on a smooth disk), the probability is actually higher because the number of background sources increases strongly with depth. These objects have typical temperatures of 20–40K , so could easily appear in only the 160$\mu$m image, particularly if the disk flux is decreasing at this wavelength.
Band       Flux (mJy)   Uncertainty   Method
---------- ------------ ------------- --------------
PACS70     93           10            15” aperture
PACS100    87           10            15” aperture
PACS160    80           15            17” aperture
SPIRE250   44           6             PSF fit
SPIRE350   22           7             PSF fit
SPIRE500   4            8             PSF fit
: *Herschel* photometry of 99 Her. The disk is not detected at 500$\mu$m and can be considered a 3$\sigma$ upper limit of 24mJy.[]{data-label="tab:obs"}
We now analyse the PACS images using 2D Gaussian models to estimate the disk size, inclination, and position angle. A 2D Gaussian fits the star-subtracted PACS 100$\mu$m image fairly well, with major and minor full-width half-maxima of 17.7 and 12.8” at a position angle of $78^\circ$. Quadratically deconvolving from the 6.7” FWHM beam assuming a circular ring implies an inclination of $48^\circ$ from face-on and an estimated diameter of 250AU. Gaussian fitting to star-subtracted images at both 70 and 160$\mu$m yields similar results.
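The arithmetic behind these numbers is simple enough to state explicitly; the few lines below use only the values quoted above and the 15.64 pc distance, and recover the quoted inclination and diameter under the same circular-ring assumption.

```python
import numpy as np

major, minor, beam, dist_pc = 17.7, 12.8, 6.7, 15.64   # arcsec, arcsec, arcsec, pc

maj_d = np.sqrt(major**2 - beam**2)           # quadratic beam deconvolution
min_d = np.sqrt(minor**2 - beam**2)

incl = np.degrees(np.arccos(min_d / maj_d))   # assumes an intrinsically circular ring
diam_au = maj_d * dist_pc                     # arcsec x pc = AU

print(round(incl), round(diam_au))            # 48 deg from face-on, ~256 AU diameter,
                                              # i.e. the ~250 AU quoted above
```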
As noted above, estimation of uncertainties in these parameters is non-trivial due to correlated noise, but made easier by the constant depth of our survey. By inserting the best fit Gaussian to the star-subtracted image of the 99 Her disk from the 100$\mu$m image into 438 other 100$\mu$m maps at high coverage positions offset from the intended target, we obtain a range of fit parameters for hundreds of different realisations of the same noise. This process yields an inclination of $45 \pm 5^\circ$ and | 1 | member_18 |
PA of $75 \pm
8^\circ$. Repeating the process, but using the best fit Gaussian for the 70$\mu$m image yields an inclination of $44 \pm 6^\circ$ and PA of $68 \pm 9^\circ$. Though the inclination of the disk is similar to the binary, the position angle is significantly different from the binary line of nodes of $41 \pm 2 ^\circ$. This difference means that the disk and binary orbital planes are misaligned.
As a check on the above approach, we can correct for the correlated noise directly. @2002PASP..114..144F show that for a map that has sufficiently many dithers (corresponding in our case to many timeline samples across each pixel), a noise “correction” factor of $r/\left(1-1/3r\right)$ can be derived, where $r$ is the ratio of natural to actual pixel scales and is 3.2 for our PACS maps. A correction factor of 3.6 for the measured pixel to pixel noise is therefore required when estimating the uncertainty on a fitted Gaussian. Including this factor at 70$\mu$m and calculating the uncertainty by the standard $\Delta \chi^2$ method yields an inclination of $42 \pm
7^\circ$ and a PA of $68 \pm 9^\circ$. At 100$\mu$m the result is an inclination of $44
\pm 6^\circ$ and a | 1 | member_18 |
PA of $76 \pm 8^\circ$. These results are therefore almost exactly the same as the empirical method used above and therefore lead to the same conclusion of misalignment.
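For concreteness, evaluating the @2002PASP..114..144F correction factor with $r=3.2$ (reading $1/3r$ as $1/(3r)$) gives the value used above:

```python
r = 3.2                        # natural (3.2"/pix) over map (1"/pix) pixel scale
print(r / (1 - 1 / (3 * r)))   # = 3.57, the "factor of 3.6" applied to the noise
```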
As will become apparent in §\[s:spatial\], there is reason to believe that the disk plane could be perpendicular to the binary pericenter direction. The projection of the binary pericenter direction on the sky plane has a PA of $163 \pm 2^\circ$, and a line perpendicular to this has a PA of $73 \pm 2^\circ$. Therefore, the observed disk position angle of about 72$^\circ$ is consistent with being at 90$^\circ$ to the binary pericenter direction.
SED {#s:sed}
===
![SED for the 99 Her system (both stars) showing the stellar and disk models (grey lines) and star+disk model (black line). The blackbody disk model is the solid grey line, and the physical grain model the dashed line. Photometric measurements are shown as black filled circles, and synthetic photometry of the stellar atmosphere as open circles ($U-B$, $B-V$, & $b-y$ colours, and $m1$ and $c1$ Stromgren indices were fitted but are not shown here). Black triangles mark upper limits from IRAS at 60 and 100$\mu$m.[]{data-label="fig:sed"}](F037AB.eps){width="50.00000%"}
The combination of all photometry for 99 Her allows modelling | 1 | member_18 |
of the spectral energy distribution (SED). The model is separated into two components; a stellar atmosphere and a disk. Due to being fairly bright ($V\sim5$mag) the system is saturated in the 2MASS catalogue. However, sufficient optical photometry for each individual star and the pair exists [@1993AJ....106..773H; @1997yCat.2215....0H; @1997ESASP1200.....P; @2006yCat.2168....0M], as well as infra-red measurements of the AB pair from AKARI and IRAS . These data were used to find the best fitting stellar models via $\chi^2$ minimisation. This method uses synthetic photometry over known bandpasses and has been validated against high S/N MIPS 24$\mu$m data for DEBRIS targets, showing that the photospheric fluxes are accurate to a few percent for AFG-type stars. The stellar luminosities ($L_{\star,A}=1.96L_\odot$, $L_{\star,B}=0.14L_\odot$) and IR fluxes of the individual components are consistent with the fit for the pair ($L_{\star,AB}=2.08L_\odot$). The fit for the AB pair is shown in Figure \[fig:sed\].
The spatial structure of the disk can be modelled with dust at a single radial distance of 120AU (i.e. thin compared to *Herschel’s* resolution, §\[s:spatial\]), so disk SED modelling can be decoupled from the resolved modelling once this radial distance is known. Because we have measurements of the disk emission at only five wavelengths, we cannot | 1 | member_18 |
strongly constrain the grain properties and size distribution. We fit the data with a blackbody model, and then compare the data with several “realistic” grain models .
In fitting a blackbody we account for inefficient grain emission at long wavelengths by including parameters $\lambda_0$ and $\beta$, where the blackbody is modified by a factor $\left( \lambda_0/\lambda \right)^\beta$ for wavelengths longer than $\lambda_0$. The best fitting model has a temperature of 49K and fractional luminosity $L_{\rm
disk}/L_\star = 1.4 \times 10^{-5}$. The SPIRE data are near the confusion limit of about 6mJy, so the parameters $\beta$ and $\lambda_0$ are unconstrained within reasonable limits by the data (based on previous sub-mm detections for other disks we fix them to $\lambda_0=210 \mu$m and $\beta=1$ in Figure \[fig:sed\] [@2007ApJ...663..365W]).
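The modified blackbody just described is straightforward to evaluate; the sketch below writes it out explicitly (normalization omitted, so only the spectral shape is meaningful), which may help make the roles of $\lambda_0$ and $\beta$ concrete.

```python
import numpy as np

def modified_blackbody(wav_um, T=49.0, lam0_um=210.0, beta=1.0):
    """B_nu(T) multiplied by (lam0/lam)^beta for lam > lam0; arbitrary units."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    nu = c / (wav_um * 1e-6)
    b_nu = 2 * h * nu**3 / c**2 / (np.exp(h * nu / (k * T)) - 1.0)
    supp = np.where(wav_um > lam0_um, (lam0_um / wav_um) ** beta, 1.0)
    return b_nu * supp

bands = np.array([70.0, 100.0, 160.0, 250.0, 350.0, 500.0])   # Herschel bands, um
shape = modified_blackbody(bands)
print(shape / shape.max())   # relative spectral shape across the PACS/SPIRE bands
```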
Assuming that grains absorb and emit like blackbodies, the radial dust distance implied by 49K is 45AU. Because the disk is observed at a radius of 120AU (i.e. is warmer than expected for blackbodies at 120AU), the dust emission has at least some contribution from grains small enough to emit inefficiently in the 70-350$\mu$m wavelength range. Because the SED alone is consistent with a pure blackbody (i.e. with $\beta=0$), we cannot make such a statement | 1 | member_18 |
without the resolved images. However, actually constraining the grain sizes is difficult because temperature and emission are also affected by composition. We fit the data by generating disk SEDs for grains at a range of semi-major axes and choosing the one with the lowest $\chi^2$. If the dust semi-major axis is different from the observed semi-major axis of 120AU the model parameters are changed and the model recalculated, thus iterating towards the best fit.
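The 45 AU blackbody radius quoted above follows from the usual equilibrium-temperature scaling; a one-line check (assuming the heating is dominated by the primary, $L \simeq 1.96 L_\odot$) is:

```python
L_star, T_dust = 1.96, 49.0                      # L_sun, K
r_bb = (278.3 * L_star**0.25 / T_dust) ** 2      # T = 278.3 K L^(1/4) r_AU^(-1/2)
print(r_bb)                                      # ~45 AU, versus the resolved ~120 AU
```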
We model the dust with a standard diameter ($D$) distribution $n(D) \propto D^{2-3q}$ where $q=1.9$ [equivalently $n(M) \propto M^{-q}$ where $M$ is mass @2003Icar..164..334O], with the minimum size set by the blowout limit for the specific composition used (about 1.4$\mu$m) and a maximum size of 10cm. The size distribution probably extends beyond 10cm, but objects larger than 10cm contribute negligibly to the emission because the size distribution slope means that smaller grains dominate. Preliminary tests found that icy grains provided a much better fit than silicates. To refine the grain model so the SED and resolved radius agree, we introduced small amounts of amorphous silicates to the initially icy model. The grains are therefore essentially ice mixed with a small fraction ($f_{\rm sil} = 1.5\%$) of | 1 | member_18 |
silicate. The icy grain model is shown as a dotted line in Figure \[fig:sed\]. This model has a total dust surface area of 14AU$^2$ and a mass of order 10$M_\oplus$ if the size distribution is extrapolated up to 1000km size objects.
The parameters of this model are degenerate for the data in hand; for example, the size distribution could be shallower and the fraction of silicates higher (e.g. $q=1.83$ and $f_{\rm sil} = 4\%$). If we allow the minimum grain size to be larger than the blowout limit, the disk is well fit by amorphous silicate grains with $q=1.9$ and $D_{\rm
bl} = 10\mu$m. The disk spectrum can even be fit with a single size population of 25$\mu$m icy grains. However, the predictions for the flux at millimeter wavelengths depend on the size distribution, with lower fluxes for steeper size distributions. Therefore, grain properties and size distribution can be further constrained in the future with deep (sub)mm continuum measurements.
In summary, it is hard to constrain the grain sizes or properties. There is a difference in the required minimum grain size that depends on composition. Because icy grains are reflective at optical wavelengths, a detection of the disk in | 1 | member_18 |
scattered light could constrain the albedo of particles, and therefore their composition.
Spatial structure {#s:spatial}
=================
![image](coplanar-models.eps){width="100.00000%"}
The PACS images of the 99 Her disk are resolved, which allows modelling of the spatial distribution of grains that contribute to the observed emission at each wavelength. We compare synthetic images with the *Herschel* observations in several steps: i) Generate a three dimensional distribution of surface area $\sigma(r,\theta,\phi)$, where the coordinates are centered on the primary star. ii) Generate a radial distribution of grain emission properties. Because the SED can be modelled with blackbody grains at 49K and the spatial structure modelled with a narrow ring, there is no real need for a radial temperature dependence and the grain properties are only a function of wavelength: $P(\lambda) = B_\nu(49K,\lambda)$. Practically, we use a radial temperature dependence $T
\propto r^{-1/2}$ centered on the primary, normalised so that the disk ring temperature is 49K. This approach ensures that temperature differences due to non-axisymmetries (negligible for 99 Her) are automatically taken into account. iii) Generate a high resolution model as viewed from a specific direction. The emission in a single pixel of angular size $x$ from a single volume element in the three dimensional model | 1 | member_18 |
---
abstract: 'We show existence of solutions to the least gradient problem on the plane for boundary data in $BV(\partial\Omega)$. We also provide an example of a function $f \in L^1(\partial\Omega) \backslash (C(\partial\Omega) \cup BV(\partial\Omega))$, for which the solution exists. We also show non-uniqueness of solutions even for smooth boundary data in the anisotropic case for a nonsmooth anisotropy. We additionally prove a regularity result valid also in higher dimensions.'
address: 'W. Górny: Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Warsaw, Poland.'
author:
- Wojciech Górny
title: 'Planar least gradient problem: existence, regularity and anisotropic case'
---
Introduction
============
Many papers, including [@SWZ], [@MNT], [@MRL], [@GRS], describe the least gradient problem, i.e. the minimization problem
$$\min \{ \int_\Omega |Du|, \quad u \in BV(\Omega), \quad u|_{\partial\Omega} = f \},$$
where we may impose certain conditions on $\Omega$, $f$ and use different approaches to the boundary condition. In [@SWZ] $f$ is assumed to be continuous and the boundary condition is in the sense of traces. They also impose a set of geometrical conditions on $\Omega$, which are satisfied by strictly convex sets; in fact, in dimension two they are equivalent to strict convexity. The authors of [@MNT] also add | 1 | member_19 |
a positive weight. Another approach is presented in [@MRL], where the boundary datum belongs to $L^1(\partial\Omega)$, but the boundary condition is understood in a weaker sense.
Throughout this paper $\Omega \subset \mathbb{R}^N$ shall be an open, bounded, strictly convex set with Lipschitz (or $C^1$) boundary. The boundary datum $f$ will belong to $L^1(\partial \Omega)$ or $BV(\partial\Omega)$. We consider the following minimization problem, called the least gradient problem (for brevity denoted by LGP):
$$\label{zagadnienie}
\inf \{ \int_\Omega |Du|, \quad u \in BV(\Omega), \quad Tu = f \},$$
where $T$ denotes the trace operator $T: BV(\Omega) \rightarrow L^1(\partial\Omega)$. Even existence of solutions in this sense is not obvious, as the functional
$$F(u) = \twopartdef{\int_\Omega |Du|}{ u \in BV(\Omega) \text{ and } Tu = f;}{+\infty}{\text{otherwise}}$$
is not lower semicontinuous with respect to $L^1$ convergence. In fact, in [@ST] the authors have given an example of a function $f$ without a solution to the corresponding least gradient problem; it was the characteristic function of a certain fat Cantor set. Let us note that this function does not lie in $BV(\partial\Omega)$.
There are two possible ways to deal with Problem \[zagadnienie\]. The first is the relaxation of the functional $F$. Such a reformulation and its relationship with the original statement is considered in [@MRL] and [@Maz]. Another way is to consider when Problem \[zagadnienie\] has a solution in the classical sense and what its regularity is. This paper uses the latter approach.
The main result of the present paper is a sufficient condition for the existence of solutions of the least gradient problem on the plane. It is given in the following theorem, which will later be proved as Theorem \[tw:istnienie\]:
Let $\Omega \subset \mathbb{R}^2$ be an open, bounded, strictly convex set with $C^1$ boundary. Then for every $f \in BV(\partial\Omega)$ there exists a solution of LGP for $f$.
Obviously, this condition is not necessary; the construction given in [@SWZ] does not require the boundary data to have finite total variation. We also provide an example of a function $f \in L^1(\partial\Omega) \backslash (C(\partial\Omega) \cup BV(\partial\Omega))$ for which the solution exists, see Example \[ex:cantor\].
Another result included in this article provides a certain regularity property. Theorem \[tw:rozklad\] asserts the existence of a decomposition of a function of least gradient into a continuous and a locally constant function. This is not a property shared by all BV functions; see [@AFP Example 4.1].
Let $\Omega \subset \mathbb{R}^N$, where $N \leq 7$, be an | 1 | member_19 |
open, bounded, strictly convex set with Lipschitz boundary. Suppose $u \in BV(\Omega)$ is a function of least gradient. Then there exist functions $u_c, u_j \in BV(\Omega)$ such that $u = u_c + u_j$ and $(Du)_c = Du_c$ and $(Du)_j = Du_j$, i.e. one can represent $u$ as a sum of a continuous function and a piecewise constant function. They are of least gradient in $\Omega$. Moreover this decomposition is unique up to an additive constant.
The final chapter takes on the subject of anisotropy. As was proved in [@JMN], for an anisotropic norm $\phi$ on $\mathbb{R}^N$ which is smooth with respect to the Euclidean norm there is a unique solution to the anisotropic LGP. We consider $p$-norms on the plane for $p \in [1, \infty]$ to show that for $p = 1, \infty$, i.e. when the anisotropy is not smooth, the solutions need not be unique even for smooth boundary data (see Examples \[ex:l1\] and \[ex:linfty\]), whereas for $1 < p < \infty$, when the anisotropy is smooth, Theorem \[tw:anizotropia\] asserts that the only connected minimal surface with respect to the $p$-norm is a line segment, similarly to the isotropic case.
Let $\Omega \subset \mathbb{R}^2$ be an open convex set. Let | 1 | member_19 |
the anisotropy be given by the function $\phi(x,Du) = \| Du \|_p$, where $1 < p < \infty$. Let $E$ be a $\phi$-minimal set with respect to $\Omega$, i.e. $\chi_E$ is a function of $\phi$-least gradient in $\Omega$. Then every connected component of $\partial E$ is a line segment.
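To make the role of strict convexity concrete, the following sketch (a numerical illustration under arbitrary endpoint choices, not part of the proof) compares the $\phi$-length of a straight segment with that of a competing polyline joining the same endpoints for several values of $p$.

```python
import numpy as np

def p_length(path, p):
    """phi-length of a polygonal path: the sum of the p-norms of its edge vectors."""
    path = np.asarray(path, dtype=float)
    edges = np.diff(path, axis=0)
    return np.linalg.norm(edges, ord=p, axis=1).sum()

straight = [(0.0, 0.0), (1.0, 0.3)]                   # a line segment
broken = [(0.0, 0.0), (0.7, 0.3), (1.0, 0.3)]         # a competing polyline

for p in (1, 1.5, 2, 4, np.inf):
    print(p, p_length(straight, p), p_length(broken, p))
# the two lengths coincide for p = 1 and p = inf (the norm is not strictly
# convex), while for 1 < p < inf the straight segment is strictly shorter
```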
Preliminaries
=============
Least gradient functions
------------------------
Now we shall briefly recall basic facts about least gradient functions. What we need most in this paper is the Miranda stability theorem and the relationship between functions of least gradient and minimal surfaces. For more information, see [@Giu].
We say that $u \in BV(\Omega)$ is a function of least gradient, if for every compactly supported $($equivalently: with trace zero$)$ $v \in BV(\Omega)$ we have
$$\int_\Omega |Du| \leq \int_\Omega |D(u + v)|.$$
We say that $u \in BV(\Omega)$ is a solution of the least gradient problem in the sense of traces $($solution of LGP$)$ for given $f \in L^1(\partial\Omega)$, if $Tu = f$ and for every $v \in BV(\Omega)$ such that $Tv = 0$ we have
$$\int_\Omega |Du| \leq \int_\Omega |D(u + v)|.$$
To underline the difference between the two notions, we recall a stability theorem by Miranda:
$($[@Mir Theorem 3]$)$ Let $\Omega \subset \mathbb{R}^N$ | 1 | member_19 |
be open. Suppose $\{ f_n \}$ is a sequence of least gradient functions in $\Omega$ convergent in $L^1_{loc}(\Omega)$ to $f$. Then $f$ is of least gradient in $\Omega$.
An identical result for solutions of the least gradient problem is impossible, as the trace operator is not continuous in the $L^1$ topology. We need an additional assumption regarding traces. A correct formulation would be:
\[stabilnosc\] Suppose $f, f_n \in L^1(\partial\Omega)$. Let $u_n$ be a solution of LGP for $f_n$, i.e. $Tu_n = f_n$. Let $f_n \rightarrow f$ in $L^1(\partial\Omega)$ and $u_n \rightarrow u$ in $L^1(\Omega)$. Assume that also $Tu = f$. Then $u$ is a solution of LGP for $f$.
To deal with the regularity of solutions of the LGP, it is convenient to consider the superlevel sets $\{ u > t \}$ of $u$ and their boundaries $\partial \{ u > t \}$ for $t \in \mathbb{R}$. This leads to the two subsequent results:
\[lem:jednoznacznoscnadpoziomic\] Suppose $u_1, u_2 \in L^1(\Omega)$. Then $u_1 = u_2$ a.e. iff for every $t \in \mathbb{R}$ the superlevel sets of $u_1$ and $u_2$ are equal, i.e. $\{ u_1 > t \} = \{ u_2 > t \}$ up to a set of measure zero.
\[twierdzeniezbgg\] $($[@BGG Theorem 1]$)$\
Suppose $\Omega \subset \mathbb{R}^N$ is | 1 | member_19 |
open. Let $f$ be a function of least gradient in $\Omega$. Then the set $\partial \{ f > t \}$ is minimal in $\Omega$, i.e. $\chi_{\{ f > t \}}$ is of least gradient for every $t \in \mathbb{R}$.
It follows from [@Giu Chapter 10] that in low dimensions $(N \leq 7)$ the boundary $\partial E$ of a minimal set $E$ is an analytic hypersurface $($after modification of $E$ on a set of measure zero$)$. Thus, as we modify each superlevel set of $u$ only by a set of measure zero, from Lemma \[lem:jednoznacznoscnadpoziomic\] we deduce that the class of $u$ in $L^1(\Omega)$ does not change. After a change of representative we get that the boundary of each superlevel set of $u$ is a sum of analytic minimal surfaces; thus, we may from now on assume that we deal with such a representative. Also, several proofs are significantly simplified if we remember that in dimension two the only minimal surfaces are intervals.
Sternberg-Williams-Ziemer construction
--------------------------------------
In [@SWZ] the authors have shown existence and uniqueness of solutions of LGP for continuous boundary data and strictly convex $\Omega$ (or, to be more precise, the authors assume that $\partial \Omega$ has non-negative | 1 | member_19 |
mean curvature and is not locally area-minimizing). The proof of existence is constructive and we shall briefly recall it. The main idea is to reverse Theorem \[twierdzeniezbgg\] and to construct almost all level sets of the solution. According to Lemma \[lem:jednoznacznoscnadpoziomic\], this uniquely determines the solution.
We fix the boundary data $g \in C(\partial \Omega)$. By the Tietze theorem it has an extension $G \in C(\mathbb{R}^n \backslash \Omega)$. We may also demand that $G \in BV(\mathbb{R}^n \backslash \overline{\Omega})$. Let $L_t = (\mathbb{R}^n \backslash \Omega) \cap \{ G \geq t \}$. Since $G \in BV(\mathbb{R}^n \backslash \overline{\Omega})$, for a.e. $t \in \mathbb{R}$ we have $P(L_t, \mathbb{R}^n \backslash \overline{\Omega}) < \infty$. Let $E_t$ be a set solving the following problems:
$$\label{sternbergminimalnadlugosc}
\min \{ P(E, \mathbb{R}^n): E \backslash \overline{\Omega} = L_t \backslash \overline{\Omega} \},$$
$$\max \{ |E|: E \text{ is a minimizer of \eqref{sternbergminimalnadlugosc}} \}.$$
Let us note that both of these problems have solutions; let $m \geq 0$ be the infimum in the first problem. Let $E_n$ be a sequence of sets such that $P(E_n, \Omega) \rightarrow m$. By compactness of the unit ball of $BV(\Omega)$ in $L^1(\Omega)$ and lower semicontinuity of the total variation we obtain, on a subsequence, $\chi_{E_{n_k}} \rightarrow \chi_E$, where
$$m \leq P(E, \Omega) \leq | 1 | member_19 |
P(E_n, \Omega) \rightarrow m.$$
Let $M \leq |\Omega|$ be the supremum in the second problem. Take a sequence of sets $E_n$ such that $|E_n| \rightarrow M$. Then on some subsequence $\chi_{E_{n_k}} \rightarrow \chi_E$, and thus
$$M \geq |E| \geq |E_n| - |E_n \triangle E| = |E_n| - \|\chi_{E_n} - \chi_E \|_1 \rightarrow M - 0.$$
Then we can show existence of a set $T \subset \mathbb{R}$ of full measure such that for every $t \in T$ we have $\partial E_t \cap \partial \Omega \subset g^{-1}(t)$, and for every $t,s \in T$ with $s < t$ the inclusion $E_t \subset \subset E_s$ holds. This enables us to treat the sets $E_t$ as superlevel sets of a certain function, which we define by the following formula:
$$u(x) = \sup \{t \in T: x \in \overline{E_t \cap \Omega} \}.$$
It turns out that $u \in C(\overline{\Omega}) \cap BV(\Omega)$ and that $u$ is a solution to the LGP for $g$. Moreover $| \{ u \geq t \} \triangle (\overline{E_t \cap \Omega})| = 0$ for a.e. $t$. The uniqueness proof is based on a maximum principle.
In the existence proof in chapter $4$ we are going to use a particularly simple case of the construction. Suppose $\Omega \subset \mathbb{R}^2$ and that $f \in | 1 | member_19 |
C^1(\partial\Omega)$. Firstly, let us notice that we only have to construct the set $E_t$ for almost all $t$. Secondly, we recall that in dimension $2$ the only minimal surfaces are intervals; thus, to find the set $E_t$, let us fix $t$ and look at the preimage $f^{-1}(t)$. We connect its points with intervals whose total length is as small as possible. This can cause problems, for example if we take $t$ to be a global maximum of the function; thus, let us take $t$ to be a regular value (by Sard's theorem almost all values are regular), so the preimage $f^{-1}(t)$ is a manifold. In dimension $2$ this means that the preimage contains finitely many points, because $f$ is Lipschitz and $\partial\Omega$ is compact. As the derivative at every point $p \in f^{-1}(t)$ is nonzero, there is at least one interval in $\partial E_t$ ending at $p$. As is established later in Proposition \[slabazasadamaksimum2\], by minimality of $\partial E_t$ there can be at most one, so there is exactly one interval in $\partial E_t$ ending at every $p \in f^{-1}(t)$.
A typical example for the construction, attributed to John Brothers, is to let $\Omega = B(0,1)$ and take the | 1 | member_19 |
boundary data to be (in polar coordinates, for fixed $r = 1$) the function $f: [0, 2\pi) \rightarrow \mathbb{R}$ given by the formula $f(\theta) = \cos(2 \theta)$; see [@MRL Example 2.7] or [@SZ Example 3.6].
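As a quick numerical illustration of this construction (not needed for the argument), the Python sketch below takes a regular value $t$ of the Brothers boundary data $f(\theta) = \cos(2\theta)$, computes its finitely many preimage points on the unit circle, and selects the pairing by chords of minimal total length; the brute-force enumeration of pairings is adequate only because the preimage is small.

```python
import numpy as np

def preimage_angles(t):
    """Angles theta in [0, 2*pi) with cos(2*theta) = t, for a regular value |t| < 1."""
    a = np.arccos(t) / 2.0
    return np.array([a, np.pi - a, np.pi + a, 2.0 * np.pi - a])

def matchings(indices):
    """Enumerate all pairings of an even number of indices."""
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for k, partner in enumerate(rest):
        for m in matchings(rest[:k] + rest[k + 1:]):
            yield [(first, partner)] + m

def minimal_pairing(points):
    """Pairing of the points whose chords have the smallest total length."""
    best, best_len = None, np.inf
    for m in matchings(list(range(len(points)))):
        length = sum(np.linalg.norm(points[i] - points[j]) for i, j in m)
        if length < best_len:
            best, best_len = m, length
    return best, best_len

t = 0.3                                  # a regular value of f(theta) = cos(2*theta)
angles = preimage_angles(t)
pts = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # preimage on the unit circle
print(minimal_pairing(pts))              # the chords forming the candidate boundary of E_t
```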
BV on a one-dimensional compact manifold
----------------------------------------
In the general case one may attempt to define BV spaces on compact manifolds using a partition of unity; such an approach is presented in [@AGM]. It is not necessary for us; it suffices to consider the one-dimensional case. Let us consider $\Omega \subset \mathbb{R}^2$ open, bounded, with $C^1$ boundary. On $\partial\Omega$ we may define the Hausdorff measure and integrable functions $($which are approximately continuous a.e.$)$. We recall (see [@EG Chapter 5.10]) that the one-dimensional $BV$ space on the interval $(a,b) \subset \mathbb{R}$ may be described in the following way:
$$f \in BV((a,b)) \Leftrightarrow \sum |f(x_{i})-f(x_{i-1})| \leq M < \infty$$
for every $a < x_0 < ... < x_n < b$, where $x_i$ are points of approximate continuity of $f$. The smallest such constant $M$ turns out to be the usual total variation of $f$.
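Purely as an illustration, and under the assumption that the sample points are points of approximate continuity, the sum above can be approximated numerically; the following sketch computes such a discrete variation on an interval and on a circle, where closing the loop plays the role of removing a single point, as in the definition of $BV(\partial\Omega)$ given below.

```python
import numpy as np

def discrete_variation(samples):
    """Sum of |f(x_i) - f(x_{i-1})| over consecutive sample points of an
    interval; this approximates the total variation from below, assuming the
    samples are taken at points of approximate continuity."""
    samples = np.asarray(samples, dtype=float)
    return np.abs(np.diff(samples)).sum()

def discrete_variation_circle(samples):
    """Variation on a circle: cut at the first sample point and add the
    wrap-around jump, mimicking the removal of a single point p."""
    samples = np.asarray(samples, dtype=float)
    return discrete_variation(samples) + abs(samples[0] - samples[-1])

# sanity check: f(theta) = cos(2*theta) on the circle has total variation 8
theta = np.linspace(0.0, 2.0 * np.pi, 10001)
print(discrete_variation_circle(np.cos(2.0 * theta)))   # approximately 8.0
```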
We may extend this definition to the case where we have a one-dimensional manifold diffeomorphic to an open interval if it is properly parametrized, i.e. all tangent | 1 | member_19 |
vectors have length one. Repeating the proof from [@EG] we get that this definition coincides with the divergence definition. Then we extend it to the case of a one-dimensional compact connected manifold in the following way:
We say that $f \in BV(\partial\Omega)$, if after removing from $\partial\Omega$ a point $p$ of approximate continuity of $f$ we have $f \in BV(\partial\Omega \backslash \{ p \})$. The norm is defined to be
$$\| f \|_{BV(\partial\Omega)} = \| f \|_1 + \| f \|_{BV(\partial\Omega \backslash \{ p \})}.$$
This definition does not depend on the choice of $p$, as in dimension one the total variation on disjoint intervals is additive, thus for different points $p_1, p_2$ we get that
$$\| f \|_{BV(\partial\Omega \backslash \{ p_1 \})} = \| f \|_{BV((p_1,p_2))} + \| f \|_{BV((p_2,p_1))} = \| f \|_{BV(\partial\Omega \backslash \{ p_2 \})},$$
where $(p_1,p_2)$ is an oriented arc from $p_1$ to $p_2$. Thus all local properties of $BV(\partial\Omega)$ hold; we shall recall the most important one for our considerations:
\[stw:bvdim1\] Let $E \subset \partial\Omega$ be a set of finite perimeter, i.e. $\chi_E \in BV(\partial\Omega)$. Then, if we take its representative to be the set of points of density $1$, we have $\partial E = \partial^{*} E = \{ p_1, ..., p_{2n} \}$ and $P(E, \partial\Omega) = 2n$. Here $\partial^{*} E$ denotes the reduced boundary of $E$, i.e. the set where a measure-theoretical normal vector exists; see [@EG Chapter 5.7].
However, some global properties need not hold. For example, the decomposition theorem $f = f_{ac} + f_j + f_s$ does not hold; consider $\Omega = B(0,1)$, $f = \arg (z)$. The main reason is that $\pi_1(\partial\Omega) \neq 0$.
Regularity of least gradient functions
======================================
In this section we are going to prove several regularity results about functions of least gradient, valid up to dimension $7$. We start with a weak form of the maximum principle and later prove a result on the decomposition of a least gradient function into a continuous part and a jump-type part; this decomposition holds not only at the level of derivatives, but also at the level of functions. We will extensively use the blow-up theorem, see [@EG Section 5.7.2].
\[tw:blowup\] For each $x \in \partial^{*} E$ define the set $E^r = \{ y \in \mathbb{R}^N: r(y-x) + x \in E \}$ and the hyperplane $H^{-}(x) = \{ y \in \mathbb{R}^N: \nu_E (x) \cdotp (y - x) \leq 0 \}$. Then
$$\chi_{E^r} \rightarrow \chi_{H^{-}(x)}$$
in | 1 | member_19 |
$L^1_{loc}(\mathbb{R}^N)$ as $r \rightarrow 0$.
It turns out that on the plane Theorem \[twierdzeniezbgg\] may be improved to an analogue of the maximum principle for linear equations; geometrically speaking, the linear weak maximum principle states that every level set touches the boundary.
\[slabazasadamaksimum\] $($weak maximum principle on the plane$)$\
Let $\Omega \subset \mathbb{R}^2$ be an open bounded set with Lipschitz boundary and suppose $u \in BV(\Omega)$ is a function of least gradient. Then for every $t \in \mathbb{R}$ the set $\partial \{ u > t \}$ is empty or it is a sum of intervals, pairwise disjoint in $\Omega$, such that every interval connects two points of $\partial \Omega$.
By the argument from [@Giu Chapter 10] for every $t \in \mathbb{R}$ the set $\partial \{ u > t \}$ is a sum of intervals and $\partial \{ u > t \} = \partial^* \{ u > t \}$. Obviously $\partial \{ u > t \}$ is closed in $\Omega$. Suppose one of those intervals ends in $x \in \Omega$. Then the normal vector at $x$ is not well defined (the statement of Theorem \[tw:blowup\] does not hold), so $x \notin \partial^* \{ u > t \}$. Thus $x \notin \partial \{ u > t \}$, a contradiction. Similarly, suppose two such intervals intersect at $x \in \Omega$. Then the measure-theoretic normal vector at $x$ has length smaller than $1$, depending on the angle between the two intervals. Thus $x \notin \partial^* \{ u > t \}$, a contradiction.
If we additionally assume that $\Omega$ is convex, then the union is disjoint also on $\partial\Omega$:
\[slabazasadamaksimum2\] Let $\Omega \subset \mathbb{R}^2$ be an open, bounded, convex set with Lipschitz boundary and suppose $u \in BV(\Omega)$ is a function of least gradient. Then for every $t \in \mathbb{R}$ the set $\partial \{ u > t \}$ is empty or it is a sum of intervals, pairwise disjoint in $\overline{\Omega}$, such that every interval connects two points of $\partial \Omega$.
Suppose that at least two intervals in $\partial E_t$ end at $x \in \partial\Omega$: $\overline{xy}$ and $\overline{xz}$. We have two possibilities: there are infinitely many intervals in $\partial E_t$ which end at $x$, with the other end lying in the arc $\overline{yz} \subset \partial\Omega$ which does not contain $x$; or there are finitely many. The first case is excluded by the monotonicity formula for minimal surfaces, see for example [@Sim Theorem 17.6, Remark 37.9], as
from Theorem \[twierdzeniezbgg\] $E$ is a minimal set and only finitely many connected components of the boundary of a minimal set may intersect any compact subset of $\Omega$.
In the second case we may without loss of generality assume that $\overline{xy}$ and $\overline{xz}$ are adjacent. Consider the function $\chi_{E_t}$. In the area enclosed by the intervals $\overline{xy}, \overline{xz}$ and the arc $\overline{yz} \subset \partial\Omega$ not containing $x$ we have $\chi_{E_t} = 1$ and $\chi_{E_t} = 0$ on the two sides of the triangle (or the opposite situation, which we handle similarly). Then $\chi_{E_t}$ is not a function of least gradient: the function $\widetilde{\chi_{E_t}} = \chi_{E_t} - \chi_{\Delta xyz}$ has strictly smaller total variation due to the triangle inequality. This contradicts Theorem \[twierdzeniezbgg\].
The result above is sharp. As the following example shows, we may not relax the assumption of convexity of $\Omega$.
Denote by $\varphi$ the angular coordinate in the polar coordinates on the plane. Let $\Omega = B(0,1) \backslash (\{ \frac{\pi}{4} \leq \varphi \leq \frac{3\pi}{4} \} \cup \{ 0 \}) \subset \mathbb{R}^2$, i.e. the unit ball with one quarter removed. Take the boundary data $f \in L^1(\partial \Omega)$ to be $$f(x,y) = \begin{cases} 1 & \text{if } y \geq 0,\\ 0 & \text{if } y < 0. \end{cases}$$ Then the
solution to the least gradient problem is the function (defined inside $\Omega$) $$u(x,y) = \begin{cases} 1 & \text{if } y \geq 0,\\ 0 & \text{if } y < 0, \end{cases}$$ in particular $\partial \{ u \geq 1 \}$ consists of two horizontal line segments whose closures intersect at the point $(0,0) \in \partial\Omega$. Note that in this example the set $\Omega$ is star-shaped, but it is not convex.
In higher dimensions, we are going to need a result from [@SWZ] concerning minimal surfaces:
\[stw:sternbergpowmin\] $($[@SWZ Theorem 2.2])\
Suppose $E_1 \subset E_2$ and that $\partial E_1, \partial E_2$ are area-minimizing in an open set $U$. Further, suppose $x \in \partial E_1 \cap \partial E_2 \cap U$. Then $\partial E_1$ and $\partial E_2$ agree in some neighbourhood of $x$.
$($weak maximum principle$)$\
Let $\Omega \subset \mathbb{R}^N$, where $N \leq 7$, be an open set and suppose $u \in BV(\Omega)$ is a function of least gradient. Then for every $t \in \mathbb{R}$ the set $\partial \{ u > t \}$ is empty or it is a sum of minimal surfaces $S_{t,i}$, pairwise disjoint in $\Omega$, which satisfy $\partial S_{t,i} \subset \partial \Omega$.
Let us notice that, with only subtle changes, the previous proof works also in the case $N \leq 7$, i.e. when the boundaries of superlevel sets are minimal surfaces.
From [@Giu Chapter 10] it follows that for $t \in \mathbb{R}$ the set $\partial \{ u > t \}$ is a sum of minimal surfaces $S_{t,i}$ and $\partial \{ u > t \} = \partial^* \{ u > t \}$. Obviously $\partial \{ u > t \}$ $($boundary in the topology of $\Omega)$ is closed in $\Omega$, so $\partial S_{t,i} \cap \Omega = \emptyset$ $($boundary in the topology of $\partial \{ u > t \})$; suppose otherwise. Let $x \in \partial S_{t,i} \cap \Omega$. Then at $x$ the blow-up theorem does not hold, so $x \notin \partial \{ u > t \}$, a contradiction.
Now suppose that $S_{t,i}$ and $S_{t,j}$ are not disjoint in $\Omega$. Then from Proposition \[stw:sternbergpowmin\] applied to $E_1 = E_2 = \{ u > t \}$ we get $S_{t,i} = S_{t,j}$.
Let $E_1 \subset E_2$ and suppose that $E_1$ and $E_2$ are sets of locally bounded perimeter and let $x \in \partial^{*} E_1 \cap \partial^{*} E_2$. Then $\nu_{E_1}(x) = \nu_{E_2}(x)$.
We are going to use the blow-up theorem (Theorem \[tw:blowup\]). First notice that the inclusion $E_1 \subset E_2$ implies
$$E_1^r = \{ y \in \mathbb{R}^N: r(y-x) + x \in E_1 \} \subset \{ y \in \mathbb{R}^N: | 1 | member_19 |
r(y-x) + x \in E_2 \} = E_2^r.$$
We keep the same notation as in Theorem \[tw:blowup\] and use it to obtain
$$\chi_{H_1^-(x)} \leftarrow \chi_{E_1^r} \leq \chi_{E_2^r} \rightarrow \chi_{H_2^-(x)},$$
where the convergence holds in the $L^1_{loc}$ topology. Thus $H_1^-(x) = H_2^-(x)$, so $\nu_{E_1}(x) = \nu_{E_2}(x)$.
\[stw:zbiorskokow\] For $u \in BV(\Omega)$ the structure of its jump set is as follows:
$$J_u = \bigcup_{s,t \in \mathbb{Q}; s \neq t} (\partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \}).$$
Let $x \in J_u$. By definition of $J_u$ the normal vector at $x$ is well defined. The same applies to the trace values from both sides: let us denote them by $u^{-} (x) < u^{+} (x)$. But then there exist $s,t \in \mathbb{Q}$ such that $u^{-} (x) < s < t < u^{+} (x)$, so $x \in \partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \}$.
On the other hand, let $x \in \partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \}$. From the previous proposition the normal vectors coincide, so the normal at $x$ does not depend on $t$ and we may define traces from both sides as
$$u^{+}(x) = \sup \{ | 1 | member_19 |
t: x \in \partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \} \};$$
$$u^{-}(x) = \inf \{ t: x \in \partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \} \}.$$
More precisely, the trace is uniquely determined up to a measure zero set by the mean integral property from [@EG Theorem 5.3.2]. But it holds for all $x \in \partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \}$; from the weak maximum principle this set divides $\Omega$ into two disjoint parts, $\Omega^+$ and $\Omega^-$. Let $\Omega^+$ be the part with greater values of $u$ in the neighbourhood of the cut. If $u^+(x) < \sup \{ t: x \in \partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \} \} = s$, then for sufficiently small neighbourhoods of $x$ we would have $u \geq s$, so $\fint_{B(x,r) \cap \Omega^+} |u^+(x) - u(y)| \geq |u^+(x) - s| > 0$, contradiction. The other cases are analogous.
Suppose $u \in BV(\Omega)$ is a least gradient function. Then $J_u = \bigcup_{k=1}^{\infty} S_k$, where $S_k$ are pairwise disjoint minimal surfaces. In addition, the trace of $u$ from both sides is constant | 1 | member_19 |
along $S_k$; in particular the jump of $u$ is constant along $S_k$.
We follow the characterisation of $J_u$ from Proposition \[stw:zbiorskokow\]. For every $t$ the set $\partial^{*} \{ u > t \}$ is a minimal surface. Proposition \[stw:sternbergpowmin\] ensures that if $\partial^{*} \{ u > s \} \cap \partial^{*} \{ u > t \} \neq \emptyset$, then their intersecting connected components $S_{s,i}$, $S_{t,j}$ coincide. In particular, the trace from both sides defined as above is constant along $S_{t,j}$. Thus connected components of $J_u$ coincide with connected components of $\partial^{*} \{ u > t \}$ for some $t$, so by the weak maximum principle they are minimal surfaces non-intersecting in $\Omega$ with boundary in $\partial \Omega$. As the area of each such surface is positive, there are at most countably many of them.
\[tw:rozklad\] Let $\Omega \subset \mathbb{R}^N$, where $N \leq 7$, be an open, bounded, strictly convex set with Lipschitz boundary. Suppose $u \in BV(\Omega)$ is a function of least gradient. Then there exist functions $u_c, u_j \in BV(\Omega)$ such that $u = u_c + u_j$ and $(Du)_c = Du_c$ and $(Du)_j = Du_j$, i.e. one can represent $u$ as a sum of a continuous function and a piecewise constant | 1 | member_19 |
function. They are of least gradient in $\Omega$. Moreover this decomposition is unique up to an additive constant.
1\. From the previous theorem $J_u = \bigcup_{k=1}^{\infty} S_k$, where $S_k$ are pairwise disjoint minimal surfaces with boundary in $\partial\Omega$. The jump along each of them has a constant value $a_k$. They divide $\Omega$ into open, pairwise disjoint sets $U_i$.
2\. We define $u_j$ in the following way: let us call any of the obtained sets $U_0$. Let us draw a graph such that the sets $U_i$ are its vertices. $U_i$ and $U_j$ are connected by an edge iff $\partial U_i \cap \partial U_j = S_k$, i.e. when they have a common part of their boundaries. To such an edge we ascribe a weight $a_k$. An example of such a construction is presented in the picture above. As the $S_k$ are pairwise disjoint in $\Omega$ and their boundaries are contained in $\partial\Omega$, such a graph is a tree, i.e. it is connected and there is exactly one path connecting two given vertices. Thus, we define $u_j$ by the formula
$$u_j(x) = \sum_{\text{path connecting } U_0 \text{ with } U_i} a_k, \text{ when } x \in U_i.$$
Such a function is well defined, as our graph has no | 1 | member_19 |
cycles. It also does not depend on the choice of $U_0$ up to an additive constant (if we chose some $U_1$ instead, the function would change by a summand $\sum_{\text{path connecting } U_0 \text{ with } U_1} a_k)$. We see that $u_j \in L^1(\Omega)$ and that it is piecewise constant.\
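The definition of $u_j$ is effectively a path sum in a weighted tree. Purely as an illustration, with the region adjacency and the jump values $a_k$ assumed to be given, it can be computed as in the following sketch.

```python
from collections import deque

def piecewise_constant_part(edges, root=0):
    """Value of u_j on each region U_i, obtained by summing signed jumps along
    the unique path from the root region U_0 in the adjacency tree.

    edges: list of triples (i, j, a) meaning that u_j increases by a when
           passing from U_i to U_j across their common surface S_k."""
    graph = {}
    for i, j, a in edges:
        graph.setdefault(i, []).append((j, +a))
        graph.setdefault(j, []).append((i, -a))
    values = {root: 0.0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v, a in graph.get(u, []):
            if v not in values:          # a tree: each region is reached exactly once
                values[v] = values[u] + a
                queue.append(v)
    return values

# toy example: three regions U_0 | U_1 | U_2 separated by jumps 1 and -2
print(piecewise_constant_part([(0, 1, 1.0), (1, 2, -2.0)]))   # {0: 0.0, 1: 1.0, 2: -1.0}
```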
3. We notice that $Du_j = (Du)_j$, as $u_j$ is constant on each $U_i$, $J_{u_j} = J_u$ and the jumps along connected components of $J_u$ have the same magnitude. Thus we define $u_c = u - u_j$. We see that $(Du_c)_j = 0$.\
4. The $u_c$, $u_j$ defined above are functions of least gradient.
Suppose that $u_j$ is not a function of least gradient, i.e. there exists $v \in BV(\Omega)$ such that $\int_\Omega |Dv| < \int_\Omega |Du_j|$ and $Tu_j = Tv$. Then we would get
$$\int_\Omega |Du| \leq \int_\Omega |D(u_c + v)| \leq \int_\Omega |Du_c| + \int_\Omega |Dv| < \int_\Omega |Du_c| + \int_\Omega |Du_j| = \int_\Omega |Du|,$$
where the first inequality follows from $u$ being a function of least gradient, and the last equality from measures $Du_c$ and $Du_j$ being mutually singular. The proof for $u_c$ is analogous.
5\. The function $u_c$ is continuous. Since $u_c$ is of least gradient, if it is not continuous at some $x \in \Omega$, then a certain set of the form $\partial \{ u_c > t \}$ passes through $x$; otherwise $u_c$ would be constant in a neighbourhood of $x$. But in that case $u_c$ has a jump along the whole connected component of $\partial \{ u_c > t \}$ containing $x$, which is impossible as $(Du_c)_j = 0$.
6\. What is left is to prove uniqueness of such a decomposition. Let $u = u_c^1 + u_j^1 = u_c^2 + u_j^2$. Changing the order of summands we obtain
$$u_c^1 - u_c^2 = u_j^2 - u_j^1,$$
but the distributional derivative of the left-hand side is a continuous measure, which vanishes on every set that is $\sigma$-finite with respect to $\mathcal{H}^{N-1}$, while the distributional derivative of the right-hand side is concentrated on such a set; hence both of them are zero measures. But the condition $Dv = 0$ implies $v = \text{const}$, so the functions $u_c^1$, $u_c^2$ differ by an additive constant.
In this decomposition $u_c$ is not necessarily continuous up to the boundary. Let us use the complex number notation for the plane. We take $\Omega = B(1,1)$. Let the boundary values be given by the formula $f(z) = \arg(z)$.
Then $u = u_c = \arg(z) \in BV(\Omega) \cap C^{\infty}(\Omega)$, but $u$ is not continuous at $0 \in \partial\Omega$.
Existence of solutions on the plane
===================================
We shall prove existence of solutions on the plane for boundary data in $BV(\partial\Omega)$. We are going to use approximations of the solution in the strict topology. Proposition \[podciagzbiezny\] will ensure that the existence of convergent sequences of approximations in the $L^1$ topology is not a problem; Theorem \[tw:scislazb\] will upgrade this to strict convergence. The Miranda stability theorem (Theorem \[stabilnosc\]) ends the proof. Later, we shall see an example of a discontinuous function $f$ of infinite total variation such that a solution to the LGP exists.
\[podciagzbiezny\] Suppose $f_n \rightarrow f$ in $L^1(\partial\Omega)$. $u_n$ are solutions of LGP for $f_n$. Then $u_n$ has a convergent subsequence, i.e. $u_{n_k} \rightarrow u$ in $L^1(\Omega)$.
As the trace operator is a surjection, by the Open Mapping Theorem it is open. Let us fix $\widetilde{f} \in BV(\Omega)$ such that $T\widetilde{f} = f$ and a sequence of positive numbers $\varepsilon_n \rightarrow 0$. Then by continuity and openness of $T$ the image of a ball $B(\widetilde{f}, \varepsilon_n)$ contains a smaller ball $B(T\widetilde{f}, \delta_n)$ for another sequence of positive numbers $\delta_n \rightarrow 0$. | 1 | member_19 |