--- abstract: 'Quasi-periodic signals have yielded important constraints on the masses of black holes in galactic X-ray binaries, and here we extend this to active galactic nuclei (AGN). We employ a wavelet technique to analyze 19 observations of 10 AGN obtained with the [*XMM-Newton*]{} EPIC-PN camera. We report the detection of a candidate 3.3 kilosecond quasi-period in 3C 273. If this period represents an orbital timescale originating near a last stable orbit of 3 $R_S$, it implies a central black hole mass of $7.3\times 10^6$ M$_\sun$. For a maximally rotating black hole with a last stable orbit of 0.6 $R_S$, a central black hole mass of $8.1\times 10^7$ M$_\sun$ is implied. Both of these estimates are substantially lower than previous reverberation mapping results which place the central black hole mass of 3C 273 at about $2.35\times 10^8$ M$_\sun$. Assuming that this reverberation mass is correct, the X-ray quasi-period would be caused by a higher order oscillatory mode of the accretion disk.' author: - 'C. Espaillat, J. Bregman, P. Hughes, and E. Lloyd-Davies' title: 'Wavelet Analysis of AGN X-ray Time Series: A QPO in 3C 273?' --- Introduction ============ Quasi-periodic oscillations (QPOs) are thought to originate in the inner accretion disk
of a black hole or neutron star in an X-ray binary (XRB) system [@vdk00]. Consequently, QPOs have been used in galactic XRBs to place important constraints on the masses of the central black holes of these systems. Previous work has revealed that AGN and XRBs are alike: noise power spectra have shown that similar physical processes may underlie the X-ray variability in both [@ede99; @utt02; @mar03; @vau03; @mchar04; @mchar05]. Taking this resemblance into account and assuming that accretion onto a stellar-mass black hole is comparable to accretion onto a supermassive black hole, one would expect some AGN to exhibit QPOs similar to those observed in XRBs. In supermassive black holes ($10^6$-$10^9$ M$_\sun$), these QPOs would be at much lower frequencies than those we find in stellar-mass black holes ($\sim$ 10 M$_\sun$). Low-frequency QPOs (LF QPOs) in XRBs range from 50 mHz to 30 Hz; scaling from a $\sim 1$ Hz QPO in a 10 $M_\sun$ XRB, a LF QPO in an AGN would occur on timescales of days to months [@vau05], too long to be detectable for the AGN in our sample. On the other hand, high-frequency QPOs (HF QPOs) in XRBs have values of $\geq$ 100
Hz and assuming a 1/$M_{BH}$ scaling of frequencies, $f_{HFQPO}\sim 3\times 10^{-3} (M_{BH}/ 10^{6} M_\sun)^{-1}$ Hz [@abram], corresponding to timescales greater than 400 s for AGN. While this parallel between AGN and XRBs seems promising, no claim of an X-ray quasi-period in an AGN has been found to be statistically robust. @vaub remark that a major source of false detections arises from assuming an inappropriate background noise power spectrum. X-ray variations of AGN have intrinsically red noise power spectra (i.e. the power spectra have a continuum resembling a power law with a steep slope; Press 1978); however, many purported QPOs in AGN are compared against an assumed background of white noise (i.e. Poisson photon noise or a flat spectrum). For example, in a $\sim$5 day ASCA observation of IRAS 18325-5926 the significance of the candidate periodicity was estimated with white noise [@iwa98]. After including red noise in the periodogram fitting, @vau found that the candidate periodicity was no longer significant at the 95$\%$ level. @fiore also claimed high ($>99\%$) significance peaks in NGC 4151; however, after fitting the red noise and Poisson photon noise components of the spectrum, @vaub showed that the significances of the QPOs fell below the 95$\%$ confidence level. It
is also difficult to constrain the significance of possible QPOs due to power spectral effects [@vaub]. EXOSAT data of NGC 5548 were reported to have a significant period [@pap93], but @tag96 later showed that the significance of the candidate QPO was lower than previously reported once the uncertainties in modeling the spectrum were taken into consideration. This lack of statistically significant evidence for QPOs in AGN has led to questions of whether existing X-ray observations of AGN are sensitive enough to detect QPOs even if they are present [@vau05]. Here we use a different technique to search for significant periodic structures in the time variability data that have been collected for AGN with [*XMM-Newton*]{}. We use a wavelet transform technique, which can have certain advantages relative to periodograms and Fourier power spectra, the methods that have previously dominated the literature. The wavelet technique, which has become widely used in other branches of science, is particularly useful in identifying signals where the period or its amplitude changes with time. This technique is applied to the [*XMM-Newton*]{} data from 10 bright AGN, with special care taken to properly treat the noise characteristics and error analysis, and we find a candidate 3.3 ks
quasi-period in 3C 273. In Section 2, we present our observations and data reduction steps. In Section 3, we provide an overview of the two wavelet techniques used in our analysis: the continuous wavelet transform and the cross-wavelet transform. The results of these two techniques as well as significance tests are presented. We also discuss structure function analysis for the AGN in our sample. In Section 4 we argue that this 3.3 ks quasi-period in 3C 273 is consistent with what we would expect from oscillations in the accretion disk around the supermassive black hole based on current black hole mass estimates. Observations and Data Reduction {#obssection} =============================== The 10 AGN in our sample were selected because they are bright and have [*XMM-Newton*]{} EPIC-PN camera observations which exceed 30 kiloseconds (ks). In total, we have 19 observations and each observation’s ID, date, length, and average counts are listed in Table \[obslog\]. All observations are in the energy range 0.75 to 10 keV and most were taken in small window mode, which has a readout time of 6 milliseconds (ms). The only exception is NGC 4151 Observation ID (Obs. ID): 0112830201, which was taken in full frame mode with a readout
time of 73.4 ms. Observation Data Files (ODFs) were obtained from the on-line [*XMM-Newton*]{} Science Archive and later reduced with the [*XMM-Newton*]{} Science Analysis Software (SAS, v. 7.0.0, 6.1.0, 5.4.1). Source light curves, with 5 s bins, were extracted for a circular region centered on the source ($\sim$20$^{\prime\prime}$). Background light curves were obtained from a nearby rectangular source-free region and subtracted from the source light curves. These rectangular background regions were larger than the source regions and were accordingly scaled down. Due to strong flaring, the last few kiloseconds of data are excluded from most observations. The count rates for the target sources are orders of magnitude greater than the background count rate in the detection cell, so a rise in the background is unimportant; we removed these last few kiloseconds of data from the data stream simply to be cautious. We note that in the observation of 3C 273 with the claimed detection, including the periods with flaring does not change our results. Some of the observations in our sample are affected by pile-up. Pile-up occurs when more than one X-ray photon arrives in a pixel before the pixel is read out by the CCD, making it difficult
to distinguish one high energy photon from two lower energy photons. Pile-up can also occur when photons striking adjacent pixels are confused with a single photon that deposits charge in more than one pixel. Depending on how many pixels are involved, this is called a single-, double-, triple-, or quadruple-pixel event. The SAS task EPATPLOT measures the pile-up in an observation and the results for our target with the highest count rate, MKN 421 (Table 1), are shown in Figure \[epatplot\]. When we compare the expected fractions of pixel events (solid lines) with those actually measured in the data (histograms) for the range 0.75 to 10 keV, we see that larger than expected fractions of double events (third histogram from top, dark blue in electronic edition) and, to a lesser degree, of triple and quadruple events (bottom two histograms) are measured, while single events (second histogram from top) are lower than expected, indicating the presence of pile-up. Pile-up leads to a general reduction in the mean count rate as well as a reduction in the magnitude of variations. We will explore the influence of pile-up on our data in more detail when
we discuss structure functions in Section \[sfpileupsection\]. Data Analysis and Results ========================= Wavelet Analysis {#waveletanalysis} ---------------- ### The Continuous Wavelet Transform The continuous wavelet transform (CWT) is the inner product of a dilated and translated mother wavelet and a time series $f(t)$, the idea being that the wavelet is applied as a band-pass filter to the time-series. The continuous wavelet transform maps the power of a particular frequency (i.e. dilation) at different times in translation-dilation space, giving an expansion of the signal in both time and frequency. Hence, the continuous wavelet transform not only tells us which frequencies exist in the signal, but also when they exist, allowing us to see whether a timescale varies in time. This is the wavelet technique’s advantage over Fourier transforms in detecting quasi-periods. In addition, the Fourier transform is not suited for detecting quasi-periods since non-periodic outbursts will spread power across the spectrum and windowing will cause power to appear at low frequencies, potentially obscuring quasi-periodic signals. Throughout this paper, we follow @hug98 and @kel03 and references within. In previous studies [@hug98; @kel03; @liu05; @kadler06] we have found the Morlet wavelet $$\psi_{Morlet} = \pi^{-1/4} e^{ik_{\psi}t} e^{-|t^{2}|/2},$$ with $k_{\psi}=6$ to be an excellent choice. The
value of $k_{\psi}$ is a satisfactory compromise between a value small enough that we have good resolution of temporal structures, and large enough that the admissibility condition is satisfied, at least to machine accuracy [@far92]. The wavelet, being continuous and complex, permits a rendering in transform space that highlights temporally localized, periodic activity – oscillatory behavior in the real part and a smooth distribution of power in the modulus – and being progressive (zero power at negative frequency), is optimal for the study of causal signals. We have deliberately avoided any form of weighting, such as that introduced by @foster to allow for uneven sampling, or @johnson to rescale within the cone of influence, in order to facilitate our interpretation of the cross wavelet, and to allow the use of existing methods of significance analysis. From this mother wavelet, we generate a set of translated ($t'$) and dilated ($l$) wavelets $$\psi_{lt^{'}}(t)=\frac{1}{\sqrt{l}} \psi (\frac{t-t^{'}}{l}), l \in \Re^{+}, t\in \Re$$ and we then take the inner product with the signal $f(t)$ to obtain the wavelet coefficients $$\label{coefficients} \widetilde f (l,t^{'})= \int_{\Re}f(t)\psi^{*}_{lt'}(t) dt .$$ The wavelet coefficients are then mapped in wavelet space, which has translation and dilation as coordinates, and so periodic behavior shows
up as a pattern over all translations at a specific dilation. By way of example, Figure \[sinecwt\] shows the real part and the power of the continuous wavelet transform (second and bottom panel, respectively) for a sinusoidal signal of varying frequency (top panel). Here, the real part of the transform shows oscillatory behavior corresponding to the two periodicities of the sinusoidal signal at dilations of 3s and 6s with a break in translation at 50s corresponding to the time where the change in frequency occurs. The bottom panel in Figure \[sinecwt\] shows that the power of the continuous wavelet transform is concentrated at these two frequencies as well. The hatched area in both panels of Figure \[sinecwt\] represents the cone of influence: the region where edge effects become important. It arises because discontinuities at the beginning and end of a finite time series result in a decrease in the wavelet coefficient power. Also shown in the header of Figure \[sinecwt\] are the number of dilations used ($N_l$) and the ranges of dilations explored. We discuss $\alpha$ and the normalization of Figure \[sinecwt\] in the following section. ### Significance Tests {#sigtestsection} Significance tests can be created for the continuous wavelet transform
and here we follow @tor98. First, one compares the wavelet power with that of an appropriate background spectrum. We use the univariate lag-1 auto-regressive \[AR(1)\] process given by $$\label{noiseeqn} x_{n}=\alpha x_{n-1} + z_{n}$$ where $\alpha$ is the assumed lag-1 autocorrelation and $z_{n}$ is a random deviate taken from white noise. Note that $\alpha = 0$ gives a white noise process. Throughout this paper, we will use “white noise” to refer to an AR(1) process with $\alpha = 0$. Red noise is sometimes used to refer to noise with $\alpha = 1$; however, throughout this paper we apply the term to any non-zero $\alpha$. The normalized discrete Fourier power spectrum of this process is $$\label{fouriereqn} P_{j}=\frac{1-\alpha^{2}}{1 + \alpha^{2} - 2 \alpha \cos(2 \pi \delta t /\tau_{j})}$$ where $\tau_{j}$ is the associated Fourier period for a scale $l_{j}$. We use the above two equations to model a white noise or red noise spectrum. The global wavelet power spectrum (GWPS) is obtained by averaging in time $$\widetilde f_{G}^{2}(l_{j})=\frac{1}{N_{j}} \sum^{i'_{j}}_{i=i_{j}}|\widetilde f(l_{j},t'_{i})|^{2}.$$ Here, $i_{j}$ and $i'_{j}$ are the indices of the initial and final translations $t'_{i}$ outside of the cone of influence at a given scale $l_{j}$. $N_{j}$ is the number of translations $t'_{i}$ outside
the cone of influence at that scale. Assuming a background spectrum given by Eqn. \[fouriereqn\] we estimate the autocorrelation coefficient ($\alpha$) by calculating the lag-1 and lag-2 autocorrelations, $\alpha_{1}$ and $\alpha_{2}$. The autocorrelation coefficient is then estimated as $\alpha = (\alpha_{1} + \sqrt{\alpha_{2}})/2$. The background spectrum $P_{j}$ then allows us to compute the confidence levels. It is assumed that the time series has a mean power spectrum given by Eqn. \[fouriereqn\] and so if a peak in the wavelet power spectrum is significantly above this background spectrum, then the peak can be assumed to be a true feature. If the values in the time series $f(t)$ are normally distributed, we expect the wavelet power $|\widetilde f|^{2}$ to be $\chi^{2}$ distributed with two degrees of freedom ($\chi^{2}_{2}$). The square of a normally distributed variable is $\chi^{2}$ distributed with one degree of freedom and the second degree of freedom comes from the fact that both the real and imaginary parts of the complex $\widetilde f$ are normally distributed. For example, to determine the 95$\%$ confidence level, one multiplies the background spectrum (Eqn. \[fouriereqn\]) by the 95th percentile value for $\chi_{2}^{2}$. In Figure \[sinegwps\] we show the GWPS of a time series along
with 99$\%$ and 95$\%$ confidence levels for a red noise process and the 99$\%$ confidence level for a white noise process. The distribution for the local wavelet power spectrum is $$\frac{|\widetilde f(l_{j},t'_{i})|^2}{\sigma^{2}} \Rightarrow P_{j}\frac{\chi^{2}_{\nu}}{\nu}$$ where the arrow means ``distributed as,'' $\sigma^{2}$ is the variance, and $\nu$ is the number of degrees of freedom, which is two here. The indices on the scale $l$ are $j$=1,2,...,$J$ where $J$ is the number of scales, and the indices on the translation $t'$ are $i$=1,2,...,$N_{data}$. We evaluate this equation at each scale to get 95$\%$ confidence contour lines and in this paper our continuous transforms are normalized to the 95$\%$ confidence level for the corresponding red noise process. Doing this allows one to see the strength of the wavelet coefficients relative to the 95$\%$ confidence level of a red noise process. ### The Cross-Wavelet Transform {#xwtsection} Although the continuous wavelet transform is useful in examining how a time series varies in time and scale, it does not by itself assign a characteristic timescale when power is spread over a range of dilations. Since a quasi-periodic signal has no unique dilation, we use the cross-wavelet transform (XWT), which filters out noise and
reveals the QPO more clearly. Here we use the XWT introduced by @kel03. After the continuous transform identifies that a periodic pattern exists in the data, the dilation that characterizes this period is obtained from the global wavelet power spectrum and is used to create a sinusoidal mock signal. The continuous wavelet transform coefficients of the data signal $f_{a}(t)$ are then multiplied by the complex conjugate of the continuous transform coefficients of a mock signal $f_{m}(t)$. The results are mapped out in wavelet space and analyzed for a correlation. The cross-wavelet transform takes the form $$\widetilde f_{c} (l,t')= \widetilde f_a(l,t')\widetilde f^{*}_m(l,t')$$ where the continuous wavelet coefficients $\widetilde f_a$ and $\widetilde f_m$ are given by Equation \[coefficients\]. Figure \[sinecross\] shows the cross-wavelet for the same sinusoidal signal of varying frequency used in Figure \[sinecwt\]. The mock signal was calculated using the 6s period found in the wavelet power spectrum (see Fig. \[sinegwps\]) and as the concentrations in the real and power panels of Figure \[sinecross\] show, the cross-wavelet finds that this 6s period exists in the first half of the time series, illustrating the cross-wavelet’s ability to highlight a QPO. The reader may refer to @kel03 for a full review of
the cross-wavelet technique used here. Structure Function Analysis {#sfanalysissection} --------------------------- Since the global wavelet power spectrum compares the observed signal to expected levels of red noise and white noise, we created structure functions (SFs) for each of our observations to see which noise process dominates the signal at different times. A structure function measures the mean squared difference between data points separated by a time lag, providing an alternate method of quantifying time variations. Here we use a first-order structure function [@sim85]: $$SF(\delta t) = <[F(t) - F(t + \delta t)]^{2}>$$ where $F(t)$ is the flux at time $t$ and $\delta t$ is a time lag. The slope $\alpha$ of the SF curve in $log(SF)-log(\delta t)$ space depends on the noise processes underlying the signal, giving us an indication of the nature of the process of variation. If $\alpha = 1$, red noise dominates, while for flatter slopes of $\alpha = 0$ Poisson photon noise is significant. A plateau at short time lags is due to measurement noise. The transition from plateau to power-law in the structure function curve determines where the dominant underlying noise process changes in the object. The point of turnover from power-law to plateau at longer time lags corresponds to
a maximum characteristic timescale. ### Effects of Pileup {#sfpileupsection} We measure the presence of pile-up in our observations by using the SAS task EPATPLOT and find that the majority of our sources show varying degrees of pile-up. For example, as previously shown in Section \[obssection\], MKN 421 Obs. ID: 0099280101 has a modest amount of pile-up (see Figure \[epatplot\]). In the structure function of this observation (left panel, Figure  \[structpileup\]), the flat portion of the structure function curve should have a value of $log(SF)=1$, which corresponds to the Poisson photon noise inherent in the photon statistics. However, here it falls below the Poisson photon noise level. To remove the pile-up we exclude the central core of the source in the event file since pile-up is more likely to occur here. For this subtracted data, the EPATPLOT output indicates that there is no pile-up and the SF curve is then at the expected value for Poisson photon noise (right panel, Figure \[structpileup\]). Pile-up affects the SF because it lowers the overall count rate and thereby Poisson photon noise is underreported. We correct for pile-up in the rest of our data by adding a fixed value to $log(SF)$, moving the flat part
of the structure function curve up to 1. All of our observations had less than 5$\%$ pile-up except for PKS 2155-304 Obs. ID 124930301 (6.5$\%$) and both observations of MKN 421 ($\sim 10\%$). Overall, the percentage of pile-up in our sample increases with the number of counts except for NGC 4151 Obs. ID 112830201 which is 5$\%$ piled-up and has an average of only 25 counts. Results ------- ### Wavelet Analysis Results {#wavelet_results} Of the observations that we analyzed, only one showed a quasi-period of interest (at 3.3 ksec), and this occurred in an observation of 3C 273 (ID 126700301). The continuous wavelet transform result for this observation is shown in Figure \[3ccwt\] with the quasi-period circled in the real and power plots (second and third panel, respectively). One can see that the quasi-period appears in the last two-thirds of the observation. In the real plot, the concentrations match up with peaks in the light curve, and the power is concentrated at $4.2\times10^{4}$ s. The wavelet is sampled with 220 dilations ($N_{l}$) ranging between $\sim 207.2$ s and $2.3\times10^{4}$ s. We note that the data in Figure \[3ccwt\] are binned from 5 s to 100 s for clarity and that
we only show the first 56 ks due to background flaring at the end of the observation. We note that including the periods with background flaring does not change our results. The $\alpha$ found from autocorrelation analysis for the unbinned data is 0.14 and this value is used to reach the conclusions in this paper. The 3.3 ks quasi-period is also evident in the Global Wavelet Power Spectrum (GWPS, Figure \[3cgwps\]), which is calculated by averaging the wavelet power spectra over all times. In searching for quasi-periodic behavior we excluded time scales above 25$\%$ of the time series length, where too few cycles would be present to provide a convincing result using spectral methods, and where the cone of influence becomes important for the wavelet coefficients. On short time scales, experience has shown that sources often exhibit a broad distribution of power, with local maxima not well-separated from the mean power level. We selected a lower bound for our search by visual identification of such behavior in the GWPS, in conjunction with a concomitant change in behavior of the SF. The solid line in Figure \[3cgwps\] is the power spectrum of the signal, which is compared to the power
spectrum of white and red noise random processes (broken lines). One can see that the 3.3 ks detection exceeds the expected levels of white and red noise at the 99$\%$ significance level, i.e. the detected power exceeds that of 99$\%$ of realizations of the noise random processes (the significance of this signal is 99.979$\%$ relative to red noise with $\alpha = 0.14$). The origins of the white and red noise power spectra were discussed in Section \[sigtestsection\]. The cross-wavelet analysis for 3C 273 (Fig. \[3ccross\]) supports the conclusion that a period of 3.3 ks is indeed present. Here, the XWT (see Section \[xwtsection\]) compares a mock sinusoidal signal with a period of 3282 s with the 3C 273 light curve. The concentration in the cross-wavelet transform shows that the 3.3 ks signal is present throughout the observation. As one can see, by comparing the cross-wavelet signals in juxtaposed bands, the 3.3 ks periodicity can be traced over the entire interval. In the CWT (Figure \[3ccwt\]) the 3.3 ks signal is particularly strong at late times, and so, due to the limited dynamic range of the rendering, is not evident early in the time interval in that figure. This periodicity is
not detected in the other three observations of this object. In the 58 ks (Obs. ID 159960101) and 60 ks (Obs. ID 126700801) observations of 3C 273, there is a signal at 5000 s, but it does not rise above the 99$\%$ red noise confidence level (Fig. \[gwpsall1\]). We note that a Fourier analysis of 3C 273 yielded a feature at 3.3 ks, but with a lower significance ($<$ 3$\sigma$) than is found with the wavelet technique. We performed Monte Carlo simulations in order to estimate the probability that the wavelet technique would claim a spurious detection. As a baseline, we created one thousand simulated light curves for Poisson photon noise (Fig. 11) to represent random observational errors, i.e. photon counting statistics. The simulated light curves were 56 ks long with 5 s intervals and we multiplied the random deviate $z_{n}$ by 40 to produce an average spread in the y-axis of 40 counts to resemble the 3C 273 light curve. Most of the false detections occur at timescales less than 2000 s, which corresponds to 3.6$\%$ of the length of the observation and supports our earlier point that one can select the lower limit to search for periodicities by visual identification
of broad distributions of power on short time scales in the GWPS. On average, the wavelet technique claims a detection (at or above the significance level reported by the wavelet analysis for 3C 273) 0.4$\%$ of the time (Fig. \[hist\]). The Monte Carlo simulations suggest a significantly higher rate of false detections than is implied by the statistics based on the GWPS. However, they are consistent with the latter estimates [*within the margin of error*]{}, given that only 1000 realizations of a time series were generated. Better simulation statistics could be achieved by increasing the number of time series realizations by several orders of magnitude, but devoting time and resources to this is not warranted. Visual inspection of the simulated light curves reveals that they differ qualitatively from the actual time series: a better correspondence can be achieved with the addition of randomly distributed Gaussian-profile bursts of fixed, small amplitude. Evidently, the process under study is not strictly a stationary, first order one, and the formal statistical measures of significance should be regarded as only indicative of the high likelihood of a quasi-periodic phenomenon in this source. A more detailed analysis, allowing for nonstationary processes, is beyond the scope of
this paper. While we have performed 19 independent experiments and found only 1 detection, we point out that of our 19 data sets only 7 have average counts (Table 1) equal to or greater than that of the observation in which we find the QPO. One cannot expect to see, with equal likelihood, a periodicity of equal strength in these weaker AGN. We note that independently, the XWT finds evidence for power throughout the observation at 3.3 ks (Fig. 8). We measured the 3.3 ks signal strength across the time series from the power plot of Figure 8. The power of the 3.3 ks signal is $\sim$ 4000 times stronger than at shorter and longer dilations, illustrating that the 3.3 ks period is well-constrained. We also ran the XWT on this time series with analyzing signals of periods 2.3 ks and 4.3 ks. The average power of these signals is $\sim$ 2 times less than the average power of the 3.3 ks signal. This demonstrates that the XWT is picking out a well-defined, persistent signal, and will not misleadingly suggest a signal where there is none. We did not find any significant detections for the other nine AGN in our sample. No features had
significances that exceeded the 99$\%$ confidence levels for both white noise and red noise processes (see Figs. \[gwpsall1\], \[gwpsall2\]) while also appearing at an acceptable timescale: candidate features appeared at timescales either too short (i.e. shorter than 3.6$\%$ of the length of the observation) or too long (i.e. greater than half the length of the observation). Some of the AGN in our sample have been studied before and previous reports of QPOs exist in the literature. We will discuss those results in more detail in Section \[discussionprevious\]. ### Structure Function Results {#sfanalysis} After correcting for pile-up, we subtract a constant level corresponding to Poisson photon noise from the structure functions (Figures \[sfminus1\], \[sfminus2\]). The slopes are measured by fitting a power-law to the SF curve using the least-squares method in $log(SF)-log(\delta t)$ space. Slopes are listed in Table \[sftbl\] along with the characteristic timescales of variability, which were measured by identifying the times of turnover from plateau to power-law and vice versa in the SF curve. All of our structure functions have a flat plateau at short timescales corresponding to Poisson photon noise, most have a power-law portion, and some have a plateau at long timescales. We include light curves in Figures \[lightcurves1\], \[lightcurves2\]
for comparison with the structure functions. The structure functions for all four observations of 3C 273 are shown in the first four panels of Figure \[sfminus1\]. The observation with the 3.3 ks quasi-period (upper left, Figure \[sfminus1\]) is dominated by whitish noise around 3000s, as inferred from its flat slope; however, the SF is unsuited to quantifying the autocorrelation coefficient precisely. Recall that the wavelet analysis finds an autocorrelation coefficient of $\alpha = 0.14$, relatively small, and consistent with a flattish structure function. We note that this observation also has the greatest excess of such noise above the photon noise, compared to the other three observations, consistent with this being a unique time series out of all those analyzed. Discussion ========== Mass Estimates of 3C 273 ------------------------ There are several mass estimates for 3C 273 obtained from different methods. One method is reverberation mapping whereby one uses the time lag of the emission-line light curve with respect to the continuum light curve to determine the light crossing size of the broad line region (BLR) and then assumes Keplerian conditions in the broad line region gas motion (i.e. $M_{BH}=v^{2}R_{BLR}/G$) [@pet00]. Reverberation mapping results based on the optical continuum (i.e. Balmer lines)
place the mass of the central black hole in 3C 273 at $2.35^{+0.37}_{-0.33}\times 10^8$ M$_\sun$ [@kas00]. In a different study, @pian use [*Hubble Space Telescope*]{} UV luminosities to find the broad line region size. To do so, they derive a relationship between $R_{BLR}$ and UV luminosity using the empirical relationship found by @kas00 between $R_{BLR}$ and the optical luminosity. @pian obtain a mass of $4.0^{+2}_{-2}\times 10^{8}$ M$_\sun$ for 3C 273, consistent with the @kas00 value within errors. In another study, @paltani look at the strongest broad emission UV lines (Ly$\alpha$ and C IV) in archival [*International Ultraviolet Explorer*]{} observations and obtain a mass of $6.59^{+1.86}_{-0.9}\times10^{9}$ M$_\sun$ for the central supermassive black hole in 3C 273. There are also mass estimates for 3C 273 that do not come from reverberation mapping. @lia03 find a black hole mass of $2\times 10^7$ M$_\sun$ by generalizing the Elliot-Shapiro relation to the Klein-Nishina regime for 3C 273’s gamma-ray flux obtained from EGRET. Another method is to use the @mclure correlation between host galaxy luminosity and black hole mass, which obtains a mass of $1.6 \times 10^9$ M$_\sun$ with an uncertainty of 0.6 dex [@wang]. Underlying Physical Process for the QPO in 3C 273 ------------------------------------------------- If the
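The mass estimates quoted in the abstract follow from associating the 3.3 ks quasi-period with a Keplerian orbital period at a given radius. The sketch below is ours, not the authors' code: it assumes the period $T = 2\pi\sqrt{r^{3}/GM}$ evaluated at $r = x\,R_S$ with $R_S = 2GM/c^{2}$, which reproduces the numbers in the abstract.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m s^-1
M_SUN = 1.989e30   # solar mass, kg

def mass_from_orbital_period(T, x):
    """Black hole mass (in M_sun) whose Keplerian orbital period at
    radius r = x * R_S (with R_S = 2GM/c^2) equals T seconds.
    Inverting T = 2*pi*sqrt(r^3 / (G M)) with r = 2x * G M / c^2 gives
    M = T c^3 / (2*pi G (2x)^{3/2})."""
    return T * C**3 / (2 * math.pi * G * (2 * x) ** 1.5) / M_SUN

# 3.3 ks quasi-period at the Schwarzschild last stable orbit (3 R_S)
print(f"{mass_from_orbital_period(3300.0, 3.0):.2e}")  # ~7.3e6 M_sun
# ...and at 0.6 R_S, the last stable orbit quoted for maximal rotation
print(f"{mass_from_orbital_period(3300.0, 0.6):.2e}")  # ~8.1e7 M_sun
```

Running this recovers masses close to the $7.3\times 10^6$ M$_\sun$ and $8.1\times 10^7$ M$_\sun$ values quoted in the abstract.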
--- abstract: 'With the goal of recovering high-quality image content from its degraded version, image restoration enjoys numerous applications, such as in surveillance, computational photography, medical imaging, and remote sensing. Recently, convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for the image restoration task. Existing CNN-based methods typically operate either on full-resolution or on progressively low-resolution representations. In the former case, spatially precise but contextually less robust results are achieved, while in the latter case, semantically reliable but spatially less accurate outputs are generated. In this paper, we present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network, and receiving strong contextual information from the low-resolution representations. The core of our approach is a multi-scale residual block containing several key elements: (a) parallel multi-resolution convolution streams for extracting multi-scale features, (b) information exchange across the multi-resolution streams, (c) spatial and channel attention mechanisms for capturing contextual information, and (d) attention based multi-scale feature aggregation. In a nutshell, our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details. Extensive experiments on five real image benchmark datasets demonstrate that our method,
named as MIRNet, achieves state-of-the-art results for a variety of image processing tasks, including image denoising, super-resolution and image enhancement.' author: - Syed Waqas Zamir - Aditya Arora - Salman Khan - Munawar Hayat - Fahad Shahbaz Khan - 'Ming-Hsuan Yang' - Ling Shao bibliography: - 'MIRNet.bib' title: Learning Enriched Features for Real Image Restoration and Enhancement --- Introduction ============ Image content is exponentially growing due to the ubiquitous presence of cameras on various devices. During image acquisition, degradations of different severities are often introduced, either because of the physical limitations of cameras or due to inappropriate lighting conditions. For instance, smartphone cameras come with narrow apertures and have small sensors with limited dynamic range. Consequently, they frequently generate noisy and low-contrast images. Similarly, images captured under unsuitable lighting are either too dark or too bright. The art of recovering the original clean image from its corrupted measurements is studied under the image restoration task. It is an ill-posed inverse problem, due to the existence of many possible solutions. Recently, deep learning models have made significant advancements for image restoration and enhancement, as they can learn strong (generalizable) priors from large-scale datasets. Existing CNNs typically follow
one of the two architecture designs: 1) an encoder-decoder, or 2) high-resolution (single-scale) feature processing. The encoder-decoder models [@ronneberger2015u; @kupyn2019deblurgan; @chen2018; @zhang2019kindling] first progressively map the input to a low-resolution representation, and then apply a gradual reverse mapping to the original resolution. Although these approaches learn a broad context by spatial-resolution reduction, on the downside, the fine spatial details are lost, making it extremely hard to recover them in the later stages. On the other hand, the high-resolution (single-scale) networks [@dong2015image; @DnCNN; @zhang2020residual; @ignatov2017dslr] do not employ any downsampling operation, and thereby produce images with spatially more accurate details. However, these networks are less effective in encoding contextual information due to their limited receptive field. Image restoration is a position-sensitive procedure, where pixel-to-pixel correspondence from the input image to the output image is needed. Therefore, it is important to remove only the undesired degraded image content, while carefully preserving the desired fine spatial details (such as true edges and texture). Such functionality for segregating the degraded content from the true signal can be better incorporated into CNNs with the help of large context, *e.g.*, by enlarging the receptive field. Towards this goal, we develop a new *multi-scale* approach that maintains
the original high-resolution features along the network hierarchy, thus minimizing the loss of precise spatial details. Simultaneously, our model encodes multi-scale context by using *parallel convolution streams* that process features at lower spatial resolutions. The multi-resolution parallel branches operate in a manner that is complementary to the main high-resolution branch, thereby providing us with more precise and contextually enriched feature representations. The main difference between our method and existing multi-scale image processing approaches is the way we aggregate contextual information. First, the existing methods [@tao2018scale; @nah2017; @gu2019self] process each scale in isolation, and exchange information only in a top-down manner. In contrast, we progressively fuse information across all the scales at each resolution-level, allowing both top-down and bottom-up information exchange. Simultaneously, both fine-to-coarse and coarse-to-fine knowledge exchange is laterally performed on each stream by a new *selective kernel* fusion mechanism. Different from existing methods that employ a simple concatenation or averaging of features coming from multi-resolution branches, our fusion approach dynamically selects the useful set of kernels from each branch's representations using a self-attention approach. More importantly, the proposed fusion block combines features with varying receptive fields, while preserving their distinctive complementary characteristics. Our main contributions in this work include: -
A novel feature extraction model that obtains a complementary set of features across multiple spatial scales, while maintaining the original high-resolution features to preserve precise spatial details.

- A regularly repeated mechanism for information exchange, where the features across multi-resolution branches are progressively fused together for improved representation learning.

- A new approach to fuse multi-scale features using a selective kernel network that dynamically combines variable receptive fields and faithfully preserves the original feature information at each spatial resolution.

- A recursive residual design that progressively breaks down the input signal in order to simplify the overall learning process, and allows the construction of very deep networks.

- Comprehensive experiments are performed on five real image benchmark datasets for different image processing tasks, including image denoising, super-resolution and image enhancement. Our method achieves state-of-the-art results on *all* five datasets. Furthermore, we extensively evaluate our approach on practical challenges, such as generalization ability across datasets.

Related Work ============ With the rapidly growing image media content, there is a pressing need to develop effective image restoration and enhancement algorithms. In this paper, we propose a new approach capable of performing image denoising, super-resolution and image enhancement. Different from existing works for these problems,
our approach processes features at the original resolution in order to preserve spatial details, while effectively fusing contextual information from multiple parallel branches. Next, we briefly describe the representative methods for each of the studied problems. **Image denoising.** Classic denoising methods are mainly based on modifying transform coefficients [@yaroslavsky1996local; @donoho1995noising; @simoncelli1996noise] or averaging neighborhood pixels [@smith1997susan; @tomasi1998bilateral; @perona1990scale; @rudin1992nonlinear]. Although the classical methods perform well, the self-similarity [@efros1999texture] based algorithms, *e.g.*, NLM [@NLM] and BM3D [@BM3D], demonstrate promising denoising performance. Numerous patch-based algorithms that exploit redundancy (self-similarity) in images are later developed [@dong2012nonlocal; @WNNM; @mairal2009non; @hedjam2009markovian]. Recently, deep learning-based approaches [@MLP; @RIDNet; @Brooks2019; @Gharbi2016; @CBDNet; @N3Net; @DnCNN; @FFDNetPlus] make significant advances in image denoising, yielding more favorable results than those of the hand-crafted methods. **Super-resolution (SR).** Prior to the deep-learning era, numerous SR algorithms were proposed based on sampling theory [@keys1981cubic; @irani1991improving], edge-guided interpolation [@allebach1996edge; @li2001new; @zhang2006edge], natural image priors [@kim2010single; @xiong2010robust], patch-exemplars [@chang2004super; @freedman2011image] and sparse representations [@yang2010image; @yang2008image]. Currently, deep-learning techniques are actively being explored, as they provide dramatically improved results over conventional algorithms. The data-driven SR approaches differ according to their architecture designs [@wang2019deep; @anwar2019deep]. Early methods [@dong2014learning; @dong2015image] take a low-resolution (LR) image as input
and learn to directly generate its high-resolution (HR) version. In contrast to directly producing a latent HR image, recent SR networks [@VDSR; @tai2017memnet; @tai2017image; @hui2018fast] employ the residual learning framework [@He2016] to learn the high-frequency image detail, which is later added to the input LR image to produce the final super-resolved result. Other networks designed to perform SR include recursive learning [@kim2016deeply; @han2018image; @ahn2018fast], progressive reconstruction [@wang2015deep; @Lai2017], dense connections [@tong2017image; @wang2018esrgan; @zhang2020residual], attention mechanisms [@RCAN; @dai2019second; @zhang2019residual], multi-branch learning [@Lai2017; @EDSR; @dahl2017pixel; @li2018multi], and generative adversarial models [@wang2018esrgan; @park2018srfeat; @sajjadi2017enhancenet; @SRResNet]. **Image enhancement.** Oftentimes, cameras provide images that are less vivid and lack contrast. A number of factors contribute to the low quality of images, including unsuitable lighting conditions and physical limitations of camera devices. For image enhancement, histogram equalization is the most commonly used approach. However, it frequently produces under-enhanced or over-enhanced images. Motivated by the Retinex theory [@land1977retinex], several enhancement algorithms mimicking human vision have been proposed in the literature [@bertalmio2007; @palma2008perceptually; @jobson1997multiscale; @rizzi2004retinex]. Recently, CNNs have been successfully applied to general, as well as low-light, image enhancement problems. Notable works employ Retinex-inspired networks [@Shen2017; @wei2018deep; @zhang2019kindling], encoder-decoder networks [@chen2018encoder; @Lore2017; @ren2019low], and generative adversarial networks [@chen2018deep;
@ignatov2018wespe; @deng2018aesthetic].

![Framework of the proposed network MIRNet that learns enriched feature representations for image restoration and enhancement. MIRNet is based on a recursive residual design. At the core of MIRNet is the multi-scale residual block (MRB), whose main branch is dedicated to maintaining spatially-precise high-resolution representations through the entire network, while the complementary set of parallel branches provides better contextualized features. It also allows information exchange across parallel streams via selective kernel feature fusion (SKFF) in order to consolidate the high-resolution features with the help of low-resolution features, and vice versa.[]{data-label="fig:framework"}](Images/framework.png "fig:"){width="\textwidth"}

Proposed Method =============== In this section, we first present an overview of the proposed MIRNet for image restoration and enhancement, illustrated in Fig. \[fig:framework\]. We then provide details of the *multi-scale residual block*, which is the fundamental building block of our method, containing several key elements: **(a)** parallel multi-resolution convolution streams for extracting (fine-to-coarse) semantically-richer and (coarse-to-fine) spatially-precise feature representations, **(b)** information exchange across multi-resolution streams, **(c)** attention-based aggregation of features arriving from multiple streams, **(d)** dual-attention units to capture contextual information in both spatial and channel dimensions, and **(e)** residual resizing modules to perform downsampling and upsampling operations. **Overall Pipeline.** Given an image $\mathbf{I}
\in \mathbb{R}^{H\times W \times 3}$, the network first applies a convolutional layer to extract low-level features $\mathbf{X_0} \in \mathbb{R}^{H\times W \times C}$. Next, the feature maps $\mathbf{X_0}$ pass through $N$ recursive residual groups (RRGs), yielding deep features $\mathbf{X_d} \in \mathbb{R}^{H\times W \times C}$. We note that each RRG contains several multi-scale residual blocks, which are described in Section \[sec:msrb\]. Next, we apply a convolution layer to the deep features $\mathbf{X_d}$ and obtain a residual image $\mathbf{R} \in \mathbb{R}^{H\times W \times 3}$. Finally, the restored image is obtained as $\mathbf{\hat{I}} = \mathbf{I} + \mathbf{R}$. We optimize the proposed network using the Charbonnier loss [@charbonnier1994]: $$\label{Eq:loss} \mathcal{L}(\mathbf{\hat{I}},\mathbf{I}^*) = \sqrt{ {\|\mathbf{\hat{I}}-\mathbf{I}^*\|}^2 + {\varepsilon}^2 },$$ where $\mathbf{I}^*$ denotes the ground-truth image, and $\varepsilon$ is a constant which we empirically set to $10^{-3}$ for all the experiments. Multi-scale Residual Block (MRB) {#sec:msrb} -------------------------------- In order to encode context, existing CNNs [@ronneberger2015u; @newell2016stacked; @noh2015learning; @xiao2018simple; @badrinarayanan2017segnet; @peng2016recurrent] typically employ the following architecture design: **(a)** the receptive field of neurons is fixed in *each* layer/stage, **(b)** the spatial size of feature maps is *gradually* reduced to generate a semantically strong low-resolution representation, and **(c)** a high-resolution representation is *gradually* recovered from the low-resolution representation. However, it
is well-understood in vision science that in the primate visual cortex, the sizes of the local receptive fields of neurons in the same region are different [@hubel1962receptive; @riesenhuber1999hierarchical; @serre2007robust; @hung2005fast]. Therefore, such a mechanism of collecting multi-scale spatial information in the same layer needs to be incorporated in CNNs [@huang2017multi; @hrnet; @fourure2017residual; @Szegedy2015]. In this paper, we propose the multi-scale residual block (MRB), as shown in Fig. \[fig:framework\]. It is capable of generating a spatially-precise output by maintaining high-resolution representations, while receiving rich contextual information from low-resolution representations. The MRB consists of multiple (three in this paper) fully-convolutional streams connected in parallel. It allows information exchange across parallel streams in order to consolidate the high-resolution features with the help of low-resolution features, and vice versa. Next, we describe the individual components of MRB. **Selective kernel feature fusion (SKFF).** One fundamental property of neurons present in the visual cortex is to be able to change their receptive fields according to the stimulus [@li2019selective]. This mechanism of adaptively adjusting receptive fields can be incorporated in CNNs by using multi-scale feature generation (in the same layer) followed by feature aggregation and selection. The most commonly used approaches for feature aggregation include simple concatenation or summation.
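For concreteness, these two baseline aggregation schemes can be sketched as follows; this is a minimal NumPy sketch (not the paper's code), assuming three parallel streams already brought to a common shape:

```python
import numpy as np

rng = np.random.default_rng(0)
# Three parallel streams, assumed already resized to a common (H, W, C) shape.
L1, L2, L3 = (rng.standard_normal((8, 8, 16)) for _ in range(3))

# Summation keeps the channel count, but weights every stream equally.
U_sum = L1 + L2 + L3                           # shape (8, 8, 16)

# Concatenation keeps all information, but triples the channel count,
# so a following convolution needs roughly 3x more parameters.
U_cat = np.concatenate([L1, L2, L3], axis=-1)  # shape (8, 8, 48)

assert U_sum.shape == (8, 8, 16)
assert U_cat.shape == (8, 8, 48)
```

Both schemes are static: the relative contribution of each stream is fixed regardless of the input, which is precisely the limitation the selective kernel fusion described next is designed to remove.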
However, these choices provide limited expressive power to the network, as reported in [@li2019selective]. In MRB, we introduce a nonlinear procedure for fusing features coming from multiple resolutions using a self-attention mechanism. Motivated by [@li2019selective], we call it selective kernel feature fusion (SKFF). The SKFF module performs dynamic adjustment of receptive fields via two operations, *Fuse* and *Select*, as illustrated in Fig. \[fig:skff\]. The *fuse* operator generates global feature descriptors by combining the information from multi-resolution streams. The *select* operator uses these descriptors to recalibrate the feature maps (of different streams) followed by their aggregation. Next, we provide details of both operators for the three-stream case, but one can easily extend it to more streams. **(1) Fuse:** SKFF receives inputs from three parallel convolution streams carrying different scales of information. We first combine these multi-scale features using an element-wise sum as: $\mathbf{L = L_1 + L_2 + L_3}$. We then apply global average pooling (GAP) across the spatial dimension of $\mathbf{L} \in \mathbb{R}^{H\times W \times C}$ to compute channel-wise statistics $\mathbf{s} \in \mathbb{R}^{1\times 1 \times C}$. Next, we apply a channel-downscaling convolution layer to generate a compact feature representation $\mathbf{z} \in \mathbb{R}^{1\times 1 \times r}$, where $r=\frac{C}{8}$ for all our
experiments. Finally, the feature vector $\mathbf{z}$ passes through three parallel channel-upscaling convolution layers (one for each resolution stream) and provides us with three feature descriptors $\mathbf{v_1}, \mathbf{v_2}$ and $\mathbf{v_3}$, each with dimensions $1\times1\times C$. **(2) Select:** this operator applies the softmax function to $\mathbf{v_1}, \mathbf{v_2}$ and $\mathbf{v_3}$, yielding attention activations $\mathbf{s_1}, \mathbf{s_2}$ and $\mathbf{s_3}$ that we use to adaptively recalibrate the multi-scale feature maps $\mathbf{L_1}, \mathbf{L_2}$ and $\mathbf{L_3}$, respectively. The overall process of feature recalibration and aggregation is defined as: $\mathbf{U = s_1 \cdot L_1 + s_2\cdot L_2 + s_3 \cdot L_3}$. Note that the SKFF uses $\sim6\times$ fewer parameters than aggregation with concatenation but generates more favorable results (an ablation study is provided in the experiments section). **Dual attention unit (DAU).** While the SKFF block fuses information across multi-resolution branches, we also need a mechanism to share information within a feature tensor, both along the spatial and the channel dimensions. Motivated by the advances of recent low-level vision methods [@RCAN; @RIDNet; @dai2019second; @zhang2019residual] based on the attention mechanisms [@hu2018squeeze; @wang2018non], we propose the dual attention unit (DAU) to extract features in the convolutional streams. The schematic of DAU is shown in Fig. \[fig:dau\]. The DAU suppresses less useful features and only
allows more informative ones to pass further. This feature recalibration is achieved by using channel attention [@hu2018squeeze] and spatial attention [@woo2018cbam] mechanisms. **(1) Channel attention (CA)** branch exploits the inter-channel relationships of the convolutional feature maps by applying *squeeze* and *excitation* operations [@hu2018squeeze]. Given a feature map $\mathbf{M} \in \mathbb{R}^{H\times W \times C}$, the squeeze operation applies global average pooling across spatial dimensions to encode global context, thus yielding a feature descriptor $\mathbf{d} \in \mathbb{R}^{1\times 1 \times C}$. The excitation operator passes $\mathbf{d}$ through two convolutional layers followed by the sigmoid gating and generates activations $\mathbf{\hat{d}} \in \mathbb{R}^{1\times 1 \times C}$. Finally, the output of the CA branch is obtained by rescaling $\mathbf{M}$ with the activations $\mathbf{\hat{d}}$. **(2) Spatial attention (SA)** branch is designed to exploit the inter-spatial dependencies of convolutional features. The goal of SA is to generate a spatial attention map and use it to recalibrate the incoming features $\mathbf{M}$. To generate the spatial attention map, the SA branch first independently applies global average pooling and max pooling operations on features $\mathbf{M}$ along the channel dimension and concatenates the outputs to form a feature map $\mathbf{f} \in \mathbb{R}^{H\times W \times 2}$. The map $\mathbf{f}$ is passed through a convolution
and sigmoid activation to obtain the spatial attention map $\mathbf{\hat{f}} \in \mathbb{R}^{H\times W \times 1}$, which we then use to rescale $\mathbf{M}$. **Residual resizing modules.** The proposed framework employs a recursive residual design (with skip connections) to ease the flow of information during the learning process. In order to maintain the residual nature of our architecture, we introduce residual resizing modules to perform downsampling (Fig. \[fig:downsample\]) and upsampling (Fig. \[fig:upsample\]) operations. In MRB, the size of feature maps remains constant along convolution streams. On the other hand, across streams the feature map size changes depending on the input resolution index $i$ and the output resolution index $j$. If $i<j$, the input feature tensor is downsampled, and if $i>j$, the feature map is upsampled. To perform $2\times$ downsampling (halving the spatial dimension and doubling the channel dimension), we apply the module in Fig. \[fig:downsample\] only once. For $4\times$ downsampling, the module is applied twice, consecutively. Similarly, one can perform $2\times$ and $4\times$ upsampling by applying the module in Fig. \[fig:upsample\] once and twice, respectively. Note that in Fig. \[fig:downsample\] we integrate anti-aliasing downsampling [@zhang2019making] to improve the shift-equivariance of our network. [0.49]{} ![image](Images/downsample.png){width="\textwidth"} [0.49]{} ![image](Images/upsample.png){width="\textwidth"} Experiments =========== In this section, we perform
qualitative and quantitative assessment of the results produced by our MIRNet and compare it with the previous best methods. Next, we describe the datasets, and then provide the implementation details. Finally, we report results for **(a)** image denoising, **(b)** super-resolution and **(c)** image enhancement on five real image datasets. The source code and trained models will be released publicly[^1]. Real Image Datasets ------------------- **Image denoising.** **(1) DND [@dnd]** consists of $50$ images captured with four consumer cameras. Since the images are of very high resolution, the dataset providers extract $20$ crops of size $512\times512$ from each image, yielding $1000$ patches in total. All these patches are used for testing (as DND does not contain training or validation sets). The ground-truth noise-free images are not released publicly, therefore the image quality scores in terms of PSNR and SSIM can only be obtained through an online server [@dndwebsite]. **(2) SIDD [@sidd]** is collected specifically with smartphone cameras. Due to the small sensor and high resolution, the noise levels in smartphone images are much higher than those of DSLRs. SIDD contains $320$ image pairs for training and $1280$ for validation. **Super-resolution.** **(1) RealSR [@RealSR]** contains real-world LR-HR image pairs of the same scene captured by
adjusting the focal-length of the cameras. RealSR has both indoor and outdoor images taken with two cameras. The number of training image pairs for scale factors $\times2$, $\times3$ and $\times4$ are $183$, $234$ and $178$, respectively. For each scale factor, $30$ test images are also provided in RealSR. **Image enhancement.** **(1) LoL [@wei2018deep]** is created for the low-light image enhancement problem. It provides 485 images for training and 15 for testing. Each image pair in LoL consists of a low-light input image and its corresponding well-exposed reference image. **(2) MIT-Adobe FiveK [@mit_fivek]** contains $5000$ images of various indoor and outdoor scenes captured with several DSLR cameras in different lighting conditions. The tonal attributes of all images are manually adjusted by five different trained photographers (labelled as experts A to E). As in [@hu2018exposure; @park2018distort; @wang2019underexposed], we also consider the enhanced images of expert C as the ground-truth. Moreover, the first 4500 images are used for training and the last 500 for testing. Implementation Details ---------------------- The proposed architecture is end-to-end trainable and requires no pre-training of sub-modules. We train three different networks for three different restoration tasks. The training parameters, common to all experiments, are the following. We use 3
RRGs, each of which further contains $2$ MRBs. The MRB consists of $3$ parallel streams with channel dimensions of $64, 128, 256$ at resolutions $1, \frac{1}{2}, \frac{1}{4}$, respectively. Each stream has $2$ DAUs. The models are trained with the Adam optimizer ($\beta_1 = 0.9$, and $\beta_2=0.999$) for $7\times10^5$ iterations. The initial learning rate is set to $2\times10^{-4}$. We employ the cosine annealing strategy [@loshchilov2016sgdr] to steadily decrease the learning rate from the initial value to $10^{-6}$ during training. We extract patches of size $128\times128$ from the training images. The batch size is set to $16$ and, for data augmentation, we perform horizontal and vertical flips. Image Denoising --------------- In this section, we demonstrate the effectiveness of the proposed MIRNet for image denoising. We train our network only on the training set of SIDD [@sidd] and directly evaluate it on the test images of both the SIDD and DND [@dnd] datasets. Quantitative comparisons in terms of PSNR and SSIM metrics are summarized in Table \[table:sidd\] and Table \[table:dnd\] for SIDD and DND, respectively. Both tables show that our MIRNet performs favourably against the data-driven, as well as conventional, denoising algorithms. Specifically, when compared to the recent best method VDN [@VDN], our algorithm demonstrates
a performance gain of $0.44$ dB on SIDD and $0.50$ dB on DND. Furthermore, it is worth noting that CBDNet [@CBDNet] and RIDNet [@RIDNet] use additional training data, yet our method provides significantly better results. For instance, our method achieves an $8.94$ dB improvement over CBDNet [@CBDNet] on the SIDD dataset and $1.82$ dB on DND. In Fig. \[fig:dnd example\] and Fig. \[fig:sidd example\], we present visual comparisons of our results with those of other competing algorithms. It can be seen that our MIRNet is effective in removing real noise and produces perceptually-pleasing and sharp images. Moreover, it is capable of maintaining the spatial smoothness of the homogeneous regions without introducing artifacts. In contrast, most of the other methods either yield over-smooth images and thus sacrifice structural content and fine textural details, or produce images with chroma artifacts and blotchy texture.

![Denoising examples from SIDD [@sidd]. Our method effectively removes real noise from challenging images, while better recovering structural content and fine texture. Panels, left to right: Noisy, CBDNet [@CBDNet], RIDNet [@RIDNet], VDN [@VDN], MIRNet (Ours), Reference. PSNR for the two shown crops: 18.25 / 28.84 / 35.57 / 36.39 / **36.97** dB and 18.16 / 20.36 / 29.83 / 30.31 / **31.36** dB.[]{data-label="fig:sidd example"}](Images/Denoising/SIDD/RGB/SSID_0324_noisy "fig:"){width=".16\textwidth"}
**Generalization capability.** The DND and SIDD datasets are acquired with different sets of cameras having different noise characteristics. Since the DND benchmark does not provide training data, setting a new state of the art on DND with our SIDD-trained network indicates the good generalization capability of our approach.

Super-Resolution (SR)
---------------------

We compare our MIRNet against the state-of-the-art SR algorithms (VDSR [@VDSR], SRResNet [@SRResNet], RCAN [@RCAN], LP-KPN [@RealSR]) on the testing images of RealSR [@RealSR] for upscaling factors of $\times2$, $\times3$ and $\times4$. Note that all the benchmarked algorithms are trained on the RealSR [@RealSR] dataset for a fair comparison. In the experiments, we also include bicubic interpolation [@keys1981cubic], which is the most commonly used method for generating super-resolved images. Here, we compute the PSNR and SSIM scores using the
Y channel (in YCbCr color space), as is common practice in the SR literature [@RCAN; @RealSR; @wang2019deep; @anwar2019deep]. The results in Table \[table:realSR\] show that bicubic interpolation provides the least accurate results, indicating its low suitability for dealing with real images. Moreover, the same table shows that the recent method LP-KPN [@RealSR] provides a marginal improvement of only $\sim0.04$ dB over the previous best method RCAN [@RCAN]. In contrast, our method significantly advances the state of the art and consistently yields better image quality scores than the other approaches for all three scaling factors. In particular, compared to LP-KPN [@RealSR], our method provides performance gains of $0.45$ dB, $0.74$ dB, and $0.22$ dB for scaling factors $\times2$, $\times3$ and $\times4$, respectively. The trend is similar for the SSIM metric. Visual comparisons in Fig. \[fig:sr example\] show that our MIRNet recovers content structures effectively. In contrast, VDSR [@VDSR], SRResNet [@SRResNet] and RCAN [@RCAN] reproduce results with noticeable artifacts. Furthermore, LP-KPN [@RealSR] is not able to preserve structures (see near the right edge of the crop). Several more examples are provided in Fig. \[fig:sr crop examples\] to further compare the image reproduction quality of our method against the previous best method [@RealSR]. It
can be seen that LP-KPN [@RealSR] has a tendency to over-enhance the contrast (cols. 1, 3, 4), which in turn causes a loss of detail near dark and highlight areas. In contrast, the proposed MIRNet successfully reconstructs structural patterns and edges (col. 2) and produces images that are natural (cols. 1, 4) and have better color reproduction (col. 5).

**Cross-camera generalization.** The RealSR [@RealSR] dataset consists of images taken with Canon and Nikon cameras at three scaling factors. To test the cross-camera generalizability of our method, we train the network on the training images of one camera and directly evaluate it on the test set of the other camera. Table \[table:realSR generalization\] demonstrates the generalization of competing methods for four possible cases: (a) training and testing on Canon, (b) training on Canon, testing on Nikon, (c) training and testing on Nikon, and (d) training on Nikon, testing on Canon. It can be seen that, for all scales, LP-KPN [@RealSR] and RCAN [@RCAN] show comparable performance. In contrast, our MIRNet exhibits more promising generalization.

Image Enhancement
-----------------

In this section, we demonstrate the effectiveness of our algorithm by evaluating it on the image enhancement task. We report PSNR/SSIM values of our method
and several other techniques in Table \[table:lol\] and Table \[table:fivek\] for the LoL [@wei2018deep] and MIT-Adobe FiveK [@mit_fivek] datasets, respectively. It can be seen that our MIRNet achieves significant improvements over previous approaches. Notably, compared to the recent best methods, MIRNet obtains a $3.27$ dB performance gain over KinD [@zhang2019kindling] on the LoL dataset and a $0.69$ dB improvement over DeepUPE [@wang2019underexposed] on the MIT-Adobe FiveK dataset. We show visual results in Fig. \[Fig:qual\_lol\] and Fig. \[Fig:qual\_fivek\]. Compared to other techniques, our method generates enhanced images that are natural and vivid in appearance and have better global and local contrast.

[Figure: Visual comparison of low-light enhancement approaches on the LoL dataset [@wei2018deep]. Our method reproduces an image that is visually closer to the ground truth in terms of brightness and global contrast. Panels: Input image, LIME [@guo2016lime], CRM [@ying2017bio], Retinex-Net [@wei2018deep], SRIE [@fu2016weighted], KinD [@zhang2019kindling], MIRNet (Ours), Ground-truth.]{data-label="Fig:qual_lol"}
[Figure: Visual results of image enhancement on the MIT-Adobe FiveK [@mit_fivek] dataset. Compared to the state of the art, our MIRNet makes better color and contrast adjustments and produces an image that is vivid, natural and pleasant in appearance. Panels: Input image, HDRNet [@Gharbi2017], DPE [@chen2018deep], DeepUPE [@wang2019underexposed], MIRNet (Ours), Ground-truth.]{data-label="Fig:qual_fivek"}

Ablation Studies
================

In this
---
abstract: 'We compute time-periodic and relative-periodic solutions of the free-surface Euler equations that take the form of overtaking collisions of unidirectional solitary waves of different amplitude on a periodic domain. As a starting guess, we superpose two Stokes waves offset by half the spatial period. Using an overdetermined shooting method, the background radiation generated by collisions of the Stokes waves is tuned to be identical before and after each collision. In some cases, the radiation is effectively eliminated in this procedure, yielding smooth soliton-like solutions that interact elastically forever. We find examples in which the larger wave subsumes the smaller wave each time they collide, and others in which the trailing wave bumps into the leading wave, transferring energy without fully merging. Similarities notwithstanding, these solutions are found quantitatively to lie outside of the Korteweg-de Vries regime. We conclude that quasi-periodic elastic collisions are not unique to integrable model water wave equations when the domain is periodic.'
address: 'Dept of Mathematics, University of California, Berkeley, CA 94720-3840'
author:
- Jon Wilkening
date: 'April 21, 2014'
title: 'Relative-Periodic Elastic Collisions of Water Waves'
---

[^1]

Introduction
============

A striking feature of multiple-soliton solutions of integrable model equations such
as the Korteweg-de Vries equation, the Benjamin-Ono equation, and the nonlinear Schrödinger equation is that they interact elastically, leading to time-periodic, relative-periodic, or quasi-periodic dynamics. By contrast, the interaction of solitary waves for the free-surface Euler equations is inelastic. However, it has been observed many times in the literature [@chan:street:70; @cooker:97; @maxworthy:76; @su:mirie; @mirie:su; @zou:su; @craig:guyenne:06; @milewski:11] that the residual radiation after a collision of such solitary waves can be remarkably small. In the present paper we explore the possibility of finding nearby time-periodic and relative-periodic solutions of the Euler equations using a collision of unidirectional Stokes waves as a starting guess. Such solutions demonstrate that recurrent elastic collisions of solitary waves in the spatially periodic case do not necessarily indicate that the underlying system is integrable. A relative-periodic solution is one that returns to a spatial phase shift of its initial condition at a later time. This only makes sense on a periodic domain, where the waves collide repeatedly at regular intervals in both time and space, with the locations of the collisions drifting steadily in time. They are special cases (with $N=2$) of quasi-periodic solutions, which have the form $u(x,t)=U(\vec\kappa x+\vec \omega t + \vec\alpha)$ with $U$ an $N$-periodic
continuous function, i.e. $U\in C\big(\mathbb{T}^N\big)$, and $\vec\kappa$, $\vec\omega$, $\vec\alpha\in\mathbb{R}^N$. Throughout the manuscript, we will use the phrase “solitary waves” in a broad sense to describe waves that, most of the time, remain well-separated from one another and propagate with nearly constant speed and shape. “Stokes waves” will refer to periodic progressive solutions of the free-surface Euler equations of permanent form, or waves that began at $t=0$ as a linear superposition of such traveling waves. They comprise a special class of solitary waves. “Solitons” will refer specifically to superpositions of ${\operatorname{sech}}^2$ solutions of the KdV equation on the whole line, while “cnoidal solutions” will refer to their spatially periodic, multi-phase counterparts; see §\[sec:kdv\] for elaboration. It was found in [@water2] that decreasing the fluid depth causes standing waves to transition from large-scale symmetric sloshing behavior in deep water to pairs of counter-propagating solitary waves that collide repeatedly in shallow water. In the present work, we consider unidirectional waves of different amplitude that collide because taller waves move faster than shorter ones. We present two examples of solutions of this type: one where the resulting dynamics is fully time-periodic; and one where it is relative-periodic, returning to a spatial phase shift
of the initial condition at a later time. Both examples exhibit behavior typical of collisions of KdV solitons. In the first, one wave is significantly larger than the other, and completely subsumes it during the interaction. In the second, the waves have similar amplitude, with the trailing wave bumping into the leading wave and transferring energy without fully merging. Despite these similarities, the amplitudes of the waves in our examples are too large for the assumptions in the derivation of the KdV equation to hold. In particular, the larger wave in the first example is more than half the fluid depth in height, and there is significant vertical motion of the fluid when the waves pass by. A detailed comparison of the Euler and KdV equations for waves with these properties is carried out in §\[sec:kdv\]. A review of the literature on water wave collisions and the accuracy of the KdV model of water waves is also given in that section. Rather than compute such solutions by increasing the amplitude from the linearized regime via numerical continuation, as was done for counter-propagating waves in [@water2], we use collisions of right-moving Stokes waves as starting guesses. The goal is to minimally
“tune” the background radiation generated by the Stokes collisions so that the amount coming out of each collision is identical to what went into it. In the first example of §\[sec:num\], we find that the tuned background radiation takes the form of a train of traveling waves of smaller wavelength moving to the right more slowly than either solitary wave. By contrast, in the counter-propagating case studied in [@water2], it consists of an array of smaller-wavelength standing waves oscillating rapidly relative to the time between collisions of the primary waves. In the second example of §\[sec:num\], the background radiation is essentially absent, which is to say that the optimized solution is free from high-frequency, low-amplitude disturbances in the trough, and closely resembles a relative-periodic cnoidal solution of KdV. We call the collisions in this solution “elastic” as they repeat forever, unchanged up to spatial translation, and there are no features to distinguish radiation from the waves themselves. This process of tuning parameters to minimize or eliminate small-amplitude oscillations in the wave troughs is reminiscent of Vanden-Broeck’s work [@vandenBroeck91] in which oscillations at infinity could be eliminated from solitary capillary-gravity waves by choosing the amplitude appropriately. To search for relative periodic
solutions, we use a variant of the overdetermined shooting method developed by the author and collaborators in previous work to study several related problems: time-periodic solutions of the Benjamin-Ono equation [@benj1; @benj2] and the vortex sheet with surface tension [@vtxs1; @vtxs2]; Hopf bifurcation and stability transitions in mode-locked lasers [@lasers]; cyclic steady-states in rolling treaded tires [@tires1]; self-similarity (or lack thereof) at the crests of large-amplitude standing water waves [@water1]; harmonic resonance and spontaneous nucleation of new branches of standing water waves at critical depths [@water2]; and three-dimensional standing water waves [@water3d]. The three approaches developed in these papers are the adjoint continuation method [@benj1; @lasers], a Newton-Krylov shooting method [@tires1], and a trust region shooting method [@water2] based on the Levenberg-Marquardt algorithm [@nocedal]. We adopt the latter method here to exploit an opportunity to consolidate the work in computing the Dirichlet-Neumann operator for many columns of the Jacobian simultaneously, in parallel. One computational novelty of this work is that we search directly for large-amplitude solutions of a nonlinear two-point boundary value problem, without using numerical continuation to get there. This is generally difficult. However, in the present case, numerical continuation is also difficult due to non-smooth bifurcation “curves” riddled
with Cantor-like gaps [@plotnikov01], and the long simulation times that occur between collisions in the unidirectional case. Our shooting method has proven robust enough to succeed in finding time-periodic solutions, when they exist, with a poor starting guess. False positives are avoided by resolving the solutions spectrally to machine accuracy and overconstraining the minimization problem. Much of the challenge is in determining the form of the initial condition and the objective function to avoid wandering off in the wrong direction and falling into a nonzero local minimum before locking onto a nearby relative-periodic solution.

Equations of motion {#sec:eqm}
===================

The equations of motion of a free surface $\eta(x,t)$ evolving over an ideal fluid with velocity potential $\phi(x,y,t)$ may be written [@whitham74; @johnson97; @craik04; @craik05] $$\begin{aligned}
\label{eq:ww}
\eta_t &= \phi_y - \eta_x\phi_x, \\[-3pt] \notag
\varphi_t &= P\left[\phi_y\eta_t - \frac{1}{2}\phi_x^2 - \frac{1}{2}\phi_y^2 - g\eta\right],\end{aligned}$$ where subscripts denote partial derivatives, $\varphi(x,t) = \phi(x,\eta(x,t), t)$ is the restriction of $\phi$ to the free surface, $g=1$ is the acceleration of gravity, $\rho=1$ is the fluid density, and $P$ is the projection $$Pf = f - \frac{1}{2\pi}\int_0^{2\pi} f(x)\,dx,$$ where we assume a $2\pi$-periodic domain. The velocity components $u=\phi_x$ and $v=\phi_y$ at the free surface can
be computed from $\varphi$ via $$\label{eq:uv:from:G} \begin{pmatrix} \phi_x \\ \phi_y \end{pmatrix} = \frac{1}{1+\eta'(x)^2}\begin{pmatrix} 1 & -\eta'(x) \\ \eta'(x) & 1 \end{pmatrix} \begin{pmatrix} \varphi'(x) \\ {\mathcal{G}}\varphi(x) \end{pmatrix},$$ where a prime denotes a derivative and ${\mathcal{G}}$ is the Dirichlet-Neumann operator [@craig:sulem:93] $$\label{eq:DNO:def} {\mathcal{G}}\varphi(x) = \sqrt{1+\eta'(x)^2}\,\, {\frac{\partial \phi}{\partial n}}(x+i\eta(x)) = \phi_y - \eta_x\phi_x$$ for the Laplace equation, with periodic boundary conditions in $x$, Dirichlet conditions ($\phi=\varphi$) on the upper boundary, and Neumann conditions ($\phi_y=0$) on the lower boundary, assumed flat. We have suppressed $t$ in the notation since time is frozen in the Laplace equation. We compute ${\mathcal{G}}\varphi$ using a boundary integral collocation method [@lh76; @baker:82; @krasny:86; @mercer:92; @baker10] and advance the solution in time using an 8th order Runge-Kutta scheme [@hairer:I] with 36th order filtering [@hou:li:07]. See [@water2] for details. Computation of relative-periodic solutions {#sec:method} ========================================== Traveling waves have the symmetry that $$\label{eq:init} \eta(x,0) \, \text{ is even}, \qquad \varphi(x,0) \, \text{ is odd.}$$ This remains true if $x$ is replaced by $x-\pi$. As a starting guess for a new class of time-periodic and relative-periodic solutions, we have in mind superposing two traveling waves, one centered at $x=0$ and the other at $x=\pi$. Doing so will preserve the property (\[eq:init\]), but the
waves will now interact rather than remain pure traveling waves. A solution will be called *relative periodic* if there exists a time $T$ and phase shift $\theta$ such that $$\label{eq:ts:def} \eta(x,t+T) = \eta(x-\theta,t), \qquad\quad \varphi(x,t+T) = \varphi(x-\theta,t)$$ for all $t$ and $x$. Time-periodicity is obtained as a special case, with $\theta\in2\pi\mathbb{Z}$. We can save a factor of 2 in computational work by imposing the alternative condition $$\label{eq:even:odd} \eta(x+\theta/2,T/2) \, \text{ is even}, \qquad\quad \varphi(x+\theta/2,T/2) \, \text{ is odd.}$$ From this, it follows that $$\begin{aligned} \eta(x+\theta/2,T/2) &= \eta(-x+\theta/2,T/2) = \eta(x-\theta/2,-T/2), \\ \varphi(x+\theta/2,T/2) &= -\varphi(-x+\theta/2,T/2) = \varphi(x-\theta/2,-T/2).\end{aligned}$$ But then both sides of each equation in (\[eq:ts:def\]) agree at time $t=-T/2$. Thus, (\[eq:ts:def\]) holds for all time. In the context of traveling-standing waves in deep water [@trav:stand], it is natural to define $T$ as twice the value above, replacing all factors of $T/2$ by $T/4$. That way a pure standing wave returns to its original configuration in time $T$ instead of shifting in space by $\pi$ in time $T$. In the present work, we consider pairs of solitary waves moving to the right at different speeds, so it is more natural to define $T$ as the first (rather than the second) time there
exists a $\theta$ such that (\[eq:ts:def\]) holds. Objective function {#sec:obj:fun} ------------------ We adapt the overdetermined shooting method of [@water1; @water2] to compute solutions of (\[eq:init\])–(\[eq:even:odd\]). This method employs the Levenberg-Marquardt method [@nocedal] with delayed Jacobian updates [@water2] to solve the nonlinear least squares problem described below. For (\[eq:init\]), we build the symmetry into the initial conditions over which the shooting method is allowed to search: we choose an integer $n$ and consider initial conditions of the form $$\label{eq:init:trav} \hat\eta_k(0) = c_{2|k|-1}, \qquad\quad \hat\varphi_k(0) = \pm ic_{2|k|},$$ where $k\in\{\pm1,\pm2,\dots,\pm \frac{n}{2}\}$ and $\hat\eta_k(t)$, $\hat\varphi_k(t)$ are the Fourier modes of $\eta(x,t)$, $\varphi(x,t)$. The numbers $c_1,\dots,c_n$ are assumed real and all other Fourier modes (except $\hat\eta_0$) are zero. We set $\hat\eta_0$ to the fluid depth so that $y=0$ is a symmetry line corresponding to the bottom wall. This is convenient for computing the Dirichlet-Neumann operator [@water2]. In the formula for $\hat\varphi_k$, the minus sign is taken if $k<0$ so that $\hat\varphi_{-k} =\overline{\hat\varphi_k}$. We also solve for the period, $$\label{eq:T:theta} T=c_{n+1}. $$ The phase shift $\theta$ is taken as a prescribed parameter here. Alternatively, in a study of traveling-standing waves [@trav:stand], the author defines a traveling parameter $\beta$ and varies $\theta=c_{n+2}$ as part of the
algorithm to obtain the desired value of $\beta$. This parameter $\beta$ is less meaningful for solitary wave collisions in shallow water, so we use $\theta$ itself as the traveling parameter in the present study. We also need to specify the amplitude of the wave. This can be done in various ways, e.g. by specifying the value of the energy, $$E = \frac{1}{2\pi}\int_0^{2\pi} {\textstyle}\frac{1}{2}\varphi{\mathcal{G}}\varphi + \frac{1}{2}g\eta^2\,dx,$$ by constraining a Fourier mode such as $\hat\eta_1(0)$, or by specifying the initial height of the wave at $x=0$: $$\eta(0,0) = \hat\eta_0 + \sum_{k=1}^{n/2} 2c_{2k-1}.$$ Thus, to enforce (\[eq:even:odd\]), we can minimize the objective function $$\label{eq:f} f(c) = \frac{1}{2} r(c)^Tr(c),$$ where $$\begin{aligned} \label{eq:r:def} r_1 = \big(\;\text{choose one:} \quad & E-a \quad,\quad \hat\eta_1(0)-a \quad,\quad \eta(0,0)-a \;\big), \\ \notag r_{2j} = {\operatorname{Im}}\{e^{ij\theta/2}\hat\eta_j(T/2)\}, \qquad &r_{2j+1} = {\operatorname{Re}}\{e^{ij\theta/2}\hat\varphi_j(T/2)\}, \qquad (1 \le j\le M/2).\end{aligned}$$ Here $a$ is the desired value of the chosen amplitude parameter. Alternatively, we can impose (\[eq:ts:def\]) directly by minimizing $$\label{eq:f1} \tilde f = \frac{1}{2}r_1^2 + \frac{1}{4\pi} \int_0^{2\pi} \left(\big[\eta(x,T)-\eta(x-\theta,0)\big]^2 + \big[\varphi(x,T)-\varphi(x-\theta,0)\big]^2\right)dx,$$ which also takes the form $\frac{1}{2}r^Tr$ if we define $r_1$ as above and $$\label{eq:r1:def} \begin{aligned} r_{4j-2}+ir_{4j-1} &= \sqrt{2}\left[ \hat\eta_j(T) - e^{-ij\theta}\hat\eta_j(0) \right], \\ r_{4j}+ir_{4j+1} &= \sqrt{2}\left[ \hat\varphi_j(T) - e^{-ij\theta}\hat\varphi_j(0) \right], \end{aligned} \qquad\quad (1\le j\le M/2).$$ Note
that $f$ measures deviation from evenness and oddness of $\eta(x+\theta/2,T/2)$ and $\varphi(x+\theta/2,T/2)$, respectively, while $\tilde f$ measures deviation of $\eta(x+\theta,T)$ and $\varphi(x+\theta,T)$ from their initial states. In the first example of §\[sec:num\], we minimize $\tilde f$ directly, while in the second we minimize $f$ and check that $\tilde f$ is also small, as a means of validation. The number of equations, $m=M+1$ for $f$ and $m=2M+1$ for $\tilde f$, is generally larger than the number of unknowns, $n+1$, due to zero-padding of the initial conditions. This adds robustness to the shooting method and causes all Fourier modes varied by the algorithm, namely those in (\[eq:init:trav\]), to be well-resolved on the mesh. Computation of the Jacobian --------------------------- To compute the $k$th column of the Jacobian $J=\nabla_c r$, which is needed by the Levenberg-Marquardt method, we solve the linearized equations along with the nonlinear ones: $$\label{eq:q:qdot} {\frac{\partial }{\partial t}} \begin{pmatrix} q \\ \dot q \end{pmatrix} = \begin{pmatrix} F(q) \\ DF(q)\dot q \end{pmatrix}, \quad \begin{aligned} q(0) &= q_0 = (\eta_0,\varphi_0), \\ \dot q(0) &= \dot q_0 = \partial q_0/\partial c_k. \end{aligned}$$ Here $q=(\eta,\varphi)$, $\dot q=(\dot\eta,\dot\varphi)$, $F(q)$ is given in (\[eq:ww\]), $DF$ is its derivative (see [@water2] for explicit formulas), and a dot represents
a variational derivative with respect to perturbation of the initial conditions, not a time derivative. To compute $\partial r_i/\partial c_k$ for $i\ge2$ and $k\le n$, one simply puts a dot over each Fourier mode on the right-hand side of (\[eq:r:def\]) or (\[eq:r1:def\]), including $\hat\eta_j(0)$ and $\hat\varphi_j(0)$ in (\[eq:r1:def\]). If $k=n+1$, then $c_k=T$ and $${\frac{\partial r_{2j}}{\partial T}} = {\operatorname{Im}}\{e^{ij\theta/2}(1/2)\partial_t\hat\eta_j(T/2)\}, \qquad {\frac{\partial (r_{4j}+ir_{4j+1})}{\partial T}} = \sqrt{2}\big[\partial_t\hat\varphi_j(T)\big]$$ in (\[eq:r:def\]) and (\[eq:r1:def\]), respectively, with similar formulas for $\partial(r_{4j-2}+ir_{4j-1})/\partial T$ and $\partial r_{2j+1}/\partial T$. The three possibilities for $r_1$ are handled as follows: $$\begin{aligned} &\text{case 1:} \quad {\frac{\partial r_1}{\partial c_k}} = \dot E = \frac{1}{2\pi}\int_0^{2\pi} \left[\dot\varphi\eta_t - \dot\eta\varphi_t \right]_{t=0}dx, \quad (k\le n), \qquad {\frac{\partial r_1}{\partial c_{n+1}}} = 0, \\ &\text{case 2:} \quad {\frac{\partial r_1}{\partial c_k}} = {{\dot\eta}^{\scriptscriptstyle\bm\wedge}}_1(0) = \delta_{k,1}, \quad (k\le n+1),\\ &\text{case 3:} \quad {\frac{\partial r_1}{\partial c_k}} = \dot\eta(0,0) = 2\delta_{k,\text{odd}}, \quad (k\le n), \qquad {\frac{\partial r_1}{\partial c_{n+1}}} = 0,\end{aligned}$$ where $\delta_{k,j}$ and $\delta_{k,\text{odd}}$ equal 1 if $k=j$ or $k$ is odd, respectively, and equal zero otherwise. The vectors $\dot q$ in (\[eq:q:qdot\]) are computed in batches, each initialized with a different initial perturbation, to consolidate the work in computing the Dirichlet-Neumann operator during each timestep. See [@water2; @trav:stand] for details. Numerical results {#sec:num}
================= As mentioned in the introduction, our idea is to use collisions of unidirectional Stokes (i.e. traveling) waves as starting guesses to find time-periodic and relative periodic solutions of the Euler equations. We begin by computing traveling waves of varying wave height and record their periods. This is easily done in the framework of §\[sec:method\]. We set $\theta=\pi/64$ (or any other small number) and minimize $\tilde f$ in (\[eq:f1\]). The resulting “period” $T$ will give the wave speed via $c=\theta/T$. Below we report $T=2\pi/c$, i.e. $T$ is rescaled as if $\theta$ were $2\pi$. We control the amplitude by specifying $\hat\eta_1(0)$, which is the second option listed in §\[sec:method\] for defining the first component $r_1$ of the residual. A more conventional approach for computing traveling waves is to substitute $\eta(x-ct)$, $\varphi(x-ct)$ into (\[eq:ww\]) and solve the resulting stationary problem (or an equivalent integral equation) by Newton’s method [@chen80a; @chandler:93; @milewski:11]. Note that the wave speed $c$ here is unrelated to the vector $c$ of unknowns in (\[eq:init:trav\]). ![\[fig:bif:stokes\] Plots of wave height and first Fourier mode versus period for Stokes waves with wavelength $2\pi$ and fluid depth $h=0.05$. The temporal periods are $6T_A=137.843\approx 137.738 = 5T_C$.](figs/bif_stokes){width="3.3in"} ![\[fig:align:stokes\] Collision of two
right-moving Stokes waves that nearly return to their initial configuration after the interaction. (left) Solutions A and C were combined via (\[eq:AandC\]) and evolved through one collision to $t=137.738$. (right) Through trial and error, we adjusted the amplitude of the smaller Stokes wave and the simulation time to obtain a nearly time-periodic solution. ](figs/align_stokes){width="\linewidth"} With traveling waves in hand, our next goal is to collide two of them and search for a nearby time-periodic solution, with $\theta=0$. As shown in Figure \[fig:bif:stokes\], varying $\hat\eta_1(0)$ from 0 to $7.4\times 10^{-4}$ causes the period of a Stokes wave with wavelength $\lambda=2\pi$ and mean fluid depth $h=0.05$ to decrease from $T_O=28.1110$ to $T_A=22.9739$, and the wave height (vertical crest-to-trough distance) to increase from 0 to $0.02892$. Solution C is the closest among the Stokes waves we computed to satisfying $5T_C=6T_A$, where $p=5$ is the smallest integer satisfying $\frac{p+1}{p}T_A<T_O$. We then combine solution A with a spatial phase shift of solution C at $t=0$. The resulting initial conditions are $$\label{eq:AandC} \begin{aligned} \eta^{A+C}_0(x) &= h + \big[\eta^A_0(x)-h\big] + \big[\eta^C_0(x-\pi)-h\big], \\ \varphi^{A+C}_0(x) &= \varphi^A_0(x) + \varphi^C_0(x-\pi), \end{aligned}$$ where $h=0.05$ is the mean fluid depth. Plots of $\eta_0^A(x)$, $\eta_0^C(x-\pi)$, $\varphi_0^A(x)$ and $\varphi_0^C(x-\pi)$ are shown in Figure \[fig:AandC\].
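On a uniform $2\pi$-periodic grid, the superposition (\[eq:AandC\]) amounts to rolling one profile by half the grid and adding deviations from the mean depth. The following NumPy sketch is our own illustration (the grid, array names, and amplitudes are assumptions, not taken from the computation above):

```python
import numpy as np

def combine(eta_A, phi_A, eta_C, phi_C, h):
    """Superpose two traveling-wave profiles as in Eq. (AandC):
    shift the second wave by pi and add deviations from the mean depth h."""
    n = len(eta_A)                    # uniform grid x_j = 2*pi*j/n, n even
    shift = n // 2                    # a shift of pi is n/2 grid points
    eta_C_s = np.roll(eta_C, shift)   # samples of eta_C(x - pi)
    phi_C_s = np.roll(phi_C, shift)   # samples of phi_C(x - pi)
    eta0 = h + (eta_A - h) + (eta_C_s - h)
    phi0 = phi_A + phi_C_s
    return eta0, phi0
```

Adding deviations from $h$, rather than the surface profiles themselves, keeps the mean depth of the combined state equal to $h$.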
If the waves did not interact, the combined solution would be time-periodic (to the extent that $5T_C=6T_A$, i.e. to about $0.076\%$). But the waves do interact. In addition to the complicated interaction that occurs when they collide, each slows the other down between collisions by introducing a negative gradient in the velocity potential between its own wave crests. Indeed, as shown in the right panel of Figure \[fig:AandC\], the velocity potential increases rapidly across a right-moving solitary wave and decreases elsewhere to achieve spatial periodicity. The decreasing velocity potential induces a background flow opposite to the direction of travel of the other wave. In the left panel of Figure \[fig:align:stokes\], we see that the net effect is that neither of the superposed waves has returned to its starting position at $t=5T_C$, and the smaller wave has experienced a greater net decrease in speed. However, as shown in the right panel, by adjusting the amplitude of the smaller wave (replacing solution C by B) and increasing $T$ slightly to $138.399$, we are able to hand-tune the Stokes waves to achieve $\tilde f\approx5.5\times10^{-8}$, where $\theta$ is set to zero in (\[eq:f1\]). Note that as $t$ varies from 0 to $T/10$ in the
left panel of Figure \[fig:evol:kdv1\], the small wave advances by $\pi$ units to the right while the large wave advances by $1.2\pi$ units. The waves collide at $t=T/2$. This generates a small amount of radiation, which can be seen at $t=T$ in the right panel of Figure \[fig:align:stokes\]. Some radiation behind the large wave is present for all $t>0$, as shown in Figure \[fig:pov:kdv1\]. Before minimizing $\tilde f$, we advance the two Stokes waves to the time of the first collision, $t=T/2$. At this time, the larger solitary wave has traversed the domain 3 times and the smaller one 2.5 times, so their peaks lie on top of each other at $x=0$. The reason to do this is that when the waves merge, the combined wave is shorter, wider, and smoother than at any other time during the evolution. Quantitatively, the Fourier modes of $\hat\eta_k(t)$ and $\hat\varphi_k(t)$ decay below $10^{-15}$ for $k\ge600$ at $t=0$, and $k\ge200$ when $t=T/2$. Thus, the number of columns needed in the Jacobian is reduced by a factor of 3, and the problem becomes more overdetermined, hence more robust. For the calculation of a time-periodic solution, we let $t=0$ correspond to this merged state, which affects
the time labels when comparing Figures \[fig:evol:kdv1\] and \[fig:evol:kdv2\]. As a final initialization step, we project onto the space of initial conditions satisfying (\[eq:init:trav\]) by zeroing out the imaginary parts of $\hat\eta_k(0)$ and the real parts of $\hat\varphi_k(0)$, which are already small. Surprisingly, this improves the time-periodicity of the initial guess in (\[eq:f1\]) to $\tilde f = 2.3\times 10^{-8}$. ![\[fig:evol:kdv1\] Evolution of two Stokes waves that collide repeatedly, at times $t\approx T/2+kT$, $k\ge0$. (left) Traveling solutions A and B in Figure \[fig:bif:stokes\] were initialized with wave crests at $x=0$ and $x=\pi$, respectively. The combined solution is approximately time-periodic, with period $T=138.399$. (right) The same solution, at later times, starting with the second collision ($t=3T/2$).](figs/evol_kdv1){width="\linewidth"} ![\[fig:pov:kdv1\] A different view of the solutions in Figure \[fig:evol:kdv1\] shows the generation of background waves. Shown here are the functions $\eta(x+8\pi t/T,t)$, which give the dynamics in a frame moving to the right fast enough to traverse the domain four times in time $T$. In a stationary frame, the smaller and larger solitary waves traverse the domain 5 and 6 times, respectively.](figs/pov_kdv1){height="2in"} We emphasize that our goal is to find *any* nearby time-periodic solution by adjusting the initial conditions to drive $\tilde f$ to zero.
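The projection used above — zeroing the imaginary parts of $\hat\eta_k(0)$ and the real parts of $\hat\varphi_k(0)$, so that $\eta$ becomes even and $\varphi$ odd — is a one-line operation in Fourier space. A minimal NumPy sketch (our illustration; the authors' spectral code is not reproduced here):

```python
import numpy as np

def project_symmetric(eta, phi):
    """Project initial data onto the symmetry class (eq:init:trav):
    keep only the real parts of eta's Fourier modes (eta even) and the
    imaginary parts of phi's Fourier modes (phi odd)."""
    eta_hat = np.fft.rfft(eta)
    phi_hat = np.fft.rfft(phi)
    eta_even = np.fft.irfft(eta_hat.real + 0j, len(eta))
    phi_odd = np.fft.irfft(1j * phi_hat.imag, len(phi))
    return eta_even, phi_odd
```

When the discarded parts are already small, as in the text, the projected state is only a slight perturbation of the original one.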
Energy will be conserved as the solution evolves from a given initial condition, but is only imposed as a constraint (in the form of a penalty) on the search for initial conditions when the first component of the residual in (\[eq:r:def\]) is set to $r_1=E-a$. In the present calculation, we use $r_1=\eta(0,0)-a$ instead. In the second example, presented below, we will constrain energy. In either case, projecting onto the space of initial conditions satisfying (\[eq:init:trav\]) can cause $r_1$ to increase, but it will decrease to zero in the course of minimizing $\tilde f$. This projection is essential for the symmetry arguments of §\[sec:obj:fun\] to work. ![\[fig:evol:kdv2\] Time-periodic solutions near the Stokes waves of Figure \[fig:evol:kdv1\]. (left) $h=0.05$, $\eta(0,0) = 0.0707148$, $T=138.387$, $\tilde f=4.26\times10^{-27}$. (right) $h=0.0503$, $\eta(0,0)=0.0707637$, $T=138.396$, $\tilde f=1.27\times 10^{-26}$. The background radiation was minimized by hand in the right panel by varying $h$ and $\eta(0,0)$.](figs/evol_kdv2){width="\linewidth"} ![\[fig:pov:kdv2\] Same as Figure \[fig:pov:kdv1\], but showing the time-periodic solutions of Figure \[fig:evol:kdv2\] instead of the Stokes waves of Figure \[fig:evol:kdv1\]. The Stokes waves generate new background radiation with each collision while the time-periodic solutions are synchronized with the background waves to avoid generating additional disturbances. ](figs/pov_kdv2){height="2in"} We minimize $\tilde f$ subject to the
constraint $\eta(0,0)=0.0707148$, the third case described in §\[sec:method\] for specifying the amplitude. This causes $\tilde f$ to decrease from $2.3\times 10^{-8}$ to $4.26\times10^{-27}$ using $M=1200$ grid points and $N=1200$ time-steps (to $t=T$). The results are shown in the left panel of Figures \[fig:evol:kdv2\] and \[fig:pov:kdv2\]. The main difference between the Stokes collision and this nearby time-periodic solution is that the Stokes waves generate additional background ripples each time they collide while the time-periodic solution contains an equilibrium background wave configuration that does not grow in amplitude after the collision. While the background waves in the counter-propagating case (studied in [@water2]) look like small-amplitude standing waves, these background waves travel to the right, but slower than either solitary wave. After computing the $h=0.05$ time-periodic solution, we computed 10 other solutions with nearby values of $h$ and $\eta(0,0)$ to try to decrease the amplitude of the background radiation. The best solution we found (in the sense of small background radiation) is shown in the right panel of Figures \[fig:evol:kdv2\] and \[fig:pov:kdv2\], with $h=0.0503$ and $\eta(0,0)=0.0707637$. The amplitude of the background waves of this solution is comparable to that of the Stokes waves after two collisions. Our second example is a relative
periodic solution in which the initial Stokes waves (the starting guess) are B and C in Figure \[fig:bif:stokes\] instead of A and C. As before, solution C is shifted by $\pi$ when the waves are combined initially, just as in (\[eq:AandC\]). Because the amplitude of the larger wave has been reduced, the difference in wave speeds is smaller, and it takes much longer for the waves to collide. If the waves did not interact, we would have $$\label{eq:cBcC} c_{B,0} = 0.23246089, \quad c_{C,0} = 0.22808499, \quad T_0 = \frac{2\pi}{c_{B,0}-c_{C,0}} = 1435.86,$$ where wave B crosses the domain $53.1230$ times in time $T_0$ while wave C crosses the domain $52.1230$ times. The subscript 0 indicates that the waves are assumed not to interact. Since the waves do interact, we have to evolve the solution numerically to obtain useful estimates of $T$ and $\theta$. We arbitrarily rounded $T_0$ to 1436 and made plots of the solution at times $\Delta t = T_0/1200$. We found that $\eta$ is nearly even (up to a spatial phase shift) for the first time at $463\Delta t=554.057$. This was our initial guess for $T/2$. The phase shift required to make $\eta(x+\theta/2,T/2)$ approximately even and $\varphi(x+\theta/2,T/2)$ approximately odd
was found by graphically solving $\varphi(x,T/2)=0$. This gives the initial guess $\theta/2=2.54258$. This choice of $T$ and $\theta$ (with $\eta^{B+C}$ and $\varphi^{B+C}$ as initial conditions) yields $f=2.0\times10^{-11}$ and $\tilde f=1.5\times10^{-10}$. We then minimize $f$ holding $E$ and $\theta$ constant, which gives $f=2.1\times10^{-29}$ and $\tilde f=3.0\times10^{-26}$. We note that $\tilde f$ is computed over $[0,T]$, twice the time over which the solution was optimized by minimizing $f$, and provides independent confirmation of the accuracy of the solution and the symmetry arguments of §\[sec:obj:fun\]. The results are plotted in Figure \[fig:evol:kdv3\]. We omit a plot of the initial guess (the collision of Stokes waves) as it is indistinguishable from the minimized solution. In fact, the relative change in the wave profile and velocity potential is about $0.35$ percent, $$\left(\frac{ \|\eta_\text{Stokes} - \eta_\text{periodic}\|^2 + \|\varphi_\text{Stokes} - \varphi_\text{periodic}\|^2}{ \|\eta_\text{Stokes} - h\|^2 + \|\varphi_\text{Stokes}\|^2} \right)^{1/2} \le 0.0035,$$ and $T/2$ changes even less, from 554.057 (Stokes) to 554.053 (periodic). By construction, $E$ and $\theta/2$ do not change at all. It was not necessary to evolve the Stokes waves to $T/2$, shift space by $\theta/2$, zero out Fourier modes that violate the symmetry condition (\[eq:init\]), and reset $t=0$ to correspond to this new initial state. Doing so
increases the decay rate of the Fourier modes (slope of $\ln|\hat\eta_k|$ vs $k$) by a factor of 1.24 in this example, compared to 3.36 in the previous example, where it is definitely worthwhile. ![\[fig:evol:kdv3\] A relative-periodic solution found using a superposition of the Stokes waves labeled B and C in Figure \[fig:bif:stokes\] as a starting guess. Unlike the previous case, the waves do not fully merge at $t=T/2$. ](figs/evol_kdv3){width="\linewidth"} The large change from $T_0/2 = 717.93$ to $T/2=554.053$ is due to nonlinear interaction of the waves. There are two main factors contributing to this change in period. The first is that the waves do not fully combine when they collide. Instead, the trailing wave runs into the leading wave, passing on much of its amplitude and speed. The peaks remain separated by a distance of $d=0.52462$ at $t=T/2$, the transition point where the waves have the same amplitude. Thus, the peak separation changes by $\pi-d$ rather than $\pi$ in half a period. The second effect is that the larger wave slows down the smaller wave more than the smaller slows the larger. Recall from Fig. \[fig:AandC\] that each wave induces a negative potential gradient across the other wave that generates
a background flow opposing its direction of travel. Quantitatively, when the waves are well separated, we find that the taller and smaller waves travel at speeds $c_B=0.231077=0.994049c_{B,0}$ and $c_C=0.226153=0.991531c_{C,0}$, respectively. The relative speed is then $(c_B-c_C) = 1.12526(c_{B,0}-c_{C,0})$. Thus, $$\label{eq:ineq} \frac{\pi-d}{c_B-c_C} < \frac{T}{2} < \frac{\pi-d}{c_{B,0}-c_{C,0}} < \frac{T_0}{2} = \frac{\pi}{c_{B,0}-c_{C,0}}, $$ with numerical values $531.5<554.1<598.0<717.9$. This means that both effects together have overestimated the correction needed to obtain $T$ from $T_0$. This is because the relative speed slows down as the waves approach each other, which is expected since the amplitude of the trailing wave decreases and the amplitude of the leading wave increases in this interaction regime. Indeed, the average speed of the waves is $$\label{eq:average:speed} \overline{c_B} = \frac{\theta/2 - d/2}{T/2} = 0.993388c_{B,0}, \qquad \overline{c_C} = \frac{\theta/2 + d/2 - \pi}{T/2} = 0.991737c_{C,0},$$ which are slightly smaller and larger, respectively, than their speeds when well separated. Note that $T/2$ in (\[eq:ineq\]) may be written $T/2=(\pi-d)/(\overline{c_B} - \overline{c_C})$. We used $\theta/2=2.54258+40\pi$ in (\[eq:average:speed\]) to account for the 20 times the waves cross the domain $(0,2\pi)$ in time $T/2$ in addition to the offset shown in Figure \[fig:evol:kdv3\]. Comparison with KdV {#sec:kdv} =================== In the previous section, we observed two types of
overtaking collisions for the water wave: one in which the larger wave completely subsumes the smaller wave for a time, and one where the two waves remain distinct throughout the interaction. Similar behavior has of course been observed for the Korteweg-de Vries equation, which was part of our motivation for looking for such solutions. Lax [@lax:1968] classified overtaking collisions of two KdV solitons as bimodal, mixed, or unimodal. Unimodal and bimodal waves are analogous to the ones we computed above, while mixed mode collisions have the larger wave mostly subsume the smaller wave at the beginning and end of the interaction, but with a two-peaked structure re-emerging midway through the interaction. Lax showed that if $1<c_1/c_2<A=(3+\sqrt{5})/2$, the collision is bimodal; if $c_1/c_2>3$, the collision is unimodal; and if $A<c_1/c_2<3$, the collision is mixed. Here $c_1$ and $c_2$ are the wave speeds of the trailing and leading waves, respectively, at $t=-\infty$. LeVeque [@leveque:87] has studied the asymptotic dynamics of the interaction of two solitons of nearly equal amplitude. Zou and Su [@zou:su] performed a computational study of overtaking water wave collisions, compared the results to KdV interactions, and found that the water wave collisions ceased to be elastic at third order.
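Lax's thresholds are easy to encode directly. This small sketch (ours, purely illustrative) classifies an overtaking two-soliton KdV collision from the asymptotic speed ratio $c_1/c_2$:

```python
LAX_A = (3 + 5**0.5) / 2   # Lax's lower threshold, approximately 2.618

def lax_type(c1, c2):
    """Classify an overtaking KdV two-soliton collision by the ratio of the
    trailing wave speed c1 to the leading wave speed c2 at t = -infinity."""
    ratio = c1 / c2
    if ratio <= 1:
        raise ValueError("overtaking requires the trailing wave to be faster")
    if ratio < LAX_A:
        return "bimodal"   # the two peaks stay distinct throughout
    if ratio > 3:
        return "unimodal"  # the larger wave subsumes the smaller for a time
    return "mixed"         # a two-peaked structure re-emerges mid-interaction
```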
--- abstract: | As exemplified by the Kuramoto model, large systems of coupled oscillators may undergo a transition to phase coherence with increasing coupling strength. It is shown that below the critical coupling strength for this transition such systems may be expected to exhibit ‘echo’ phenomena: a stimulation by two successive pulses separated by a time interval $\tau $ leads to the spontaneous formation of response pulses at times $\tau$, $2\tau $, $3\tau, \ldots$ after the second stimulus pulse. Analysis of this phenomenon, as well as illustrative numerical experiments, is presented. The theoretical significance and potential uses of echoes in such systems are discussed. author: - 'Edward Ott, John H. Platig, Thomas M. Antonsen and Michelle Girvan' title: Echo Phenomena in Large Systems of Coupled Oscillators --- [**Large systems consisting of many coupled oscillators for which the individual natural oscillator frequencies are different naturally occur in a wide variety of interesting applications. As shown by Kuramoto, such systems can undergo a type of dynamical phase transition such that as the coupling strength is raised past a critical value, global synchronous collective behavior results. In this paper we show that another interesting, potentially useful, behavior of these systems also occurs
[*below*]{} the critical coupling strength. Namely, we demonstrate that these systems exhibit [*echo*]{} phenomena: If a stimulus pulse is applied at time $t=0$, followed by a second stimulus pulse at time $t=\tau $, then pulse echo responses can appear at $t=2\tau ,3\tau ,\ldots$. This phenomenon depends on both nonlinearity and memory inherent in the oscillator system, the latter being a consequence of the continuous spectrum of the linearized system.**]{} I. Introduction =============== Due to their occurrence in a wide variety of circumstances, systems consisting of a large number of coupled oscillators with different natural oscillation frequencies have been the subject of much scientific interest[@pikovsky01; @strogatz04]. Examples where the study of such systems is thought to be relevant are synchronous flashing of fireflies[@buck88] and chirping of crickets[@walker69], synchronous cardiac pacemaker cells[@michaels87], brain function[@singer93], coordination of oscillatory neurons governing circadian rhythms in mammals[@yamaguchi02], entrainment of coupled oscillatory chemically reacting cells[@kiss05], Josephson junction circuit arrays[@wiesenfeld95], etc. The globally-coupled, phase-oscillator model of Kuramoto[@kuramoto84; @acebron05] exemplifies the key generic feature of large systems of coupled oscillators. In particular, Kuramoto considered the case where the distribution function of oscillator frequencies was monotonically decreasing away from its peak value, and he showed that, as the coupling strength
$K$ between the oscillators is increased through a critical coupling strength $K_c$, there is a transition to sustained global cooperative behavior. In this state $(K>K_c)$ a suitable average over the oscillator population (this average is often called the ‘order parameter’) exhibits steady macroscopic oscillatory behavior. For $K<K_c$ a stimulus may transiently induce macroscopic oscillations, but the amplitude of these coherent oscillations (i.e., the magnitude of the order parameter) decays exponentially to zero with increasing time[@acebron05]. In the present paper we consider the Kuramoto model in the parameter range $K<K_c$, and we demonstrate that ‘echo’ phenomena occur for this system. The basic echo phenomenon can be described as follows: A first stimulus is applied at time $t=0$, and the response to it dies away; next, a second stimulus is applied at a later time, $t=\tau $, and its response likewise dies away; then at time $t=2\tau $ (also possibly at $n\tau $, for $n=3,4,\ldots$) an echo response spontaneously builds up and then decays away. An illustrative example is shown in Fig. 1, which was obtained by ![Illustration of the echo phenomenon. Stimuli at times $t=0$ and $t=\tau $ lead to direct system responses which rapidly decay away followed by echo responses
that can arise at times $2\tau $, $3\tau , \ldots $. The ‘response’ plotted on the vertical axis is the magnitude of the complex valued order parameter, Eq. (8). See Sec. IV for details of this computation.](fig1) numerical simulation (see Sec. IV for details). In order for this phenomenon to occur, the system must have two fundamental attributes, nonlinearity and memory. Nonlinearity is necessary because the response seen in Fig. 1 is not the same as the sum of the responses to each of the individual stimulus pulses in the absence of the other pulse (which is simply the decay that occurs immediately after the individual stimuli, without the echo). Memory is necessary in the sense that the system state after the decay of the second pulse must somehow encode knowledge of the previous history even though the global average of the system state, as represented by the order parameter, is approximately the same as before the two pulses were applied. Echo phenomena of this type, occurring in systems of many oscillators having a spread in their natural oscillation frequencies, have been known for a long time. The first example was the ‘spin echo’ discovered in 1950 by Hahn[@hahn50], where
the distribution of frequencies resulted from the position dependence of the precession frequency of nuclear magnetic dipoles in an inhomogeneous magnetic field. \[The spin echo forms the basis for modern magnetic resonance imaging (MRI).\] Subsequently, echoes for cyclotron orbits of charged particles in a magnetic field have been studied for the cases in which the distribution in frequency was due to magnetic field inhomogeneity[@gould65], relativistic dependence of the particle mass on its energy[@ott70], and Doppler shifts of the cyclotron frequency[@porkolab68]. Another notable case is that of plasma waves, where the frequency distribution results from the Doppler shift of the wave frequency felt by charged particles with different streaming velocities[@oneil68]. Although echo phenomena are well-known in the above settings, they have so far not received attention in the context of the Kuramoto model and its many related situations. It is our purpose in the present paper to investigate that problem. Two possible motivations for our study of echoes in the Kuramoto model are that they provide increased basic understanding of the model and also that they may be of potential use as a basis for future diagnostic measurements of related systems (see Sec. V). In what follows, Sec. II will give
a formulation of the model problem that will be analyzed in Sec. III and numerically simulated in Sec. IV, while Sec. V will provide a discussion of the implications of the results obtained. II. Formulation =============== We consider the basic Kuramoto model supplemented by the addition of a $\delta $-correlated noise term $n(t)$ and two impulsive stimuli, one at time $t=0$, and the other at time $t=\tau $, $$d\theta _i/dt=\omega _i+K/N\sum ^N_{j=1}\sin (\theta _j-\theta _i)-h(\theta _i)\Delta (t)+n(t) \ ,$$ $$\Delta (t)=\hat d_0\delta (t)+\hat d_1\delta (t-\tau ) \ ,$$ $$\langle n(t)n(t')\rangle =2\xi \delta (t-t') \ ,$$ $$h(\theta )=\sum _nh_ne^{in\theta } \ , \ \ h_n=h^*_{-n} \ , \ \ h_0=0 \ ,$$ where $h^*_{-n}$ denotes the complex conjugate of $h_{-n}$. In the above $\theta _i(t)$ represents the angular phase of oscillator $i$, where $i=1,2,\ldots ,N\gg 1$; and $\omega _i$ is the natural frequency of oscillator $i$ where we take $\omega _i$ for different oscillators (i.e., different $i$) to be distributed according to some given, time-independent distribution function $g(\omega )$, where $g(\omega )$ has an average frequency $\bar \omega =\int \omega g(\omega )d\omega $, is symmetric about $\omega =\bar \omega $, and monotonically decreases as $|\omega -\bar \omega |$ increases. To motivate
the impulsive stimuli term, consider the example of a population of many fireflies, and imagine that the stimuli at $t=0$ and at $t=\tau $ are external flashes of light at those times, where the constants $\hat d_0$ and $\hat d_1$ in Eq. (2) represent the intensity of these flashes. We hypothesize that a firefly will be induced by a stimulus flash to move its flashing phase toward synchronism with the stimulus flash. Thus a firefly that has just recently flashed will reset its phase by retarding it, while a firefly that was close to flashing will advance its phase. The amount of advance or retardation is determined by the ‘reset function’, $h(\theta )$. Since the reset function $h(\theta )$ depends on properties of the fireflies, we do not specify it further. Let $\theta ^+_i$ and $\theta ^-_i$ represent the phases of oscillator $i$ just after and just before a stimulus flash at $t=0$ or $t=\tau$. Then we have from Eq. (1) that $$\int ^{\theta ^{+}_{i}}_{\theta _i^-} \frac{d\theta }{h(\theta )}=\hat d_p ; \ \ p=0,1 \ .$$ Letting $F(\theta )=\int ^\theta d\theta /h(\theta )$, we obtain $$\theta ^+_i=F^{-1}(\hat d_p+F(\theta ^-_i)) \ .$$ In our subsequent analysis in Sec. III, we will
for convenience assume that $\hat d_p$ is small, in which case $(\theta ^+_i-\theta ^-_i)$ is small, and we can use the approximation, $$\theta _i^+\cong \theta ^-_i+\hat d_ph(\theta ^-_i); \ p=0,1 \ .$$ Following Kuramoto we introduce the complex valued order parameter $R(t)$, $$R(t)=\frac{1}{N} \sum ^N_{j=1} e^{i\theta _j(t)} \ ,$$ in terms of which Eq. (1) can be rewritten as $$d\theta _i/dt=\omega _i+K\,Im[e^{-i\theta _i}R(t)]-h(\theta _i)\Delta (t)+n(t) \ .$$ In our analysis in Sec. III we will take the limit $N\rightarrow \infty$, which is useful for approximating the situation where $N\gg 1$. In that limit it is appropriate to describe the system state by a continuous distribution function $f(\theta ,\omega ,t)$, where $$\int ^{2\pi }_0f(\theta ,\omega ,t)\frac{d\theta}{2\pi }=1 \ ,$$ and the fraction of oscillators with angles and natural frequencies in the ranges $(\theta , \theta +d\theta )$ and $(\omega ,\omega +d\omega )$ is $f(\theta ,\omega ,t)g(\omega )d\omega d\theta /2\pi $. The conservation of the number of oscillators then gives the time evolution equation for $f(\theta ,\omega ,t)$, $$\frac{\partial f}{\partial t}+\frac{\partial}{\partial \theta }\left\{ f\left[ \omega +KIm(R(t)e^{-i\theta })-h(\theta )\Delta (t)\right]\right\}=\xi \frac{\partial ^2f}{\partial \theta ^2} \ ,$$ $$R^*(t)=\int d\omega f_1(\omega ,t)g(\omega ) \ ,$$ where $R^*$ denotes the complex conjugate of $R$, $f(\omega ,\theta ,t)\equiv 1$
for $t<0$, and, in writing Eq. (12), $f_1$ represents the $e^{i\theta }$ component of the Fourier expansion of $f(\omega ,\theta ,t)$ in $\theta $, $$f(\omega ,\theta ,t)=\sum ^{+\infty}_{n=-\infty }f_n(\omega ,t)e^{in\theta } \ ,$$ with $f_0=1$, $f_n=f^*_{-n}$. As seen in Eq. (11), the effect of the noise term in Eq. (1) is to introduce diffusion in the phase angle $\theta $ whose strength is characterized by the phase diffusion coefficient $\xi $. In Sec. III we will solve Eqs. (11) and (12) for the case $\hat d_p\ll 1$, thus demonstrating the echo phenomenon as described in Sec. I. In Sec. IV we will present numerical solutions of Eq. (1) for large $N$. III. Analysis ============= A. Amplitude expansion ---------------------- In order to proceed analytically we use a small amplitude expansion and obtain results to second order (i.e., up to quadratic in the small amplitude). This will be sufficient to obtain the echo phenomenon. We introduce a formal expansion parameter $\epsilon $, as follows, $$f=1+\epsilon f^{(1)}+\epsilon ^2f^{(2)}+\mathcal{O}(\epsilon ^3) \ ;$$ $\hat d_p=\epsilon d_p $ for $p=0$, $1$; $R=\epsilon R^{(1)}+\epsilon ^2R^{(2)}+\mathcal{O}(\epsilon ^3)$; $R^{(m)*}=\int gf_1^{(m)}d\omega $; where $f^{(m)}=\Sigma _nf_n^{(m)}\exp (in\theta )$. (Although we formally take $\epsilon \ll 1$, when we finally get our answers, the
results will apply for $\epsilon =1$ and $d_p=\hat d_p$, if $\hat d_p\ll 1$.) B. Order $\epsilon $ -------------------- In linear order (i.e., $\mathcal{O}(\epsilon))$, by multiplying Eq. (11) by $\exp (-i\theta )d\theta $ and integrating over $\theta $, we have for the component of $f^{(1)}$ varying as $e^{i\theta }$, $$\frac{\partial f_1^{(1)}}{\partial t}+(i\omega +\xi )f_1^{(1)}=\frac{K}{2} R^{(1)*}+ih_1\Delta (t) \ , \ \ R^{(1)*}(t)=\int f_1^{(1)}g\,d\omega \ ,$$ where $f_1^{(1)}(\omega ,t)=0$ for $t<0$ and $R^{(1)*}$ is the complex conjugate of $R^{(1)}$. Due to the delta function term on the right hand side of Eq. (15), $ih_1d_0\delta (t)$, at the instant just after the first delta function (denoted $t=0^+$), $f_1^{(1)}$ jumps from zero just before the delta function (denoted $t=0^-$) to the value $f_1^{(1)}(\omega ,0^+)=ih_1d_0$. Making use of this observation, in Appendix I we solve Eq. (15) for $0<t<\tau $, with the result that, for $K<K_c$, $$f_1^{(1)}(\omega ,t)=A(\omega )e^{-(i\omega +\xi )t}+\ \ ({\rm a\ more\ rapidly\ exponentially\ decaying\ component}) \ ,$$ where $$A(\omega )=ih_1d_0/D[-(i\omega +\xi )] \ ,$$ $$D(s)=1-\frac{K}{2}\int ^{+\infty}_{-\infty} \frac{g(\omega )d\omega }{s+\xi +i\omega } \ , \ {\rm for} \ Re(s)>0 \ ,$$ and $D(s)$ for $Re(s)\leq 0$ is defined from Eq. (18) by analytic continuation. Since Eq. (16) applies for $0<t<\tau $, we have that
just before the application of the second delta function stimulus $(t=\tau ^-)$, $$f_1^{(1)}(\omega ,\tau ^-)\cong A(\omega )e^{-(i\omega +\xi )\tau } \ ,$$ where we have neglected the second term on the right hand side of Eq. (16) on the basis that, due to its more rapid exponential decay, it is small compared to the first term. Solutions of $D(s)=0$ govern the stability of the state with $R^{(1)}=0$. Let $s=s_0$ denote the solutions of $D(s)=0$ with the largest real part. If $Re(s_0)<0$ the state $R^{(1)}=0$ is stable, and a perturbation away from $R^{(1)}=0$ decays to zero with increasing $t$ at the exponential rate $Re(s_0)$. If $Re(s_0)>0$, then the perturbation grows and $R^{(1)}$ eventually saturates into a sustained nonlinear state of coherent cooperative oscillatory behavior[@kuramoto84; @acebron05]. In general, $Re(s_0)$ is an increasing function of the coupling constant $K$, and $Re(s_0)\stackrel{>}{<}0$ for $K\stackrel{>}{<}K_c$, where $K_c$ is a critical value that depends on $\xi$ and $g(\omega )$. Throughout this paper we shall be considering only the case $K<K_c$ for which $Re(s_0)<0$. It is instructive to consider $\xi =0$. In that case, the first term in Eq. (16) is of constant magnitude in time, but, as time $t$ increases, it oscillates more and more rapidly
as a function of $\omega $. Because of this increasingly rapid variation in $\omega $, the contribution of this term to $R^{(1)*}(t)=\int gf_1^{(1)}d\omega $ decays in time (see Appendix I), and it does so at the same time-asymptotic rate as the second, more rapidly exponentially decaying term in Eq. (16). Thus the order parameter magnitude decays away, but the distribution function $f_1^{(1)}$ can still have a component (the first term in Eq. (16)) due to the pulse that has not decayed away. A similar conclusion applies for $\xi >0$ provided that $\xi $ is substantially less than the damping for the second term in Eq. (16). This is the source of the ‘memory’ referred to in Sec. I. It is also worth noting that the first term in Eq. (16) can be thought of as the manifestation of the continuous spectrum of the Kuramoto problem, discussed in detail in Ref. [@strogatz92]. Thus the echo phenomenon that we derive subsequently can be regarded as an observable macroscopic consequence of the continuous spectrum, where by ‘macroscopic’ we mean that the effect can be seen through monitoring of the order parameter without the necessity of other more detailed knowledge of
the distribution function. It is also of interest to consider $f_n^{(1)}$ for $|n|\geq 2$. From Eq. (11) we obtain for $|n|\geq 2$ $$\frac{\partial f_n^{(1)}}{\partial t}+(in\omega +n^2\xi )f_n^{(1)}=inh_n\Delta (t) \ ,$$ which does not have any contribution from the order parameter, $R$. For $0<t<\tau $, Eq. (20) yields $$f_n^{(1)}(\omega ,t)=inh_nd_0\exp [-(in\omega +n^2\xi )t]\ ,$$ which, similar to the first term on the right hand side of Eq. (16), also oscillates increasingly rapidly with $\omega $ as $t$ increases. At time $t=\tau ^-$ Eq. (21) yields $$f_n^{(1)}(\omega ,\tau ^-)=inh_nd_0\exp [-(in\omega +n^2\xi )\tau ] \ ,$$ for $|n|\geq 2$.

C. Order $\epsilon ^2$
----------------------

Now proceeding to $\mathcal{O}(\epsilon ^2)$ and again (as done in obtaining Eq. (15)) taking the $e^{i\theta }$ component of Eq. (11), we have $$\frac{\partial f_1^{(2)}}{\partial t}+(i\omega +\xi )f_1^{(2)}-\frac{1}{2}KR^{(2)*}=-i\left\{ \frac{K}{2i}f_2^{(1)}R^{(1)}-\Delta (t)\sum ^{+\infty}_{n=-\infty}h_{-(n-1)}f^{(1)}_n\right\}$$ where $R^{(1,2)*}(t)=\int ^{+\infty}_{-\infty}g(\omega )f_1^{(1,2)}(\omega ,t)d\omega $. The above equation is linear in $f_1^{(2)}$ and is driven by several inhomogeneous terms appearing on the right hand side of Eq. (23) that are quadratic in first order quantities. Since we are interested in the components of $f_1^{(2)}$ that result in echoes, and since, by our previous discussion, we expect that the echoes
depend on the presence of [*both*]{} stimulus delta functions (i.e., the delta function $\delta (t)$ of strength $d_0$ and the delta function $\delta (t-\tau )$ of strength $d_1$), we are interested in the component of $f_1^{(2)}$ that is proportional to the product $d_0d_1$ for $t>\tau $. We denote this component $f^{(2)}_{1,e}$, where the subscript $e$ stands for ‘echo’. From Eq. (23) we see that for $t>\tau $, the $f^{(2)}_{1,e}$ component of $f_1^{(2)}$ satisfies the following initial value problem $$\frac{\partial f_{1,e}^{(2)}}{\partial t}+(i\omega +\xi )f^{(2)}_{1,e}-\frac{1}{2}KR_e^{(2)*}=0 \ ,$$ $$f^{(2)}_{1,e}(\omega ,\tau ^+)=id_1\sum ^{+\infty}_{n=-\infty}h_{-(n-1)}f^{(1)}_n(\omega ,\tau ^-) \ ,$$ $$R_e^{(2)*}(t)=\int ^{+\infty}_{-\infty} g(\omega )f^{(2)}_{1,e}(\omega ,t)d\omega \ .$$ Since $f^{(1)}_n(\omega ,\tau ^-)$ is proportional to $d_0$ (see Eqs. (19) and (22)), we see that the solution of Eqs. (24)–(26) for $f_{1,e}^{(2)}$ and $R_e^{(2)}$ will indeed be proportional to $d_0d_1$ as desired. We solve Eqs. (24)–(26) by taking Laplace transforms, $$\hat f^{(2)}_{1,e}(\omega ,s)=\int ^\infty _\tau e^{-st}f_{1,e}^{(2)}(\omega ,t)dt \ ,$$ $$\hat R^{(2)}_{e*}(s)\equiv \int ^\infty _\tau e^{-st}R_e^{(2)*}(t)dt \ ,$$ in terms of which we obtain from Eq. (24) $$\hat f^{(2)}_{1,e}(\omega ,s)=\hat R^{(2)}_{e*}(s)\frac{K/2}{s+\xi +i\omega }+\frac{f^{(2)}_{1,e}(\omega ,\tau ^+)e^{-s\tau }}{s+\xi +i\omega } \ .$$ Multiplying Eq. (29) by $g(\omega )d\omega $ and integrating from $\omega =-\infty$ to $\omega =+\infty$ then yields $$\hat R^{(2)}_{e*}(s)=\frac{e^{-s\tau }}{D(s)}\int
^{+\infty}_{-\infty}\frac{f^{(2)}_{1,e}(\omega ,\tau ^+)}{s+\xi +i\omega }g(\omega )d\omega \ .$$ To find $R_e^{(2)*}(t)$ we take the inverse Laplace transform, $$R_e^{(2)*}(t)=\frac{1}{2\pi i}\int ^{+i\infty +\eta }_{-i\infty+\eta }e^{st}\hat R_{e*}^{(2)}(s)ds \ , \eta >0 \ .$$ For the purposes of evaluating the integral (31), we recall that $D(s)=0$ has roots whose real parts correspond to the exponential decay rate of a response to an initial stimulus toward the $R=0$ state. Thus, as before in our discussion of the linear response (see Eq. (16)), any poles at the roots of $D(s)=0$ give contributions that we assume decay substantially faster with increasing $t>\tau $ than the diffusion induced exponential decay rate $\xi $. Since we are interested in echoes that we will find occur for $t=2\tau ,3\tau ,\ldots $, we neglect contributions to Eq. (31) from such poles. Thus it suffices to consider only the contribution to Eq. (31) from the pole at $s+\xi +i\omega =0$. Hence Eqs. (30) and (31) yield $$R_e^{(2)*}(t)\cong \int ^{+\infty}_{-\infty}e^{-(i\omega +\xi )(t-\tau )}\frac{f^{(2)}_{1,e}(\omega ,\tau ^+)}{D[-(i\omega +\xi )]} g(\omega )d\omega \ .$$ D. Echoes --------- In order to see how Eq. (32) results in echoes, we recall our previous results, Eqs. (25), (19) and (21) for $f_{1,e}^{(2)}(\omega ,\tau ^+)$, and combine them to obtain
$$f^{(2)}_{1,e}(\omega ,\tau ^+)=d_0d_1h_2h_1^*\frac{\exp (i\omega \tau -\xi \tau )}{D^*[-(i\omega +\xi )]}-d_0d_1\sum _{|n|\geq 2}nh_nh^*_{n-1}\exp [-(in\omega +n^2\xi )\tau ] \ ,$$ where we have used $h_0=0$, $h_{-n}=h^*_n$, $f^{(1)}_{-1}=f_1^{(1)*}$, and the first term on the right side of Eq. (33) corresponds to $n=-1$ in Eq. (25). Putting Eq. (33) into Eq. (32), we see that we have an integral of a sum over terms with exponential time variations of the form $$\exp\{-i\omega [t-(1-n)\tau ]\}\exp \{ -\xi [t+(n^2-1)\tau ]\} \ .$$ Considering the first exponential in Eq. (34), we see that, for large values of $|t-(1-n)\tau |$, there is rapid oscillation of the integrand with $\omega $, and the integral can therefore be expected to be near zero. However, such rapid oscillation is absent near the times $t=(1-n)\tau $, at which a large value of $R_e^{(2)*}$ will occur. Since $t>\tau $, the relevant times occur for $n\leq -1$; e.g., for $n=-1$, we get an echo at $t=2\tau$; for $n=-2$, we get an echo at $t=3\tau $; etc. Therefore, we henceforth replace the summation over $|n|\geq 2$ in Eq. (33) by a summation from $n=-\infty$ to $n=-2$.

E. Evaluation for Lorentzian frequency distribution functions
-------------------------------------------------------------

We now consider the case of a Lorentzian frequency distribution, $$g(\omega )=g_L(\omega
)\equiv \frac{1}{\pi }\frac{\Delta}{(\omega -\bar \omega )^2+\Delta ^2}=\frac{1}{2\pi i}\left\{ \frac{1}{\omega -(\bar \omega +i\Delta )}-\frac{1}{\omega -(\bar \omega -i\Delta )}\right\} \ .$$ The right-most expression for $g_L(\omega )$ makes clear that, when the previously real variable $\omega $ is analytically continued into the complex plane, the function $g_L(\omega )$ results from the sum of two pole contributions, one at $\omega =\bar \omega +i\Delta $, and one at $\omega =\bar \omega -i\Delta $. The quantity $\bar \omega $ represents the average frequency of the distribution, while $\Delta $ represents the width of the distribution. Consideration of the Lorentzian will be particularly useful to us because the integral (32) can be explicitly evaluated, and also because our numerical experiments in Sec. IV will be for the case of a Lorentzian frequency distribution function. As a first illustration we consider the $n=-1$ term which results in an echo at $t=2\tau $. We first evaluate $D(s)$ by inserting the pole-form for $g_L(\omega )$ into Eq. (18) and closing the integration path with a large semicircle of radius approaching infinity. This yields a single residue contribution to $D(s)$, $$D(s)=1-\frac{K}{2}[s+\xi +i(\bar \omega -i\Delta )]^{-1} \ .$$ Note that the solution of $D(s)=0$ occurs at $$s=-i\bar \omega -\left(\xi +\Delta -\frac{K}{2}\right) \
.$$ According to our previous assumptions, we require $K<K_c\equiv 2(\Delta +\xi )$ so that the $R=0$ state is stable, and $(\Delta -K/2)\tau - \xi \tau \gg 1$ so that we can neglect contributions from the pole at the root $D(s)=0$ in our approximation of (31) by (32). Using Eq. (36) and the $n=-1$ contribution to $f^{(2)}_{1,e}$ (i.e., the first term in (33)) in Eq. (32) we obtain for the echo term at $t=2\tau $ (denoted $R^{(2)*}_{2\tau }(t)$), $$R^{(2)*}_{2\tau }(t)=2ih^*_1h_2d_0d_1\Delta \int^{+\infty}_{-\infty} \frac{d\omega }{2\pi i}\cdot \frac{\exp[-i\omega (t-2\tau )-\xi t]}{[(\omega -\bar \omega )-i(\Delta -\frac{K}{2})][(\omega -\bar \omega )+i( \Delta -\frac{K}{2})]}\ .$$ For $t>2\tau $ $(t<2\tau )$ the integrand exponentially approaches zero as $Im(\omega )\rightarrow -\infty $ $(Im(\omega )\rightarrow +\infty)$, and we can therefore close the integration path with a large semicircle in the lower half $\omega $-plane (upper half $\omega $-plane). Thus the integral (38) is evaluated from the pole enclosed by the resulting path \[i.e., the pole $\omega =\bar \omega -i(\Delta -\frac{K}{2})$ for $t>2\tau $, and the pole $\omega =\bar \omega +i(\Delta -\frac{K}{2})$ for $t<2\tau $\], $$R^{(2)*}_{2\tau}(t)=\frac{h^*_1h_2d_0d_1\Delta }{\Delta -(K/2)}e^{-i\bar \omega (t-2\tau )-\xi t}e^{-(\Delta -\frac{K}{2})|t-2\tau |} \ .$$ From Eq. (39) we see that we obtain an echo that is approximately symmetric in shape about
$t=2\tau $ (i.e., the envelope $\exp [-(\Delta -K/2)|t-2\tau |]$) for $\xi \ll (\Delta -\frac{1}{2}K)$. We can similarly evaluate the contribution $R^{(2)*}_{m\tau }(t)$ of echoes at $t=m\tau $ for $m=3,4,\ldots$. For example, the result for the echo at $t=3\tau $ is $$R^{(2)*}_{3\tau }=\frac{2h^*_2h_3d_0d_1\Delta }{\Delta -(K/4)}e^{-\xi (3\tau +t)}e^{-i\bar \omega (t-3\tau )}E(t-3\tau ) \ ,$$ $$E(t-3\tau )= \left\{ \begin{array}{ll} \exp [\Delta (t-3\tau )] \ , & {\rm for} \ t<3\tau \ , \\ \exp [-(\Delta -\frac{1}{2}K)(t-3\tau )] \ , & {\rm for} \ t>3\tau \ . \end{array} \right.$$ Thus, in the case $\xi =0$, the shape of the pulse envelope $E(t-3\tau )$ is asymmetric about $t=3\tau $, increasing at a more rapid exponential rate (namely, $\Delta $) as $t$ increases toward $3\tau $, than the slower exponential rate of decrease (namely, $\Delta -(K/2)$) as $t$ increases away from $3\tau $. This is in contrast to the symmetrically shaped envelope $\exp [-(\Delta -\frac{1}{2}K)|t-2\tau |]$ for the echo at $t=2\tau $. In Appendix II we present an evaluation of $R^{(2)*}_{2\tau }(t)$ for the case of a Gaussian frequency distribution function, $$g(\omega )=g_G(\omega )\equiv [2\pi \Delta ^2]^{-1/2}\exp [-(\omega -\bar \omega )^2/(2\Delta ^2)] \ .$$

F. The small coupling limit
---------------------------

We now consider a general frequency
distribution function $g(\omega )$ but for the case where the coupling between oscillators is small. That is, $K\ll \Delta $, where $\Delta $ denotes the frequency width of $g(\omega )$ about its mean value $\omega =\bar \omega $. In this case a good approximation is provided by setting $K=0$. Thus $D[-(i\omega +\xi )]\cong 1$ and Eq. (33) yields $$f^{(2)}_{1,e}(\omega ,\tau ^+)=d_0d_1\sum ^\infty _{n=1}nh^*_nh_{n+1}\exp [-(-in\omega +n^2\xi )\tau ] \ ,$$ where we have replaced $n$ by $-n$ and used $h_n=h^*_{-n}$. Inserting Eq. (42) into Eq. (32), performing the $\omega $ integration, and relabeling $n+1\rightarrow n$, we obtain $$R_e^{(2)*}(t)=\sum ^\infty _{n=2}(n-1)d_0d_1h^*_{n-1}h_n\tilde g(t-n\tau )e^{-[(n^{2}-2n)\tau +t]\xi } \ ,$$ where $\tilde g(t)$ is defined by $$\tilde g(t)=\int ^{+\infty}_{-\infty}d\omega e^{-i\omega t}g(\omega ) \ .$$ Thus, for $K\ll \Delta $, the shape of the echoes at $t=2\tau ,3\tau ,\ldots$ is directly given by the Fourier transform (44) of the frequency distribution function $g(\omega )$. Another point is that with $K\rightarrow 0$, Eq. (1) shows that the oscillators do not interact, and the nonlinearity needed to produce the echo phenomenon comes entirely from the stimulus function $h(\theta )$.

IV. Simulations
===============

We have performed direct numerical simulations of the system (1) with a Lorentzian oscillator distribution (see Eq. (35)), $\bar \omega =0$, $\Delta =1$ (corresponding to
$K_c=2$), $\hat d_0=\hat d_1$, $K=1$, $\tau =50$, and $\xi =0$. At $t=0^{-}$ we initialize each phase $\theta _i$ for $i=1,2,\ldots ,N$ randomly and independently with a uniform distribution in the interval $(0,2\pi )$. We then apply the mapping given by Eq. (7) with $\hat d_p=\hat d_0$ to each $\theta _i$ in order to simulate the effect of the delta function at $t=0$. Next we integrate Eq. (1) for each $i=1,2,\ldots ,N$ forward in time to $t=\tau ^{-}$, again apply the mapping Eq. (7) (but now with $\hat d_p=\hat d_1$), and we then continue the integration. At each time step we also calculate $R(t)$ using Eq. (8). Figure 2 shows results for $\hat d_0=\hat d_1=1/4$ and $$h(\theta )=\sin \theta +\sin 2\theta \ ,$$ for several different system sizes, $N=10^6$, $10^5$, $10^4$, and $10^3$.

![$|R(t)|$ versus $t$ for (a) $N=10^6$, (b) $N=10^5$, (c) $N=10^4$, and (d) $N=10^3$, showing the echo at $t\cong 2\tau $ and the increase of fluctuations at lower $N$.](fig2)

![$|R(t)|$ versus $t$ blown up around $t\cong 2\tau $, for $N=10^6$, $10^5$, $10^4$, $10^3$ (solid curves) showing the increase of fluctuations at lower $N$. The dotted curve is the theoretical result from Eq. (39) with $\xi =0$.](fig3)

Figure 2(a–d) shows $|R(t)|$
versus $t$ for $0\leq t\leq 125$. The responses to the delta functions at $t=0$ and $\tau $, as well as the echo at time $t=2\tau $, are clearly illustrated. The effect of lower $N$ is to increase the fluctuations, making the echo somewhat less distinct. We do not see any echo at $t=3\tau $. This is in agreement with Eq. (40), since $h_3=0$ for the $h(\theta )$ employed in these computations. Figure 3 shows a blow-up of the numerically computed echo around the time $t=2\tau $ for $N=10^6$, $10^5$, $10^4$, and $10^3$. Also plotted in Fig. 3 (dotted curve) is the result from our theoretical calculation, Eq. (39). Reasonable agreement between the theoretical and computed echo shapes is obtained, although it is somewhat obscured by fluctuation effects at the smaller system sizes $(N)$. While our choice $\hat d_0=\hat d_1=1/4$ might be regarded as questionable for applicability of the small amplitude approximation ($\hat d_p\ll 1$ for $p=0,1$) employed by Eq. (7) and by our theory of Sec. III, we have nonetheless evidently obtained good agreement between the theory and numerical experiment. Figure 4 illustrates the effect of varying the driving amplitude for a network of size $N=10^4$. For $\hat d_0=\hat d_1=1/8$
(Fig. 4(a)) the echo is swamped by the noise and is not seen. For $\hat d_0=\hat d_1=1/4$ (Fig. 4(b), same as 2(a)) the echo seems to have appeared, but because of the noise, this conclusion is somewhat questionable. Finally, at the larger driving of $\hat d_0=\hat d_1=1/2$, the echo is clearly present. Figures 5(a) and 5(b) show the effect of changing $h(\theta )$. In particular, Fig. 5(a) shows numerical results for $\hat d_0=\hat d_1=1/4$, $N=10^5$, and $h(\theta )=\sin \theta $, with all other parameters the same as before. Since $h_2$ is now zero, Eq. (39) now predicts that there is no echo, in agreement with Fig. 5(a). Figure 5(b) shows numerical results for $\hat d_0=\hat d_1=1/4$, $N=10^5$, and $$h(\theta )=\sin \theta +\sin 2\theta +\sin 3\theta,$$ with all other parameters the same as before. Since $h_1$, $h_2$ and $h_3$ are all nonzero, Eqs. (39) and (40) now predict echoes at both $t\cong 2\tau $ and at $t\cong 3\tau $, and this is confirmed by Fig. 5(b). ![Simulation of $10^5$ oscillators for $\tau =100$, $\hat d_0=\hat d_1=1/3$, $K=1=\frac{1}{2}K_c$, $h(\theta )=\sin \theta $. In this case, no echo at $t=2\tau =200$ is observed.](fig4) ![Simulation of $10^5$ oscillators for $\tau =100$, $\hat d_0=\hat d_1=??$,
$K=1=\frac{1}{2}K_c$, $h(\theta )=\sin \theta +\sin 2\theta +\sin 3\theta $. In this case, echoes are seen at $t=2\tau =200$ and at $t=3\tau =300$. The inset shows a blow-up of the numerical result for the echo shape at $t=3\tau $ with the theoretical result, Eq. (41), superposed (dotted curve).](fig5)

Finally, we note that similar numerical experiments to all of the above have been repeated using a Gaussian $g(\omega )$, and these yield similar results (not shown).

V. Discussion
=============

Echo phenomena as used for MRI provide a powerful medical diagnostic tool. Echoes in plasmas have also been used as a basis for measuring velocity space diffusion of plasma particles[@jensen69]. Thus it is of interest to consider whether there are potential diagnostic measurement uses of echoes in the context of situations that can be described by the Kuramoto model and its variants. For example, we note that the amplitude of the echo varies exponentially with $\xi $, providing a possible means of determining the phase diffusion coefficient $\xi $. In particular, by Eq. (39) the amplitude of the echo at $t=2\tau $ varies as $e^{-2\xi \tau }$. Thus the log of the ratio of measurements of the echo amplitude using two different values of $\tau $,
divided by the difference in the $\tau $ values, provides a potential means of estimating $\xi $. Also, as indicated by Eq. (43), if one can lower the coupling $K$ sufficiently, then echoes provide a potential way of determining the oscillator frequency distribution function $g(\omega )$. In particular, for low $K$ the distribution $g(\omega )$ is directly given by the inverse Fourier transform of the echo profile. On the other hand, we have seen from the simulations in Sec. IV that finite $N$ leads to noise-like behavior that may compromise such attempts. We also note that the Kuramoto model is an idealization, and application to any given situation may require modifications of the model and theory to more closely correspond to the situation at hand. We, nevertheless, feel that consideration of echoes for diagnostics may be of potential use. Furthermore, these phenomena are of theoretical interest from at least two points of view. First, as mentioned in Sec. IIIb, the memory required by the echo phenomenon can be thought of as leading to a macroscopically observable consequence of the continuous spectrum[@strogatz92] of the Kuramoto model. A second point of theoretical interest relates to the recent work in Ref.  [@ott]. In
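As a concluding aside, the numerical procedure of Sec. IV can be sketched in a few lines. The sketch below is ours, not the authors' code: we read the stimulus map of Eq. (7) as the lowest-order kick $\theta \rightarrow \theta +\hat d\, h(\theta )$, take Eq. (1) to be the standard mean-field Kuramoto model with $\xi =0$, and all function and variable names are our own.

```python
import numpy as np

# Sketch of the Sec. IV experiment: kick at t=0, integrate to t=tau,
# kick again, and watch |R(t)| for an echo near t=2*tau.
# Assumptions (ours): Eq. (7) is the lowest-order kick
# theta -> theta + d*h(theta); Eq. (1) is the mean-field Kuramoto model.
rng = np.random.default_rng(0)

N, K, tau, dt = 10_000, 1.0, 50.0, 0.05
d0 = d1 = 0.25                          # kick strengths \hat d_0 = \hat d_1 = 1/4

def h(theta):                           # stimulus h(theta) = sin(theta) + sin(2*theta)
    return np.sin(theta) + np.sin(2.0 * theta)

def kick(theta, d):                     # delta-function stimulus of strength d
    return theta + d * h(theta)

def evolve(theta, omega, t_end):
    """Euler-integrate the Kuramoto phases; return final phases and |R| history."""
    R = []
    for _ in range(int(round(t_end / dt))):
        z = np.exp(1j * theta).mean()   # complex order parameter (Eq. (8))
        R.append(np.abs(z))
        # (K/N) sum_j sin(theta_j - theta_i) == K*|z|*sin(arg z - theta_i)
        theta = theta + dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return theta, R

omega = rng.standard_cauchy(N)          # Lorentzian g(omega), mean 0, width 1
theta = rng.uniform(0.0, 2.0 * np.pi, N)

theta = kick(theta, d0)                 # first stimulus at t = 0
theta, R1 = evolve(theta, omega, tau)   # 0 <= t < tau
theta = kick(theta, d1)                 # second stimulus at t = tau
theta, R2 = evolve(theta, omega, 2 * tau)   # tau <= t < 3*tau
```

Here `R2[k]` approximates $|R|$ at $t=\tau +k\,dt$, so the echo predicted at $t=2\tau $ sits near index $\tau /dt=1000$; with $N=10^4$ it should stand only modestly above the $O(N^{-1/2})$ fluctuation floor, consistent with Fig. 2(c).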