In this paper, a diffusion-aggregation equation with delta potential is introduced. Based on the global existence and uniform estimates of solutions to the diffusion-aggregation equation, we also provide a rigorous derivation from a stochastic particle system, introducing an intermediate particle system with a smooth interaction potential. The theoretical results are compared to numerical simulations relying on suitable discretization schemes at the microscopic and macroscopic levels. In particular, the regime switch where the analytic theory fails is analyzed numerically in detail, which allows for a better understanding of the equation.
We report the magnetic properties, electrical resistivity, and Vickers microhardness of as-cast and annealed high-entropy alloys (HEAs) FeCoNiPd and FeCoNiPt with the face-centered cubic structure. The heat treatment at 800 $^{\circ}$C does not significantly affect the physical properties of either HEA. The values of the Curie temperature and the saturation moment at 50 K are 955 K and 1.458 $\mu_\mathrm{B}$/f.u. for the annealed FeCoNiPd, and 851 K and 1.456 $\mu_\mathrm{B}$/f.u. for the annealed FeCoNiPt, respectively. Each HEA is a soft ferromagnet and shows metallic resistivity. The electronic structure calculations of both HEAs support the ferromagnetic ground states. Comparisons between experimental and theoretical values are made for the Curie temperature, the saturation moment, and the residual resistivity. The Vickers microhardness of the annealed FeCoNiPd and FeCoNiPt is 188 HV in both cases. The hardness vs. valence electron count (VEC) per atom plot of these HEAs does not deviate significantly from an expected universal relation forming a broad peak at VEC$\sim$6.8. This study provides hints for designing a soft ferromagnetic HEA with high hardness.
We give a fairly complete characterization of the exact components of a large class of uniformly expanding Markov maps of $\mathbb{R}$. Using this result, for a class of $\mathbb{Z}$-invariant maps and finite modifications thereof, we prove certain properties of infinite mixing recently introduced by the author.
In this paper, we introduce a new three-step iteration process in Banach spaces and prove convergence results for approximating fixed points of nonexpansive mappings. We also show that the newly introduced iteration process converges faster than a number of existing iteration processes. Further, we discuss the solution of a mixed-type Volterra-Fredholm functional nonlinear integral equation.
Real-time, high-resolution 3D reconstruction of scenes hidden from the direct field of view is a challenging field of research with applications in real-life situations related e.g. to surveillance, self-driving cars and rescue missions. Most current techniques recover the 3D structure of a non-line-of-sight (NLOS) static scene by detecting the return signal from the hidden object on a scattering observation area. Here, we demonstrate full-colour retrieval of the 3D shape of a hidden scene by coupling back-projection imaging algorithms with the high-resolution time-of-flight information provided by a single-pixel camera. By using a high-efficiency Single-Photon Avalanche Diode (SPAD) detector, this technique provides the advantage of imaging with no mechanical scanning parts, with acquisition times down to sub-seconds.
The coherent quantum noise cancellation (CQNC) strategy has been employed in single-mode optomechanical systems to promote ultra-sensitive metrology protocols that break the standard quantum limit. The key idea of CQNC is that the backaction noise arising from radiation pressure and driving can be offset by coupling the optical mode to a near-resonant ancillary mode. In this work, continuous weak-force sensing under CQNC is developed in a double-mode optomechanical system consisting of two optical modes with distinct frequencies and a mechanical mode. In particular, under the asymmetrical treatment of driving the higher-frequency optical mode, probing the lower-frequency one, and coupling the probe mode to the ancillary mode, our configuration can be used to resemble conventional CQNC sensing. More importantly, we find that the current CQNC strategy simultaneously stabilizes the double-mode system with respect to both the constrained driving power (the Routh-Hurwitz criterion) and the effective positive mechanical damping (the stable optical-spring condition). Moreover, by exploiting the coupling between the probe mode and the ancillary mode under this nontrivial extension of the CQNC strategy (from the single-mode version to the double-mode one), the rotating-wave term and the counter-rotating term are found to be responsible for the system stability and the noise cancellation, respectively. In realistic situations, our scheme can be implemented in a tripartite optomechanical setup with a membrane in the middle and a twisted-cavity-based weak-torque detector.
Ramsey interferometry is a widely used tool for precisely measuring transition frequencies between two energy levels of a quantum system, with applications in time-keeping, precision spectroscopy, quantum optics, and quantum information. Often, the coherence time of the quantum system surpasses that of the oscillator probing the system, thereby limiting the interrogation time and associated spectral resolution. Correlation spectroscopy overcomes this limitation by probing two quantum systems with the same noisy oscillator for a measurement of their transition frequency difference; this technique has enabled very precise comparisons of atomic clocks. Here, we extend correlation spectroscopy to the case of multiple quantum systems undergoing strong correlated dephasing. We model Ramsey correlation spectroscopy with $N$ particles as a multi-parameter phase estimation problem and demonstrate that multi-particle quantum correlations can assist in reducing the measurement uncertainties even in the absence of entanglement. We derive precision limits and optimal sensing techniques for this problem and compare the performance of probe states and measurements with and without entanglement. Using one- and two-dimensional ion Coulomb crystals with up to 91 qubits, we experimentally demonstrate the advantage of measuring multi-particle quantum correlations for reducing phase uncertainties, and apply correlation spectroscopy to measure ion-ion distances, transition frequency shifts, laser-ion detunings, and path-length fluctuations. Our method can be straightforwardly implemented in experimental setups with globally coherent qubit control and qubit-resolved single-shot read-out and is thus applicable to other physical systems such as neutral atoms in tweezer arrays.
Systems of exchange-coupled spins are commonly used to model ferromagnets. The quantum correlations in such magnets are studied using tools from quantum information theory. Isotropic ferromagnets are shown to possess a universal low-temperature density matrix which precludes entanglement between spins, and the mechanism of entanglement cancellation is investigated, revealing a core of states resistant to pairwise entanglement cancellation. Numerical studies of one-, two-, and three-dimensional lattices as well as irregular geometries show no entanglement in ferromagnets at any temperature or magnetic field strength.
We study a linear cocycle over the irrational rotation $\sigma_{\omega}(x) = x + \omega$ of the circle $\mathbb{T}^{1}$. The cocycle is assumed to be generated by a $C^{1}$-map $A_{\varepsilon}: \mathbb{T}^{1} \to SL(2, \mathbb{R})$ which depends on a small parameter $\varepsilon\ll 1$ and has the form of the Poincar\'e map corresponding to a singularly perturbed Schr\"odinger equation. Under the assumption that the eigenvalues of $A_{\varepsilon}(x)$ are of the form $\exp(\pm \lambda(x)/\varepsilon)$, where $\lambda(x)$ is a positive function, we examine the property of the cocycle to possess an exponential dichotomy (ED) with respect to the parameter $\varepsilon$. We show that in the limit $\varepsilon\to 0$ the cocycle "typically" exhibits ED only if it is exponentially close to a constant cocycle. On the contrary, if the cocycle is not close to a constant one, it does not possess ED, whereas the Lyapunov exponent is "typically" large.
Aiming at progress in mega-electron-volt (MeV) gamma-ray astronomy, which has not yet been well explored, Compton telescope missions with a variety of detector concepts have been proposed. One of the key techniques for these future missions is an event reconstruction algorithm that is able to determine the scattering orders of multiple Compton scattering events and to identify events in which gamma rays escape from the detectors before depositing all of their energy. We revisit previous event reconstruction methods and propose a modified algorithm based on a probabilistic method. First, we present a general formalism of the probabilistic model of Compton scattering describing physical interactions inside the detector and measurement processes. Then, we introduce several approximations in the calculation of the probability functions for efficient computation. For validation, the developed algorithm has been applied to simulation data of a Compton telescope using a liquid argon time projection chamber, a new type of Compton telescope proposed for the GRAMS project. We have confirmed that it works successfully for up to 8-hit events, including correction of incoming gamma-ray energies for escape events. The proposed algorithm can be used for next-generation MeV gamma-ray missions featuring large-volume detectors, e.g., GRAMS.
Standard neural network training based on backpropagation with the delta rule or gradient descent suffers from several weaknesses: poor optimization of the error-weight objective function, a low learning rate, and instability. This paper introduces a hybrid supervised backpropagation learning algorithm which applies the trust-region method for unconstrained optimization of the error objective function using a quasi-Newton method. This optimization leads to a more accurate weight-update system for minimizing the learning error during the learning phase of a multi-layer perceptron [13][14][15]. An augmented line search is used for finding points which satisfy the Wolfe conditions. The hybrid backpropagation algorithm has strong global convergence properties and is robust and efficient in practice.
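As a reference point for the line-search step described above (the abstract does not spell out its augmented variant), a standard bisection line search enforcing the weak Wolfe conditions might look like the following sketch; the function names, constants, and toy objective are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def wolfe_line_search(f, grad, x, p, c1=1e-4, c2=0.9, max_iter=50):
    """Bisection line search returning a step length satisfying the weak
    Wolfe conditions (a textbook routine, not the paper's exact method)."""
    lo, hi = 0.0, np.inf
    alpha = 1.0
    f0, g0 = f(x), grad(x).dot(p)               # g0 < 0 for a descent direction
    for _ in range(max_iter):
        if f(x + alpha * p) > f0 + c1 * alpha * g0:    # sufficient decrease fails
            hi = alpha
        elif grad(x + alpha * p).dot(p) < c2 * g0:     # curvature condition fails
            lo = alpha
        else:
            return alpha                               # both Wolfe conditions hold
        alpha = 2 * lo if np.isinf(hi) else 0.5 * (lo + hi)
    return alpha                                       # best midpoint if not converged

# Toy usage: minimise 0.5*||x||^2 along the steepest-descent direction.
f = lambda x: 0.5 * x.dot(x)
grad = lambda x: x
x0 = np.array([3.0, -4.0])
step = wolfe_line_search(f, grad, x0, -grad(x0))       # accepts alpha = 1 here
```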
Cognitive radio is an intelligent radio that can be programmed and configured dynamically to fully use the frequency resources that are not used by licensed users. It refers to radio devices that are capable of learning and adapting their transmission to the external radio environment, which means they have some kind of intelligence for monitoring the radio environment, learning from it, and making smart decisions. In this paper, we review some examples of the usage of machine learning techniques in cognitive radio networks for implementing the intelligent radio.
For any discrete-time $P$--local martingale $S$ there exists a probability measure $Q \sim P$ such that $S$ is a $Q$--martingale. A new proof for this result is provided. The core idea relies on an appropriate modification of an argument by Chris Rogers, used to prove a version of the fundamental theorem of asset pricing in discrete time. This proof also yields that, for any $\varepsilon>0$, the measure $Q$ can be chosen so that $\frac{dQ}{dP} \leq 1+\varepsilon$.
In this talk we present results obtained when fluid dynamical fluctuations are included in relativistic $3+1$ dimensional viscous fluid dynamics. We discuss effects of the interactions of fluctuations due to nonlinearities and the cutoff dependence.
The Newton series which interpolate finite multiple harmonic sums are useful in the study of multiple zeta values (MZV's). In this paper, we prove that these Newton series can be written as multiple series. As an application, we give a formula for MZV's which contains the duality.
We investigated the current-voltage characteristic of InAs/GaAs quantum dot intermediate band solar cells (QD IBSCs) with different n-type doping densities in the QD layer. The n-type doping evidently increases the open-circuit voltage while decreasing the short-circuit current density, and leads to a conversion efficiency approaching that of the control solar cell; that is, the major role of n-type doping is to suppress the effects of QDs on the current-voltage characteristic. Our model adopts practical parameters for the simulation rather than those from the detailed balance method, so that the results of our simulation are not overestimated.
In this article, we present a survey of recent advances in passive human behaviour recognition in indoor areas using the channel state information (CSI) of commercial WiFi systems. Movement of the human body causes a change in the wireless signal reflections, which results in variations in the CSI. By analyzing the data streams of CSI for different activities and comparing them against stored models, human behaviour can be recognized. This is done by extracting features from CSI data streams and using machine learning techniques to build models and classifiers. The techniques from the literature that are presented herein perform well; however, in place of the machine learning techniques employed in these works, we propose to use deep learning techniques such as a long short-term memory (LSTM) recurrent neural network (RNN), and show the improved performance. We also discuss different challenges such as environment change, frame rate selection, and multi-user scenarios, and suggest possible directions for future work.
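As a rough illustration of the deep learning direction proposed above, a minimal PyTorch LSTM classifier over windows of CSI amplitudes might look as follows; the number of subcarriers, hidden size, and activity classes are assumptions for the sketch, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CSILSTMClassifier(nn.Module):
    """Sketch of an LSTM-RNN activity classifier over CSI streams."""
    def __init__(self, n_subcarriers=30, hidden=128, n_activities=6):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_subcarriers, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_activities)

    def forward(self, csi):               # csi: (batch, time, n_subcarriers)
        _, (h_n, _) = self.lstm(csi)      # final hidden state: (1, batch, hidden)
        return self.head(h_n[-1])         # logits per activity class

# One training step on a dummy batch of CSI windows (8 windows, 200 time steps).
model = CSILSTMClassifier()
x = torch.randn(8, 200, 30)
loss = nn.functional.cross_entropy(model(x), torch.randint(0, 6, (8,)))
loss.backward()
```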
We present a global linear stability analysis of nuclear fuel accumulating on the surface of an accreting neutron star and we identify the conditions under which thermonuclear bursts are triggered. The analysis reproduces all the recognized regimes of hydrogen and helium bursts, and in addition shows that at high accretion rates, near the limit of stable burning, there is a regime of ``delayed mixed bursts'' which is distinct from the more usual ``prompt mixed bursts.'' In delayed mixed bursts, a large fraction of the fuel is burned stably before the burst is triggered. Bursts thus have longer recurrence times, but at the same time have somewhat smaller fluences. Therefore, the parameter alpha, which measures the ratio of the energy released via accretion to that generated through nuclear reactions in the burst, is up to an order of magnitude larger than for prompt bursts. This increase in alpha near the threshold of stable burning has been seen in observations. We explore a wide range of mass accretion rates, neutron star radii and core temperatures, and calculate a variety of burst properties. From a preliminary comparison with data, we suggest that bursting neutron stars may have hot cores, with T_{core} >~ 10^{7.5} K, consistent with interior cooling via the modified URCA or similar low-efficiency process, rather than T_{core} ~ 10^7 K, as expected for the direct URCA process. There is also an indication that neutron star radii are somewhat small, <~ 10 km. Both of these conclusions need to be confirmed by comparing more careful calculations with better data.
Automatically generated static code warnings suffer from a large number of false alarms. Hence, developers only take action on a small percentage of those warnings. To better predict which static code warnings should not be ignored, we suggest that analysts need to look deeper into their algorithms to find choices that better suit the particulars of their specific problem. Specifically, we show here that effective predictors of such warnings can be created by methods that locally adjust the decision boundary (between actionable warnings and others). These methods yield a new high-water mark for recognizing actionable static code warnings. For eight open-source Java projects (cassandra, jmeter, commons, lucene-solr, maven, ant, tomcat, derby), we achieve perfect test results on 4/8 datasets and, overall, a median AUC (area under the true-negative, true-positive curve) of 92%.
This document focuses on sequences with certain properties and periods leading to the first value smaller than the starting value in the Collatz problem. The idea is that, if all starting numbers ultimately lead to a smaller number, then all full sequences lead to 1 with a finite stopping time, so the problem can be reduced to more structured, shorter sequences. It is shown that these sequences exist and follow consistent rules. Potential features of an infinite cycle, also leading to a smaller number, are discussed as well.
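For concreteness, the quantity discussed above (the number of Collatz steps until the trajectory first drops below its starting value) can be computed directly; a plain Python illustration, assuming a starting value of at least 2:

```python
def stopping_time(n: int) -> int:
    """Collatz steps until the trajectory first falls below the start n."""
    m, steps = n, 0
    while m >= n:
        m = m // 2 if m % 2 == 0 else 3 * m + 1
        steps += 1
    return steps

# 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2: six steps to drop below 3.
assert stopping_time(3) == 6
```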
The electronic structure, optical and x-ray absorption spectra, angle dependence of the cyclotron masses and extremal cross sections of the Fermi surface, phonon spectra, electron-phonon Eliashberg and transport spectral functions, temperature dependence of electrical resistivity of the MB2 (M=Ti and Zr) diborides were investigated from first principles using the full potential linear muffin-tin orbital method. The calculations of the dynamic matrix were carried out within the framework of the linear response theory. A good agreement with experimental data of optical and x-ray absorption spectra, phonon spectra, electron-phonon spectral functions, electrical resistivity, cyclotron masses and extremal cross sections of the Fermi surface was achieved.
A variety of independent observational studies have now reported a significant decline in the fraction of Lyman-break galaxies which exhibit Ly-a emission over the redshift interval z=6-7. In combination with the strong damping wing extending redward of Ly-a in the spectrum of the bright z=7.085 quasar ULAS 1120+0641, this has strengthened suggestions that the hydrogen in the intergalactic medium (IGM) is still substantially neutral at z~7. Current theoretical models imply HI fractions as large as 40-90 per cent may be required to explain these data assuming there is no intrinsic evolution in the Ly-a emitter population. We propose that such large neutral fractions are not necessary. Based on a hydrodynamical simulation which reproduces the absorption spectra of high-redshift (z~6-7) quasars, we demonstrate that the opacity of the intervening IGM redward of rest-frame Ly-a can rise rapidly in average regions of the Universe simply because of the increasing incidence of absorption systems which are optically thick to Lyman continuum photons as the tail-end of reionisation is approached. Our simulations suggest these data do not require a large change in the IGM neutral fraction by several tens of per cent from z=6-7, but may instead be indicative of the rapid decrease in the typical mean free path for ionising photons expected during the final stages of reionisation.
We suggest a theory of internal coherent tunneling in the pseudogap region, where the applied voltage is below the free-electron gap. We consider quasi-1D systems where the gap originates from a lattice dimerization, as in polyacetylene, as well as low-symmetry 1D semiconductors. The results may be applied to several types of conjugated polymers, to semiconducting nanotubes and to quantum wires of semiconductors. The approach may be generalized to tunneling in strongly correlated systems showing the pseudogap effect, like the family of high-Tc materials in the undoped limit. We demonstrate the evolution of the tunneling current-voltage characteristics from smearing of the free-electron gap, down to the threshold for tunneling of polarons, and further down to the region of bi-electronic tunneling via bipolarons or kink pairs.
A simple method for transmitting quantum states within a quantum computer is via a quantum spin chain---that is, a path on $n$ vertices. Unweighted paths are of limited use, and so a natural generalization is to consider weighted paths; this has been further generalized to allow for loops (\emph{potentials} in the physics literature). We study the particularly important situation of perfect state transfer (PST) with respect to the corresponding adjacency matrix or Laplacian through the use of orthogonal polynomials. Low-dimensional examples are given in detail. Our main result is that PST with respect to the Laplacian matrix cannot occur for weighted paths on $n\geq 3$ vertices, nor can it occur for certain symmetric weighted trees. The methods used lead us to a conjecture directly linking the rationality of the weights of weighted paths on $n>3$ vertices, with or without loops, with the capacity for PST between the end vertices with respect to the adjacency matrix.
Dynamic texture (DT) exhibits statistical stationarity in the spatial domain and stochastic repetitiveness in the temporal dimension, indicating that different frames of a DT possess a high similarity correlation that constitutes critical prior knowledge. However, existing methods cannot effectively learn a promising synthesis model for high-dimensional DT from a small number of training data. In this paper, we propose a novel DT synthesis method which makes full use of similarity prior knowledge to address this issue. Our method is based on the proposed kernel similarity embedding, which not only mitigates the high-dimensionality and small-sample issues, but also has the advantage of modeling nonlinear feature relationships. Specifically, we first raise two hypotheses that are essential for a DT model to generate new frames using similarity correlation. Then, we integrate kernel learning and the extreme learning machine into a unified synthesis model to learn kernel similarity embeddings for representing DT. Extensive experiments on DT videos collected from the internet and two benchmark datasets, i.e., Gatech Graphcut Textures and Dyntex, demonstrate that the learned kernel similarity embedding effectively exhibits a discriminative representation for DT. Accordingly, our method is capable of preserving the long-term temporal continuity of the synthesized DT sequences with excellent sustainability and generalization. Meanwhile, it effectively generates realistic DT videos with fast speed and low computation compared with the state-of-the-art methods. The code and more synthesis videos are available at our project page https://shiming-chen.github.io/Similarity-page/Similarit.html.
Analog forecasting has been applied in a variety of fields for predicting future states of complex nonlinear systems that require flexible forecasting methods. Past analog methods have almost exclusively been used in an empirical framework without the structure of a model-based approach. We propose a Bayesian model framework for analog forecasting, building upon previous analog methods but accounting for parameter uncertainty. Thus, unlike traditional analog forecasting methods, the use of Bayesian modeling allows one to rigorously quantify uncertainty to obtain realistic posterior predictive distributions. The model is applied to the long-lead time forecasting of mid-May averaged soil moisture anomalies in Iowa over a high-resolution grid of spatial locations. Sea Surface Temperature (SST) is used to find past time periods with similar trajectories to the current pre-forecast period. The analog model is developed on projection coefficients from a basis expansion of the soil moisture and SST fields. Separate models are constructed for locations falling in each Iowa Crop Reporting District (CRD) and the forecasting ability of the proposed model is compared against a variety of alternative methods and metrics.
A foundational idea in the theory of in situ planet formation is the "minimum mass extrasolar nebula" (MMEN), a surface density profile ($\Sigma$) of disk solids that is necessary to form the planets in their present locations. While most previous studies have fit a single power-law to all exoplanets in an observed ensemble, it is unclear whether most exoplanetary systems form from a universal disk template. We use an advanced statistical model for the underlying architectures of multi-planet systems to reconstruct the MMEN. The simulated physical and Kepler-observed catalogs allow us to directly assess the role of detection biases, and in particular the effect of non-transiting or otherwise undetected planets, in altering the inferred MMEN. We find that fitting a power-law of the form $\Sigma = \Sigma_0^* (a/a_0)^\beta$ to each multi-planet system results in a broad distribution of disk profiles; $\Sigma_0^* = 336_{-291}^{+727}$ g/cm$^2$ and $\beta = -1.98_{-1.52}^{+1.55}$ encompass the 16th-84th percentiles of the marginal distributions in an underlying population, where $\Sigma_0^*$ is the normalization at $a_0 = 0.3$ AU. Around half of inner planet-forming disks have minimum solid masses of $\gtrsim 40 M_\oplus$ within 1 AU. While transit observations do not tend to bias the median $\beta$, they can lead to both significantly over- and under-estimated $\Sigma_0^*$ and thus broaden the inferred distribution of disk masses. Nevertheless, detection biases cannot account for the full variance in the observed disk profiles; there is no universal MMEN if all planets formed in situ. The great diversity of solid disk profiles suggests that a substantial fraction ($\gtrsim 23\%$) of planetary systems experienced a history of migration.
We study the irreversible adsorption of spherical $2AnB$ patchy colloids (with two $A$-patches on the poles and $n$ $B$-patches along the equator) on a substrate. In particular, we consider dissimilar $AA$, $AB$, and $BB$ binding probabilities. We characterize the patch-colloid network and its dependence on $n$ and on the binding probabilities. Two growth regimes are identified with different density profiles and we calculate a growth mode diagram as a function of the colloid parameters. We also find that, close to the substrate, the density of the network, which depends on the colloid parameters, is characterized by a depletion zone.
We study the energy convergence of the Karhunen-Lo\`eve decomposition of the turbulent velocity field in a high-Reynolds-number pressure-driven boundary layer as a function of the number of modes. An energy-optimal Karhunen-Lo\`eve (KL) decomposition is obtained from wall-modeled large-eddy simulations at "infinite" Reynolds number. By explicitly using Fourier modes for the horizontal homogeneous directions, we are able to construct a basis of full rank, and we demonstrate that our results have reached statistical convergence. The KL dimension, corresponding to the number of modes per unit volume required to capture 90% of the total turbulent kinetic energy, is found to be $2.4 \times 10^5 |\Omega|/H^3$ (with $|\Omega|$ the domain volume and $H$ the boundary layer height). This is significantly higher than current estimates, which are mostly based on the method of snapshots. In our analysis, we carefully correct for the effect of subgrid scales on these estimates.
We investigate the effect of top trilinear operators in t tbar production at the ILC. We find that the sensitivity to these operators largely surpasses that achievable at the LHC in either neutral- or charged-current processes, allowing one to probe new-physics scales up to 4.5 TeV for a centre-of-mass energy of 500 GeV. We show how the use of beam polarisation and an eventual energy upgrade to 1 TeV allow one to disentangle all effective operator contributions to the Ztt and gamma tt vertices.
The theory of flat pseudo-Riemannian manifolds and flat affine manifolds is closely connected to the topic of prehomogeneous affine representations of Lie groups. In this article, we exhibit several aspects of this correspondence. At the heart of our presentation is a development of the theory of characteristic classes and characters of prehomogeneous affine representations. We give applications concerning flat affine, as well as pseudo-Riemannian and symplectic affine flat manifolds.
Algorithmic decision support (ADS), using Machine-Learning-based AI, is becoming a major part of many processes. Organizations introduce ADS to improve decision-making and use available data, thereby possibly limiting deviations from the normative "homo economicus" and the biases that characterize human decision-making. However, a closer look at the development and use of ADS systems in organizational settings reveals that they necessarily involve a series of largely unspecified human decisions. They begin with deliberations about which decisions to support with ADS, continue with choices made while developing and deploying the ADS, and end with decisions on how to use the ADS output in an organization's operations. The paper presents an overview of these decisions and some relevant behavioral phenomena. It points out directions for further research, which is essential for correctly assessing the processes and their vulnerabilities. Understanding these behavioral aspects is important for successfully implementing ADS in organizations.
Hemispherical power asymmetry has emerged as a new challenge to early-universe cosmology. While the cosmic microwave background (CMB) measurements indicated an asymmetry amplitude $A \simeq 0.07$ at the CMB scale $k_{\rm CMB}\simeq 0.0045\,{\rm Mpc}^{-1}$, high-redshift quasar observations found no significant deviation from statistical isotropy. This conflict can be reconciled in some scale-dependent asymmetry models. We put forward a new parameterization of the scale-dependent asymmetric power spectrum, inspired by a multi-speed inflation model. The 21-cm power spectrum from the epoch of reionization can be used to constrain the scale-dependent hemispherical asymmetry. We demonstrate that an optimal, multi-frequency observation by the Square Kilometre Array (SKA) Phase 2 can impose a constraint on the amplitude of the power asymmetry anomaly at the level of $\Delta A \simeq 0.2$ at $0.056 \lesssim k_{\rm 21cm} \lesssim 0.15 \,{\rm Mpc}^{-1}$. This limit may be further improved by an order of magnitude to $\Delta A \simeq 0.01$ with a cosmic-variance-limited experiment such as the Omniscope.
We report ground-based transmission spectroscopy of the highly irradiated and ultra-short period hot-Jupiter WASP-103b covering the wavelength range $\approx$ 400-600 nm using the FORS2 instrument on the Very Large Telescope. The light curves show significant time-correlated noise which is mainly invariant in wavelength and which we model using a Gaussian process. The precision of our transmission spectrum is improved by applying a common-mode correction derived from the white light curve, reaching typical uncertainties in transit depth of $\approx$ 2x10$^{-4}$ in wavelength bins of 15 nm. After correction for flux contamination from a blended companion star, our observations reveal a featureless spectrum across the full range of the FORS2 observations and we are unable to confirm the Na absorption previously inferred using Gemini/GMOS or the strong Rayleigh scattering observed using broad-band light curves. We performed a Bayesian atmospheric retrieval on the full optical-infrared transmission spectrum using the additional data from Gemini/GMOS, HST/WFC3 and Spitzer observations and recover evidence for H$_2$O absorption at the 4.0$\sigma$ level. However, our observations are not able to completely rule out the presence of Na, which is found at 2.0$\sigma$ in our retrievals. This may in part be explained by patchy/inhomogeneous clouds or hazes damping any absorption features in our FORS2 spectrum, but an inherently small scale height also makes this feature challenging to probe from the ground. Our results nonetheless demonstrate the continuing potential of ground-based observations for investigating exoplanet atmospheres and emphasise the need for the application of consistent and robust statistical techniques to low-resolution spectra in the presence of instrumental systematics.
The purpose of this work is to obtain a diffusion coefficient for magnetic angular momentum transport and material transport in a rotating solar model. We assume that the transport of both angular momentum and chemical elements caused by magnetic fields can be treated as a diffusion process. The diffusion coefficient depends on the stellar radius, the angular velocity, and the configuration of the magnetic fields. By using this coefficient, we find that our model becomes more consistent with the helioseismic results for the total angular momentum, the angular momentum density, and the rotation rate in the radiative region than the one without magnetic fields. Not only can the magnetic fields redistribute angular momentum efficiently, but they can also strengthen the coupling between the radiative and convective zones. As a result, the sharp gradient of the rotation rate at the bottom of the convective zone is reduced. The thickness of the layer of sharp radial change in the rotation rate is about 0.036 $R_{\odot}$ in our model. Furthermore, the difference in the sound-speed square between the seismic Sun and the model is improved by mixing the material that is associated with angular momentum transport.
The task of projecting onto $\ell_p$ norm balls is ubiquitous in statistics and machine learning, yet the availability of actionable algorithms for doing so is largely limited to the special cases $p \in \{0, 1, 2, \infty\}$. In this paper, we introduce novel, scalable methods for projecting onto the $\ell_p$ ball for general $p>0$. For $p \geq 1$, we solve the univariate Lagrangian dual via a dual Newton method. We then carefully design a bisection approach for $p<1$, presenting theoretical and empirical evidence of zero or a small duality gap in the non-convex case. The success of our contributions is thoroughly assessed empirically, and applied to large-scale regularized multi-task learning and compressed sensing.
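As a point of reference for the bisection idea, the classical special case $p = 1$ already admits an elementary bisection on the dual soft-threshold multiplier; the sketch below covers only this special case, not the paper's dual Newton method for $p \geq 1$ or its non-convex bisection for $p < 1$.

```python
import numpy as np

def project_l1_ball(y, radius=1.0, tol=1e-10):
    """Euclidean projection of y onto the l1 ball via bisection on the
    soft-threshold multiplier (classical p = 1 case only)."""
    if np.abs(y).sum() <= radius:
        return y.copy()                            # already feasible
    lo, hi = 0.0, np.abs(y).max()                  # bracket for the threshold
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if np.maximum(np.abs(y) - lam, 0).sum() > radius:
            lo = lam                               # threshold too small
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0)

x = project_l1_ball(np.array([3.0, -1.0, 0.5]), radius=1.0)   # -> [1, 0, 0]
assert abs(np.abs(x).sum() - 1.0) < 1e-6
```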
We present a new logic-based inference engine for natural language inference (NLI) called MonaLog, which is based on natural logic and the monotonicity calculus. In contrast to existing logic-based approaches, our system is intentionally designed to be as lightweight as possible, and operates using a small set of well-known (surface-level) monotonicity facts about quantifiers, lexical items and token-level polarity information. Despite its simplicity, we find our approach to be competitive with other logic-based NLI models on the SICK benchmark. We also use MonaLog in combination with the current state-of-the-art model BERT in a variety of settings, including for compositional data augmentation. We show that MonaLog is capable of generating large amounts of high-quality training data for BERT, improving its accuracy on SICK.
Motivated by new kinematic data in the outer parts of early-type galaxies (ETGs), we re-examine angular momentum (AM) in all galaxy types. We present methods for estimating the specific AM j, focusing on ETGs, to derive relations between stellar j_* and mass M_* (after Fall 1983). We perform analyses of 8 galaxies out to ~10 R_e, finding that data at 2 R_e are sufficient to estimate total j_*. Our results contravene suggestions that ellipticals (Es) harbor large reservoirs of hidden j_* from AM transport in major mergers. We carry out a j_*-M_* analysis of literature data for ~100 nearby bright galaxies of all types. The Es and spirals form parallel j_*-M_* tracks, which for spirals is like the Tully-Fisher relation, but for Es derives from a mass-size-rotation conspiracy. The Es contain ~3-4 times less AM than equal-mass spirals. We decompose the spirals into disks+bulges and find similar j_*-M_* trends to spirals and Es overall. The S0s are intermediate, and we propose that morphological types reflect disk/bulge subcomponents following separate j_*-M_* scaling relations -- providing a physical motivation for characterizing galaxies by mass and bulge/disk ratio. Next, we construct idealized cosmological models of AM content, using a priori estimates of dark matter halo spin and mass. We find that the scatter in halo spin cannot explain the spiral/E j_* differences, but the data are matched if the galaxies retained different fractions of initial j (~60% and ~10%). We consider physical mechanisms for j_* and M_* evolution (outflows, stripping, collapse bias, merging), emphasizing that the vector sum of such processes must produce the observed j_*-M_* relations. A combination of early collapse and multiple mergers (major/minor) may account for the trend for Es. More generally, the observed AM variations represent fundamental constraints for any galaxy formation model.
The Landau-Lifshitz form of the Lorentz-Abraham-Dirac equation in the presence of a plane wave of arbitrary shape and polarization is solved exactly and in closed form. The explicit solution is presented in the particular, paradigmatic cases of a constant crossed field and of a monochromatic wave with circular and with linear polarization.
We investigate whether experts possess differential expertise when making predictions. We note that this would make it possible to aggregate multiple predictions into a result that is more accurate than their consensus average, and that the improvement prospects grow with the amount of differentiation. Turning this argument on its head, we show how differentiation can be measured by how much weighted aggregation improves on simple averaging. Taking stock-market analysts as experts in their domain, we do a retrospective study using historical quarterly earnings forecasts and actual results for large publicly traded companies. We use it to shed new light on the Sinha et al. (1997) result, showing that analysts indeed possess individual expertise, but that their differentiation is modest. On the other hand, they have significant individual bias. Together, these enable a 20%-30% accuracy improvement over consensus average.
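The measurement idea above, scoring differentiation by how much weighted aggregation beats simple averaging, can be illustrated on synthetic forecasts; the per-expert noise levels and inverse-variance weights below are assumptions for a toy example, not the study's estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(size=1000)                 # quantities being forecast
skill = np.array([0.5, 1.0, 2.0])             # per-expert noise std (differential expertise)
forecasts = truth + skill[:, None] * rng.normal(size=(3, 1000))

simple = forecasts.mean(axis=0)               # consensus average
w = (1 / skill**2) / (1 / skill**2).sum()     # inverse-variance weights
weighted = w @ forecasts

mse = lambda est: np.mean((est - truth) ** 2)
# The gap between the two errors measures how differentiated the experts are.
print(mse(simple), mse(weighted))             # weighted < simple here
```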
Maximum likelihood learning with exponential families leads to moment-matching of the sufficient statistics, a classic result. This can be generalized to conditional exponential families and/or when there are hidden data. This document gives a first-principles explanation of these generalized moment-matching conditions, along with a self-contained derivation.
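In standard exponential-family notation (which may differ from the document's), the moment-matching condition referred to above reads:

```latex
% For p_\theta(x) = h(x)\exp\big(\theta^\top T(x) - A(\theta)\big), setting the
% gradient of the average log-likelihood to zero gives
\nabla_\theta \, \frac{1}{N}\sum_{i=1}^{N}\log p_\theta(x_i)
  = \frac{1}{N}\sum_{i=1}^{N} T(x_i) - \nabla A(\theta) = 0
\quad\Longrightarrow\quad
\mathbb{E}_{\hat\theta}\!\left[T(X)\right] = \frac{1}{N}\sum_{i=1}^{N} T(x_i),
```

using the identity $\nabla A(\theta) = \mathbb{E}_\theta[T(X)]$: at the maximum likelihood estimate, the model's expected sufficient statistics match their empirical average.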
A detailed characterization of avalanche dynamics of wet granular media in a rotating drum apparatus is presented. The results confirm the existence of the three wetness regimes observed previously: the granular, the correlated and the viscoplastic regime. These regimes show qualitatively different dynamic behaviors which are reflected in all the investigated quantities. We discuss the effect of interstitial liquid on the characteristic angles of the material and on the avalanche size distribution. These data also reveal logarithmic aging and allow us to map out the phase diagram of the dynamical behavior as a function of liquid content and flow rate. Via quantitative measurements of the flow velocity and the granular flux during avalanches, we characterize novel avalanche types unique to wet media. We also explore the details of viscoplastic flow (observed at the highest liquid contents) in which there are lasting contacts during flow, leading to coherence across the entire sample. This coherence leads to a velocity independent flow depth at high rotation rates and novel robust pattern formation in the granular surface.
We develop a new approach for the computation of the Mullineux involution for the symmetric group and its Hecke algebra using the notion of crystal isomorphism and the Iwahori-Matsumoto involution for the affine Hecke algebra of type A. As a consequence, we obtain several new elementary combinatorial algorithms for its computation, one of which is equivalent to Xu's algorithm (and thus Mullineux' original algorithm). We thus obtain a simple interpretation of these algorithms and a new elementary proof that they indeed compute the Mullineux involution.
The rigorous coupled-wave analysis (RCWA) is one of the most successful and widely used methods for modeling periodic optical structures. It yields fast convergence of the electromagnetic far-field and has been adapted to model various optical devices and wave configurations. In this article, we investigate the accuracy with which the electromagnetic near-field can be calculated by using RCWA and explain the observed slow convergence and numerical artifacts from which it suffers, namely unphysical oscillations at material boundaries due to the Gibbs phenomenon. In order to alleviate these shortcomings, we also introduce a mathematical formulation for accurate near-field calculation in RCWA, for one- and two-dimensional straight and slanted diffraction gratings. This accurate near-field computational approach is tested and evaluated for several representative test structures and configurations in order to illustrate the advantages provided by the proposed modified formulation of the RCWA.
We investigate an attractive atomic Bose-Einstein condensate (BEC) trapped by a double-well potential in the axial direction and by a harmonic potential in the transverse directions. We obtain numerically, for the first time, a quantum phase diagram which includes all three relevant phases of the system: Josephson, spontaneous symmetry breaking (SSB), and collapse. We also consider the coherent dynamics of the BEC and calculate the frequency of the population-imbalance mode in the Josephson phase and in the SSB phase up to the collapse. We show that these phases can be observed by using ultracold vapors of 7Li atoms in a magneto-optical trap.
We study the question of whether coherent neutrino scattering can occur on macroscopic scales, leading to a significant increase of the detection cross section. We concentrate on radiative neutrino scattering on atomic electrons (or on free electrons in a conductor). Such processes can be coherent provided that the net electron recoil momentum, i.e. the momentum transfer from the neutrino minus the momentum of the emitted photon, is sufficiently small. The radiative process is an attractive possibility, as the energy of the emitted photons can be as large as the momentum transfer to the electron system, and therefore the problem of detecting extremely low-energy recoils can be avoided. The requirement of macroscopic coherence severely constrains the phase space available for the scattered particle and the emitted photon. We show that in the case of scattering mediated by the usual weak neutral-current and charged-current interactions this leads to a strong suppression of the elementary cross sections, and therefore the requirement of macroscopic coherence results in a reduction rather than an increase of the total detection cross section. However, for $\nu e$ scattering mediated by neutrino magnetic or electric dipole moments, coherence effects can actually increase the detection rates. Effects of macroscopic coherence can also allow detection of neutrinos in the 100 eV -- few keV energy range, which is currently not accessible to experiment. A similar coherent enhancement mechanism can work for relativistic particles in the dark sector, but not for the conventionally considered non-relativistic dark matter.
Instead of assuming fully loaded cells in analyses of cache-enabled networks with tools of stochastic geometry, we focus on dynamic traffic in this letter. By modeling the traffic dynamics of request arrivals and departures, the probabilities of full-, free-, and modest-load cells in a large-scale cache-enabled network are derived based on the traffic queue state. Moreover, we propose to exploit the packets cached at cache-enabled users as side information to cancel the incoming interference. The packet loss rates for both the cache-enabled and cache-untenable users are then investigated. The simulation results verify our analysis.
We propose a scheme for efficient construction of graph states using realistic linear optics, imperfect photon source and single-photon detectors. For any many-body entanglement represented by tree graph states, we prove that the overall preparation and detection efficiency scales only polynomially with the size of the graph, no matter how small the efficiencies for the photon source and the detectors.
Current efforts in the biomedical sciences and related interdisciplinary fields are focused on gaining a molecular understanding of health and disease, which is a problem of daunting complexity that spans many orders of magnitude in characteristic length scales, from small molecules that regulate cell function to cell ensembles that form tissues and organs working together as an organism. In order to uncover the molecular nature of the emergent properties of a cell, it is essential to measure multiple cell components simultaneously in the same cell. In turn, cell heterogeneity requires multiple cells to be measured in order to understand health and disease in the organism. This review summarizes current efforts towards a data-driven framework that leverages single-cell technologies to build robust signatures of healthy and diseased phenotypes. While some approaches focus on multicolor flow cytometry data and other methods are designed to analyze high-content image-based screens, we emphasize the so-called Supercell/SVM paradigm (recently developed by the authors of this review and collaborators) as a unified framework that captures mesoscopic-scale emergence to build reliable phenotypes. Beyond their specific contributions to basic and translational biomedical research, these efforts illustrate, from a larger perspective, the powerful synergy that might be achieved from bringing together methods and ideas from statistical physics, data mining, and mathematics to solve the most pressing problems currently facing the life sciences.
Let $G_o$ be a semisimple Lie group, let $K_o$ be a maximal compact subgroup of $G_o$ and let $\mathfrak{k}\subset\mathfrak{g}$ denote the complexification of their Lie algebras. Let $G$ be the adjoint group of $\mathfrak{g}$ and let $K$ be the connected Lie subgroup of $G$ with Lie algebra $ad(\mathfrak{k})$. If $U(\mathfrak{g})$ is the universal enveloping algebra of $\mathfrak{g}$ then $U(\mathfrak{g})^K$ will denote the centralizer of $K$ in $U(\mathfrak{g})$. Also let $P:U(\mathfrak{g})\longrightarrow U(\mathfrak{k})\otimes U(\mathfrak{a})$ be the projection map corresponding to the direct sum $U(\mathfrak{g})=\bigl(U(\mathfrak{k})\otimes U(\mathfrak{a})\bigr)\oplus U(\mathfrak{g})\mathfrak{n}$ associated to an Iwasawa decomposition of $G_o$ adapted to $K_o$. In this paper we give a characterization of the image of $U(\mathfrak{g})^K$ under the injective antihomomorphism $P:U(\mathfrak{g})^K\longrightarrow U(\mathfrak{k})^M\otimes U(\mathfrak{a})$, considered by Lepowsky, when $G_o$ is locally isomorphic to F$_4$.
Benchmark stars with known angular diameters are key to calibrating interferometric observations. With the advent of optical interferometry, there is a need for suitably bright, well-vetted calibrator stars over a large portion of the sky. We present a catalog of uniformly computed angular diameters for 1523 stars in the northern hemisphere brighter than V = 6 and with declinations $-15^\circ < \delta < 82^\circ$. The median angular stellar diameter is 0.527 mas. The list has been carefully cleansed of all known binary and multiple stellar systems. We derive the angular diameters for each of the stars by fitting spectral templates to the observed spectral energy distributions (SEDs) from literature fluxes. We compare these derived angular diameters against those measured by optical interferometry for 75 of the stars, as well as to 176 diameter estimates from previous calibrator catalogs, finding in general excellent agreement. The final catalog includes our goodness-of-fit metrics as well as an online atlas of our SED fits. The catalog presented here permits selection of the best calibrator stars for current and future visible-light interferometric observations.
Vector Symbolic Architectures (VSAs) give a way to represent a complex object as a single fixed-length vector, so that similar objects have similar vector representations. These vector representations then become easy to use for machine learning or nearest-neighbor search. We review a previously proposed VSA method, MBAT (Matrix Binding of Additive Terms), which uses multiplication by random matrices for binding related terms. However, multiplying by such matrices introduces instabilities which can harm performance. Making the random matrices orthogonal provably fixes this problem. With respect to larger-scale applications, we show how to apply MBAT vector representations to any data expressed in JSON. JSON is used in numerous programming languages to express complex data, but its native format appears highly unsuited for machine learning. Expressing JSON as a fixed-length vector makes it readily usable for machine learning and nearest-neighbor search. Creating such JSON vectors also shows that a VSA needs to employ binding operations that are non-commutative. VSAs are now ready to try with full-scale practical applications, including healthcare, pharmaceuticals, and genomics. Keywords: MBAT (Matrix Binding of Additive Terms), VSA (Vector Symbolic Architecture), HDC (Hyperdimensional Computing), Distributed Representations, Binding, Orthogonal Matrices, Recurrent Connections, Machine Learning, Search, JSON, VSA Applications
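A minimal sketch of the orthogonal-matrix binding described above, with made-up role matrices and filler vectors for a JSON-like record; the dimensionality and the recovery check are illustrative, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024                                       # vector dimensionality

def random_orthogonal(d, rng):
    """Random orthogonal binding matrix via QR of a Gaussian matrix."""
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

# Hypothetical roles/fillers for a record like {"name": "alice", "age": 30}.
M_name, M_age = random_orthogonal(d, rng), random_orthogonal(d, rng)
v_alice, v_30 = rng.normal(size=d), rng.normal(size=d)

# MBAT-style encoding: bind each filler with its role matrix, then superpose.
record = M_name @ v_alice + M_age @ v_30

# Orthogonality gives exact unbinding (M^T = M^{-1}); the other bound term
# survives only as near-orthogonal noise in high dimension.
recovered = M_name.T @ record
cos = recovered @ v_alice / (np.linalg.norm(recovered) * np.linalg.norm(v_alice))
print(cos)    # well above the ~0 similarity of an unrelated random vector
```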
We have used a model of magnetic accretion to investigate the rotational equilibria of magnetic cataclysmic variables (mCVs). This has enabled us to derive a set of equilibrium spin periods as a function of orbital period and magnetic moment which we use to estimate the magnetic moments of all known intermediate polars. We further show how these equilibrium spin periods relate to the polar synchronisation condition and use these results to calculate the theoretical histogram describing the distribution of magnetic CVs as a function of P_spin / P_orb. We demonstrate that this is in remarkable agreement with the observed distribution assuming that the number of systems as a function of white dwarf magnetic moment is distributed according to N(mu_1) d mu_1 ~ mu_1^{-2} d mu_1.
We give an explicit formula for the limiting gap distribution of slopes of saddle connections on the golden L, or any translation surface in its SL(2, R)-orbit, in particular the double pentagon. This is the first explicit computation of the distribution of gaps for a flat surface that is not a torus cover.
This paper is concerned with a reaction--diffusion system modeling the fixation and the invasion in a population of a gene drive (an allele biasing inheritance by increasing its own transmission to offspring). In our model, the gene drive has a negative effect on the fitness of individuals carrying it, and can therefore decrease the total carrying capacity of the population locally in space. This tends to generate an opposing demographic advection that the gene drive has to overcome in order to invade. While previous reaction--diffusion models neglected this aspect, here we focus on it and try to predict the sign of the traveling wave speed. This turns out to be an analytical challenge, with only partial results within reach, and we complete our theoretical analysis by numerical simulations. Our results indicate that taking into account the interplay between population dynamics and population genetics might actually be crucial, as it can effectively reverse the direction of the invasion and lead to failure. Our findings can be extended to other bistable systems, such as the spread of cytoplasmic incompatibilities caused by Wolbachia.
We analyze Higgs production via the Higgs-strahlung process $e^{+}e^{-}\to h_{k}{\it{l}}^{+}{\it{l}}^{-}$ in an Abelian Extended Supersymmetric SM. We work in the minimum of the potential driven by a large Higgs trilinear coupling, and find that the next-to-lightest Higgs cannot be produced by this process. The other Higgs scalars, namely the lightest and the heaviest, have cross sections comparable to those in the pure SM. It is found that the present model has observable differences from the other popular model, the NMSSM, in the same type of minimum.
We discuss progress in the calculation of Feynman integrals achieved with the help of the Gegenbauer polynomial technique, and demonstrate the results for the most complicated parts of the O(1/N^3) contributions to the critical exponents of \phi^4-theory, for any spacetime dimensionality D.
Creating user defined functions (UDFs) is a powerful method to improve the quality of computer applications, in particular spreadsheets. However, the only direct way to use UDFs in spreadsheets is to switch from the functional and declarative style of spreadsheet formulas to the imperative VBA, which creates a high entry barrier even for proficient spreadsheet users. It has been proposed to extend Excel by UDFs declared by a spreadsheet: user defined spreadsheet functions (UDSFs). In this paper we present a method to create a limited form of UDSFs in Excel without any use of VBA. Calls to those UDSFs utilize what-if data tables to execute the same part of a worksheet several times, thus turning it into a reusable function definition.
We study qubit-mediated energy transfer between two electron reservoirs by adopting a numerically exact influence-functional path-integral method. This non-perturbative technique allows us to study the system's dynamics beyond the weak-coupling limit. Our simulations of the energy current indicate that perturbative Markovian master equation predictions significantly deviate from exact numerical results already at intermediate coupling, $\pi \rho \alpha_{j,j'}\gtrsim 0.4$, where $\rho$ is the metal (Fermi sea) density of states, taken as a constant, and $\alpha_{j,j'}$ is the scattering potential energy of electrons between the $j$ and $j'$ states. Markovian master equation techniques should therefore be used with caution beyond the strictly weak subsystem-bath coupling limit, especially when quantitative knowledge of transport characteristics is desired.
The mass adoption of plug-in electric vehicles (PEVs) requires the deployment of public charging stations. Such facilities are expected to employ distributed generation and storage units to reduce the stress on the grid and boost sustainable transportation. While prior work has made considerable progress in deriving insights for understanding the adverse impacts of PEV charging and how to alleviate them, a critical issue affecting accuracy is the lack of real-world PEV data. As the dynamics and pertinent design of such charging stations heavily depend on the actual customer demand profile, in this paper we present and evaluate the data obtained from a $17$-node charging network equipped with Level $2$ chargers at a major North American university campus. The data covers $166$ weeks, starting from late $2011$. The results indicate that the majority of the customers use charging lots to extend their driving ranges. The demand profile also shows a tremendous opportunity to employ solar generation to fuel the vehicles, as there is a correlation between the peak customer demand and solar irradiation. Finally, we provide a more detailed data analysis and show how to use this information in designing future sustainable charging facilities.
The aim of this paper is to establish a first and second fundamental theorem for $GL(V)$ equivariant polynomial maps from $k$--tuples of matrix variables $End(V)^{k}$ to tensor spaces $End(V)^{\otimes n}$ in the spirit of H. Weyl's book {\em The classical groups} \cite{Weyl} and of symbolic algebra.
In the random geometric graph $G(n,r_n)$, $n$ vertices are placed randomly in Euclidean $d$-space and edges are added between any pair of vertices at distance at most $r_n$ from each other. We establish strong laws of large numbers (LLNs) for a large class of graph parameters, evaluated for $G(n,r_n)$ in the thermodynamic limit with $nr_n^d =$ const., and also in the dense limit with $n r_n^d \to \infty$, $r_n \to 0$. Examples include the domination number, independence number, clique-covering number, eternal domination number and triangle packing number. The general theory is based on certain subadditivity and superadditivity properties, and also yields LLNs for other functionals such as the minimum weight for the travelling salesman, spanning tree, matching, bipartite matching and bipartite travelling salesman problems, for a general class of weight functions with at most polynomial growth of order $d-\varepsilon$, under thermodynamic scaling of the distance parameter.
Moment closures of the Vlasov-Amp{\`e}re system, whereby higher moments are represented as functions of lower moments with the constraint that the resulting fluid system remains Hamiltonian, are investigated by using water-bag theory. The link between the water-bag formalism and fluid models that involve density, fluid velocity, pressure and higher moments is established by introducing suitable thermodynamic variables. The cases of one, two and three water-bags are treated and their Hamiltonian structures are provided. In each case, we give the associated fluid closures and we discuss their Casimir invariants. We show how the method can be extended to an arbitrary number of fields, i.e., an arbitrary number of water-bags and associated moments. The thermodynamic interpretation of the resulting models is discussed. Finally, a general procedure to derive Hamiltonian N-field fluid models is proposed.
Transient Execution Attacks (TEAs) have gradually become a major security threat to modern high-performance processors. They exploit the vulnerability of speculative execution to illegally access private data, and transmit them through timing-based covert channels. While new vulnerabilities are discovered continuously, the covert channels can be categorised into two types: 1) Persistent Type, in which covert channels are based on the layout changes of buffering, e.g. through caches or TLBs; 2) Volatile Type, in which covert channels are based on contention for shared resources, e.g. through execution units or issuing ports. Defenses against the persistent-type covert channels have been well addressed, while those for the volatile type are still rather inadequate. Existing mitigation schemes for the volatile type, such as Speculative Compression and Time-Division-Multiplexing, introduce significant overhead due to the need to stall the pipeline or to disallow resource sharing. In this paper, we look into such attacks and defenses from a new perspective, and propose a scheduling-based mitigation scheme, called SPECWANDS. It consists of three priority-based scheduling policies to prevent an attacker from transmitting the secret in different contention situations. SPECWANDS not only defends against both inter-thread and intra-thread attacks, but also retains most of the performance benefit of speculative execution and resource sharing. We evaluate its runtime overhead on SPEC 2017 benchmarks and realistic programs. The experimental results show that SPECWANDS has a significant performance advantage over the other two representative schemes.
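The flavour of a priority-based policy can be conveyed with a toy arbiter: among requests contending for a shared unit, non-speculative requests always win, so speculative, possibly secret-dependent activity cannot modulate the timing observed through the shared resource. This is an illustration of the general idea only, not the actual SPECWANDS policies.

```python
def arbitrate(requests):
    """Pick the next request for a shared execution port.

    Each request is a dict with 'speculative' (bool) and 'age' (int, higher = older).
    Non-speculative requests take priority, so speculative activity cannot
    delay, and thereby signal to, a non-speculative observer.
    """
    return max(requests, key=lambda r: (not r["speculative"], r["age"]))

reqs = [{"id": "victim-spec", "speculative": True, "age": 9},
        {"id": "attacker", "speculative": False, "age": 1}]
print(arbitrate(reqs)["id"])   # -> 'attacker' wins despite being younger
```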
Strong gravitational lensing offers a wealth of astrophysical information on the background source it affects, provided the lensed source can be reconstructed as if it was seen in the absence of lensing. In the present work, we illustrate how sparse optimisation can address the problem. As a first step towards a full free-form lens modelling technique, we consider linear inversion of the lensed source under sparse regularisation and joint deblending from the lens light profile. The method is based on morphological component analysis, assuming a known mass model. We show with numerical experiments that representing the lens and source light using an undecimated wavelet basis allows us to reconstruct the source and to separate it from the foreground lens at the same time. Both the source and lens light have a non-analytic form, allowing for the flexibility needed in the inversion to represent arbitrarily small and complex luminous structures in the lens and source. In addition, sparse regularisation avoids over-fitting the data and does not require the use of any adaptive mesh or pixel grid. As a consequence, our reconstructed sources can be represented on a grid of very small pixels. Sparse regularisation in the wavelet domain also allows for automated computation of the regularisation parameter, thus minimising the impact of an arbitrary choice of initial parameters. Our inversion technique for a fixed mass distribution can be incorporated into future lens modelling techniques that iterate over the lens mass parameters. The python package corresponding to the algorithms described in this article can be downloaded via the github platform at https://github.com/herjy/SLIT.
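As a toy illustration of the sparse regularisation step (not the SLIT package itself), the sketch below runs ISTA with soft thresholding of wavelet coefficients to invert a known linear operator; the lensing operator, the undecimated transform, and the joint lens-light separation are replaced by simpler stand-ins (a Gaussian blur and a decimated Haar transform).

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def soft(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Stand-in forward operator A (a Gaussian blur) in place of the lensing operator;
# the blur is symmetric, so A is used as its own adjoint below.
A = lambda x: gaussian_filter(x, 2.0)

truth = np.zeros((64, 64)); truth[30:34, 28:36] = 1.0
y = A(truth) + 0.01 * np.random.default_rng(1).normal(size=truth.shape)

x, step, lam = np.zeros_like(y), 1.0, 0.02
for _ in range(100):                          # ISTA iterations
    x = x + step * A(y - A(x))                # gradient step on the data fidelity
    coeffs = pywt.wavedec2(x, "haar", level=3)
    coeffs = [coeffs[0]] + [tuple(soft(d, lam) for d in lvl) for lvl in coeffs[1:]]
    x = pywt.waverec2(coeffs, "haar")         # sparsity step in the wavelet domain
# x now approximates `truth` with sparse wavelet support
```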
We introduce a simple diagrammatic 2-category $\mathscr{A}$ that categorifies the image of the Fock space representation of the Heisenberg algebra and the basic representation of $\mathfrak{sl}_\infty$. We show that $\mathscr{A}$ is equivalent to a truncation of the Khovanov--Lauda categorified quantum group $\mathscr{U}$ of type $A_\infty$, and also to a truncation of Khovanov's Heisenberg 2-category $\mathscr{H}$. This equivalence is a categorification of the principal realization of the basic representation of $\mathfrak{sl}_\infty$. As a result of the categorical equivalences described above, certain actions of $\mathscr{H}$ induce actions of $\mathscr{U}$, and vice versa. In particular, we obtain an explicit action of $\mathscr{U}$ on representations of symmetric groups. We also explicitly compute the Grothendieck group of the truncation of $\mathscr{H}$. The 2-category $\mathscr{A}$ can be viewed as a graphical calculus describing the functors of $i$-induction and $i$-restriction for symmetric groups, together with the natural transformations between their compositions. The resulting computational tool is used to give simple diagrammatic proofs of (apparently new) representation theoretic identities.
In a classic paper Zeeman introduced the k-twist spin of a knot K and showed that the exterior of a twist spin fibers over S^1. In particular this result shows that the knot K # -K is doubly slice. In this paper we give a quick proof of Zeeman's result. The k-twist spin of K also gives rise to two metabolizers for K # -K and we determine these two metabolizers precisely.
We apply a field-theoretical approach to study the structure and thermodynamics of a two-Yukawa fluid confined by a hard wall. We derive mean field equations allowing for numerical evaluation of the density profile which is compared to analytical estimations. Beyond the mean field approximation, analytical expressions for the free energy, the pressure, and the correlation function are derived. Subsequently, contributions to the density profile and the adsorption coefficient due to Gaussian fluctuations are found. Both the mean field and the fluctuation terms of the density profile are shown to satisfy the contact theorem. We further use the contact theorem to improve the Gaussian approximation for the density profile based on a better approximation for the bulk pressure. The results obtained are compared to computer simulation data.
The following notes are based on lectures delivered at the research school Modeling and Control of Open Quantum Systems (Mod\'{e}lisation et contr\^{o}le des syst\`{e}mes quantiques ouverts) at CIRM, Marseille, 16-20 April, 2018, as part of the Trimester \textit{Measurement and Control of Quantum Systems: Theory and Experiments} organized at Institut Henri Poincar\'{e}, Paris, France. The aim is to introduce quantum filtering to an audience with a background in either quantum theory or classical filtering.
Shocks, modelled over a broad range of parameters, are used to construct a new tool to deduce the mechanical energy and physical conditions from observed atomic or molecular emission lines. We compute magnetised, molecular shock models with velocities $V_s=5$-$80$ km s$^{-1}$, preshock proton densities $n_{\rm H}=10^2$-$10^6$ cm$^{-3}$, weak or moderate magnetic field strengths, and in the absence or presence of an external UV radiation field. We develop a simple emission model of an ensemble of shocks for connecting any observed emission lines to the mechanical energy and physical conditions of the system. For this range of parameters we find the full diversity (C-, C$^*$-, CJ-, and J-type) of magnetohydrodynamic shocks. H$_2$ and H are dominant coolants, with up to 30% of the shock kinetic flux escaping in Ly$\alpha$ photons. The reformation of molecules in the cooling tail means H$_2$ is even a good tracer of dissociative shocks and shocks that were initially fully atomic. For each shock model we provide integrated intensities of rovibrational lines of H$_2$, CO, and CH$^+$, atomic H lines, and atomic fine-structure and metastable lines. We demonstrate how to use these shock models to deduce the mechanical energy and physical conditions of extragalactic environments. As a template example, we interpret the CH$^+$(1-0) emission from the Eyelash starburst galaxy. A mechanical energy injection rate of at least $10^{11}$ $L_\odot$ into molecular shocks is required to reproduce the observed line. The low-velocity, externally irradiated shocks are at least an order of magnitude more efficient than the most efficient shocks with no external irradiation, in terms of the total mechanical energy required. We predict differences of more than 2 orders of magnitude in intensities of the pure rotational lines of CO, Ly$\alpha$, metastable lines of O, S$^+$, and N, between representative models.
This report deals with translation invariance of convolutional neural networks (CNNs) for automatic target recognition (ATR) from synthetic aperture radar (SAR) imagery. In particular, the translation invariance of CNNs for SAR ATR represents the robustness against misalignment of target chips extracted from SAR images. To understand the translation invariance of the CNNs, we trained CNNs to classify the MSTAR target chips into ten classes, both with and without data augmentation, and then visualized the translation invariance of the CNNs. According to our results, even with a deep residual network, a CNN trained without data augmentation on aligned images such as the MSTAR target chips shows only limited translation invariance. The more important factor for translation invariance is the use of augmented training data. Furthermore, our CNN using augmented training data achieved a state-of-the-art classification accuracy of 99.6%. These results show the importance of domain-specific data augmentation.
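The augmentation at issue is simply random translation of the target chips; a minimal numpy sketch (the shift range of ±8 pixels is illustrative, not taken from the report):

```python
import numpy as np

def random_shift(chip, max_shift=8, rng=np.random.default_rng()):
    """Randomly translate a 2-D target chip, zero-padding the exposed border."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    out = np.zeros_like(chip)
    # Crop the part of the chip that stays in frame after the shift...
    src = chip[max(0, -dy):chip.shape[0] - max(0, dy),
               max(0, -dx):chip.shape[1] - max(0, dx)]
    # ...and paste it at the shifted position.
    out[max(0, dy):max(0, dy) + src.shape[0],
        max(0, dx):max(0, dx) + src.shape[1]] = src
    return out
```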
The mean field dynamics of an $N$-particle weakly interacting boson system can be described by the nonlinear Hartree equation. In this paper, we present estimates on the $1/N$ rate of convergence of many-body Schr\"{o}dinger dynamics to the one-body nonlinear Hartree dynamics with factorized initial data, for two-body interaction potentials $V$ in $L^3 (\mathbb{R}^3)+ L^{\infty} (\mathbb{R}^3)$.
Generative Adversarial Networks (GANs), though powerful, are hard to train. Several recent works (Brock et al., 2016; Miyato et al., 2018) suggest that controlling the spectra of weight matrices in the discriminator can significantly improve the training of GANs. Motivated by their discovery, we propose a new framework for training GANs, which allows more flexible spectrum control (e.g., making the weight matrices of the discriminator have slow singular value decays). Specifically, we propose a new reparameterization approach for the weight matrices of the discriminator in GANs, which allows us to directly manipulate the spectra of the weight matrices through various regularizers and constraints, without intensively computing singular value decompositions. Theoretically, we further show that the spectrum control improves the generalization ability of GANs. Our experiments on CIFAR-10, STL-10, and ImageNet datasets confirm that, compared to other methods, our proposed method generates images of competitive quality by utilizing spectral normalization and encouraging slow singular value decay.
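For reference, the spectral normalization baseline mentioned above rescales each weight matrix by an estimate of its largest singular value obtained via power iteration; a minimal numpy sketch (the full reparameterization of the paper goes further, shaping the whole spectrum):

```python
import numpy as np

def spectral_normalize(W, u, n_iter=1):
    """Power-iteration estimate of sigma_max(W); returns W / sigma and the
    persistent left singular vector u (carried across training steps)."""
    for _ in range(n_iter):
        v = W.T @ u; v /= np.linalg.norm(v) + 1e-12
        u = W @ v;   u /= np.linalg.norm(u) + 1e-12
    sigma = u @ W @ v          # Rayleigh-quotient estimate of the top singular value
    return W / sigma, u

rng = np.random.default_rng(0)
W, u = rng.normal(size=(256, 128)), rng.normal(size=256)
W_sn, u = spectral_normalize(W, u, n_iter=20)
print(np.linalg.svd(W_sn, compute_uv=False)[0])  # ~= 1 once u has converged
```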
We study a new family of sign-changing solutions to the stationary nonlinear Schr\"odinger equation $$ -\Delta v +q v =|v|^{p-2} v, \qquad \text{in $\mathbb{R}^3$,} $$ with $2<p<\infty$ and $q \ge 0$. These solutions are spiraling in the sense that they are not axially symmetric but invariant under screw motion, i.e., they share the symmetry properties of a helicoid. In addition to existence results, we provide information on the shape of spiraling solutions, which depends on the parameter value representing the rotational slope of the underlying screw motion. Our results complement a related analysis of Del Pino, Musso and Pacard for the Allen-Cahn equation, whereas the nature of results and the underlying variational structure are completely different.
Resonant second harmonic generation between 1550 nm and 775 nm with outside efficiency $> 4.4\times10^{-4}\, \text{mW}^{-1}$ is demonstrated in a gallium phosphide microdisk cavity supporting high-$Q$ modes at visible ($Q \sim 10^4$) and infrared ($Q \sim 10^5$) wavelengths. The double resonance condition was satisfied through intracavity photothermal temperature tuning using $\sim 360\,\mu$W of 1550 nm light input to a fiber taper and resonantly coupled to the microdisk. Above this pump power, the efficiency was observed to decrease. The observed behavior is consistent with a simple model for thermal tuning of the double resonance condition.
We present a measurement of inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV as a function of the centrality of the collision, as estimated from the energy deposited in the Zero Degree Calorimeters. The measurement is performed with the ALICE detector down to zero transverse momentum, $p_{\rm T}$, in the backward ($-4.46 < y_{\rm cms} < -2.96$) and forward ($2.03 < y_{\rm cms} < 3.53$) rapidity intervals in the dimuon decay channel and in the mid-rapidity region ($-1.37 < y_{\rm cms} < 0.43$) in the dielectron decay channel. The backward and forward rapidity intervals correspond to the Pb-going and p-going direction, respectively. The $p_{\rm T}$-differential J/$\psi$ production cross section at backward and forward rapidity is measured for several centrality classes, together with the corresponding average $p_{\rm T}$ and $p^2_{\rm T}$ values. The nuclear modification factor, $Q_{\rm pPb}$, is presented as a function of centrality for the three rapidity intervals, and, additionally, at backward and forward rapidity, as a function of $p_{\rm T}$ for several centrality classes. At mid- and forward rapidity, the J/$\psi$ yield is suppressed up to 40% compared to that in pp interactions scaled by the number of binary collisions. The degree of suppression increases towards central p-Pb collisions at forward rapidity, and with decreasing $p_{\rm T}$ of the J/$\psi$. At backward rapidity, the $Q_{\rm pPb}$ is compatible with unity within the total uncertainties, with an increasing trend from peripheral to central p-Pb collisions.
Detection of radiation signals is at the heart of precision metrology and sensing. In this article we show how the fluctuations in photon counting signals can be exploited to optimally extract information about the physical parameters that govern the dynamics of the emitter. For a simple two-level emitter subject to photon counting, we show that the Fisher information and the Cram\'er--Rao sensitivity bound based on the full detection record can be evaluated from the waiting time distribution in the fluorescence signal which can, in turn, be calculated for both perfect and imperfect detectors by a quantum trajectory analysis. We provide an optimal estimator achieving that bound.
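To make the recipe concrete, the sketch below evaluates the Fisher information about the Rabi frequency numerically from the known waiting-time density of a resonantly driven two-level emitter with unit-efficiency detection, $w(\tau) = (\Gamma\Omega^2/\Omega'^2)\, e^{-\Gamma\tau/2}\sin^2(\Omega'\tau/2)$ with $\Omega' = \sqrt{\Omega^2 - \Gamma^2/4}$; this simplified closed form stands in for the general quantum-trajectory computation described in the paper.

```python
import numpy as np

def waiting_time_density(tau, omega, gamma=1.0):
    # Closed-form waiting-time density; assumes omega > gamma / 2.
    op = np.sqrt(omega**2 - gamma**2 / 4)
    return (gamma * omega**2 / op**2) * np.exp(-gamma * tau / 2) \
           * np.sin(op * tau / 2) ** 2

def fisher_information(omega, gamma=1.0, d=1e-5):
    tau = np.linspace(1e-6, 60.0, 200_000)
    w = waiting_time_density(tau, omega, gamma)
    # Central finite difference of w with respect to omega.
    dw = (waiting_time_density(tau, omega + d, gamma)
          - waiting_time_density(tau, omega - d, gamma)) / (2 * d)
    # Fisher information per detected photon: int (dw/dOmega)^2 / w dtau.
    return np.trapezoid(dw**2 / np.maximum(w, 1e-12), tau)  # np.trapz on NumPy < 2.0

print(fisher_information(omega=2.0))
```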
We compute the full $\mathcal{O}(\alpha_s)$ corrections to stop-antistop annihilation into two gluons and a light quark-antiquark pair within the framework of the Minimal Supersymmetric Standard Model (MSSM), including the non-perturbative Sommerfeld enhancement effect. Numerical results for the total annihilation cross section are shown and the effect on the neutralino relic density is discussed for an example scenario in the phenomenological MSSM.
Quantum Bit String Comparators (QBSC) operate on two sequences of $n$ qubits, enabling the determination of their relationship: equality, greater than, or less than. This is analogous to the way conditional statements are used in programming languages. Consequently, QBSCs play a crucial role in various algorithms that can be executed or adapted for quantum computers. The development of efficient and generalized comparators for any $n$-qubit length has long posed a challenge, as existing designs carry a high cost footprint and introduce quantum delay. Efficient comparators, moreover, are typically tied to inputs of fixed length; without a generalized circuit they cannot be employed at a higher level, though they remain well-suited to problems of limited size. In this paper, we introduce a generalized design for the comparison of two $n$-qubit logic states using just two ancillary bits. The design is examined on the basis of qubit requirements, ancillary bit usage, quantum cost, quantum delay, gate operations, and circuit complexity, and is tested comprehensively on various input lengths. The work allows for sufficient flexibility in the design of quantum algorithms, which can accelerate quantum algorithm development.
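As a small concrete instance of this family of primitives, the following Qiskit sketch implements an equality test between two $n$-qubit registers with a single ancilla; it illustrates the compute/uncompute pattern involved, not the paper's full less-than/greater-than design with two ancillary bits.

```python
from qiskit import QuantumCircuit, QuantumRegister

def equality_comparator(n):
    """Flag ancilla 'anc' iff registers a and b hold identical n-bit states."""
    a, b = QuantumRegister(n, "a"), QuantumRegister(n, "b")
    anc = QuantumRegister(1, "anc")
    qc = QuantumCircuit(a, b, anc)
    for i in range(n):
        qc.cx(a[i], b[i])    # b[i] <- a[i] XOR b[i]  (0 where the bits agree)
        qc.x(b[i])           # 1 where the bits agree
    qc.mcx(list(b), anc[0])  # anc <- AND over all positions
    for i in range(n):       # uncompute to restore register b
        qc.x(b[i])
        qc.cx(a[i], b[i])
    return qc

print(equality_comparator(3))
```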
We propose a root-causing procedure for accelerating system-level debug using rule-based techniques. We describe the procedure and how it provides high quality debug hints for reducing the debug effort. This includes the heuristics for engineering features from logs of many tests, and the data analytics techniques for generating powerful debug hints. As a case study, we used these techniques for root-causing failures of the Power Management (PM) design feature Package-C8 and showed their effectiveness. Furthermore, we propose an approach for mining the root-causing experience and results for reuse, to accelerate future debug activities and reduce dependency on validation experts. We believe that these techniques are beneficial also for other validation activities at different levels of abstraction, for complex hardware, software and firmware systems, both pre-silicon and post-silicon.
We have investigated the effects of early stellar encounters on a protoplanetary disk (planetesimal disk) and found that they can explain the high eccentricities and inclinations observed in the outer part ($>42$AU) of the Edgeworth-Kuiper Belt (EKB). The proto-sun is considered as a member of a stellar aggregation that undergoes dissolution on a timescale $\sim 10^8$yrs, such that a planetesimal disk experiences a flyby encounter at pericenter distance ($q$) on the order of 100AU. We simulated the dynamical evolution of a planetesimal (test particle) disk perturbed by a passing star. We show that the stellar encounter pumps the velocity dispersion in the disk in the outer parts ($> 0.25q$). Planet formation is forestalled in that region. We also find that a stellar encounter with pericenter distance $q \sim 100-200$AU could have pumped up the velocity dispersion of EKB objects outside 42AU to the observed magnitude while preserving that inside Neptune's 3:2 mean-motion resonance (located at 39.5AU), which allows for the efficient capture of objects by the sweeping of the 3:2 resonance during orbital migration by proto-Neptune.
This paper provides a computational analysis of poetry reading audio signals at a large scale to unveil the musicality within professionally-read poems. Although the acoustic characteristics of other types of spoken language have been extensively studied, most of the literature is limited to narrative speech or singing voice, discussing how different they are from each other. In this work, we develop signal processing methods tailored to capture the unique acoustic characteristics of poetry reading based on their silence patterns, temporal variations of local pitch, and beat stability. Our large-scale statistical analyses on three large corpora, consisting of narration (LibriSpeech), singing voice (Intonation), and poetry reading (from The Poetry Foundation), discover that poetry reading does share some musical characteristics with singing voice, although it may also resemble narrative speech.
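As a rough illustration of the three feature families (silence patterns, local pitch variation, beat stability), the sketch below computes off-the-shelf proxies with librosa; the descriptors in the paper are custom-built, and the input file name is hypothetical.

```python
import librosa
import numpy as np

y, sr = librosa.load("reading.wav", sr=22050)   # hypothetical input file

# Silence pattern: durations of the pauses between non-silent intervals.
intervals = librosa.effects.split(y, top_db=30)
pause_durations = np.diff(intervals.ravel())[1::2] / sr

# Local pitch variation: frame-wise F0 via pYIN, then spread of its log values.
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C6"))
pitch_spread = np.nanstd(np.log2(f0))

# Beat stability: variability of inter-beat intervals from the beat tracker.
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
ibi = np.diff(librosa.frames_to_time(beats, sr=sr))
beat_stability = np.std(ibi) / np.mean(ibi)     # lower = more stable
```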
In this paper, we derive some new and interesting identities for Bernoulli, Euler and Hermite polynomials associated with Chebyshev polynomials.
Cardiac Magnetic Resonance Imaging (MRI) plays an important role in the analysis of cardiac function. However, the acquisition is often accompanied by motion artefacts because of the difficulty of breath-hold, especially for patients with acute symptoms. Therefore, it is essential to assess the quality of cardiac MRI for further analysis. Time-consuming manual classification is not conducive to the construction of an end-to-end computer aided diagnostic system. To overcome this problem, an automatic cardiac MRI quality estimation framework using ensemble and transfer learning is proposed in this work. Multiple pre-trained models were initialised and fine-tuned on 2-dimensional image patches sampled from the training data. In the model inference process, decisions from these models are aggregated to make a final prediction. The framework has been evaluated on the CMRxMotion grand challenge (MICCAI 2022) dataset, which is small, multi-class, and imbalanced. It achieved a classification accuracy of 78.8% and 70.0% on the training set (5-fold cross-validation) and a validation set, respectively. The final trained model was also evaluated on an independent test set by the CMRxMotion organisers, achieving a classification accuracy of 72.5% and a Cohen's Kappa of 0.6309 (ranked top 1 in this grand challenge). Our code is available on Github: https://github.com/ruizhe-l/CMRxMotion.
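The decision aggregation itself amounts to averaging class probabilities over fine-tuned models and sampled patches; a minimal sketch, where the model list and the patch sampler are hypothetical stand-ins:

```python
import numpy as np

def ensemble_predict(models, patches):
    """Average class probabilities over all (model, patch) pairs.

    models  : callables mapping a patch to a softmax probability vector
    patches : list of 2-D image patches sampled from one cardiac MRI volume
    """
    probs = np.stack([m(p) for m in models for p in patches])
    return int(np.argmax(probs.mean(axis=0)))   # final quality class
```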
We report the discovery of three X-ray sources within the radio shell G318.2+0.1, one of which may be extended. Two of the sources were detected during the BeppoSAX Galactic Plane Survey and one was found in archival ROSAT data. The fainter BeppoSAX source is coincident with an ultra-compact galactic H {\sc ii} region, and we discuss the possibility that it is a flaring young stellar object, while the other BeppoSAX source has no obvious counterpart. The PSPC source is consistent with emission from a foreground star. The hard spectrum of the brighter BeppoSAX source is consistent with a non-thermal origin, although a thermal nature cannot be formally excluded. If this source is associated with G318.2+0.1, then its hard spectrum suggests that it may be a site of non-thermal electron acceleration.
Radiation effects analysis of instruments operating in harsh radiation environments is crucial for the performance and functionality of electronic devices and components. Engineering design of instruments is usually carried out in Computer Aided Design (CAD) engineering software. Geant4-based Monte Carlo codes are extensively used for particle transport simulation and analysis. However, Geant4 does not directly accept the CAD Standard for the Exchange of Product data (STEP) format. MRADSIM-Converter is a new software tool for STEP to Geometry Description Markup Language (GDML) format conversion, readable by Geant4-based Monte Carlo codes. Validation against two existing converters confirms its higher speed in importing CAD geometries of arbitrary size and complexity, and it provides a user-friendly interface for modifying volume properties.
This paper studies the problem of serving multiple live video streams to several different clients from a single access point over unreliable wireless links, which is expected to be a major consumer of future wireless capacity. This problem has two defining characteristics. On the streaming side, different video streams may generate variable-bit-rate traffic with different traffic patterns. On the network side, the wireless transmissions are unreliable, and the link qualities differ from client to client. In order to alleviate the above stochastic aspects of both video streams and link unreliability, each client typically buffers incoming packets before playing the video. The quality of the video playback subscribed to by each flow depends, among other factors, on both the delay of packets as well as their throughput. In this paper we characterize precisely the capacity of the wireless video server in terms of what combination of joint per-packet-delays and throughputs can be supported for the set of flows, as a function of the buffering delay introduced at the server. We also address how to schedule packets at the access point to satisfy the joint per-packet-delay-throughput performance measure. We test the designed policy on the traces of three movies. From our tests, it appears to outperform other policies by a large margin.
In this study, we propose a task planning framework for multiple robots that builds on a behavior tree (BT). BTs communicate with a data distribution service (DDS) to send and receive data. Since the standard BT derived from one root node with a single tick is unsuitable for multiple robots, a novel type of BT action and improved nodes are proposed to control multiple robots through a DDS asynchronously. To plan tasks for robots efficiently, a single task planning unit is implemented with the proposed task types. The task planning unit assigns tasks to each robot simultaneously through a single coalesced BT. If any robot falls into a fault while performing its assigned task, another BT embedded in the robot is executed; the robot enters recovery mode in order to overcome the fault. To perform this function, the action in the BT corresponding to the task is defined as a variable, which is shared through the DDS so that any action can be exchanged between the task planning unit and the robots. To show the feasibility of our framework in a real-world application, three mobile robots were experimentally coordinated by the proposed single task planning unit via a DDS so that they travelled alternately to four goal positions.
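A stripped-down illustration of the central idea, that the action executed by a BT leaf is a variable that can be exchanged over the middleware, can be written without any DDS dependency; the shared dictionary below is a placeholder for the real DDS topic.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 0
    FAILURE = 1
    RUNNING = 2

class ActionLeaf:
    """BT leaf whose behaviour is a *variable* assigned by the planner."""
    def __init__(self, shared):
        self.shared = shared                 # stand-in for a DDS-subscribed topic

    def tick(self):
        action = self.shared.get("action")   # fetched over DDS in the real system
        if action is None:
            return Status.RUNNING
        print(f"executing {action}")
        return Status.SUCCESS

class Sequence:
    def __init__(self, children): self.children = children
    def tick(self):
        for c in self.children:
            s = c.tick()
            if s is not Status.SUCCESS:
                return s                     # halt on RUNNING/FAILURE, as in a BT
        return Status.SUCCESS

shared = {"action": "go_to(goal_3)"}         # published by the task planning unit
print(Sequence([ActionLeaf(shared)]).tick())
```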
Locally homogeneous Lorentzian three-manifolds with recurrent curvature are special examples of Walker manifolds, that is, they admit a parallel null vector field. We obtain a full classification of the symmetries of these spaces, with particular regard to symmetries related to their curvature: Ricci and matter collineations, curvature and Weyl collineations. Several results are given for the broader class of three-dimensional Walker manifolds.
We propose a scheme to generate photonic tensor network states by sequential scattering of photons in waveguide QED systems. We show that sequential scatterings can convert a series of unentangled photons into any type of matrix product states. We also demonstrate the possibility of generating projected entangled pair states with arbitrary network representation by photon re-scattering.
We study the triviality and hierarchy problem of a Z_2-invariant Yukawa system with massless fermions and a real scalar field, serving as a toy model for the standard-model Higgs sector. Using the functional RG, we look for UV stable fixed points which could render the system asymptotically safe. Whether a balancing of fermionic and bosonic contributions in the RG flow induces such a fixed point depends on the algebraic structure and the degrees of freedom of the system. Within the region of parameter space which can be controlled by a nonperturbative next-to-leading order derivative expansion of the effective action, we find no non-Gaussian fixed point in the case of one or more fermion flavors. The fermion-boson balancing can still be demonstrated within a model system with a small fractional flavor number in the symmetry-broken regime. The UV behavior of this small-N_f system is controlled by a conformal Higgs expectation value. The system has only two physical parameters, implying that the Higgs mass can be predicted. It also naturally explains the heavy mass of the top quark, since there are no RG trajectories connecting the UV fixed point with light top masses.
We consider the following nonlinear Schr\"{o}dinger equation on an exterior domain. \begin{equation} \begin{cases} iu_t+\Delta_g u + ia(x)u - |u|^{p-1}u = 0 \qquad (x,t) \in \Omega\times (0,+\infty), \qquad (1)\cr u\big|_\Gamma = 0\qquad t \in (0,+\infty), \cr u(x,0) = u_0(x)\qquad x \in \Omega, \end{cases} \end{equation} where $1<p<\frac{n+2}{n-2}$, $\Omega\subset\mathbb{R}^n$ ($n\ge3$) is an exterior domain and $(\mathbb{R}^n,g)$ is a complete Riemannian manifold. We establish Morawetz estimates for the system (1) without dissipation ($a(x)\equiv 0$ in (1)) and meanwhile prove exponential stability of the system (1) with a dissipation effective on a neighborhood of infinity. It is worth mentioning that our results differ from existing studies. First, Morawetz estimates for the system (1) are directly derived from the metric $g$ and are independent of the assumption of an (asymptotically) Euclidean metric. In addition, we not only prove exponential stability of the system (1) with a non-uniform energy decay rate, which depends on the initial data, but also prove exponential stability of the system (1) with a uniform energy decay rate. The main methods are the development of Morawetz multipliers in non (asymptotically) Euclidean spaces and compactness-uniqueness arguments.
The transfer of graphene grown by chemical vapor deposition (CVD) using amorphous polymers represents a widely implemented method for graphene-based electronic device fabrication. However, the most commonly used polymer, poly(methyl methacrylate) (PMMA), leaves a residue on the graphene that limits the mobility. Here we report a method for graphene transfer and patterning that employs a perfluoropolymer---Hyflon---as a transfer handle and to protect graphene against contamination from photoresists or other polymers. CVD-grown graphene transferred this way onto LaAlO$_3$/SrTiO$_3$ heterostructures is atomically clean, with high mobility (~30,000 cm$^2$V$^{-1}$s$^{-1}$) near the Dirac point at 2 K and clear, quantized Hall and magneto-resistance. Local control of the LaAlO$_3$/SrTiO$_3$ interfacial metal-insulator transition---through the graphene---is preserved with this transfer method. The use of perfluoropolymers such as Hyflon with CVD-grown graphene and other 2D materials can readily be implemented with other polymers or photoresists.
The use of future contextual information is typically shown to be helpful for acoustic modeling. Recently, we proposed an RNN model called minimal gated recurrent unit with input projection (mGRUIP), in which a context module, namely temporal convolution, is specifically designed to model the future context. This model, mGRUIP with context module (mGRUIP-Ctx), has been shown to be able to utilize the future context effectively, meanwhile with quite low model latency and computation cost. In this paper, we continue to improve mGRUIP-Ctx with two revisions: applying BN methods and enlarging the model context. Experimental results on two Mandarin ASR tasks (8400 hours and 60K hours) show that the revised mGRUIP-Ctx outperforms LSTM by a large margin (11% to 38%). It even performs slightly better than a superior BLSTM on the 8400h task, with 33M fewer parameters and just 290ms model latency.
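The context module is, in essence, a 1-D convolution taken only over current and future frames; a minimal PyTorch sketch of such a look-ahead convolution (kernel size and dilation here are illustrative, not the paper's settings):

```python
import torch
import torch.nn as nn

class FutureContext(nn.Module):
    """Summarize k future frames per step with a temporal convolution."""
    def __init__(self, dim, k=3, dilation=1):
        super().__init__()
        self.k, self.d = k, dilation
        self.conv = nn.Conv1d(dim, dim, kernel_size=k, dilation=dilation)

    def forward(self, x):               # x: (batch, dim, time)
        pad = (self.k - 1) * self.d     # pad on the *right*: look-ahead only
        return self.conv(nn.functional.pad(x, (0, pad)))

x = torch.randn(4, 64, 100)
print(FutureContext(64)(x).shape)       # output at t depends on frames t..t+(k-1)*d
```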
In the light of the recent discovery of a neutron star with a mass accurately determined to be almost two solar masses, it has been suggested that hyperons cannot play a role in the equation of state of dense matter in $\beta$-equilibrium. We re-examine this issue in the most recent development of the quark-meson coupling model. Within a relativistic Hartree-Fock approach and including the full tensor structure at the vector-meson-baryon vertices, we find that not only must hyperons appear in matter at the densities relevant to such a massive star but that the maximum mass predicted is completely consistent with the observation.
We give a very simple derivation of the Atiyah-Patodi-Singer (APS) index theorem and its small generalization by using the path integral of massless Dirac fermions. It is based on Fujikawa's argument relating the axial anomaly to the Atiyah-Singer index theorem, and only a minor modification of that argument is sufficient to show the APS index theorem. The key ingredient is the identification of the APS boundary condition and its generalization as physical state vectors in the Hilbert space of the massless fermion theory. The APS $\eta$-invariant appears as the axial charge of the physical states.
There is a remarkable relation between two kinds of phase space distributions associated to eigenfunctions of the Laplacian of a compact hyperbolic manifold: It was observed in \cite{AZ} that for compact hyperbolic surfaces $X_{\Gamma}=\Gamma\backslash\mathbb{H}$ Wigner distributions $\int_{S^* X_{\Gamma}} a\, dW_{ir_j} = \langle \mathrm{Op}(a)\phi_{ir_j},\phi_{ir_j} \rangle_{L^2(X_{\Gamma})}$ and Patterson--Sullivan distributions $PS_{ir_j}$ are asymptotically equivalent as $r_j\to\infty$. We generalize the definitions of these distributions to all rank one symmetric spaces of noncompact type and introduce off-diagonal elements $PS_{\lambda_j,\lambda_k}$. Further, we give explicit relations between off-diagonal Patterson--Sullivan distributions and off-diagonal Wigner distributions and describe the asymptotic relation between these distributions.
The aim of this paper is to state a sharp version of the K\"onig supremum theorem, an equivalent reformulation of the Hahn--Banach theorem. We apply it to derive statements of Lagrange multiplier, Karush-Kuhn-Tucker, and Fritz John type for nonlinear infinite programs. We also show that a weak concept of convexity coming from minimax theory, infsup-convexity, is the adequate one for this kind of result.
We propose a novel method to measure the $E_G$ statistic from clustering alone. The $E_G$ statistic provides an elegant way of testing the consistency of General Relativity by comparing the geometry of the Universe, probed through gravitational lensing, with the motion of galaxies in that geometry. Current $E_G$ estimators combine galaxy clustering with gravitational lensing, measured either from cosmic shear or from CMB lensing. In this paper, we construct a novel estimator for $E_G$, using only clustering information obtained from two tracers of the large-scale structure: intensity mapping and galaxy clustering. In this estimator, both the velocity of galaxies and gravitational lensing are measured through their impact on clustering. We show that with this estimator, we can suppress the contaminations that affect other $E_G$ estimators and consequently test the validity of General Relativity robustly. We forecast that with the coming generation of surveys like HIRAX and Euclid, we will measure $E_G$ with a precision of up to 7% (3.9% for the more futuristic SKA2).
The 2-block intersection graph (2-BIG) of a twofold triple system (TTS) is the graph whose vertex set is composed of the blocks of the TTS and two vertices are joined by an edge if the corresponding blocks intersect in exactly two elements. The 2-BIGs are themselves interesting graphs: each component is cubic and 3-connected, and a 2-BIG is bipartite exactly when the TTS is decomposable to two Steiner triple systems. Any connected bipartite 2-BIG with no Hamilton cycle is a counter-example to a conjecture posed by Tutte in 1971. Our main result is that there exists an integer $N$ such that for all $v\geq N$, if $v\equiv 1$ or $3\mod{6}$ then there exists a TTS($v$) whose 2-BIG is bipartite and connected but not Hamiltonian. Furthermore, $13<N\leq 663$. Our approach is to construct a TTS($u$) whose 2-BIG is connected bipartite and non-Hamiltonian and embed it within a TTS($v$) where $v>2u$ in such a way that, after a single trade, the 2-BIG of the resulting TTS($v$) is bipartite connected and non-Hamiltonian.