Recently, the first collisional family was identified in the trans-Neptunian belt. The family consists of Haumea and at least ten other ~100 km-sized trans-Neptunian objects (TNOs) located in the region a = 42 - 44.5 AU. In this work, we model the long-term orbital evolution of an ensemble of fragments representing hypothetical post-collision distributions at the time of the family's birth. We consider three distinct scenarios, in which the kinetic energy of the dispersed particles was varied such that their mean ejection velocities (v_eje) were of order 200 m/s, 300 m/s and 400 m/s, respectively. Each of the simulations considered resulted in a collisional family that reproduces the one currently observed. The results suggest that 60-75% of the fragments created in the collision will remain in the trans-Neptunian belt, even after 4 Gyr of dynamical evolution. The surviving particles were typically concentrated in wide regions of orbital element space centred on the initial impact location, with their orbits spread across a region spanning {\Delta}a ~ 6-12 AU, {\Delta}e ~ 0.1-0.15 and {\Delta}i ~ 7-10{\deg}. Most of the survivors populated the so-called Classical and Detached regions of the trans-Neptunian belt, whilst a minor fraction entered the Scattered Disk reservoir (<1%) or were captured in Neptunian mean motion resonances (<10%). In addition, except for those fragments located near strong resonances, the great majority displayed negligible long-term orbital variation. This implies that the orbital distribution of the intrinsic Haumean family can be used to constrain the orbital conditions and physical nature of the collision that created the family, billions of years ago. Indeed, our results suggest that the formation of the Haumean collisional family most likely occurred after the bulk of Neptune's migration was complete, or even some time after the migration had completely ceased.
Contrastive Language-Image Pre-training (CLIP) [13] can leverage large datasets of unlabeled image-text pairs, and has demonstrated impressive performance in various downstream tasks. Given that annotating medical data is time-consuming and laborious, image-text pre-training has promising applications in exploiting large-scale medical image and radiology report datasets. However, medical image-text pre-training faces several challenges: (1) Due to privacy concerns, the amount of available medical data is relatively small compared to natural data, leading to weaker generalization ability of the model. (2) Medical images are highly similar, with only fine-grained differences in subtleties, resulting in a large number of false-negative sample pairs in contrastive learning. (3) Hand-crafted prompts usually differ from naturally occurring medical image reports, and subtle changes in wording can lead to significant differences in performance. In this paper, we propose a unified image-text-label contrastive learning framework based on continuous prompts, with three main contributions. First, we unify the data of images, text, and labels, which greatly expands the training data the model can utilize. Second, we address the issue of data diversity and the impact of hand-crafted prompts on model performance by introducing continuous implicit prompts. Lastly, we propose an image-text-label contrastive training scheme to mitigate the problem of excessive false-negative samples. We demonstrate through extensive experiments that the Unified Medical Contrastive Learning (UMCL) framework exhibits excellent performance on several downstream tasks.
We consider three- and four-dimensional pseudo-Riemannian generalized symmetric spaces, whose invariant metrics were explicitly described in [15]. While four-dimensional pseudo-Riemannian generalized symmetric spaces of types A, C and D are algebraic Ricci solitons, those of type B are not. The Ricci soliton equation for their metrics yields a system of partial differential equations. Solving this system, we prove that almost all four-dimensional pseudo-Riemannian generalized symmetric spaces of type B are Ricci solitons. These examples exhibit some deep differences between the Riemannian and pseudo-Riemannian cases of the Ricci soliton equation, as any homogeneous Riemannian Ricci soliton is algebraic [21]. We also investigate three-dimensional generalized symmetric spaces of any signature and prove that they are Ricci solitons.
We present optical, near-infrared, and radio observations of the Type IIb supernova (SN) 2022crv. We show that it retained a very thin H envelope and transitioned from a SN~IIb to a SN~Ib; the prominent H$\alpha$ seen in the pre-maximum phase diminishes toward the post-maximum phase, while He {\sc i} lines show increasing strength. \texttt{SYNAPPS} modeling of the early spectra of SN~2022crv suggests that the absorption feature at 6200\,\AA\ is explained by a substantial contribution of H$\alpha$ together with Si {\sc ii}, as is also supported by the velocity evolution of H$\alpha$. The light-curve evolution is consistent with the canonical stripped-envelope supernova subclass, but is among the slowest. The light curve lacks the initial cooling phase and shows a bright main peak (peak M$_{V}$=$-$17.82$\pm$0.17 mag), mostly driven by radioactive decay of $\rm^{56}$Ni. The light-curve analysis suggests a thin outer H envelope ($M_{\rm env} \sim$0.05 M$_{\odot}$) and a compact progenitor (R$_{\rm env}$ $\sim$3 R$_{\odot}$). An interaction-powered synchrotron self-absorption (SSA) model can reproduce the radio light curves with a mean shock velocity of 0.1c. The mass-loss rate is estimated to be in the range of (1.9$-$2.8) $\times$ 10$^{-5}$ M$_{\odot}$ yr$^{-1}$ for an assumed wind velocity of 1000 km s$^{-1}$, which is on the high end in comparison with other compact SNe~IIb/Ib. SN~2022crv fills a previously unoccupied parameter space of very compact progenitors, representing a beautiful continuity between the compact and extended progenitor scenarios of SNe~IIb/Ib.
We prove $L^{p}$-Caffarelli-Kohn-Nirenberg type inequalities on homogeneous groups, which form one of the most general subclasses of nilpotent Lie groups, all with sharp constants. We also discuss some of their consequences. Already in the abelian case of $\mathbb{R}^{n}$ our results provide new insights in view of the arbitrariness of the choice of the not necessarily Euclidean quasi-norm.
Liquids flow, making them remarkably distinct from solids and close to gases. At the same time, interactions in liquids are strong as in solids. The combination of these two properties is believed to be the ultimate obstacle to constructing a general theory of liquids. Here, we adopt a new approach to liquids: instead of focusing on the problem of strong interactions, we zero in on the relative contributions of vibrational and diffusional motion in liquids. We subsequently show that from the point of view of thermodynamics, liquid energy and specific heat are given, to a very good approximation, by their vibrational contributions as in solids, for relaxation times spanning 15 orders of magnitude. We therefore find that liquids show an interesting {\it duality} not hitherto known: they are close to solids from the thermodynamical point of view and to gases from the point of view of flow. We discuss the experimental implications of this approach.
A number of experiments for measuring anisotropies of the Cosmic Microwave Background use scanning strategies in which temperature fluctuations are measured along circular scans on the sky. It is possible, from a large number of such intersecting circular scans, to build two-dimensional sky maps for subsequent analysis. However, since instrumental effects, especially the excess low-frequency 1/f noise, project onto such two-dimensional maps in a non-trivial way, we discuss an analysis approach which focuses on the information contained in the individual circular scans. This natural way of looking at CMB data from experiments scanning on circles combines the advantages of the elegant simplicity of Fourier series for the computation of statistics useful for constraining cosmological scenarios, and of superior efficiency in analysing and quantifying most of the crucial instrumental effects.
Equilibrium statistical mechanics rests on the assumption of ergodic dynamics of a system modulo the conservation laws of local observables: extremization of entropy immediately gives the Gibbs ensemble (GE) for energy-conserving systems and a generalized version of it (GGE) when the number of local conserved quantities (LCQ) is more than one. Over the last decade, statistical mechanics has been extended to describe the late-time behaviour of periodically driven (Floquet) quantum matter starting from a generic state. The structure is built on the fundamental assumptions of ergodicity and identification of the relevant "conservation laws" in this inherently non-equilibrium setting. More recently, it has been shown that this statistical mechanics has a much richer structure due to the existence of {\it emergent} conservation laws: these are approximate but stable conservation laws arising {\it due to the drive}, and are not present in the undriven system. Extensive numerical and analytical results support the perpetual stability of these emergent (though approximate) conservation laws, probably even in the thermodynamic limit. This banks on the recent finding of a sharp ergodicity threshold for Floquet thermalization in clean, interacting non-integrable Floquet systems. This opens up a new possibility of stable Floquet engineering in such systems. This review intends to give a theoretical overview of these developments. We conclude by briefly surveying the experimental scenario.
The preequilibrium (nucleon-in, nucleon-out) angular distributions of $^{27}$Al, $^{58}$Ni and $^{90}$Zr have been analyzed in the energy region from 90 to 200 MeV in terms of the Quantum Molecular Dynamics (QMD) theory. First, we show that the present approach can reproduce the measured (p,xp') and (p,xn) angular distributions leading to continuous final states without adjusting any parameters. Second, we show the results of a detailed study of the preequilibrium reaction processes: the step-wise contribution to the angular distribution, a comparison with the quantum-mechanical Feshbach-Kerman-Koonin theory, and the effects of the momentum distribution and of surface refraction/reflection on the quasifree scattering. Finally, the present method is used to assess the importance of multiple preequilibrium particle emission as a function of projectile energy up to 1 GeV.
Natural Language Processing research has recently been dominated by large-scale transformer models. Although they achieve state-of-the-art results on many important language tasks, transformers often require expensive compute resources and days to weeks to train. This is feasible for researchers at big tech companies and leading research universities, but not for scrappy start-up founders, students, and independent researchers. Stephen Merity's SHA-RNN, a compact, hybrid attention-RNN model, is designed for consumer-grade modeling as it requires significantly fewer parameters and less training time to reach near state-of-the-art results. We analyze Merity's model here through an exploratory model analysis over several units of the architecture, considering both training time and overall quality in our assessment. Ultimately, we combine these findings into a new architecture which we call SHAQ: Single Headed Attention Quasi-recurrent Neural Network. With our new architecture we achieve accuracy similar to that of the SHA-RNN while accomplishing a 4x speed boost in training.
A hybrid hydropower plant is a conventional HydroPower Plant (HPP) augmented with a Battery Energy Storage System (BESS) to decrease the wear and tear of sensitive mechanical components and improve the reliability and regulation performance of the overall plant. A central task in controlling hybrid power plants is determining how the total power set-point should be split between the BESS and the hybridized unit (power set-point splitting) as a function of the operational objectives. This paper describes a Model Predictive Control (MPC) framework for hybrid medium- and high-head plants to determine the power set-points of the hydropower unit and the BESS. The splitting policy relies on an explicit formulation of the mechanical loads incurred by the HPP's penstock, which can be damaged due to fatigue when providing regulation services to the grid. By filtering out from the HPP's power set-point the components conducive to excess penstock fatigue and properly controlling the BESS, the proposed MPC is able to maintain the same level of regulation performance while significantly decreasing damage to the hydraulic conduits. A proof of concept by simulations is provided considering a 230 MW medium-head hydropower plant.
In video super-resolution, the spatio-temporal coherence among frames must be exploited appropriately for accurate prediction of the high-resolution frames. Although 2D convolutional neural networks (CNNs) are powerful in modelling images, 3D-CNNs are more suitable for spatio-temporal feature extraction as they can preserve temporal information. To this end, we propose an effective 3D-CNN for video super-resolution, called 3DSRnet, that does not require motion alignment as preprocessing. Our 3DSRnet maintains the temporal depth of spatio-temporal feature maps to maximally capture the temporally nonlinear characteristics between low- and high-resolution frames, and adopts residual learning in conjunction with sub-pixel outputs. It outperforms the best state-of-the-art method by an average of 0.45 and 0.36 dB in PSNR for scales 3 and 4, respectively, on the Vid4 benchmark. Our 3DSRnet is also the first to deal with the performance drop due to scene changes, which is important in practice but has not previously been considered.
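For illustration, here is a minimal sketch of a 3D-CNN video super-resolution network in the spirit of the abstract above; the layer counts, widths, and the toy bicubic residual base are invented and are not the published 3DSRnet configuration:

```python
import torch
import torch.nn as nn

class Toy3DSRNet(nn.Module):
    """Toy 3D-CNN for video SR: preserves temporal depth, sub-pixel output.

    Layer counts/widths are illustrative, not the published 3DSRnet design.
    """
    def __init__(self, scale=3, n_frames=5, feat=64):
        super().__init__()
        # 3D convs keep the temporal axis intact through the body
        self.body = nn.Sequential(
            nn.Conv3d(1, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # Fold time into channels, then map to scale^2 sub-pixel channels
        self.tail = nn.Conv2d(feat * n_frames, scale * scale, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        self.scale = scale

    def forward(self, x):                      # x: (B, 1, T, H, W) low-res frames
        b, _, t, h, w = x.shape
        center = x[:, :, t // 2]               # residual base: upsampled center frame
        base = nn.functional.interpolate(center, scale_factor=self.scale,
                                         mode='bicubic', align_corners=False)
        f = self.body(x).reshape(b, -1, h, w)  # collapse time into channels
        return base + self.shuffle(self.tail(f))  # residual + sub-pixel output

hr = Toy3DSRNet()(torch.randn(2, 1, 5, 32, 32))
print(hr.shape)  # torch.Size([2, 1, 96, 96])
```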
This paper deals with the problem of detecting fallen people lying on the floor by means of a mobile robot equipped with a 3D depth sensor. In the proposed algorithm, inspired by semantic segmentation techniques, the 3D scene is over-segmented into small patches. Fallen people are then detected by means of two SVM classifiers: the first one labels each patch, while the second one captures the spatial relations between them. This novel approach proved to be robust and fast. Indeed, thanks to the use of small patches, fallen people in real cluttered scenes with objects side by side are correctly detected. Moreover, the algorithm can be executed on a mobile robot fitted with a standard laptop, making it possible to exploit the 2D environmental map built by the robot and the multiple points of view obtained during robot navigation. Additionally, the algorithm is robust to illumination changes since it relies on depth data rather than RGB data. All the methods have been thoroughly validated on the IASLAB-RGBD Fallen Person Dataset, which is published online as a further contribution. It consists of several static and dynamic sequences with 15 different people and 2 different environments.
We prove upper and lower bounds for the number of eigenvalues of semi-bounded Schr\"odinger operators in all spatial dimensions. As a corollary, we obtain two-sided estimates for the sum of the negative eigenvalues of atomic Hamiltonians with Kato potentials. Instead of being in terms of the potential itself, as in the usual Lieb-Thirring result, the bounds are in terms of the landscape function, also known as the torsion function, which is a solution of $(-\Delta + V +M)u_M =1$ in $\mathbb{R}^d$; here $M\in\mathbb{R}$ is chosen so that the operator is positive. We further prove that the infimum of $(u_M^{-1} - M)$ is a lower bound for the ground state energy $E_0$ and derive a simple iteration scheme converging to $E_0$.
In complex power systems, nonlinear load flow equations have multiple solutions. Under typical load conditions only one solution is stable and corresponds to a normal operating point, whereas the second solution is not stable and is never realized in practice. However, in future distribution grids with high penetration of distributed generators, more stable solutions may appear because of active or reactive power reversal. The system can then operate in different states, and additional control measures may be required to ensure that it remains at the appropriate point. This paper focuses on the analysis of several cases where the multiple-solution phenomenon is observed. A non-iterative approach for solving load flow equations based on the Gr\"{o}bner basis is introduced to overcome the convergence and computational-efficiency issues associated with standard iterative approaches. All the solutions of the load flow problem, with their existence boundaries, are analyzed for a simple 3-bus model. Furthermore, the stability of the solutions is analyzed using a derived aggregated load dynamics model, and suggestions for preventive control are proposed and discussed. The failure of na\"{i}ve voltage stability criteria is demonstrated and a new voltage stability criterion is proposed. Some of the new solutions of the load flow equations are proved to be stable and/or acceptable under the EN 50160 voltage fluctuation standard.
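To make the non-iterative idea concrete (on a toy 2-bus case, not the paper's 3-bus model; all numbers are hypothetical), a Gröbner basis with lexicographic order triangularizes the polynomial load flow equations so that every solution can be enumerated rather than found one at a time:

```python
# Toy 2-bus load flow solved via a Groebner basis (sympy).
# Slack bus V1 = 1 + j0; PQ bus voltage V2 = e + j*f; line admittance y = g + j*b.
# Power balance at bus 2 gives two polynomial equations in (e, f).
from sympy import symbols, groebner, Rational, solve

e, f = symbols('e f', real=True)
g, b = Rational(2), Rational(-8)        # hypothetical line admittance (p.u.)
P, Q = Rational(-1, 2), Rational(-1, 4) # hypothetical load demand (p.u.)

eqs = [
    g*(e**2 + f**2) - g*e - b*f - P,    # active power mismatch at bus 2
    -b*(e**2 + f**2) + b*e - g*f - Q,   # reactive power mismatch at bus 2
]

# Lex order eliminates e, leaving a univariate polynomial in f:
G = groebner(eqs, e, f, order='lex')
sols = solve(list(G), [e, f], dict=True)
for s in sols:
    print({k: float(v.evalf()) for k, v in s.items()})
# Both load flow solutions appear: the normal high-voltage operating point
# (e close to 1) and the unstable low-voltage one.
```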
This notebook paper presents our system in the ActivityNet Dense Captioning in Video task (task 3). Temporal proposal generation and caption generation are both important to the dense captioning task. Therefore, we propose a proposal ranking model to employ a set of effective feature representations for proposal generation, and ensemble a series of caption models enhanced with context information to generate captions robustly on predicted proposals. Our approach achieves the state-of-the-art performance on the dense video captioning task with 8.529 METEOR score on the challenge testing set.
The spontaneous nucleation and dynamics of topological kink defects have been studied in trapped arrays of 41-43 Yb ions. The number of kinks formed as a function of quench rate across the linear-zigzag transition is measured in the under-damped regime of the inhomogeneous Kibble-Zurek theory. The experimental results agree well with molecular dynamics simulations, which show how losses mask the intrinsic nucleation rate. Simulations indicate that doubling the ion number and optimizing the laser cooling can help reduce the effect of losses. A range of kink dynamics is observed, including configurational changes, motion, lifetime, and sensitivity of the behaviour to ion number.
Supermassive black holes appear to be uniquely associated with galactic bulges. The mean ratio of black hole mass to bulge mass was until recently very uncertain, with ground based, stellar kinematical data giving a value roughly an order of magnitude larger than other techniques. The discrepancy was resolved with the discovery of the M-sigma relation, which simultaneously established a tight correlation between black hole mass and bulge velocity dispersion, and confirmed that the stellar kinematical mass estimates were systematically too large due to failure to resolve the black hole's sphere of influence. There is now excellent agreement between the various techniques for estimating the mean black hole mass, including dynamical mass estimation in quiescent galaxies; reverberation mapping in active galaxies and quasars; and computation of the mean density of compact objects based on integrated quasar light. Implications of the M-sigma relation for the formation of black holes are discussed.
Bounding volumes are an established concept in computer graphics and vision tasks but have seen little change since their early inception. In this work, we study the use of neural networks as bounding volumes. Our key observation is that bounding, which so far has primarily been considered a problem of computational geometry, can be redefined as a problem of learning to classify space into free or occupied. This learning-based approach is particularly advantageous in high-dimensional spaces, such as animated scenes with complex queries, where neural networks are known to excel. However, unlocking neural bounding requires a twist: allowing -- but also limiting -- false positives, while ensuring that the number of false negatives is strictly zero. We enable such tight and conservative results using a dynamically-weighted asymmetric loss function. Our results show that our neural bounding produces up to an order of magnitude fewer false positives than traditional methods. In addition, we propose an extension of our bounding method using early exits that accelerates query speeds by 25%. We also demonstrate that our approach is applicable to non-deep learning models that train within seconds. Our project page is at: https://wenxin-liu.github.io/neural_bounding/.
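Here is a minimal sketch of the central ingredient above, an asymmetric loss that punishes false negatives orders of magnitude harder than false positives so that the learned classifier stays a conservative bound; the fixed weight below is a stand-in for the paper's dynamically-weighted scheme:

```python
import torch
import torch.nn.functional as F

def asymmetric_bounding_loss(pred_logits, occupied, fn_weight=1e4):
    """Asymmetric classification loss for conservative neural bounds.

    pred_logits: classifier output for query points, shape (N,)
    occupied:    1.0 if the point lies inside the bounded geometry, else 0.0
    fn_weight:   heavy penalty on false negatives (occupied space predicted
                 free); fixed here, whereas the paper adapts it dynamically.
    """
    bce = F.binary_cross_entropy_with_logits(pred_logits, occupied, reduction='none')
    w = torch.where(occupied > 0.5,
                    torch.full_like(occupied, fn_weight),
                    torch.ones_like(occupied))
    return (w * bce).mean()

# Toy usage: learn to bound a ball with a linear classifier on 3D points.
net = torch.nn.Linear(3, 1)
pts = torch.rand(1024, 3) * 2 - 1
occ = (pts.norm(dim=1) < 0.5).float()
asymmetric_bounding_loss(net(pts).squeeze(-1), occ).backward()
```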
Ontology-based query answering with existential rules is well understood and implemented for positive queries, in particular conjunctive queries. The situation changes drastically for queries with negation, where there is no agreed-upon semantics or standard implementation. Stratification, as used for Datalog, is not enough for existential rules, since the latter still admit multiple universal models that can differ on negative queries. We therefore propose universal core models as a basis for a meaningful (non-monotonic) semantics for queries with negation. Since cores are hard to compute, we identify syntactic descriptions of queries that can equivalently be answered over other types of models. This leads to fragments of queries with negation that can safely be evaluated by current chase implementations. We establish new techniques to estimate how the core model differs from other universal models, and we incorporate our findings into a new reasoning approach for existential rules with negation.
A large component of the building material concrete consists of aggregate with varying particle sizes between 0.125 and 32 mm. Its actual size distribution significantly affects the quality characteristics of the final concrete in both the fresh and hardened states. The usually unknown variations in the size distribution of the aggregate particles, which can be large especially when using recycled aggregate materials, are typically compensated by an increased usage of cement, which, however, has severe negative impacts on economical and ecological aspects of concrete production. In order to allow precise control of the target properties of the concrete, unknown variations in the size distribution have to be quantified to enable a proper adaptation of the concrete's mixture design in real time. To this end, this paper proposes a deep learning based method for the determination of concrete aggregate grading curves. In this context, we propose a network architecture applying multi-scale feature extraction modules in order to handle the strongly diverse object sizes of the particles. Furthermore, we propose and publish a novel dataset of concrete aggregate used for the quantitative evaluation of our method.
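One standard way to realize such a multi-scale feature extraction module is a set of parallel convolutions with different receptive fields whose outputs are concatenated; the branch kernel sizes and widths below are illustrative guesses, not the paper's exact design:

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel conv branches with different receptive fields, concatenated.

    Kernel sizes/widths are illustrative choices, not the published module.
    """
    def __init__(self, in_ch, branch_ch=16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5, 7)   # small to large particle scales
        ])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([b(x) for b in self.branches], dim=1))

feats = MultiScaleBlock(3)(torch.randn(1, 3, 128, 128))
print(feats.shape)  # torch.Size([1, 64, 128, 128])
```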
We present the results of a 40 ks XMM-Newton observation centered on the variable star V1818 Ori. Using a combination of the XMM-Newton and AllWISE catalog data, we identify a group of about 31 young stellar objects around V1818 Ori. This group is coincident with the eastern edge of the dust ring surrounding Kappa Ori. Previously, we concluded that the young stellar objects on the western side of the ring were formed in an episode of star formation that started 3-5 Myr ago, and are at a distance similar to that of Kappa Ori (250-280 pc) and in the foreground of the Orion A cloud. Here we use the XMM-Newton observation to calculate X-ray fluxes and luminosities of the young stars around V1818 Ori. We find that their X-ray luminosity function (XLF), calculated for a distance of ~270 pc, matches the XLF of the YSOs west of Kappa Ori. We rule out that this group of young stars is associated with Mon R2, as assumed in the literature; rather, they are part of the same stellar population as the Kappa Ori ring.
It is a known phenomenon that adversarial robustness comes at a cost to natural accuracy. To improve this trade-off, this paper proposes an ensemble approach that divides a complex robust-classification task into simpler subtasks. Specifically, fractal divide derives multiple training sets from the training data, and fractal aggregation combines inference outputs from multiple classifiers that are trained on those sets. The resulting ensemble classifiers have a unique property that ensures robustness for an input if certain don't-care conditions are met. The new techniques are evaluated on MNIST and Fashion-MNIST, with no adversarial training. The MNIST classifier has 99% natural accuracy, 70% measured robustness and 36.9% provable robustness, within L2 distance of 2. The Fashion-MNIST classifier has 90% natural accuracy, 54.5% measured robustness and 28.2% provable robustness, within L2 distance of 1.5. Both results are new state of the art, and we also present new state-of-the-art binary results on challenging label-pairs.
We aim to use the concept of a sheaf to establish a link between certain aspects of the set of positive integers, a topic belonging to elementary mathematics, and some fundamental ideas of contemporary mathematics. We hope that this type of approach helps school students to restate some problems of elementary mathematics in a deeper environment suitable for their study.
EEG-based emotion recognition holds significant promise for applications in human-computer interaction, medicine, and neuroscience. While deep learning has shown potential in this field, current approaches usually rely on large-scale, high-quality labeled datasets, limiting the performance of deep learning. Self-supervised learning offers a solution by automatically generating labels, but its inter-subject generalizability remains under-explored. For this reason, our interest lies in offering a self-supervised learning paradigm with better inter-subject generalizability. Inspired by recent efforts in combining low-level and high-level tasks in deep learning, we propose a cascaded self-supervised architecture for EEG emotion recognition. We then introduce a low-level task, time-to-frequency reconstruction (TFR), which leverages the inherent time-frequency relationship in EEG signals. Our architecture integrates it with high-level contrastive learning modules, performing self-supervised learning for EEG-based emotion recognition. Experiments on the DEAP and DREAMER datasets demonstrate the superior performance of our method over similar works. The results also highlight the indispensability of the TFR task and the robustness of our method to label scarcity, validating the effectiveness of the proposed method.
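A minimal sketch of what a time-to-frequency reconstruction pretext task can look like, regressing the magnitude spectrum of a raw EEG window so that labels come for free from the FFT; the window length and the tiny stand-in encoder are hypothetical, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Self-supervised TFR-style pretext task (sketch): regress the magnitude
# spectrum of each raw EEG window; labels come for free from the FFT.
win = 256                                  # samples per EEG window (hypothetical)
encoder = nn.Sequential(                   # stand-in for the real encoder
    nn.Linear(win, 128), nn.ReLU(),
    nn.Linear(128, win // 2 + 1),          # rfft of a length-256 window: 129 bins
)

x = torch.randn(32, win)                   # batch of raw EEG windows
target = torch.fft.rfft(x, dim=-1).abs()   # "free" frequency-domain labels
loss = nn.functional.mse_loss(encoder(x), target)
loss.backward()
```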
In the second magnetohydrodynamic (MHD) ballooning stable domain of a high-beta tokamak plasma, the Schroedinger equation for ideal MHD shear Alfven waves has discrete solutions corresponding to standing waves trapped between pressure-gradient-induced potential wells. Our goal is to understand how these so-called alpha-induced toroidal Alfven eigenmodes (alpha-TAEs) are modified by the effects of finite Larmor radii (FLR) and kinetic compression of thermal ions in the limit of massless electrons. In the present paper, we neglect kinetic compression in order to isolate and examine in detail the effect of FLR terms. After a review of the physics of ideal MHD alpha-TAEs, the effect of FLR on the Schroedinger potential, eigenfunctions and eigenvalues is described with the use of parameter scans. The results are used in a companion paper to identify instabilities driven by wave-particle resonances in the second stable domain.
In the analysis of neutron scattering measurements of condensed matter structure, it normally suffices to treat the incident and scattered neutron beams as if composed of incoherent distributions of plane waves with wavevectors of different magnitudes and directions which are taken to define an instrumental resolution. However, despite the wide-ranging applicability of this conventional treatment, there are cases in which the wave function of an individual neutron in the beam must be described more accurately by a spatially localized packet, in particular with respect to its transverse extent normal to its mean direction of propagation. One such case involves the creation of orbital angular momentum (OAM) states in a neutron via interaction with a material device of a given size. It is shown in the work reported here that there exist two distinct measures of coherence of special significance and utility for describing neutron beams in scattering studies of materials in general. One measure corresponds to the coherent superposition of basis functions and their wavevectors which constitute each individual neutron packet state function whereas the other measure can be associated with an incoherent distribution of mean wavevectors of the individual neutron packets in a beam. Both the distribution of the mean wavevectors of individual packets in the beam as well as the wavevector components of the superposition of basis functions within an individual packet can contribute to the conventional notion of instrumental resolution. However, it is the transverse spatial extent of packet wavefronts alone that determines the area within which a coherent scattering process can occur in the first place. This picture is shown to be consistent with standard quantum theory. It is also demonstrated that these two measures of coherence can be distinguished from one another experimentally.
We propose a dielectric structure which focuses the laser light well beyond the diffraction limit and thus considerably enhances the exerted optical trapping force upon dielectric nanoparticles. Although the structure supports a Fabry-Perot resonance, it actually acts as a nanoantenna in that the role of the resonance is to funnel the laser light into the structure. In comparison with the lens illuminating the structure, the proposed structure offers roughly a 10000-fold enhancement in the trapping force - part of this enhancement comes from an 80-fold enhancement in the field intensity, and the remaining comes from a 130-fold enhancement in the normalized gradient of the field intensity (viz. the gradient of the field intensity divided by the field intensity). Also, the proposed structure offers roughly a 100-fold enhancement in the depth of the trapping potential. It is noteworthy that 'self-induced back-action trapping' (SIBA), which has recently been the focus of interest in the context of optical resonators, does not take place in the proposed resonator. In this paper, we also point out some misconceptions about SIBA together with some hitherto unappreciated subtleties of the dipole approximation.
A trainable filter-based higher-order Markov Random Field (MRF) model, the so-called Fields of Experts (FoE), has proven to be a highly effective image prior for many classic image restoration problems. Generally, two options are available to incorporate the learned FoE prior in the inference procedure: (1) the sampling-based minimum mean square error (MMSE) estimate, and (2) the energy minimization-based maximum a posteriori (MAP) estimate. This letter is devoted to the FoE prior based single image super-resolution (SR) problem, and we suggest making use of the MAP estimate for inference based on two facts: (I) it is well known that MAP inference has the remarkable advantage of high computational efficiency, while the sampling-based MMSE estimate is very time consuming; (II) practical SR experiments demonstrate that the MAP estimate works equally well compared to the MMSE estimate with exactly the same FoE prior model. Moreover, it can lead to even further improvements by incorporating our discriminatively trained FoE prior model. In summary, we hold that for the higher-order natural image prior based SR problem, it is better to employ the MAP estimate for inference.
The goal of this paper is the analytical validation of a model of Cermelli and Gurtin for an evolution law for systems of screw dislocations under the assumption of antiplane shear. The motion of the dislocations is restricted to a discrete set of glide directions, which are properties of the material. The evolution law is given by a "maximal dissipation criterion", leading to a system of differential inclusions. Short time existence, uniqueness, cross-slip, and fine cross-slip of solutions are proved.
Displacement flows are common in hydraulic fracturing, as fracking fluids of different composition are injected sequentially in the fracture. The injection of an immiscible fluid at the center of a liquid-filled fracture results in the growth of the fracture and the outward displacement of the interface between the two liquids. We study the dynamics of the fluid-driven fracture which is controlled by the competition between viscous, elastic, and toughness-related stresses. We use a model experiment to characterize the dynamics of the fracture for a range of mechanical properties of the fractured material and fracturing fluids. We form the liquid-filled pre-fracture in an elastic brittle matrix of gelatin. The displacing liquid is then injected. We record the radius and aperture of the fracture, and the position of the interface between the two liquids. In a typical experiment, the axisymmetric radial viscous flow is accommodated by the elastic deformation and fracturing of the matrix. We model the coupling between elastic deformation, viscous dissipation, and fracture propagation and recover the two fracturing regimes identified for single fluid injection. For the viscous-dominated and toughness-dominated regimes, we derive scaling equations that describe the crack growth due to a displacement flow and show the influence of the pre-existing fracture on the crack dynamics through a finite initial volume and an average viscosity of the fluid in the fracture.
Let $I\supsetneq J$ be two squarefree monomial ideals of a polynomial algebra over a field, generated in degree $\geq d$, resp. $\geq d+1$. Suppose that $I$ is either generated by four squarefree monomials of degree $d$ and others of degree $\geq d+1$, or by five special monomials of degree $d$. If the Stanley depth of $I/J$ is $\leq d+1$, then the usual depth of $I/J$ is $\leq d+1$ too.
A generic qubit unitary operator affected by quantum noise is duplicated and inserted in a coherently superposed channel, superposing two paths offered to a probe qubit across the noisy unitary, and driven by a control qubit. A characterization is performed of the transformation realized by the superposed channel on the joint state of the probe-control qubit pair. The superposed channel is then specifically analyzed for the fundamental metrological task of phase estimation on the noisy unitary, with the performance assessed by the Fisher information, classical or quantum. A comparison is made with conventional estimation techniques and also with a quantum switched channel with indefinite causal order recently investigated for a similar task of phase estimation. In the analysis here, a first important observation is that the control qubit of the superposed channel, although it never directly interacts with the unitary being estimated, can nevertheless be measured alone for effective estimation, while discarding the probe qubit that interacts with the unitary. This property is also present with the switched channel but is inaccessible with conventional techniques. The optimal measurement of the control qubit here is characterized in general conditions. A second important observation is that the noise plays an essential role in coupling the control qubit to the unitary, and that the control qubit remains operative for phase estimation at very strong noise, even with a fully depolarizing noise, whereas conventional estimation and the switched channel become inoperative in these conditions. The results extend the analysis of the capabilities of coherently controlled channels which represent novel devices exploitable for quantum signal and information processing.
In the past decade, quantum algorithms have been found which outperform the best classical solutions known for certain classical problems, as well as the best classical methods known for simulation of certain quantum systems. This suggests that they may also speed up the simulation of some classical systems. I describe one class of discrete quantum algorithms which do so, quantum lattice gas automata, and show how to implement them efficiently on standard quantum computers.
Collisional relaxation rates of collective modes in nuclei are calculated using the Levinson equation for the reduced density matrix with a memory-dependent collision term. Linearizing the collision integral, two contributions have to be distinguished: the one from the quasiparticle energy and the one from the occupation factors. The first yields the known Landau formula for zero-sound damping, and the second leads to the Fermi gas model of Ref. 1 with an additional factor of 3 in front of the frequencies. Adding both contributions, we obtain a final relaxation rate for the Fermi liquid model. Calculations of the temperature dependence of the damping rates and of the shape evolution of the IVGDR are in good agreement with experiment and show only minor differences between the two models.
The inclusive production of charged hadrons in the collisions of quasi-real photons e+e- -> e+e- +X has been measured using the OPAL detector at LEP. The data were taken at e+e- centre-of-mass energies from 183 to 209 GeV. The differential cross-sections as a function of the transverse momentum and the pseudorapidity of the hadrons are compared to theoretical calculations of up to next-to-leading order (NLO) in the strong coupling constant alpha{s}. The data are also compared to a measurement by the L3 Collaboration, in which a large deviation from the NLO predictions is observed.
We adopt A. J. Irving's sieve method to study the almost-prime values produced by products of irreducible polynomials evaluated at prime arguments. This generalizes the previous results of Irving and Kao, who separately examined the almost-prime values of a single irreducible polynomial evaluated at prime arguments.
Measured acoustic data can be contaminated by noise. This typically happens when microphones are mounted in a wind tunnel wall or on the fuselage of an aircraft, where hydrodynamic pressure fluctuations of the Turbulent Boundary Layer (TBL) can mask the acoustic pressures of interest. For measurements done with an array of microphones, methods exist for denoising the acoustic data. Use is made of the fact that the noise is usually concentrated in the diagonal of the Cross-Spectral Matrix, because of the short spatial coherence of TBL noise. This paper reviews several existing denoising methods and considers the use of Conventional Beamforming, Source Power Integration and CLEAN-SC for this purpose. A comparison between the methods is made using synthesized array data.
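For illustration, a minimal numerical sketch of the diagonal-removal idea in Conventional Beamforming; the array geometry, frequency, source position, and noise levels are all invented:

```python
import numpy as np

# Conventional frequency-domain beamforming with diagonal removal (sketch).
rng = np.random.default_rng(0)
c, freq = 343.0, 2000.0                      # speed of sound [m/s], frequency [Hz]
k = 2 * np.pi * freq / c
mics = rng.uniform(-0.5, 0.5, (32, 3)); mics[:, 2] = 0.0
src = np.array([0.1, -0.05, 1.0])            # hypothetical source position [m]

# Synthetic CSM: one monopole plus strong uncorrelated "TBL" noise, which,
# being spatially incoherent, lands on the diagonal.
r = np.linalg.norm(mics - src, axis=1)
p = np.exp(-1j * k * r) / r                  # monopole pressures at the mics
csm = np.outer(p, p.conj()) + 10.0 * np.diag(rng.uniform(0.5, 1.5, 32))

def beamform(csm, grid_pt, remove_diag=True):
    d = np.linalg.norm(mics - grid_pt, axis=1)
    g = np.exp(-1j * k * d) / d              # steering vector
    w = g / np.vdot(g, g)                    # normalized weights
    C = csm.copy()
    if remove_diag:
        np.fill_diagonal(C, 0.0)             # discard noise-dominated diagonal
    return np.real(np.vdot(w, C @ w))

print(beamform(csm, src, remove_diag=False))  # biased upward by the noise diagonal
print(beamform(csm, src, remove_diag=True))   # close to the true source power (~1)
```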
This letter addresses the constraint compatibility problem of control barrier functions (CBFs), which occurs when a safety-critical CBF requires a system to apply more control effort than it is capable of generating. This inevitably leads to a safety violation, which transitions the system to an unsafe (and possibly dangerous) trajectory. We resolve the constraint compatibility problem by constructing a control barrier function that maximizes the feasible action space for first and second-order constraints, and we prove that the optimal CBF encodes a dynamical motion primitive. Furthermore, we show that this dynamical motion primitive contains an implicit model for the future trajectory for time-varying components of the system. We validate our optimal CBF in simulation, and compare its behavior with a linear CBF.
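For background, a CBF acts through a pointwise constraint on the control input; the toy 1-D double-integrator sketch below (gains and bounds invented) shows how a bounded actuator can make that constraint infeasible, which is exactly the compatibility problem the letter addresses:

```python
import numpy as np

# Toy CBF safety filter: 1-D double integrator p' = v, v' = u, keep p <= p_max.
# Exponential-CBF-style condition (relative degree 2, poles -k1, -k2):
#   u <= k1*k2*(p_max - p) - (k1 + k2)*v
# All gains and bounds below are invented for illustration.
p_max, k1, k2, u_lim = 1.0, 2.0, 2.0, 1.0

def safety_filter(p, v, u_des):
    u_cbf = k1 * k2 * (p_max - p) - (k1 + k2) * v   # largest admissible input
    if u_cbf < -u_lim:
        # Constraint incompatibility: the CBF demands harder braking than the
        # actuator can deliver, so a safety violation is now unavoidable
        # (the situation the letter's optimal CBF is designed to prevent).
        return -u_lim, False
    return float(np.clip(u_des, -u_lim, min(u_lim, u_cbf))), True

print(safety_filter(p=0.5, v=0.5, u_des=1.0))   # feasible, input merely clipped
print(safety_filter(p=0.95, v=2.0, u_des=1.0))  # incompatible: too fast, too close
```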
We present an improved catalog of halo wide binaries, compiled from an extensive literature search. Most of our binaries stem from the common proper motion binary catalogs of Allen et al. (2004) and Chanam\'e \& Gould (2004), but we have also included binaries from the lists of Ryan (1992) and Zapatero-Osorio \& Martin (2004). All binaries were carefully checked, and their distances and systemic radial velocities are included when available. Probable membership in the halo population was tested by means of reduced proper motion diagrams for 251 candidate halo binaries. After eliminating obvious disk binaries we ended up with 211 probable halo binaries, for 150 of which radial velocities are available. We compute galactic orbits for these 150 binaries and calculate the time they spend within the galactic disk. Considering the full sample of 251 candidate halo binaries as well as several subsamples, we find that the distribution of angular separations (or expected major semiaxes) follows a power law $f(a) \sim a^{-1}$ (Oepik's relation) up to different limits. For the 50 most disk-like binaries, those that spend their entire lives within $z = \pm 500$~pc, this limit is found to be 19,000 AU (0.09 pc), while for the 50 most halo-like binaries, those that spend on average only 18\% of their lives within $z = \pm 500$~pc, the limit is 63,000 AU (0.31 pc). In a companion paper we employ this catalog to establish limits on the masses of halo massive perturbers (MACHOs).
This paper provides a survey of the state-of-the-art information theoretic analysis for overlay multi-user (more than two pairs) cognitive networks and reports new capacity results. In an overlay scenario, cognitive / secondary users share the same frequency band with licensed / primary users to efficiently exploit the spectrum. They do so without degrading the performance of the incumbent users, and may possibly even aid in transmitting their messages as cognitive users are assumed to possess the message(s) of primary user(s) and possibly other cognitive user(s). The survey begins with a short overview of the two-user overlay cognitive interference channel. The evolution from two-user to three-user overlay cognitive interference channels is described next, followed by generalizations to multi-user (arbitrary number of users) cognitive networks. The rest of the paper considers K-user cognitive interference channels with different message knowledge structures at the transmitters. Novel capacity inner and outer bounds are proposed. Channel conditions under which the bounds meet, thus characterizing the information theoretic capacity of the channel, for both Linear Deterministic and Gaussian channel models, are derived. The results show that for certain channel conditions distributed cognition, or having a cumulative message knowledge structure at the nodes, may not be worth the overhead as (approximately) the same capacity can be achieved by having only one global cognitive user whose role is to manage all the interference in the network. The paper concludes with future research directions.
We study non-equilibrium transport through a superconducting flat-band lattice in a two-terminal setup with the Schwinger-Keldysh method. We find that quasiparticle transport is suppressed and coherent pair transport dominates. For superconducting leads, the AC supercurrent overcomes the DC current which relies on multiple Andreev reflections. With normal-normal and normal-superconducting leads, the Andreev reflection and normal currents vanish. Flat band superconductivity is thus promising not only for high critical temperatures but also for suppressing unwanted quasiparticle processes.
Hot, Dust-Obscured Galaxies (Hot DOGs), selected from the WISE all sky infrared survey, host some of the most powerful Active Galactic Nuclei (AGN) known, and might represent an important stage in the evolution of galaxies. Most known Hot DOGs are at $z> 1.5$, due in part to a strong bias against identifying them at lower redshift related to the selection criteria. We present a new selection method that identifies 153 Hot DOG candidates at $z\sim 1$, where they are significantly brighter and easier to study. We validate this approach by measuring a redshift $z=1.009$, and an SED similar to higher redshift Hot DOGs for one of these objects, WISE J1036+0449 ($L_{\rm\,Bol}\simeq 8\times 10^{46}\rm\,erg\,s^{-1}$), using data from Keck/LRIS and NIRSPEC, SDSS, and CSO. We find evidence of a broadened component in MgII, which, if due to the gravitational potential of the supermassive black hole, would imply a black hole mass of $M_{\rm\,BH}\simeq 2 \times 10^8 M_{\odot}$, and an Eddington ratio of $\lambda_{\rm\,Edd}\simeq 2.7$. WISE J1036+0449 is the first Hot DOG detected by NuSTAR, and the observations show that the source is heavily obscured, with a column density of $N_{\rm\,H}\simeq(2-15)\times10^{23}\rm\,cm^{-2}$. The source has an intrinsic 2-10 keV luminosity of $\sim 6\times 10^{44}\rm\,erg\,s^{-1}$, a value significantly lower than that expected from the mid-infrared/X-ray correlation. We also find that the other Hot DOGs observed by X-ray facilities show a similar deficiency of X-ray flux. We discuss the origin of the X-ray weakness and the absorption properties of Hot DOGs. Hot DOGs at $z\lesssim1$ could be excellent laboratories to probe the characteristics of the accretion flow and of the X-ray emitting plasma at extreme values of the Eddington ratio.
Explosive phenomena are known to trigger a wealth of shocks in warm plasma environments, including the solar chromosphere and molecular clouds, where the medium consists of both ionised and neutral species. Partial ionisation is critical in determining the behaviour of shocks, since the ions and neutrals locally decouple, allowing substructure to exist within the shock. Accurately modelling partially ionised shocks requires careful treatment of the ionised and neutral species and their interactions. Here we study a partially ionised switch-off slow-mode shock using a multi-level hydrogen model, with both collisional and radiative ionisation and recombination rates, implemented in the two-fluid (P\underline{I}P) code, and study physical parameters that are typical of the solar chromosphere. The multi-level hydrogen model differs significantly from the MHD solutions due to the macroscopic thermal energy loss during collisional ionisation. In particular, the plasma temperature, both post-shock and within the finite-width shock, is significantly cooler than the post-shock MHD temperature. Furthermore, in the mid to lower chromosphere, shocks feature far greater compression than their single-fluid MHD analogues. The decreased temperature and increased compression reveal the importance of non-equilibrium ionisation in the thermal evolution of shocks in partially ionised media. Since partially ionised shocks are not accurately described by the Rankine-Hugoniot shock jump conditions, it may be incorrect to use these to infer properties of lower atmospheric shocks.
The modified algebraic Bethe ansatz, introduced by Cramp\'e and the author [8], is used to characterize the spectral problem of the Heisenberg XXZ spin-$\frac{1}{2}$ chain on the segment with lower and upper triangular boundaries. The eigenvalues and the eigenvectors are conjectured. They are characterized by a set of Bethe roots with cardinality equal to $N$, the length of the chain, which satisfies a set of Bethe equations with an additional term. The conjecture follows from exact results for small chains. We also present a factorized formula for the Bethe vectors of the Heisenberg XXZ spin-$\frac{1}{2}$ chain on the segment with two upper triangular boundaries.
We analyse semiclassical strings in AdS in the limit of one large spin. In this limit, classical string dynamics is described by a finite number of collective coordinates corresponding to spikes or cusps of the string. The semiclassical spectrum consists of two branches of excitations corresponding to "large" and "small" spikes respectively. We propose that these states are dual to the excitations known as large and small holes in the spin chain description of N=4 SUSY Yang-Mills. The dynamics of large spikes in classical string theory can be mapped to that of a classical spin chain of fixed length. In turn, small spikes correspond to classical solitons propagating on the background formed by the large spikes. We derive the dispersion relation for these excitations directly in the finite gap formalism.
We present spectral measurements made in the soft (20-100 keV) gamma-ray band of the region containing the composite supernova remnant G11.2-0.3 and its associated pulsar PSR J1811-1925. Analysis of INTEGRAL/IBIS data allows characterisation of the system above 10 keV. The IBIS spectrum is best fitted by a power law having photon index of 1.8^{+0.4}_{-0.3} and a 20-100 keV flux of 1.5E{-11} erg/cm^2/s. Analysis of archival Chandra data over different energy bands rules out the supernova shell as the site of the soft gamma-ray emission while broad band (1-200 keV) spectral analysis strongly indicates that the INTEGRAL/IBIS photons originate in the central zone of the system which contains both the pulsar and its nebula. The composite X-ray and soft gamma-ray spectrum indicates that the pulsar provides around half of the emission seen in the soft gamma-ray domain; its spectrum is hard with no sign of a cut off up to at least 80 keV. The other half of the emission above 10 keV comes from the PWN; with a power law slope of 1.7 its spectrum is softer than that of the pulsar. From the IBIS/ISGRI mosaics we are able to derive 2 sigma upper limits for the 20-100 keV flux from the location of the nearby TeV source HESS J1809-193 to be 4.8E{-12} erg/cm^2/s. We have also examined the likelihood of an association between PSR J1811-1925 and HESS J1809-193. Although PSR J1811-1925 is the most energetic pulsar in the region, the only one detected above 10 keV and thus a possible source of energy to fuel the TeV fluxes, there is no morphological evidence to support this pairing, making it an unlikely counterpart.
A survey is performed of various Multi-Armed Bandit (MAB) strategies in order to examine their performance in circumstances exhibiting non-stationary stochastic reward functions in conjunction with delayed feedback. We run several MAB simulations to simulate an online eCommerce platform for grocery pick-up, optimizing for product availability. In this work, we evaluate several popular MAB strategies, such as $\epsilon$-greedy, UCB1, and Thompson Sampling. We compare the respective performances of each MAB strategy in the context of regret minimization. We run the analysis in the scenario where the reward function is non-stationary. Furthermore, the process experiences delayed feedback, where the reward function is not immediately responsive to the arm played. We devise a new adaptive technique (AG1) tailored for non-stationary reward functions in the delayed feedback scenario. The results of the simulations show that AG1 achieves superior performance in the context of regret minimization compared to traditional MAB strategies.
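A minimal sketch of the generic machinery in this setting: a discounted $\epsilon$-greedy agent (discounting handles non-stationarity) with a FIFO queue modelling delayed feedback. The drifting environment and all constants are invented, and this is not the AG1 algorithm:

```python
import random
from collections import deque

# Discounted epsilon-greedy bandit with delayed feedback (sketch).
K, EPS, GAMMA, DELAY, T = 5, 0.1, 0.99, 20, 5000
values, counts = [0.0] * K, [1e-9] * K
pending = deque()                               # (arrival_step, arm, reward)
random.seed(0)

def true_mean(arm, t):                          # hypothetical drifting environment
    return 0.5 + 0.4 * ((arm + t / 1000) % K) / K

for t in range(T):
    arm = (random.randrange(K) if random.random() < EPS
           else max(range(K), key=lambda a: values[a] / counts[a]))
    reward = true_mean(arm, t) + random.gauss(0, 0.1)
    pending.append((t + DELAY, arm, reward))    # feedback arrives DELAY steps later

    while pending and pending[0][0] <= t:       # process matured feedback
        _, a, r = pending.popleft()
        for i in range(K):                      # discount all estimates (forgetting)
            values[i] *= GAMMA; counts[i] *= GAMMA
        values[a] += r; counts[a] += 1.0

print([round(values[a] / counts[a], 3) for a in range(K)])
```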
We present a spectral rigidity result for the Dirac operator on lens spaces. More specifically, we show that each homogeneous lens space and each three-dimensional lens space $L(q;p)$ with $q$ prime is completely characterized by its Dirac spectrum in the class of all lens spaces.
This paper presents a new open source Python framework for causal discovery from observational data and domain background knowledge, aimed at causal graph and causal mechanism modeling. The 'cdt' package implements the end-to-end approach, recovering the direct dependencies (the skeleton of the causal graph) and the causal relationships between variables. It includes algorithms from the 'Bnlearn' and 'Pcalg' packages, together with algorithms for pairwise causal discovery such as ANM. 'cdt' is available under the MIT License at https://github.com/Diviyan-Kalainathan/CausalDiscoveryToolbox.
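A short usage sketch in the shape of the package's documented interface; the module paths and the bundled 'sachs' dataset follow the cdt documentation as I recall it and should be checked against the installed version (the GES wrapper additionally requires the R backend):

```python
import networkx as nx
from cdt.data import load_dataset
from cdt.causality.graph import GES
from cdt.causality.pairwise import ANM

# End-to-end graph recovery on the bundled Sachs protein-signalling data.
data, true_graph = load_dataset('sachs')
model = GES()                      # score-based search (wraps an R backend)
predicted = model.predict(data)    # returns a networkx DiGraph
print(nx.adjacency_matrix(predicted).todense())

# Pairwise cause-effect orientation with the Additive Noise Model:
anm = ANM()
score = anm.predict_proba((data.iloc[:, 0].values, data.iloc[:, 1].values))
print('column 0 -> column 1' if score > 0 else 'column 1 -> column 0')
```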
Attenuated Radon projections with respect to the weight function $W_\mu(x,y) = (1-x^2-y^2)^{\mu-1/2}$ are shown to be closely related to the orthogonal expansion in two variables with respect to $W_\mu$. This leads to an algorithm for reconstructing two dimensional functions (images) from attenuated Radon projections. Similar results are established for reconstructing functions on the sphere from projections described by integrals over circles on the sphere, and for reconstructing functions on the three-dimensional ball and cylinder domains.
The opportunity to tell a white lie (i.e., a lie that benefits another person) generates a moral conflict between two opposite moral dictates, one pushing towards always telling the truth and the other pushing towards helping others. Here we study how people resolve this moral conflict. What does telling a white lie signal about a person's pro-social tendencies? To answer this question, we conducted a two-stage 2x2 experiment. In the first stage, we used a Deception Game to measure aversion to telling a Pareto white lie (i.e., a lie that helps both the liar and the listener), and aversion to telling an altruistic white lie (i.e., a lie that helps the listener at the expense of the liar). In the second stage we measured altruistic tendencies using a Dictator Game and cooperative tendencies using a Prisoner's Dilemma. We found three major results: (i) both altruism and cooperation are positively correlated with aversion to telling a Pareto white lie; (ii) both altruism and cooperation are negatively correlated with aversion to telling an altruistic white lie; (iii) men are more likely than women to tell an altruistic white lie, but not to tell a Pareto white lie. Our results shed light on the moral conflict between pro-sociality and truth-telling. In particular, the first finding suggests that a significant proportion of people have non-distributional notions of what the right thing to do is: irrespective of their economic consequences, they tell the truth, they cooperate, they share their money.
In this paper, we give two elementary constructions of homogeneous quasi-morphisms defined on the group of Hamiltonian diffeomorphisms of certain closed connected symplectic manifolds (or on its universal cover). The first quasi-morphism, denoted by $\mathrm{Cal}_{S}$, is defined on the group of Hamiltonian diffeomorphisms of a closed oriented surface $S$ of genus greater than 1. This construction is motivated by a question of M. Entov and L. Polterovich. If $U\subset S$ is a disk or an annulus, the restriction of $\mathrm{Cal}_{S}$ to the subgroup of diffeomorphisms which are the time-one map of a Hamiltonian isotopy in $U$ equals Calabi's homomorphism. The second quasi-morphism is defined on the universal cover of the group of Hamiltonian diffeomorphisms of a symplectic manifold for which the cohomology class of the symplectic form is a multiple of the first Chern class.
Inspired from perturbative calculations, this work introduces imaginary ($\Omega_{\rm I}$) and real ($\Omega$) rotation effects to the pure $SU(3)$ gauge potentials simply through variable transformations: The empirical Polyakov loop (PL) potentials can be rewritten as functions of the imaginary chemical potentials of gluons and ghosts $(q_{\rm ij})$, and the transformations are taken as $q_{\rm ij}\rightarrow q_{\rm ij}\pm\Omega_{\rm I}/T$ and $q_{\rm ij}\rightarrow q_{\rm ij}\pm i\,\Omega/T$, respectively. For the PL potential of Fukushima $(V_1)$, a smaller imaginary rotation $\Omega_{\rm I}$ tends to suppress PL at all temperature and the deconfinement transition keeps of first order. However, for the PL potential of Munich group $(V_2)$, $\Omega_{\rm I}$ tends to enhance PL at low temperature $T$, consistent with lattice simulations; but suppress PL at high $T$, consistent with perturbative calculations. Moreover, the deconfinement alters from first order to crossover with increasing $\Omega_{\rm I}$ as is expected from lattice simulations. On the other hand, the real rotation $\Omega$ tends to enhance PL at relatively low $T$ for both potentials, and the (pseudo-)critical temperature decreases with $\Omega$ as expected. Therefore, we find that analytic continuation of the phase diagram from imaginary to real rotation is not necessarily valid in the non-perturbative region. Finally, we apply the more successful PL potential $V_2$ to the Polyakov--Nambu-Jona-Lasinio (PNJL) model and discover that $\Omega_{\rm I}$ tends to break chiral symmetry while $\Omega$ tends to restore it. Especially, the modified model is even able to qualitatively explain the lattice result that a larger $T$ would catalyze chiral symmetry breaking for a large $\Omega_{\rm I}$.
We derive an analytical expression for the growth rate of matter density perturbations on the phantom brane (which is the normal branch of the Dvali-Gabadadze-Porrati model). This model is characterized by a phantomlike effective equation of state for dark energy at the present epoch. It agrees very well with observations. We demonstrate that the traditional parametrization $f=\Omega_m^\gamma$ with a quasiconstant growth index $\gamma$ is not successful in this case. Based on a power series expansion at large redshifts, we propose a different parametrization for this model: $f=\Omega_m^\gamma\left(1+\frac{b}{\ell H}\right)^\beta$, where $\beta$ and $b$ are constants. Our numerical simulations demonstrate that this new parametrization describes the growth rate with great accuracy - the maximum error being $\leq 0.1\%$ for parameter values consistent with observations.
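As an illustration of how such a parametrization is evaluated in practice, the following Python sketch computes $f=\Omega_m^\gamma\left(1+\frac{b}{\ell H}\right)^\beta$ on a phantom-brane background. The density parameters, the constants $\gamma$, $\beta$, $b$, and the convention $\Omega_\ell = 1/(\ell H_0)^2$ are illustrative assumptions for the example, not values fitted in the paper.

    # Sketch: evaluating the proposed growth-rate parametrization on the
    # phantom (normal DGP) brane. All parameter values are placeholders.
    import numpy as np

    Om0, Ol = 0.28, 0.05               # Omega_m and Omega_ell = 1/(l H0)^2 (assumed convention)
    gamma, beta, b = 0.55, 1.0, 1.0    # parametrization constants (placeholders)

    Os = 1.0 + 2.0*np.sqrt(Ol) - Om0   # brane-tension term fixed by E(a=1) = 1

    def E(a):
        """H(a)/H0 on the phantom brane."""
        s = Om0*a**-3 + Os + Ol
        return np.sqrt(Om0*a**-3 + Os + 2*Ol - 2*np.sqrt(Ol)*np.sqrt(s))

    def f_param(a):
        Om = Om0*a**-3 / E(a)**2                              # Omega_m(a)
        return Om**gamma * (1 + b*np.sqrt(Ol)/E(a))**beta     # 1/(l H) = sqrt(Ol)/E

    for z in (0.0, 0.5, 1.0, 3.0):
        a = 1.0/(1.0 + z)
        print(f"z = {z:4.1f}  f = {f_param(a):.3f}")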
In this work a new method is developed to investigate the Aharonov-Casher effect in a noncommutative space. It is shown that the holonomy receives non-trivial kinematical corrections.
We present results from the first simulations of networks of Type I Abelian Higgs cosmic strings to include both matter and radiation eras and Cosmic Microwave Background (CMB) constraints. In Type I strings, the string tension is a slowly decreasing function of the ratio of the scalar and gauge mass-squared, $\beta$. We find that the mean string separation shows no dependence on $\beta$, and that the energy-momentum tensor correlators decrease approximately in proportion to the square of the string tension, with additional O(1) correction factors which asymptote to constants below $\beta \lesssim 0.01$. Strings in models with low self-couplings can therefore satisfy current CMB bounds at higher symmetry-breaking scales. This is particularly relevant for models where the gauge symmetry is broken in a supersymmetric flat direction, for which the effective self-coupling can be extremely small. If our results can be extrapolated to $\beta \simeq 10^{-15}$, even strings formed at $10^{16}$ GeV (approximately the grand unification scale in supersymmetric extensions of the Standard Model) can be compatible with CMB constraints.
This paper concerns the use of a novel, exact functional quantization method as applied to two commonly studied actions in theoretical physics. The functional method in question has its roots in the exact renormalisation group flow techniques pioneered by Wilson, but with the flow parameter not limited to the familiar momentum cutoff. Finding a configuration satisfying an expression for the exact effective action which does not vary with this parameter provides the basis for finding solutions to the physical actions we study. Firstly, the method is applied to an expression for the bare action of the pseudo-scalar axion used to explain the strong CP problem in QCD. When quantized, we find that the effective potential of the axion, when interactions are not considered, is necessarily flattened by spinodal instability effects. We regard this flattening as representing the very early stage in the development of the axion potential, when the Peccei-Quinn U(1) symmetry is spontaneously broken, resulting in a double-well potential. Using commonly quoted values for the parameters of such a potential, we devise an expression for the energy density of the emerging axion potential, and this is compared to dark energy. We then apply the functional method to the bosonic string with time-varying graviton, dilaton and antisymmetric tensor (resulting in the string-axion) background fields. We demonstrate conformal invariance non-perturbatively in the beta functions, in contrast with conventional string cosmology, where cancellation is performed within a perturbative expansion. We then offer some hints as to possible cosmological implications of our configuration in terms of optical anisotropy.
We consider a filtration $\mathbb{G}$ obtained as enlargement of a filtration $\mathbb{F}$ by a filtration $\mathbb{H}$. We assume that all $\mathbb{F}$-local martingales are represented by a martingale $M$ and all $\mathbb{H}$-local martingales are represented by a martingale $N$. $M$ and $N$ are not necessarily quasi-left continuous processes and their jump times may overlap. We first analyze the contribution of the accessible jump times of $M$ and $N$ to Jacod's dimension of the space of $\mathcal{H}^1(\mathbb{G})$-martingales. Then we prove a new martingale representation theorem on $\mathbb{G}$.
Inductive programming frequently relies on some form of search in order to identify candidate solutions. However, the size of the search space limits the use of inductive programming to the production of relatively small programs. If we could somehow correctly predict the subset of instructions required for a given problem, then inductive programming would be more tractable. We will show that this can be achieved in a high percentage of cases. This paper presents a novel model of programming language instruction co-occurrence that was built to support search space partitioning in the Zoea distributed inductive programming system. The model consists of a collection of intersecting instruction subsets derived from a large sample of open source code. Using this approach, different parts of the search space can be explored in parallel. The number of subsets required does not grow linearly with the quantity of code used to produce them, and a manageable number of subsets is sufficient to cover a high percentage of unseen code. This approach also significantly reduces the overall size of the search space - often by many orders of magnitude.
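As a rough illustration of the idea (not the Zoea implementation), the sketch below derives instruction subsets from a toy corpus of programs, each reduced to the set of instructions it uses, and tests whether unseen code is covered by some subset. The corpus and the containment-based merging rule are invented for the example.

    # Toy sketch: instruction-subset derivation and coverage checking.
    corpus = [
        {"map", "filter", "len"},
        {"map", "len", "sum"},
        {"sort", "len", "max"},
        {"sort", "max", "min"},
        {"map", "len"},              # already contained in the first set
    ]

    # Keep only maximal instruction sets (a crude stand-in for subset derivation).
    subsets = []
    for s in corpus:
        if not any(s <= t for t in subsets):
            subsets = [t for t in subsets if not t <= s] + [s]

    def covered(program_instructions):
        """True if some learned subset contains every instruction of the program."""
        return any(program_instructions <= s for s in subsets)

    print(subsets)
    print(covered({"map", "len"}))    # True: fits inside a learned subset
    print(covered({"map", "sort"}))   # False: this part of the space is unexplored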
Early prediction of the mortality and length of stay (LOS) of a patient is vital for saving the patient's life and for the management of hospital resources. The availability of electronic health records (EHR) has made a huge impact on the healthcare domain, and there have been several works on predicting clinical problems. However, many studies did not benefit from clinical notes because of their sparse and high-dimensional nature. In this work, we extract medical entities from clinical notes and use them as additional features besides time-series features to improve our predictions. We propose a convolution-based multimodal architecture, which not only effectively learns to combine medical entities and time-series ICU signals of patients, but also allows us to compare the effect of different embedding techniques, such as Word2vec and FastText, on medical entities. In the experiments, our proposed method robustly outperforms all other baseline models, including different multimodal architectures, for all clinical tasks. The code for the proposed method is available at https://github.com/tanlab/ConvolutionMedicalNer.
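A minimal PyTorch sketch of such a multimodal design is given below. The layer sizes, the mean-pooled entity embedding, and the fusion by concatenation are illustrative assumptions, not the paper's exact architecture.

    # Sketch: fuse embedded medical entities from notes with convolutional
    # features from ICU time series. Sizes and fusion scheme are illustrative.
    import torch
    import torch.nn as nn

    class MultimodalMortalityNet(nn.Module):
        def __init__(self, vocab_size=5000, emb_dim=100, n_signals=17, hidden=64):
            super().__init__()
            self.entity_emb = nn.EmbeddingBag(vocab_size, emb_dim)  # mean over entities
            self.ts_conv = nn.Sequential(
                nn.Conv1d(n_signals, hidden, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveMaxPool1d(1),
            )
            self.head = nn.Linear(emb_dim + hidden, 1)

        def forward(self, entity_ids, offsets, timeseries):
            e = self.entity_emb(entity_ids, offsets)      # (batch, emb_dim)
            t = self.ts_conv(timeseries).squeeze(-1)      # (batch, hidden)
            return self.head(torch.cat([e, t], dim=1))    # mortality logit

    model = MultimodalMortalityNet()
    ids = torch.tensor([3, 17, 42, 7, 9])     # entities of two patients, flattened
    offsets = torch.tensor([0, 3])            # patient 0 owns ids[0:3], patient 1 the rest
    ts = torch.randn(2, 17, 48)               # 2 patients, 17 signals, 48 hourly steps
    print(model(ids, offsets, ts).shape)      # torch.Size([2, 1])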
One billion people live in informal settlements worldwide. The complex and multilayered spaces that characterize this unplanned form of urbanization pose a challenge to traditional approaches to mapping and morphological analysis. This study proposes a methodology to study the morphological properties of informal settlements based on terrestrial LiDAR (Light Detection and Ranging) data collected in Rocinha, the largest favela in Rio de Janeiro, Brazil. Our analysis operates at two resolutions, including a \emph{global} analysis focused on comparing different streets of the favela to one another, and a \emph{local} analysis unpacking the variation of morphological metrics within streets. We show that our methodology reveals meaningful differences and commonalities both in terms of the global morphological characteristics across streets and their local distributions. Finally, we create morphological maps at high spatial resolution from LiDAR data, which can inform urban planning assessments of concerns related to crowding, structural safety, air quality, and accessibility in the favela. The methods for this study are automated and can be easily scaled to analyze entire informal settlements, leveraging the increasing availability of inexpensive LiDAR scanners on portable devices such as cellphones.
We present a preliminary measurement of CP-violating asymmetries in fully reconstructed $B^0{\to}D^{(*)\pm}\pi^{\mp}$ and $B^0{\to}D^{\pm}\rho^{\mp}$ decays in approximately 110 million $\Upsilon(4S) \to B\bar{B}$ decays collected with the BaBar detector at the PEP-II asymmetric-energy $B$ factory at SLAC. From a maximum likelihood fit to the time-dependent decay distributions we obtain for the CP-violating parameters: $a^{D\pi} = -0.032\pm0.031 (\textrm{stat.})\pm 0.020 (\textrm{syst.}), c_{\rm lep}^{D\pi} = -0.059\pm0.055 (\textrm{stat.})\pm 0.033 (\textrm{syst.})$ on the $B^0{\to}D^{\pm}\pi^{\mp}$ sample, $a^{D^*\pi} = -0.049\pm0.031 (\textrm{stat.})\pm 0.020 (\textrm{syst.}), c_{\rm lep}^{D^*\pi} = +0.044\pm0.054 (\textrm{stat.})\pm 0.033 (\textrm{syst.})$ on the $B^0{\to}D^{*\pm}\pi^{\mp}$ sample, and $a^{D\rho} = -0.005\pm0.044 (\textrm{stat.})\pm 0.021 (\textrm{syst.}), c_{\rm lep}^{D\rho} = -0.147\pm0.074 (\textrm{stat.})\pm 0.035 (\textrm{syst.})$ on the $B^0{\to}D^{\pm}\rho^{\mp}$ sample.
This paper contains a proof of a conjecture of Braverman concerning Laumon quasiflag spaces. We consider the generating function Z(m), whose coefficients are the integrals of the equivariant Chern polynomial (with variable m) of the tangent bundles of the Laumon spaces. We prove Braverman's conjecture, which states that Z(m) coincides with the eigenfunction of the Calogero-Sutherland Hamiltonian, up to a simple factor which we specify. This conjecture was inspired by the work of Nekrasov in the affine $\widehat{\mathfrak{sl}}_n$ setting, where a similar conjecture is still open.
A new method for extracting neutron densities from intermediate energy elastic proton-nucleus scattering observables uses a global Dirac phenomenological (DP) approach based on the Relativistic Impulse Approximation (RIA). Data sets for Ca40, Ca48 and Pb208 in the energy range from 500 MeV to 1040 MeV are considered. The global fits are successful in reproducing the data and in predicting data sets not included in the analysis. Using this global approach, energy independent neutron densities are obtained. The vector point proton density distribution is determined from the empirical charge density after unfolding the proton form factor. The other densities are parametrized. This work provides energy independent values for the RMS neutron radius, R_n, and the neutron skin thickness, S_n, in contrast to the energy dependent values obtained by previous studies. In addition, the results presented in this paper show that the expected rms neutron radius and skin thickness for Ca40 are accurately reproduced. The values of R_n and S_n obtained from the global fits that we consider to be the most reliable are as follows: for Ca40, 3.314 > R_n > 3.310 fm and -0.063 > S_n > -0.067 fm; for Ca48, 3.459 > R_n > 3.413 fm and 0.102 > S_n > 0.056 fm; and for Pb208, 5.550 > R_n > 5.522 fm and 0.111 > S_n > 0.083 fm. These values are in reasonable agreement with nonrelativistic Skyrme Hartree-Fock models and with relativistic Hartree-Bogoliubov models with density-dependent meson-nucleon couplings. The results from the global fits for Ca48 and Pb208 are generally not in agreement with the usual relativistic mean-field models.
The Special Affine Fourier Transformation, or SAFT, generalizes a number of well-known unitary transformations as well as signal processing and optics related mathematical operations. Shift-invariant spaces also play an important role in sampling theory, multiresolution analysis, and many other areas of signal and image processing. Shannon's sampling theorem, which is at the heart of modern digital communications, is a special case of sampling in shift-invariant spaces. Furthermore, it is well known that the Poisson summation formula is equivalent to the sampling theorem and that the Zak transform is closely connected to the sampling theorem and the Poisson summation formula. These results have been known to hold in the Fourier transform domain for decades and were recently shown to hold in the Fractional Fourier transform domain by A. Bhandari and A. Zayed. The main goal of this article is to show that these results also hold true in the SAFT domain. We provide a short, self-contained proof of Shannon's theorem for functions bandlimited in the SAFT domain and then show that sampling in the SAFT domain is equivalent to orthogonal projection of functions onto a subspace spanned by a bandlimited basis associated with the SAFT domain. This interpretation of sampling leads to a least-squares optimal sampling theorem. Furthermore, we show that this approximation procedure is linked with convolution and semi-discrete convolution operators associated with the SAFT domain. We conclude the article with an application of fractional delay filtering of SAFT-bandlimited functions.
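For intuition, the numerical sketch below illustrates the classical Fourier special case of this theorem, reconstructing a bandlimited signal from its uniform samples by sinc interpolation, i.e., projection onto the shift-invariant space spanned by sincs. The signal and band limit are arbitrary choices, and the general SAFT kernel is not shown.

    # Fourier special case of SAFT sampling: Shannon/sinc reconstruction.
    import numpy as np

    B = 4.0                     # band limit (Hz); sample at the Nyquist rate 2B
    T = 1.0 / (2.0 * B)         # sampling period
    n = np.arange(-40, 41)      # truncated sample index range

    # A signal bandlimited to 4 Hz (both terms lie inside the band).
    f = lambda t: np.sinc(2*B*(t - 0.1)) + 0.5*np.cos(2*np.pi*3.0*t)

    t = np.linspace(-1, 1, 1001)
    samples = f(n * T)
    # Shannon reconstruction: f(t) = sum_n f(nT) sinc((t - nT)/T)
    f_rec = samples @ np.sinc((t[None, :] - (n*T)[:, None]) / T)

    # Small residual, dominated by truncating the infinite interpolation sum.
    print(np.max(np.abs(f_rec - f(t))))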
With the emergence of new photonic and plasmonic materials with optimized properties as well as advanced nanofabrication techniques, nanophotonic devices are now capable of providing solutions to global challenges in energy conversion, information technologies, chemical/biological sensing, space exploration, quantum computing, and secure communication. Addressing grand challenges poses inherently complex, multi-disciplinary problems with a manifold of stringent constraints on top of the required system performance. Conventional optimization techniques have long been utilized as powerful tools to address multi-constrained design tasks. One example is so-called topology optimization, which has emerged as a highly successful approach to the advanced design of non-intuitive photonic structures. Despite many advantages, this technique requires substantial computational resources and thus has very limited applicability to highly constrained optimization problems in high-dimensional parametric spaces. In our approach, we merge the topology optimization method with machine learning algorithms such as adversarial autoencoders and show a substantial improvement of the optimization process by providing unparalleled control of the compact design-space representations. By enabling efficient, global optimization searches within complex landscapes, the proposed compact hyperparametric representations could become crucial for multi-constrained problems. The proposed approach could enable a much broader scope of optimal designs and data-driven materials synthesis that goes beyond photonic and optoelectronic applications.
We study the in-medium modification of the isovector $\pi N$ amplitude using a non-linear representation of the sigma model, but keeping the scalar degree of freedom. We check that our result does not depend on the representation. We discuss the connection with other approaches based on chiral perturbation theory.
We present novel convex-optimization-based solutions to the problem of blind beamforming of constant modulus signals, and to the related problem of linearly constrained blind beamforming of constant modulus signals. These solutions ensure global optimality and are parameter-free, namely, they contain no tunable parameters and require no a priori parameter settings. The performance of these solutions, as demonstrated by simulated data, is superior to existing methods.
Without higher moment assumptions, this note establishes the decay of the Kolmogorov distance in a central limit theorem for L\'evy processes. This theorem can be viewed as a continuous-time extension of the classical random walk result by Friedman, Katz and Koopmans.
A new scheme for testing the nuclear matter (NM) equation of state (EoS) at high densities using constraints from compact star (CS) phenomenology is applied to neutron stars with a core of deconfined quark matter (QM). An acceptable EoS must not be in conflict with the mass measurement of 2.1 +/- 0.2 solar masses (1 sigma level) for PSR J0751+1807 and the mass-radius relation deduced from the thermal emission of RX J1856-3754. Further constraints on the state of matter in CS interiors come from temperature-age data for young, nearby objects. The CS cooling theory must agree not only with these data, but also with the mass distribution inferred via population synthesis models as well as with LogN-LogS data. The scheme is applied to a set of hybrid EoSs with a phase transition to stiff, color-superconducting QM which fulfills all the above constraints and is otherwise constrained by NM saturation properties and flow data from heavy-ion collisions. We extrapolate our description to low temperatures and draw conclusions for the QCD phase diagram to be explored in heavy-ion collision experiments.
The clusters of a distribution are often defined by the connected components of a density level set. However, this definition depends on the user-specified level. We address this issue by proposing a simple, generic algorithm, which uses an almost arbitrary level set estimator to estimate the smallest level at which there is more than one connected component. In the case where this algorithm is fed with histogram-based level set estimates, we provide a finite sample analysis, which is then used to show that the algorithm consistently estimates both the smallest level and the corresponding connected components. We further establish rates of convergence for the two estimation problems, and last but not least, we present a simple, yet adaptive strategy for determining the width-parameter of the involved density estimator in a data-dependent way.
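A minimal one-dimensional sketch of the algorithm, fed with a histogram-based level set estimate as described, might look as follows. The synthetic two-mode sample and the bin count are placeholders.

    # Sketch: raise the level until the estimated level set first splits into
    # more than one connected component, and report that level.
    import numpy as np

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])

    hist, edges = np.histogram(data, bins=40, density=True)

    def n_components(mask):
        """Number of runs of consecutive True bins (1D connected components)."""
        return int(np.sum(mask[1:] & ~mask[:-1]) + mask[0])

    # Scan candidate levels from low to high; keep the smallest with > 1 component.
    for level in np.sort(np.unique(hist)):
        mask = hist > level
        if n_components(mask) > 1:
            print("estimated splitting level:", level)
            break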
A novel geomechanics concept is presented for studying the behavior of geomaterials and structures by capturing the underlying dynamics as realistically as possible for earthquake excitation applied in the time domain. The enormous amount of damage caused to infrastructure during recent earthquakes all over the world indicates that there is considerable room for improvement. The causes of extensive damage are generally attributed to poor soil conditions in the region. It is interesting to note that not all structures in a region with poor soil conditions suffer similar damage; in fact, some of them remain damage-free. There are many reasons for this, including the inability to model soil-structure systems properly, to predict the future design earthquake time history at the site, to model the dynamic amplification of responses caused by the excitation, to incorporate major sources of nonlinearity and energy dissipation, and, most importantly, to account for the considerable amount of uncertainty present at every phase of the evaluation process. The most recent research trend is to capture complicated behavior by conducting multiple deterministic analyses, taking advantage of currently much-improved computational capability. By conducting a few dozen deterministic analyses at intelligently selected points, structures can be designed to be more seismic-load-tolerant. The performance-based seismic design concept recently introduced in the U.S. is showcased in this paper. The requirements in the guidelines appear to be reasonable, and the concept is expected to change the current engineering design paradigm. The authors believe that the proposed approach offers an attractive alternative to full-scale simulation and the basic random vibration concept.
We present an effective-field-theory calculation of the effect of a dimension-six operator involving the top quark on precision electroweak data via a top-quark loop. We demonstrate the renormalizability, in the modern sense, of the effective field theory. We use the oblique parameter U to bound the coefficient of the operator, and compare with the bound derived from top-quark decay.
This article discusses open problems, implemented solutions, and future research in the area of responsible AI in healthcare. In particular, we illustrate two main research themes related to the work of two laboratories within the Department of Informatics, Systems, and Communication at the University of Milano-Bicocca. The problems addressed concern, in particular, uncertainty in medical data and machine advice, and the problem of online health information disorder.
We present a new graph compressor that works by recursively detecting repeated substructures and representing them through grammar rules. We show that for a large number of graphs the compressor obtains smaller representations than other approaches. Specific queries, such as reachability between two nodes or regular path queries, can be evaluated in linear time (or quadratic time, respectively) over the grammar, thus allowing speed-ups proportional to the compression ratio.
We define a random walk of a particle in $\mathbb{R}^3$ in which the ambient space rotates. The particle is not glued to the space and collides with it at random times, resulting in changes in its velocity and direction. After many collisions, the random walk starts to exhibit asymptotic behaviors inherited from the movement of the space. The paper finds the limiting movement of the particle and explains how the randomness of the random walk gives rise to the particle's asymptotically deterministic movement.
In this paper, we propose a fuzzy adaptive loss function for enhancing deep learning performance in classification tasks. Specifically, we redefine the cross-entropy loss to effectively address class-level noise conditions, including the challenging problem of class imbalance. Our approach introduces aggregation operators, leveraging the power of fuzzy logic to improve classification accuracy. The rationale behind our proposed method lies in the iterative up-weighting of class-level components within the loss function, focusing on those with larger errors. To achieve this, we employ the ordered weighted average (OWA) operator and combine it with an adaptive scheme for gradient-based learning. Through extensive experimentation, our method outperforms other commonly used loss functions, such as the standard cross-entropy or focal loss, across various binary and multiclass classification tasks. Furthermore, we explore the influence of hyperparameters associated with the OWA operators and present a default configuration that performs well across different experimental settings.
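A hedged PyTorch sketch of such an OWA-aggregated, class-level cross-entropy is given below: per-class mean losses are sorted in decreasing order and combined with decreasing OWA weights, so classes with larger error are up-weighted. The particular weight scheme and the hyperparameter alpha are illustrative choices, not the exact operators of the paper.

    # Sketch: class-level cross-entropy aggregated with OWA weights.
    import torch
    import torch.nn.functional as F

    def owa_cross_entropy(logits, targets, n_classes, alpha=2.0):
        per_sample = F.cross_entropy(logits, targets, reduction="none")
        class_loss = torch.stack([
            per_sample[targets == c].mean() if (targets == c).any()
            else per_sample.new_zeros(())
            for c in range(n_classes)
        ])
        sorted_loss, _ = torch.sort(class_loss, descending=True)
        w = torch.arange(n_classes, 0, -1, dtype=sorted_loss.dtype) ** alpha
        w = w / w.sum()                  # decreasing, normalized OWA weights
        return (w * sorted_loss).sum()   # largest class errors get largest weights

    logits = torch.randn(8, 3, requires_grad=True)
    targets = torch.tensor([0, 0, 1, 1, 1, 2, 2, 2])
    loss = owa_cross_entropy(logits, targets, n_classes=3)
    loss.backward()
    print(loss.item())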
An implementation of coupled-cluster (CC) theory to treat atoms and molecules in finite magnetic fields is presented. The main challenges stem from the magnetic-field dependence in the Hamiltonian, or, more precisely, the appearance of the angular momentum operator, due to which the wave function becomes complex and which introduces a gauge-origin dependence. For this reason, an implementation of a complex CC code is required together with the use of gauge-including atomic orbitals to ensure gauge-origin independence. Results of coupled-cluster singles--doubles--perturbative-triples (CCSD(T)) calculations are presented for atoms and molecules with a focus on the dependence of correlation and binding energies on the magnetic field.
Processes involving bottom quarks play a crucial role in LHC phenomenology, from flavour physics to Higgs characterisation and as a window to new physics, appearing both as signals and as irreducible backgrounds in BSM searches. These processes can be described in QCD in either a 4-flavour or a 5-flavour scheme. In the former, $b$ quarks appear only in the final state and are considered massive. In 5-flavour schemes, calculations include $b$ quarks in the initial state, and possibly large logarithms originating from the collinear splitting of gluons into bottom pairs are resummed into the $b$ parton distribution function (PDF). In this contribution, I describe a simple method to assess the size of the logarithms in processes initiated by bottom quarks and show how a substantial and justified agreement between calculations in the two schemes can be achieved. As a consequence, both calculations can be used in different contexts. To conclude, an overview of current studies aiming to generalise this appraisal is given, and some preliminary results are discussed.
Monolayer FeSe films grown on SrTiO3 (STO) substrate show superconducting gap-opening temperatures (Tc) which are almost an order of magnitude higher than those of bulk FeSe and are the highest among all known Fe-based superconductors. Angle-resolved photoemission spectroscopy (ARPES) observed "replica bands", suggesting the importance of the interaction between FeSe electrons and STO phonons. These facts rejuvenated the quest for Tc enhancement mechanisms in iron-based, especially iron-chalcogenide, superconductors. Here, we perform the first numerically exact, sign-problem-free quantum Monte Carlo simulations of iron-based superconductors. We (i) study the electronic pairing mechanism intrinsic to heavily electron-doped FeSe films, and (ii) examine the effects of the electron-phonon interaction between FeSe and STO as well as nematic fluctuations on Tc. Armed with these results, we return to the question "what makes the Tc of monolayer FeSe on SrTiO3 so high?" in the conclusion and discussions.
In this article we give estimates of the {\L}ojasiewicz exponent of nondegenerate surface singularities in terms of their Newton diagrams. We also give an exact formula for the {\L}ojasiewicz exponent of such singularities in some special cases. The results are stronger than the Fukui inequality [8] and also provide a multidimensional generalization of Lenarcik's theorem [13].
The problem of front propagation in a stirred medium is addressed in the case of cellular flows in three different regimes: slow reaction, fast reaction and geometrical optics limit. It is well known that a consequence of stirring is the enhancement of front speed with respect to the non-stirred case. By means of numerical simulations and theoretical arguments we describe the behavior of front speed as a function of the stirring intensity, $U$. For slow reaction, the front propagates with a speed proportional to $U^{1/4}$, conversely for fast reaction the front speed is proportional to $U^{3/4}$. In the geometrical optics limit, the front speed asymptotically behaves as $U/\ln U$.
UHE neutrinos may carry the highest cosmic-ray energies, overcoming the $2.75\,$K BBR and radio-wave opacities (the GZK cutoff), from the most distant AGN sources at the age of the Universe. These UHE $\nu$ might scatter onto the (light and cosmological) relic neutrinos clustered around our galactic halo, or onto the nearby hot dark neutrino halo clustered around an AGN blazar and its jets. The branched chain of reactions from a primordial nucleon (via photoproduction of pions and decay to UHE neutrinos) to the consequent beam-dump scattering on galactic relic neutrinos is at least three orders of magnitude more efficient than any known neutrino interaction with the Earth's atmosphere or direct nucleon propagation. Therefore the rarest cosmic rays (such as the 320 EeV event) might originate at far $(\gtrsim 100\,\mathrm{Mpc})$ distances (as the Seyfert galaxy MCG 8-11-11); the corresponding UHE radiation power is in agreement with that observed at MeV gamma energies. The final chain products observed on Earth by the Fly's Eye and AGASA detectors might be mainly neutrons and anti-neutrons and, delayed, protons and anti-protons at symmetric off-axis angles. These hadronic products are most probably secondaries of $W^+ W^-$ or $ZZ$ pair production and might be consistent with the latest AGASA discoveries of doublets and one triplet event.
Quantum metamaterials generalize the concept of metamaterials (artificial optical media) to the case when their optical properties are determined by the interplay of quantum effects in the constituent 'artificial atoms' with the electromagnetic field modes in the system. The theoretical investigation of these structures demonstrated that a number of new effects (such as quantum birefringence, strongly nonclassical states of light, etc) are to be expected, prompting the efforts on their fabrication and experimental investigation. Here we provide a summary of the principal features of quantum metamaterials and review the current state of research in this quickly developing field, which bridges quantum optics, quantum condensed matter theory and quantum information processing.
We classify all possible allowed constitutive relations of relativistic fluids in a statistical mechanical limit using the Schwinger-Keldysh effective action for hydrodynamics. We find that microscopic unitarity enforces genuinely new constraints on the allowed transport coefficients that are invisible in the classical hydrodynamic description; they are not implied by the second law or the Onsager relations. We term these conditions Schwinger-Keldysh positivity and provide explicit examples of the various allowed terms.
Momentum-based acceleration of stochastic gradient descent (SGD) is widely used in deep learning. We propose the quasi-hyperbolic momentum algorithm (QHM) as an extremely simple alteration of momentum SGD, averaging a plain SGD step with a momentum step. We describe numerous connections to and identities with other algorithms, and we characterize the set of two-state optimization algorithms that QHM can recover. Finally, we propose a QH variant of Adam called QHAdam, and we empirically demonstrate that our algorithms lead to significantly improved training in a variety of settings, including a new state-of-the-art result on WMT16 EN-DE. We hope that these empirical results, combined with the conceptual and practical simplicity of QHM and QHAdam, will spur interest from both practitioners and researchers. Code is immediately available.
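The QHM update itself is a one-line change to momentum SGD, as the following sketch shows; the toy problem and hyperparameter values are illustrative.

    # Sketch of the QHM update: a weighted average of a plain SGD step and a
    # momentum step. nu = 0 recovers SGD; nu = 1 gives a normalized variant of
    # momentum SGD.
    import numpy as np

    def qhm_minimize(grad, theta0, lr=0.1, beta=0.9, nu=0.7, steps=200):
        theta = np.asarray(theta0, dtype=float)
        g = np.zeros_like(theta)                     # momentum buffer (EMA of gradients)
        for _ in range(steps):
            d = grad(theta)
            g = beta * g + (1.0 - beta) * d
            theta -= lr * ((1.0 - nu) * d + nu * g)  # average of SGD and momentum steps
        return theta

    # Quadratic toy problem: minimize 0.5 * ||theta - 3||^2.
    print(qhm_minimize(lambda th: th - 3.0, theta0=[0.0, 10.0]))  # approaches [3, 3]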
We present a numerical method to approximate the long-time asymptotic solution $\rho_\infty(t)$ to the Lindblad master equation for an open quantum system under the influence of an external drive. The proposed scheme uses perturbation theory to rank individual drive terms according to their dynamical relevance, and adaptively determines an effective Hamiltonian. In the constructed rotating frame, $\rho_\infty$ is approximated by a time-independent, nonequilibrium steady-state. This steady-state can be computed with much better numerical efficiency than asymptotic long-time evolution of the system in the lab frame. We illustrate the use of this method by simulating recent transmission measurements of the heavy-fluxonium device, for which ordinary time-dependent simulations are severely challenging due to the presence of metastable states with lifetimes of the order of milliseconds.
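The final step of such a scheme amounts to a stationary Lindblad solve in the rotating frame. The QuTiP sketch below illustrates this on a generic driven, damped cavity, not the heavy-fluxonium device of the paper; the operators and rates are placeholders.

    # Sketch: in a rotating frame where the drive is time-independent, the
    # long-time state is a stationary solution of the Lindblad equation.
    from qutip import destroy, steadystate, expect

    N = 20                                # Fock-space truncation
    a = destroy(N)
    delta, eps, kappa = 0.5, 0.3, 0.1     # detuning, drive amplitude, decay (arb. units)

    H_rot = delta * a.dag() * a + eps * (a + a.dag())   # effective rotating-frame Hamiltonian
    c_ops = [kappa**0.5 * a]                            # photon loss

    rho_ss = steadystate(H_rot, c_ops)    # time-independent NESS, no long-time evolution
    print(expect(a.dag() * a, rho_ss))    # steady-state photon number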
In this article, we study the transport properties of a Graphene-Superconductor-Graphene (GSG) heterojunction in which the superconducting region is created in the middle of a graphene sheet, as contrasted to the widely studied transport properties of Superconductor-Graphene-Superconductor (SGS) Josephson junctions. In particular, we analyse in detail the Goos-H\"anchen shift of the electron and the hole at the GS interface in such a junction, due to normal as well as Andreev reflection, using a transfer-matrix-based approach. Additionally, we evaluate the normalised differential conductance as a function of bias voltage, which characterises the transport through such a junction, and point out how it is influenced by Andreev and normal reflection. In the subsequent parts of the article we demonstrate how the GH shift for both the electron and the hole changes with the width of the superconducting region. The behavior of the differential conductance in such junctions as a function of the bias voltage, in the regimes dominated by Andreev and normal reflection, is also presented and analysed.
We present an analytical and numerical study of the orbital migration and resonance capture of fictitious two-planet systems with masses in the super-Earth range undergoing Type-I migration. We find that, depending on the flare index and proximity to the central star, the average value of the period ratio, $P_2/P_1$, between both planets may show a significant deviation with respect to the nominal value. For planets trapped in the 2:1 commensurability, offsets may reach values on the order of $0.1$ for orbital periods on the order of $1$ day, while systems in the 3:2 mean-motion resonance (MMR) show much smaller offsets for all values of the semimajor axis. These properties are in good agreement with the observed distribution of near-resonant exoplanets, independent of their detection method. We show that 2:1-resonant systems far from the star, such as HD82943 and HR8799, are characterized by very small resonant offsets, while higher values are typical of systems discovered by Kepler with orbital periods approximately a few days. Conversely, planetary systems in the vicinity of the 3:2 MMR show little offset with no significant dependence on the orbital distance. In conclusion, our results indicate that the distribution of Kepler planetary systems around the 2:1 and 3:2 MMR are consistent with resonant configurations obtained as a consequence of a smooth migration in a laminar flared disk, and no external forces are required to induce the observed offset or its dependence with the commensurability or orbital distance from the star.
The gas giant HD 80606 b has a highly eccentric orbit (e $\sim$ 0.93). The variation due to the rapid shift of stellar irradiation provides a unique opportunity to probe the physical and chemical timescales and to study the interplay between climate dynamics and atmospheric chemistry. In this work, we present integrated models to study the atmospheric responses and the underlying physical and chemical mechanisms of HD 80606 b. We first run three-dimensional general circulation models (GCMs) to establish the atmospheric thermal and dynamical structures for different atmospheric metallicities and internal heat. Based on the GCM output, we then adopted a 1D time-dependent photochemical model to investigate the compositional variation along the eccentric orbit. The transition of the circulation patterns of HD 80606 b matched the dynamics regimes in previous works. Our photochemical models show that efficient vertical mixing leads to deep quench levels of the major carbon and nitrogen species and the quenching behavior does not change throughout the eccentric orbit. Instead, photolysis is the main driver of the time-dependent chemistry. While CH$_4$ dominates over CO through most of the orbits, a transient state of [CO]/[CH$_4$] $>$ 1 after periastron is confirmed for all metallicity and internal heat cases. The upcoming JWST Cycle 1 GO program will be able to track this real-time CH$_4$--CO conversion and infer the chemical timescale. Furthermore, sulfur species initiated by sudden heating and photochemical forcing exhibit both short-term and long-term cycles, opening an interesting avenue for detecting sulfur on exoplanets.
We study the $S=1$ Heisenberg antiferromagnet on a spatially anisotropic triangular lattice by the numerical diagonalization method. We examine the stability of the long-range order of a three-sublattice structure observed in the isotropic system between the isotropic case and the case of isolated one-dimensional chains. It is found that the long-range-ordered ground state with this structure exists in the range $0.7 \lesssim J_2/J_1 \le 1$, where $J_1$ is the interaction amplitude along the chains and $J_2$ is the amplitude of the other interactions.
With the tremendous success of deep learning, there exists an imminent need to deploy deep learning models onto edge devices. To tackle the limited computing and storage resources in edge devices, model compression techniques have been widely used to trim deep neural network (DNN) models for on-device inference execution. This paper targets the commonly used FPGA (field programmable gate array) devices as the hardware platform for DNN edge computing. We focus on DNN quantization as the main model compression technique, since DNN quantization has been of great importance for the implementation of DNN models on hardware platforms. The novelty of this work is twofold: (i) We propose a mixed-scheme DNN quantization method that incorporates both linear and non-linear number systems for quantization, with the aim to boost the utilization of the heterogeneous computing resources, i.e., LUTs (look up tables) and DSPs (digital signal processors), on an FPGA. Note that all existing (single-scheme) quantization methods can only utilize one type of resource (either LUTs or DSPs) for the MAC (multiply-accumulate) operations in deep learning computations. (ii) We use a quantization method that supports multiple precisions along the intra-layer dimension, while the existing quantization methods apply multi-precision quantization along the inter-layer dimension. The intra-layer multi-precision method can keep the hardware configurations uniform across different layers to reduce computation overhead and, at the same time, preserve model accuracy as in the inter-layer approach.
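The two number systems combined by the mixed scheme can be sketched as follows. The bit-widths and rounding choices are illustrative, and the per-layer/per-channel assignment that is the core of the method is not shown.

    # Sketch: uniform (linear) fixed-point quantization, which maps well onto
    # DSP MACs, and power-of-two quantization, whose multiplications reduce to
    # shifts and therefore suit LUTs.
    import numpy as np

    def quantize_linear(w, bits=4):
        """Symmetric uniform quantization with 2^(bits-1) - 1 positive levels."""
        scale = np.max(np.abs(w)) / (2**(bits - 1) - 1)
        return np.round(w / scale) * scale

    def quantize_pow2(w, bits=4):
        """Round nonzero weights to the nearest signed power of two."""
        sign = np.sign(w)
        mag = np.abs(w)
        exp = np.clip(np.round(np.log2(np.maximum(mag, 1e-12))),
                      -2**(bits - 1), 2**(bits - 1) - 1)
        return sign * 2.0**exp

    w = np.random.default_rng(0).normal(scale=0.5, size=8)
    print(np.c_[w, quantize_linear(w), quantize_pow2(w)])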
The MEPED instruments on board the NOAA POES and MetOp satellites have been continuously measuring energetic particles in the magnetosphere since 1978. However, degradation of the proton detectors over time leads to an increase in the energy thresholds of the instrument and poses great challenges for studies of long-term variability in the near-Earth space environment as well as for a general quantification of the proton fluxes. By comparing monthly mean accumulated integral flux from a new and an old satellite at the same magnetic local time (MLT) and time period, we estimate the change in energy thresholds. The first 12 monthly energy spectra of the new satellite are used as a reference, and the derived monthly correction factors over a year for an old satellite show a small spread, indicating a robust calibration procedure. The method enables us to determine for the first time the correction factors also for the highest-energy channels of the proton detector. In addition, we make use of the newest satellite in orbit (MetOp-01) to find correction factors for 2013 for the NOAA 17 and MetOp-02 satellites. Without taking into account the level of degradation, the proton data from one satellite cannot be used quantitatively for more than 2 to 3 years after launch. As the electron detectors are vulnerable to contamination from energetic protons, the corrected proton measurements will be of value for electron flux measurements too. Thus, the correction factors ensure the correctness of both the proton and electron measurements.
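The essence of the correction procedure can be sketched in a few lines; the flux arrays below are synthetic placeholders for monthly-mean MEPED integral fluxes at a common MLT and period.

    # Sketch: monthly correction factors for an aging detector, estimated as
    # the ratio of monthly-mean integral fluxes (new satellite / old satellite).
    import numpy as np

    rng = np.random.default_rng(1)
    true_flux = rng.lognormal(mean=8.0, sigma=0.3, size=12)     # 12 monthly means
    flux_new = true_flux * rng.normal(1.00, 0.02, 12)           # fresh detector (reference)
    flux_old = 0.65 * true_flux * rng.normal(1.00, 0.05, 12)    # degraded response

    monthly_factors = flux_new / flux_old       # one correction factor per month
    print(monthly_factors.round(2))             # small spread -> robust calibration
    print("annual correction factor:", monthly_factors.mean().round(2))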
Many image enhancement or editing operations, such as forward and inverse tone mapping or color grading, do not have a unique solution, but instead a range of solutions, each representing a different style. Despite this, existing learning-based methods attempt to learn a unique mapping, disregarding this style. In this work, we show that information about the style can be distilled from collections of image pairs and encoded into a 2- or 3-dimensional vector. This gives us not only an efficient representation but also an interpretable latent space for editing the image style. We represent the global color mapping between a pair of images as a custom normalizing flow, conditioned on a polynomial basis of the pixel color. We show that such a network is more effective than PCA or VAE at encoding image style in low-dimensional space and lets us obtain an accuracy close to 40 dB, which is about 7-10 dB improvement over the state-of-the-art methods.
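As a simplified, non-flow stand-in for the global color mapping itself, the sketch below fits a degree-2 polynomial color transform between an image pair by least squares. The image data are random placeholders, and the paper's actual method is a conditional normalizing flow rather than this direct regression.

    # Sketch: global color mapping expressed on a polynomial basis of pixel
    # color, fitted by least squares between a source/target image pair.
    import numpy as np

    def poly_basis(rgb):
        """Degree-2 polynomial basis of pixel color: 1, r, g, b, r^2, ..., gb."""
        r, g, b = rgb.T
        return np.stack([np.ones_like(r), r, g, b,
                         r*r, g*g, b*b, r*g, r*b, g*b], axis=1)

    rng = np.random.default_rng(0)
    src = rng.random((10000, 3))                        # source pixels (N, 3), placeholder
    dst = np.clip(src**1.2 * [1.1, 1.0, 0.9], 0, 1)     # "styled" target pixels

    A = poly_basis(src)
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)      # (10, 3) global color mapping
    pred = A @ coef

    mse = np.mean((pred - dst)**2)
    print("PSNR (dB):", -10*np.log10(mse))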
CDW/normal-metal/CDW junctions and nanoconstrictions in crystals of the quasi-one-dimensional conductor NbSe$_3$ are manufactured using a focused ion beam. It is found that the low-temperature conduction of these structures changes dramatically and loses the features of the charge-density-wave transition. Instead, a dielectric phase develops. Power-law variations of the conduction spanning up to six orders of magnitude can be observed as a function of both temperature and electric field for this new phase. The transition from quasi-one-dimensional to one-dimensional behavior is associated with destruction of the three-dimensional order of the charge-density waves by fluctuations. It results in a recovery of the Luttinger-liquid properties of the metallic chains, as in the sliding-Luttinger-liquid phase.
Network theory is rapidly changing our understanding of complex systems, but the relevance of topological features for the dynamic behavior of metabolic networks, food webs, production systems, information networks, or cascade failures of power grids remains to be explored. Based on a simple model of supply networks, we offer an interpretation of instabilities and oscillations observed in biological, ecological, economic, and engineering systems. We find that most supply networks display damped oscillations, even when their units - and linear chains of these units - behave in a non-oscillatory way. Moreover, networks of damped oscillators tend to produce growing oscillations. This surprising behavior offers, for example, a new interpretation of business cycles and of oscillating or pulsating processes. The network structure of material flows itself turns out to be a source of instability, and cyclical variations are an inherent feature of decentralized adjustments.
By employing Hopf's functional method, we find the exact characteristic functional for a simple nonlinear dynamical system introduced by Orszag. Steady-state equal-time statistics thus obtained are compared to direct numerical simulation. The solution is both non-trivial and strongly non-Gaussian.
We show how to derive exact boundary $S$ matrices for integrable quantum field theories in 1+1 dimensions using lattice regularization. We do this calculation explicitly for the sine-Gordon model with fixed boundary conditions using the Bethe ansatz for an XXZ-type spin chain in a boundary magnetic field. Our results agree with recent conjectures of Ghoshal and Zamolodchikov, and indicate that the only solutions to the Bethe equations which contribute to the scaling limit are the standard strings.
One-flip stable configurations of an Ising model on a random graph with fluctuating connectivity are examined. In order to perform the quenched average of the number of stable configurations, we introduce a global order-parameter function with two arguments. The analytical results are compared with numerical simulations.