In this perspective, we discuss how one can initiate, image, and disentangle the ultrafast elementary steps of thermal-energy chemical dynamics, building upon advances in technology and scientific insight. We propose that combinations of ultrashort mid-infrared laser pulses, controlled molecular species in the gas phase, and forefront imaging techniques allow us to unravel the elementary steps of general-chemistry reaction processes in real time. We detail, for prototypical first reaction systems, experimental methods enabling these investigations, how to prepare gas-phase samples and promote them to thermal-energy reactive states with contemporary ultrashort mid-infrared laser systems, and how to image the initiated ultrafast chemical dynamics. The results of such experiments will clearly further our understanding of general-chemistry reaction dynamics.
Diffusion models have been extensively utilized in AI-generated content (AIGC) in recent years, thanks to their superior generation capabilities. Combined with semantic communications, diffusion models are used for tasks such as denoising, data reconstruction, and content generation. However, existing diffusion-based generative models do not consider the stringent bandwidth limitation, which limits their application in wireless communication. This paper introduces a diffusion-driven semantic communication framework with advanced VAE-based compression for bandwidth-constrained generative models. Our architecture utilizes the diffusion model, where the signal transmission process through the wireless channel acts as the forward process in diffusion. To reduce bandwidth requirements, we incorporate a downsampling module and a paired upsampling module based on a variational auto-encoder with reparameterization at the receiver, ensuring that the recovered features conform to a Gaussian distribution. Furthermore, we derive the loss function for our proposed system and evaluate its performance through comprehensive experiments. Our experimental results demonstrate significant improvements in pixel-level metrics such as peak signal-to-noise ratio (PSNR) and semantic metrics such as learned perceptual image patch similarity (LPIPS). These gains over deep joint source-channel coding (DJSCC) become more pronounced across compression rates and SNR levels.
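As a rough illustration of the down/upsampling pair described above, the following PyTorch sketch compresses a feature map to a lower-resolution latent and reparameterizes at the receiver so that the recovered features are Gaussian-distributed; the channel counts, strides, and module names are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class Downsampler(nn.Module):
    """Strided encoder that outputs mean/log-variance of a low-resolution latent."""
    def __init__(self, ch=64, latent_ch=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.to_mu = nn.Conv2d(ch, latent_ch, 1)
        self.to_logvar = nn.Conv2d(ch, latent_ch, 1)

    def forward(self, x):
        h = self.encoder(x)
        return self.to_mu(h), self.to_logvar(h)

class Upsampler(nn.Module):
    """Receiver side: reparameterize, then upsample back to the original resolution."""
    def __init__(self, ch=64, latent_ch=8):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, mu, logvar):
        # Reparameterization keeps the recovered features Gaussian-distributed.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z)

# Bandwidth saving: only mu/logvar on the 2x-downsampled grid need to be transmitted.
x = torch.randn(1, 64, 32, 32)
mu, logvar = Downsampler()(x)
recovered = Upsampler()(mu, logvar)
print(mu.shape, recovered.shape)  # torch.Size([1, 8, 16, 16]) torch.Size([1, 64, 32, 32])
```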
In this paper, we present CCEdit, a versatile generative video editing framework based on diffusion models. Our approach employs a novel trident network structure that separates structure and appearance control, ensuring precise and creative editing capabilities. Utilizing the foundational ControlNet architecture, we maintain the structural integrity of the video during editing. The incorporation of an additional appearance branch enables users to exert fine-grained control over the edited key frame. These two side branches seamlessly integrate into the main branch, which is constructed upon existing text-to-image (T2I) generation models, through learnable temporal layers. The versatility of our framework is demonstrated through a diverse range of choices in both structure representations and personalized T2I models, as well as the option to provide the edited key frame. To facilitate comprehensive evaluation, we introduce the BalanceCC benchmark dataset, comprising 100 videos and 4 target prompts for each video. Our extensive user studies compare CCEdit with eight state-of-the-art video editing methods. The outcomes demonstrate CCEdit's substantial superiority over all other methods.
We analyse transverse oscillations of a coronal loop excited by continuous monoperiodic motions of the loop footpoint at different frequencies in the presence of gravity. Using the MPI-AMRVAC code, we perform three-dimensional numerical magnetohydrodynamic simulations, considering the loop as a magnetic flux tube filled with denser, hotter, and gravitationally stratified plasma. We show the resonant response of the loop to its external excitation and analyse the development of the Kelvin-Helmholtz instability at different heights. We also study the spatial distribution of plasma heating due to transverse oscillations along the loop. The positions of maximum heating are in full agreement with those of the strongest Kelvin-Helmholtz instability, and correspond to the standing-wave anti-nodes in the resonant cases. The initial temperature configuration and the plasma mixing effect appear to play a significant role in plasma heating by transverse footpoint motions. In particular, the development of the Kelvin-Helmholtz instability in a hotter loop results in an enhancement of the mean plasma temperature in the domain.
Challenges related to development, deployment, and maintenance of reusable software for science are becoming a growing concern. Many scientists' research increasingly depends on the quality and availability of software upon which their works are built. To highlight some of these issues and share experiences, the First Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE1) was held in November 2013 in conjunction with the SC13 Conference. The workshop featured keynote presentations and a large number (54) of solicited extended abstracts that were grouped into three themes and presented via panels. A set of collaborative notes of the presentations and discussion was taken during the workshop. Unique perspectives were captured about issues such as comprehensive documentation, development and deployment practices, software licenses and career paths for developers. Attribution systems that account for evidence of software contribution and impact were also discussed. These include mechanisms such as Digital Object Identifiers, publication of "software papers", and the use of online systems, for example source code repositories like GitHub. This paper summarizes the issues and shared experiences that were discussed, including cross-cutting issues and use cases. It joins a nascent literature seeking to understand what drives software work in science, and how it is impacted by the reward systems of science. These incentives can determine the extent to which developers are motivated to build software for the long-term, for the use of others, and whether to work collaboratively or separately. It also explores community building, leadership, and dynamics in relation to successful scientific software.
We introduce a new technique for isolating components on the spectral side of the trace formula. By applying it to the Jacquet--Rallis relative trace formula, we complete the proof of the global Gan--Gross--Prasad conjecture and its refinement, the Ichino--Ikeda conjecture, for $\mathrm{U}(n)\times\mathrm{U}(n+1)$ in the stable case.
We consider $u(t,x)=(u_1(t,x),\cdots,u_d(t,x))$, the solution to a system of non-linear stochastic heat equations in spatial dimension one driven by a $d$-dimensional space-time white noise. We prove that, when $d\leq 3$, the local time $L(\xi,t)$ of $\{u(t,x)\,,\;t\in[0,T]\}$ exists and $L(\bullet,t)$ belongs a.s. to the Sobolev space $H^{\alpha}(\mathbb{R}^d)$ for $\alpha<\frac{4-d}{2}$, and that, when $d\geq 4$, the local time does not exist. We also show joint continuity and establish H\"{o}lder conditions for the local time of $\{u(t,x)\,,\;t\in[0,T]\}$. These results are then used to investigate the irregularity of the coordinate functions of $\{u(t,x)\,,\;t\in[0,T]\}$. Compared with similar results obtained for the linear stochastic heat equation (i.e., when the solution is Gaussian), we believe that our results are sharp. Finally, we obtain a sharp estimate for the partial derivatives of the joint density of $(u(t_1,x)-u(t_0,x),\cdots,u(t_n,x)-u(t_{n-1},x))$, which is a new result of independent interest.
We prove that, for a given operator in the Standard Model (SM) with baryon number B and lepton number L, the operator's dimension is even (odd) if (B-L)/2 is even (odd). Consequently, this establishes the veracity of statements that were long observed or expected to be true but not proven, e.g., that operators with B-L=0 are of even dimension, that B-L must be an even number, etc. These results remain true even if the SM is augmented by any number of right-handed neutrinos with L=1.
Cool stars on the main sequence generate X-rays from coronal activity, powered by a convective dynamo. With increasing temperature, the convective envelope becomes smaller and X-ray emission fainter. We present Chandra/HRC-I observations of four single stars with early A spectral types. Only the coolest star of this sample, $\tau^3$ Eri ($T_\mathrm{eff}\approx8\,000$ K), is detected with $\log(L_X/L_\mathrm{bol})=-7.6$ while the three hotter stars ($T_\mathrm{eff}\geq8\,000$ K), namely $\delta$ Leo, $\beta$ Leo, and $\iota$ Cen, remain undetected with upper limits $\log(L_X/L_\mathrm{bol})<-8.4$. The drop in X-ray emission thus occurs in a narrow range of effective temperatures around $\sim 8100$ K and matches the drop of activity in the C III and O VI transition region lines.
In this paper, the design and characterization of a novel interrogator based on integrated Fourier transform (FT) spectroscopy is presented. To the best of our knowledge, this is the first integrated FT spectrometer used for the interrogation of photonic sensors. It consists of a planar spatial heterodyne spectrometer, implemented using an array of Mach-Zehnder interferometers (MZIs) with different optical path differences. Each MZI employs a 3$\times$3 multi-mode interferometer, allowing the retrieval of the complex Fourier coefficients. We derive a system of non-linear equations whose solution, obtained numerically using Newton's method, gives the modulation of the sensor's resonances as a function of time. By taking as a reference one of the sensors, to which no external excitation is applied and whose temperature is kept constant, about 92$\%$ of the thermally induced phase drift of the integrated MZIs has been compensated. The minimum modulation amplitude obtained experimentally is 400 fm, which is more than two orders of magnitude smaller than the FT spectrometer resolution.
Let $K$ be an algebraically closed field of characteristic zero and let $R = K[X_1,\ldots,X_n]$. Let $I$ be an ideal in $R$. Let $A_n(K)$ be the $n^{th}$ Weyl algebra over $K$. By a result of Lyubeznik, the local cohomology modules $H^i_I(R)$ are holonomic $A_n(K)$-modules for each $i \geq 0$. In this paper we compute the Euler characteristic of the de Rham cohomology of $H^{\operatorname{ht} P}_P(R)$ for certain classes of prime ideals $P$ in $R$.
A trade-off between the precision of an arbitrary current and the dissipation, known as the thermodynamic uncertainty relation, has been investigated for various Markovian systems. Here, we study the thermodynamic uncertainty relation for underdamped Langevin dynamics. By employing information inequalities, we prove that, for such systems, the relative fluctuation of a current at a steady state is constrained by both the entropy production and the average dynamical activity. We find that, unlike what is the case for overdamped dynamics, the dynamical activity plays an important role in the bound. We illustrate our results with two systems, a single-well potential system and a periodically driven Brownian particle model, and numerically verify the inequalities.
Fast Abstracts are short presentations of work in progress or opinion pieces and aim to serve as a rapid and flexible mechanism to (i) Report on current work that may or may not be complete; (ii) Introduce new ideas to the community; (iii) State positions on controversial issues or open problems. On the other hand, the goal of the Student Forum is to encourage students to attend EDCC and present their work, exchange ideas with researchers and practitioners, and get early feedback on their research efforts.
Multiquantum vortices in dilute atomic Bose-Einstein condensates confined in long cigar-shaped traps are known to be both energetically and dynamically unstable. They tend to split into single-quantum vortices even in the ultralow temperature limit with vanishingly weak dissipation, which has also been confirmed in the recent experiments [Y. Shin et al., Phys. Rev. Lett. 93, 160406 (2004)] utilizing the so-called topological phase engineering method to create multiquantum vortices. We study the stability properties of multiquantum vortices in different trap geometries by solving the Bogoliubov excitation spectra for such states. We find that there are regions in the trap asymmetry and condensate interaction strength plane in which the splitting instability of multiquantum vortices is suppressed, and hence they are dynamically stable. For example, the doubly quantized vortex can be made dynamically stable even in spherical traps within a wide range of interaction strength values. We expect that this suppression of vortex-splitting instability can be experimentally verified.
This paper introduces the notion of objection-based causal networks, which resemble probabilistic causal networks except that they are quantified using objections. An objection is a logical sentence and denotes a condition under which a causal dependency does not exist. Objection-based causal networks enjoy almost all the properties that make probabilistic causal networks popular, with the added advantage that objections are arguably more intuitive than probabilities.
Let $A$ be a CQG Hopf $*$-algebra, i.e. a Hopf $*$-algebra with a positive invariant state. Given a unital right coideal $*$-subalgebra $B$ of $A$, we provide conditions for the existence of a quasi-invariant integral on the stabilizer coideal $B^{\perp}$ inside the dual discrete multiplier Hopf $*$-algebra of $A$. Given such a quasi-invariant integral, we show how it can be extended to a quasi-invariant integral on the Drinfeld double coideal. We moreover show that the representation theory of the Drinfeld double coideal has a monoidal structure. As an application, we determine the quasi-invariant integral for the coideal $*$-algebra $U_q(\mathfrak{sl}(2,\mathbb{R}))$ constructed from the Podle\'{s} spheres.
Let k be an algebraically closed field of characteristic 0. We conclude the classification of finite dimensional pointed Hopf algebras whose group of group-likes is S_4. We also describe all pointed Hopf algebras over S_5 whose infinitesimal braiding is associated to the rack of transpositions.
It is well known that every modular form~$f$ on a discrete subgroup $\Gamma\leqslant \textrm{SL}(2, \mathbb R)$ satisfies a third-order nonlinear ODE that expresses algebraic dependence of the functions~$f$, $f'$, $f''$ and~$f'''$. These ODEs are automatically invariant under the Lie group $\textrm{SL}(2, \mathbb R)$, which acts on the solution spaces thereof with an open orbit (and the discrete stabiliser~$\Gamma$ of a generic solution). Similarly, every modular form satisfies a fourth-order nonlinear ODE that is invariant under the Lie group $\textrm{GL}(2, \mathbb R)$ acting on its solution space with an open orbit. ODEs for modular forms can be compactly expressed in terms of the differential invariants of these actions. The invariant forms of both ODEs define plane algebraic curves naturally associated with every modular form; the corresponding ODEs can be seen as modular parametrisations of the associated curves. After reviewing examples of nonlinear ODEs satisfied by classical modular forms (such as Eisenstein series, modular forms on congruence subgroups of level two and three, theta constants, and some newforms of weight two), we generalise these results to Jacobi forms; these satisfy involutive third-order PDE systems that are invariant under the Lie group $\textrm{SL}(2, \mathbb R)\ltimes H$ where $H$ is the Heisenberg group.
A Rota--Baxter operator is an algebraic abstraction of integration, which is the typical example of a Rota--Baxter operator of weight zero. We show that studying the modules over the polynomial Rota--Baxter algebra $(k[x],P)$ is equivalent to studying the modules over the Jordan plane, and we generalize the direct decomposability results for the $(k[x],P)$-modules in [Iy] from algebraically closed fields of characteristic zero to arbitrary fields of characteristic zero. Furthermore, we provide a classification of Rota--Baxter modules up to isomorphism based on indecomposable $k[x]$-modules.
We study cumulants of the chiral order parameter as a function of beam energy as a possible signal for the presence of a critical end point and a first-order phase transition in the QCD phase diagram. We model the expansion of a heavy-ion collision by a fluid dynamic expansion coupled to the explicit propagation of the chiral order parameter sigma via a Langevin equation. We evolve the medium until a parametrized freeze-out condition is met, where we calculate event-by-event fluctuations and cumulants of sigma, which are expected to follow the trend of net-proton number cumulants. We emphasize the role of a nonequilibrium first-order phase transition: the presence of an unstable phase causes the well-known bending of the trajectories in the space of temperature and baryochemical potential. For these cases at lower beam energies, the system crosses the freeze-out line more than once, allowing us to calculate a range of cumulants for each initial condition; these are overall enhanced at the second crossing of the freeze-out line. We thus find not only the critical end point but also the phase transition of the underlying model clearly reflected in the cumulants. Volume fluctuations are demonstrated to play a measurable role for fluid dynamical evolutions that last sufficiently long.
Given a function $f$ from the set $[N]$ to a $d$-dimensional integer grid, we consider data structures that allow efficient orthogonal range searching queries in the image of $f$, without explicitly storing it. We show that, if $f$ is of the form $[N]\to [2^{w}]^d$ for some $w=\mathrm{polylog} (N)$ and is computable in constant time, then, for any $0<\alpha <1$, we can obtain a data structure using $\tilde {O}(N^{1-\alpha / 3})$ words of space such that, for a given $d$-dimensional axis-aligned box $B$, we can search for some $x\in [N]$ such that $f(x) \in B$ in time $\tilde{O}(N^{\alpha})$. This result is obtained simply by combining integer range searching with the Fiat-Naor function inversion scheme, which was already used in data-structure problems previously. We further obtain - data structures for range counting and reporting, predecessor, selection, ranking queries, and combinations thereof, on the set $f([N])$, - data structures for preimage size and preimage selection queries for a given value of $f$, and - data structures for selection and ranking queries on geometric quantities computed from tuples of points in $d$-space. These results unify and generalize previously known results on 3SUM-indexing and string searching, and are widely applicable as a black box to a variety of problems. In particular, we give a data structure for a generalized version of gapped string indexing, and show how to preprocess a set of points on an integer grid in order to efficiently compute (in sublinear time), for points contained in a given axis-aligned box, their Theil-Sen estimator, the $k$th largest area triangle, or the induced hyperplane that is the $k$th furthest from the origin.
We present the form of the Dirac quantisation condition for the p-form charges carried by p-brane solutions of supergravity theories. This condition agrees precisely with the conditions obtained in lower dimensions, as is necessary for consistency with Kaluza-Klein dimensional reduction. These considerations also determine the charge lattice of BPS soliton states, which proves to be a universal modulus-independent lattice when the charges are defined to be the canonical charges corresponding to the quantum supergravity symmetry groups.
A recent study of ultra-compact dwarf galaxies (UCDs) in the Virgo cluster revealed that some of them show faint envelopes and have measured mass-to-light ratios of 5 and larger, which cannot be explained by simple population synthesis models. It is believed that this proves that some of the UCDs must possess a dark matter halo and may therefore be stripped nuclei of dwarf ellipticals rather than merged star cluster complexes. Using an efficient N-body method, we investigate whether a close passage of a UCD through the central region of the host galaxy is able to enhance the measured mass-to-light ratio by tidal forces leaving the satellite slightly out of virial equilibrium, thereby leading to an overestimation of its virial mass. We find this to be possible and discuss the general problem of measuring dynamical masses for objects that are probably interacting with their hosts.
We propose a configuration of D-branes, welded by an analogous orbifold operation, to be responsible for the enhancement of $SO(2N)$ gauge symmetry in type II string theory compactified on the $D_n$-type singular limit of K3. Evidence is discussed from the $D_n$-type ALE and D-manifold points of view. A subtlety regarding the ability to see the enhanced $SO(2N)$ gauge symmetry perturbatively is briefly addressed.
We define a natural concept of duality for the h-Hopf algebroids introduced by Etingof and Varchenko. We prove that the special case of the trigonometric SL(2) dynamical quantum group is self-dual, and may therefore be viewed as a deformation both of the function algebra F(SL(2)) and of the enveloping algebra U(sl(2)). Matrix elements of the self-duality in the Peter-Weyl basis are 6j-symbols; this leads to a new algebraic interpretation of the hexagon identity or quantum dynamical Yang-Baxter equation for quantum and classical 6j-symbols.
This article is the first in a series of three papers whose aim is to give new proofs of the well-known theorems of Calder\'{o}n, Coifman, McIntosh, and Meyer. Here we treat the case of the first commutator and some of its generalizations.
Probabilistic image registration methods estimate the posterior distribution of the transformation. The conventional way of interpreting the transformation posterior is to use the mode as the most likely transformation and assign its corresponding intensity to the registered voxel. Meanwhile, summary statistics of the posterior are employed to evaluate the registration uncertainty, that is, the trustworthiness of the registered image. Despite its wide acceptance, this convention has never been justified. In this paper, based on illustrative examples, we question the correctness and usefulness of conventional methods. In order to faithfully translate the transformation posterior, we propose to encode the variability of values into a novel data type called ensemble fields. Ensemble fields can serve as a complement to the registered image and as a foundation for developing advanced methods to characterize the uncertainty in registration-based tasks. We demonstrate the potential of ensemble fields with pilot examples.
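A toy numpy sketch of the ensemble-field idea, under the simplifying assumption that the transformation posterior is a set of sampled 2-D translations (real registration posteriors and interpolating warps are far richer); it keeps, for every voxel, the full collection of intensities implied by the posterior samples rather than only the value given by the mode.

```python
import numpy as np

rng = np.random.default_rng(0)
moving = rng.random((64, 64))                       # image to be registered
samples = rng.normal(0.0, 1.5, size=(100, 2))       # posterior samples of a translation

def warp(img, shift):
    # Nearest-neighbour warp by an integer-rounded translation (illustrative only).
    dy, dx = np.round(shift).astype(int)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

# Ensemble field: one array of candidate intensities per voxel,
# instead of a single mode-based registered image.
ensemble = np.stack([warp(moving, s) for s in samples], axis=0)   # shape (100, 64, 64)

registered_mean = ensemble.mean(axis=0)    # one possible summary image
voxelwise_spread = ensemble.std(axis=0)    # per-voxel uncertainty carried by the ensemble
print(ensemble.shape, voxelwise_spread.max())
```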
In recent years, there has been an explosive growth in multimodal learning. Image captioning, a classical multimodal task, has demonstrated promising applications and attracted extensive research attention. However, recent studies have shown that image caption models are vulnerable to some security threats such as backdoor attacks. Existing backdoor attacks against image captioning typically pair a trigger either with a predefined sentence or a single word as the targeted output, yet they are unrelated to the image content, making them easily noticeable as anomalies by humans. In this paper, we present a novel method to craft targeted backdoor attacks against image caption models, which are designed to be stealthier than prior attacks. Specifically, our method first learns a special trigger by leveraging universal perturbation techniques for object detection, then places the learned trigger in the center of some specific source object and modifies the corresponding object name in the output caption to a predefined target name. During the prediction phase, the caption produced by the backdoored model for input images with the trigger can accurately convey the semantic information of the rest of the whole image, while incorrectly recognizing the source object as the predefined target. Extensive experiments demonstrate that our approach can achieve a high attack success rate while having a negligible impact on model clean performance. In addition, we show our method is stealthy in that the produced backdoor samples are indistinguishable from clean samples in both image and text domains, which can successfully bypass existing backdoor defenses, highlighting the need for better defensive mechanisms against such stealthy backdoor attacks.
In this paper we reconsider recently derived bounds on MeV tau neutrinos, taking into account previously unaccounted-for effects. We find that, assuming that the neutrino lifetime is longer than $O(100~\mathrm{sec})$, the constraint $N_{eff}<3.6$ rules out $\nu_{\tau}$ masses in the range $0.5~\mathrm{MeV}<m_{\nu_\tau}<35~\mathrm{MeV}$ for Majorana neutrinos and $0.74~\mathrm{MeV}<m_{\nu_\tau}<35~\mathrm{MeV}$ for Dirac neutrinos. Given that the present laboratory bound is 35 MeV, our results lower the bound to $0.5$ MeV and $0.74$ MeV for Majorana and Dirac neutrinos, respectively.
The idea of unifying quarks and leptons in a gauge symmetry is very appealing. However, such a unification gives rise to leptoquark-type gauge bosons, for which current collider limits push the masses well beyond the TeV scale. We present a model in the framework of extra dimensions which breaks this quark-lepton unification symmetry via compactification at the TeV scale. These color triplet leptoquark gauge bosons, as well as the new quarks present in the model, can be produced at the LHC with distinctive final state signatures. These final state signals include high p_T multi-jets and multi-leptons with missing energy, monojets with missing energy, as well as heavy charged particles passing through the detectors, which we also discuss briefly. The model also has a neutral Standard Model singlet heavy lepton which is stable and can be a possible candidate for dark matter.
GuessWhat?! is a two-player visual dialog guessing game in which player A asks a sequence of yes/no questions (Questioner) about a target object in an image, based on answers from player B (Oracle), and then makes a final guess (Guesser) of the target object from the dialog history. The previous baseline Oracle model encodes no visual information, and it cannot fully understand complex questions about color, shape, relationships, and so on. Most existing work on the Guesser encodes the dialog history as a whole and trains the Guesser models from scratch on the GuessWhat?! dataset. This is problematic since language encoders tend to forget long-term history and the GuessWhat?! data is sparse in terms of learning visual grounding of objects. Previous work on the Questioner introduces a state tracking mechanism into the model, but it is learned as a soft intermediate without any prior vision-linguistic insights. To bridge these gaps, in this paper we propose Vilbert-based Oracle, Guesser, and Questioner models, which are all built on top of the pretrained vision-linguistic model Vilbert. We introduce a two-way background/target fusion mechanism into Vilbert-Oracle to account for both intra- and inter-object questions. We propose a unified framework for Vilbert-Guesser and Vilbert-Questioner, where a state estimator is introduced to best utilize Vilbert's power on single-turn referring expression comprehension. Experimental results show that our proposed models outperform state-of-the-art models significantly, by 7%, 10%, and 12% for the Oracle, Guesser, and end-to-end Questioner, respectively.
We analyse the internal structure and dynamics of cosmic-web filaments that connect massive high-$z$ haloes. Our analysis is based on a high-resolution AREPO cosmological simulation zooming in on a volume encompassing three ${\rm Mpc}$-scale filaments feeding three massive haloes of $\sim 10^{12}\,\text{M}_\odot$ at $z \sim 4$, embedded in a large-scale sheet. Each filament is surrounded by a cylindrical accretion shock of radius $r_{\rm shock} \sim 50 \,{\rm kpc}$. The post-shock gas is in virial equilibrium with the potential well set by an isothermal dark-matter filament. The filament line-mass is $\sim 9\times 10^8\,\text{M}_\odot\,{\rm kpc}^{-1}$, the gas fraction within $r_{\rm shock}$ is the universal baryon fraction, and the virial temperature is $\sim 7\times 10^5 {\rm K}$. In the outer ''thermal'' (T) zone, $r \geq 0.65 \, r_{\rm shock}$, inward gravity and ram-pressure forces are over-balanced by outwards thermal pressure forces, decelerating the inflowing gas and expanding the shock outward. In the intermediate ''vortex'' (V) zone, $0.25 \leq r/ r_{\rm shock} \leq 0.65$, the velocity field is dominated by a quadrupolar vortex structure due to offset inflow along the sheet through the post-shock gas. The outwards force is dominated by centrifugal forces associated with these vortices, with additional contributions from global rotation and thermal pressure. The shear and turbulent forces associated with the vortices act inward. The inner ''stream'' (S) zone, $r < 0.25 \, r_{\rm shock}$, is a dense isothermal core, $T\sim 3 \times 10^4 \, {\rm K}$ and $n_{\rm H}\sim 0.01 \,{\rm cm^{-3}}$, defining the cold streams that feed galaxies. The core is formed by an isobaric cooling flow and is associated with a decrease in outwards forces, though it exhibits both inflows and outflows. [abridged]
Over the past 20 years, Kenya's demand for petroleum products has grown rapidly. This is mainly because this particular commodity is used in many sectors of the country's economy. Exchange rates are impacted by constantly shifting prices, which also affect Kenya's industrial output of commodities. The cost of other goods produced, and even the growth of the economy, is significantly impacted by any change in the price of petroleum products. Therefore, accurate petroleum price forecasting is critical for devising policies that are suitable to curb fuel-related shocks. Data mining techniques are the tools used to find valuable patterns in data. Data mining techniques used in petroleum price prediction, including artificial neural networks (ANNs), support vector machines (SVMs), and intelligent optimization techniques like the genetic algorithm (GA), have grown increasingly popular. This study provides a comprehensive review of the existing data mining techniques for making predictions on petroleum prices. The data mining techniques are classified into regression models, deep neural network models, fuzzy sets and logic, and hybrid models. A detailed discussion of how these models are developed, and of their accuracy, is provided.
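As a concrete instance of the regression-model family surveyed above, here is a minimal scikit-learn sketch that fits a support vector regressor to lagged prices; the synthetic series, lag length, and hyperparameters are illustrative assumptions rather than choices from any reviewed study.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic monthly price series standing in for real petroleum price data.
rng = np.random.default_rng(0)
t = np.arange(240)
price = 60 + 0.1 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, t.size)

# Supervised framing: predict next month's price from the previous 6 months (lags).
lags = 6
X = np.column_stack([price[i:len(price) - lags + i] for i in range(lags)])
y = price[lags:]

# Chronological split: train on the first 200 months, test on the remainder.
X_train, X_test, y_train, y_test = X[:200], X[200:], y[:200], y[200:]

model = SVR(kernel="rbf", C=100.0, epsilon=0.5).fit(X_train, y_train)
pred = model.predict(X_test)
print("test RMSE:", np.sqrt(np.mean((pred - y_test) ** 2)))
```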
Sixth generation systems are expected to face new security challenges, while opening up new frontiers towards context awareness in the wireless edge. The workhorse behind this projected technological leap will be a whole new set of sensing capabilities predicted for 6G devices, in addition to the ability to achieve high precision localization. The combination of these enhanced traits can give rise to a new breed of context-aware security protocols, following the quality of security (QoSec) paradigm. In this framework, physical layer security solutions emerge as competitive candidates for low complexity, low-delay and low-footprint, adaptive, flexible and context aware security schemes, leveraging the physical layer of the communications in genuinely cross-layer protocols, for the first time.
The Somos 5 sequences are a family of sequences defined by a fifth-order bilinear recurrence relation with constant coefficients. For particular choices of coefficients and initial data, integer sequences arise. By making the connection with a second-order nonlinear mapping with a first integral, we prove that the two subsequences of odd/even index terms each satisfy a Somos 4 (fourth-order) recurrence. This leads directly to the explicit solution of the initial value problem for the Somos 5 sequences in terms of the Weierstrass sigma function for an associated elliptic curve.
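A quick numerical check of this statement, assuming the classic Somos 5 recurrence a(n+5)a(n) = a(n+4)a(n+1) + a(n+3)a(n+2) with unit coefficients and initial data; the coefficients of the induced Somos 4 relations are solved for from the first two instances and then verified on the remaining terms, not assumed.

```python
from fractions import Fraction

def somos5(n_terms):
    """Classic Somos-5 sequence with all five initial terms equal to 1."""
    a = [Fraction(1)] * 5
    while len(a) < n_terms:
        a.append((a[-1] * a[-4] + a[-2] * a[-3]) / a[-5])
    return a

def somos4_coeffs(c):
    """Solve c[n+4]c[n] = alpha*c[n+3]c[n+1] + beta*c[n+2]^2 from the first two
    instances, then verify the same (alpha, beta) for every remaining index."""
    a11, a12, b1 = c[3] * c[1], c[2] ** 2, c[4] * c[0]
    a21, a22, b2 = c[4] * c[2], c[3] ** 2, c[5] * c[1]
    det = a11 * a22 - a12 * a21
    alpha = (b1 * a22 - b2 * a12) / det
    beta = (a11 * b2 - a21 * b1) / det
    ok = all(c[n + 4] * c[n] == alpha * c[n + 3] * c[n + 1] + beta * c[n + 2] ** 2
             for n in range(len(c) - 4))
    return alpha, beta, ok

a = somos5(30)
print(somos4_coeffs(a[0::2]))  # odd-index subsequence satisfies a Somos-4 relation
print(somos4_coeffs(a[1::2]))  # even-index subsequence satisfies a Somos-4 relation
```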
We propose a modification of a recently introduced generalized translation operator, by including a $q$-exponential factor, which leads to the definition of a Hermitian deformed linear momentum operator $\hat{p}_q$ and its canonically conjugate deformed position operator $\hat{x}_q$. A canonical transformation maps the Hamiltonian of a position-dependent-mass particle to the Hamiltonian of a constant-mass particle in a conservative force field of a deformed phase space. The equation of motion for the classical phase space may be expressed in terms of the generalized dual $q$-derivative. A position-dependent mass confined in an infinite square potential well is presented as an example. Uncertainty and correspondence principles are analyzed.
We propose a generalization of the Chern-Simons (CS) Lagrangian which is invariant under SIM(2) transformations but not under the full Lorentz group. We study the effect of such a term on radiation propagating over cosmological distances. We find that the dominant effect of this term is to produce circular polarization as radiation propagates through space. We use the circular polarization data from distant radio sources in order to impose a limit on this term.
We estimate the $L^{p}$ norms of the discrepancy between the volume and the number of integer points in $r\Omega-x$, the dilation by a factor $r$ and translation by a vector $x$ of a convex body $\Omega$ in $\mathbb{R}^{d}$ with smooth boundary of strictly positive curvature, \[ \left\{ {\displaystyle\int_{\mathbb R}}{\displaystyle\int_{\mathbb{T}^{d}}}\left\vert \sum_{k\in\mathbb{Z}^{d}}\chi _{r\Omega-x}(k)-r^{d}\left\vert \Omega\right\vert \right\vert ^{p}dxd\mu(r-R) \right\} ^{1/p}, \] where $\mu$ is a Borel measure compactly supported on the positive real axis and $R\to+\infty$.
In this paper, we analyze the upper critical field of four MgB2 thin films, with different resistivities (between 5 and 50 μΩ cm) and critical temperatures (between 29.5 and 38.8 K), measured up to 28 Tesla. In the perpendicular direction the critical fields vary from 13 to 24 T, and we estimate a range of 42-57 T in the parallel direction. We observe a linear temperature dependence even at low temperatures, without saturation, in contrast to BCS theory. Considering the multiband nature of superconductivity in MgB2, we conclude that two different scattering mechanisms separately influence resistivity and critical field. In this framework, resistivity values have been calculated from the Hc2(T) curves and compared with the measured ones.
We study biwarped product submanifolds, which are special cases of multiply warped product submanifolds, in K\"{a}hler manifolds. We observe the non-existence of such submanifolds under some circumstances. We show that there exists a non-trivial biwarped product submanifold of a certain type by giving an illustrative example. We also give a necessary and sufficient condition for such submanifolds to be locally trivial. Moreover, we establish an inequality for the squared norm of the second fundamental form in terms of the warping functions for such submanifolds. The equality case is also discussed.
This paper discusses the relationship between memoized top-down recognizers and chart parsers. It presents a version of memoization suitable for continuation-passing style programs. When applied to a simple formalization of a top-down recognizer, it yields a terminating parser.
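A minimal Python sketch of the idea, assuming a continuation-passing encoding of recognizers; the combinator names are illustrative and the paper's formalization differs in detail, but the memo table (results found so far plus waiting continuations) is what makes the left-recursive grammar below terminate.

```python
# Recognizers are functions (s, i, k) that call k(j) for every j such that s[i:j]
# matches.  The memo table keeps, per input position, the results found so far and
# the continuations still waiting, so repeated calls at the same position terminate.
def memo(recognizer):
    table = {}
    def wrapped(s, i, k):
        key = (id(s), i)
        if key not in table:
            entry = {"results": [], "conts": [k]}
            table[key] = entry
            def notify(j):
                if j not in entry["results"]:
                    entry["results"].append(j)
                    for cont in list(entry["conts"]):
                        cont(j)
            recognizer(s, i, notify)
        else:
            entry = table[key]
            entry["conts"].append(k)
            for j in list(entry["results"]):
                k(j)
    return wrapped

def term(c):
    def recognize(s, i, k):
        if i < len(s) and s[i] == c:
            k(i + 1)
    return recognize

def seq(r1, r2):
    return lambda s, i, k: r1(s, i, lambda j: r2(s, j, k))

def alt(r1, r2):
    def recognize(s, i, k):
        r1(s, i, k)
        r2(s, i, k)
    return recognize

# Left-recursive grammar  S -> S 'a' | 'a'  terminates only because of the memo table.
def S_rule(s, i, k):
    alt(seq(S, term('a')), term('a'))(s, i, k)
S = memo(S_rule)

def recognizes(s):
    found = []
    S(s, 0, found.append)
    return len(s) in found

print(recognizes("aaaa"))  # True
print(recognizes("aab"))   # False
```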
We aim to determine the production rates of several parent and product volatiles and the 12C/13C isotopic carbon ratio in the long-period comet C/2004 Q2 (Machholz), which is likely to originate from the Oort Cloud. The line emission from several molecules in the coma was measured with high signal-to-noise ratio in January 2005, at a heliocentric distance of 1.2 AU, by means of high-resolution spectroscopic observations using the Submillimeter Telescope (SMT). We have obtained production rates of several volatiles (CH3OH, HCN, H13CN, HNC, H2CO, CO and CS) by comparing the observed and simulated line-integrated intensities. Furthermore, multiline observations of the CH3OH (7-6) series allow us to estimate the rotational temperature using the rotation diagram technique. We find that the CH3OH population distribution of the levels sampled by these lines can be described by a rotational temperature of 40 \pm 3 K. Derived mixing ratios relative to hydrogen cyanide are CO/CH3OH/H2CO/CS/HNC/H13CN/HCN = 30.9/24.6/4.8/0.57/0.031/0.013/1, assuming a pointing offset of 8" due to the uncertain ephemeris at the time of the observations and the telescope pointing error. The measured relative molecular abundances in C/2004 Q2 (Machholz) range from low to typical values among those obtained in Oort Cloud comets, suggesting that it has visited the inner solar system previously and undergone thermal processing. The HNC/HCN abundance ratio of ~3.1% is comparable to that found in other comets, accounting for the dependence on the heliocentric distance, and could possibly be explained by ion-molecule chemical processes in the low-temperature atmosphere. From a tentative H13CN detection, the measured value of 97 \pm 30 for the H12CN/H13CN isotopologue pair is consistent with the telluric value.
A key aim in biology and psychology is to identify fundamental principles underpinning the behavior of animals, including humans. Analyses of human language and the behavior of a range of non-human animal species have provided evidence for a common pattern underlying diverse behavioral phenomena: words follow Zipf's law of brevity (the tendency of more frequently used words to be shorter), and conformity to this general pattern has been seen in the behavior of a number of other animals. It has been argued that the presence of this law is a sign of efficient coding in the information theoretic sense. However, no strong direct connection has been demonstrated between the law and compression, the information theoretic principle of minimizing the expected length of a code. Here we show that minimizing the expected code length implies that the length of a word cannot increase as its frequency increases. Furthermore, we show that the mean code length or duration is significantly small in human language, and also in the behavior of other species in all cases where agreement with the law of brevity has been found. We argue that compression is a general principle of animal behavior that reflects selection for efficiency of coding.
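A small sketch of the link between compression and brevity, using a toy frequency distribution: an optimal (Huffman) prefix code never assigns a longer codeword to a strictly more frequent symbol, which is the "length cannot increase with frequency" property stated above. This is only an illustration of the information-theoretic principle, not the paper's analysis of behavioral data.

```python
import heapq
from collections import Counter

def huffman_lengths(freqs):
    """Return the Huffman codeword length for every symbol in `freqs`."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper.
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

freqs = Counter("the quick brown fox jumps over the lazy dog the the the")
lengths = huffman_lengths(freqs)
for symbol, f in sorted(freqs.items(), key=lambda kv: -kv[1]):
    print(repr(symbol), f, lengths[symbol])
# More frequent symbols never receive longer codewords than less frequent ones
# (symbols with equal frequency may differ in length either way).
```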
This paper studies the second moment boundedness of solutions of linear stochastic delay differential equations. First, for general $\mathrm{N}$-dimensional linear stochastic differential equations with a single discrete delay, we give a framework for calculating the characteristic function associated with second moment boundedness. Next, we apply the proposed framework to a special case of a type of 2-dimensional equation in which the stochastic terms are decoupled. For this 2-dimensional equation, we obtain the characteristic function explicitly in terms of the equation coefficients; the characteristic function gives sufficient conditions for the second moment to be bounded or unbounded.
We demonstrate a fiber-integrated quantum optical circulator that is operated by a single atom and that relies on the chiral interaction between emitters and transversally confined light. Like its counterparts in classical optics, our circulator exhibits an inherent asymmetry between light propagation in the forward and the backward direction. However, rather than a magnetic field or a temporal modulation, it is the internal quantum state of the atom that controls the operation direction of the circulator. This working principle is compatible with preparing the circulator in a coherent superposition of its operational states. Such a quantum circulator may thus become a key element for routing and processing quantum information in scalable integrated optical circuits. Moreover, it features a strongly nonlinear response at the single-photon level, thereby enabling, e.g., photon number-dependent routing and novel quantum simulation protocols.
Let $(R,\mathfrak{m})$ be a Noetherian local ring of dimension $d > 0$. Let $I_\bullet = \{I_n\}_{n \in \mathbb{N}}$ be a graded family of $\mathfrak{m}$-primary ideals in $R$. We examine how far from a polynomial the length function $\ell_R(R/I_n)$ can be asymptotically. More specifically, we show that there exists a constant $\gamma > 0$ such that for all $n \ge 0$, $$\ell_R(R/I_{n+1}) - \ell_R(R/I_n) < \gamma n^{d-1}.$$
A core issue with learning to optimize neural networks has been the lack of generalization to real world problems. To address this, we describe a system designed from a generalization-first perspective, learning to update optimizer hyperparameters instead of model parameters directly using novel features, actions, and a reward function. This system outperforms Adam at all neural network tasks including on modalities not seen during training. We achieve 2x speedups on ImageNet, and a 2.5x speedup on a language modeling task using over 5 orders of magnitude more compute than the training tasks.
Do we know whether a short selling ban or a Tobin Tax results in more stable asset prices? Or do they in fact make things worse? Just like medicine, regulatory measures in financial markets aim at improving an already complex system. And just like medicine, these interventions can cause side effects which are even harder to assess when the interplay with other measures is taken into account. In this paper an agent-based stock market model is built that tries to find answers to the questions above. In a stepwise procedure, regulatory measures are introduced and their implications for market liquidity and stability examined. In particular, the effects of (i) a ban on short selling, (ii) a mandatory risk limit, i.e. a Value-at-Risk limit, (iii) the introduction of a Tobin Tax, i.e. a transaction tax on trading, and (iv) any arbitrary combination of these measures are observed and discussed. The model is set up to incorporate non-linear feedback effects of leverage and liquidity constraints leading to fire sales and escape dynamics. In its unregulated version the model outcome is capable of reproducing stylised facts of asset returns such as fat tails and clustered volatility. Introducing regulatory measures shows that only a mandatory risk limit is beneficial from every perspective, while a short selling ban - though reducing volatility - increases tail risk. The contrary holds true for a Tobin Tax: it reduces the occurrence of crashes but increases volatility. Furthermore, the interplay of measures is not negligible: measures block each other, and a well-chosen combination can mitigate unforeseen side effects. Concerning the Tobin Tax, the findings indicate that an overdose can do severe harm.
We investigate the efficiency of a gauge-covariant neural network and of an approximation of the Jacobian in optimizing the complexified integration path toward evading the sign problem in lattice field theories. For the construction of the complexified integration path, we employ the path optimization method. The $2$-dimensional $\text{U}(1)$ gauge theory with a complex gauge coupling constant is used as a laboratory to evaluate the efficiency. It is found that the gauge-covariant neural network, which is composed of Stout-like smearing, can enhance the average phase factor, as the gauge-invariant input does. For the approximation of the Jacobian, we test the most drastic case in which we completely drop the Jacobian during the learning process. This reduces the numerical cost of the Jacobian calculation from ${\cal O}(N^3)$ to ${\cal O}(1)$, where $N$ is the number of degrees of freedom of the theory. The path optimization using this Jacobian approximation still enhances the average phase factor at the expense of a slight increase in the statistical error.
We report on van der Waals epitaxial growth, materials characterization and magnetotransport experiments in crystalline nanosheets of Bismuth Telluro-Sulfide (BTS). Highly layered, good-quality crystalline nanosheets of BTS are obtained on SiO$_2$ and muscovite mica. Weak-antilocalization (WAL), electron-electron interaction-driven insulating ground state and universal conductance fluctuations are observed in magnetotransport experiments on BTS devices. Temperature, thickness and magnetic field dependence of the transport data indicate the presence of two-dimensional surface states along with bulk conduction, in agreement with theoretical models. An extended-WAL model is proposed and utilized in conjunction with a two-channel conduction model to analyze the data, revealing a surface component and evidence of multiple conducting channels. A facile growth method and detailed magnetotransport results indicating BTS as an alternative topological insulator material system are presented.
We have performed a deep Chandra observation of Galactic Type Ia supernova remnant G299.2-2.9. Here we report the initial results from our imaging and spectral analysis. The observed abundance ratios of the central ejecta are in good agreement with those predicted by delayed-detonation Type Ia supernovae models. We reveal inhomogeneous spatial and spectral structures of metal-rich ejecta in G299.2-2.9. The Fe/Si abundance ratio in the northern part of the central ejecta is higher than that in the southern part. An elongation of ejecta material extends out to the western outermost boundary of the remnant. In this western elongation, both the Si and Fe are enriched with a similar abundance ratio to that in the southern part of the central nebula. These structured distributions of metal-rich ejecta material suggest that this Type Ia supernova might have undergone a significantly asymmetric explosion and/or has been expanding into a structured medium.
To study the variability of the 523 B and Be stars observed in the Magellanic Clouds with VLT-FLAMES, we cross-matched the stars of our sample with the photometric database MACHO, which provides an 8-year light curve for each star. We searched for long-, medium-, and short-term periodicity and found the eclipsing binaries in our sample. For these stars, combining spectroscopy and photometry, we were able to provide information on several systems (systemic velocities, mass ratios, etc.). We also present the ratios of binary to non-binary B stars in the LMC/SMC in comparison with the Milky Way. Note that this ratio is also an important issue for understanding the mechanism of star formation at low metallicity. We also found the first multiperiodic B and Be stars in the SMC, in particular the first SMC beta Cep and SPB stars, whereas, according to the models, pulsations were not expected in low-metallicity environments, i.e. typically in the SMC. Our results show that the instability strips are shifted towards higher temperatures in comparison with the Milky Way strips of pulsating B-type stars. Since we found more pulsating Be stars than pulsating B stars in the SMC, it seems that fast rotation favours the presence of pulsations. However, the ratio of pulsating to "non"-pulsating B-type stars at low metallicity is lower than at high metallicity.
We describe a fully broadband approach for electron spin resonance (ESR) experiments where it is possible to tune not only the magnetic field but also the frequency continuously over wide ranges. Here a metallic coplanar transmission line acts as a compact and versatile microwave probe that can easily be implemented in different cryogenic setups. We perform ESR measurements at frequencies between 0.1 and 67 GHz and at temperatures between 50 mK and room temperature. Three different types of samples (Cr3+ ions in ruby, organic radicals of the nitronyl-nitroxide family, and the doped semiconductor Si:P) represent different possible fields of application for the technique. We demonstrate that an extremely large phase space in temperature, magnetic field, and frequency for ESR measurements, substantially exceeding the range of conventional ESR setups, is accessible with metallic coplanar lines.
In this work, we analysed new LOw Frequency ARray observations of the mini halo in the cluster RBS797, together with archival Very Large Array observations and the recent Chandra results. This cluster is known to host a powerful active galactic nucleus (AGN) at its centre, with two pairs of jets propagating in orthogonal directions. Recent X-ray observations have detected three pairs of shock fronts, connected with the activity of the central AGN. Our aim is to investigate the connection between the mini halo emission and the activity of the central source. We find that the diffuse radio emission is elongated in different directions at 144 MHz (east-west) with respect to 1.4 GHz (north-south), tracing the orientation of the two pairs of jets. The mini halo emission is characterised by an average spectral index $\alpha=-1.02\pm 0.05$. The spectral index profile of the mini halo shows a gradual flattening from the centre to the periphery. Such a trend is unique among the mini halos studied to date, and resembles the spectral index trend typical of particles re-accelerated by shocks. However, the estimated contribution to the radio brightness profile coming from shock re-acceleration is found to be insufficient to account for the radial brightness profile of the mini halo. We propose three scenarios that could explain the observed trend: (i) the AGN-driven shocks are propagating onto an already existing mini halo, re-energising the electrons; we estimate that the polarisation induced by the shocks could be detected at 6 GHz and above; (ii) we could be witnessing turbulent re-acceleration in a high magnetic field cluster; and (iii) the mini halo could have a hadronic origin, in which the particles are injected by the central AGN. Future observations in polarisation would be fundamental to understand the role of shocks and the magnetic field.
The solving degree of a system of multivariate polynomial equations provides an upper bound for the complexity of computing the solutions of the system via Groebner basis methods. In this paper, we consider polynomial systems that are obtained via Weil restriction of scalars. The latter is an arithmetic construction which, given a finite Galois field extension $k\hookrightarrow K$, associates to a system $\mathcal{F}$ defined over $K$ a system $\mathrm{Weil}(\mathcal{F})$ defined over $k$, in such a way that the solutions of $\mathcal{F}$ over $K$ and those of $\mathrm{Weil}(\mathcal{F})$ over $k$ are in natural bijection. We find upper bounds for the complexity of solving a polynomial system $\mathrm{Weil}(\mathcal{F})$ obtained via Weil restriction in terms of algebraic invariants of the system $\mathcal{F}$.
The raw coronagraphic performance of current high-contrast imaging instruments is limited by the presence of a quasi-static speckle (QSS) background, resulting from instrumental non-common path errors (NCPEs). Rapid development of efficient speckle subtraction techniques in data reduction has enabled final contrasts of up to 10^-6 to be obtained; however, it remains preferable to eliminate the underlying NCPEs at the source. In this work we introduce the coronagraphic Modal Wavefront Sensor (cMWS), a new wavefront sensor suitable for real-time NCPE correction. This pupil-plane optic combines the apodizing phase plate coronagraph with a holographic modal wavefront sensor, to provide simultaneous coronagraphic imaging and focal-plane wavefront sensing using the science point spread function. We first characterise the baseline performance of the cMWS via idealised closed-loop simulations, showing that the sensor successfully recovers diffraction-limited coronagraph performance over an effective dynamic range of +/-2.5 radians root-mean-square (RMS) wavefront error within 2-10 iterations. We then present the results of initial on-sky testing at the William Herschel Telescope, and demonstrate that the sensor is able to retrieve injected wavefront aberrations to an accuracy of 10 nm RMS under realistic seeing conditions. We also find that the cMWS is capable of real-time broadband measurement of atmospheric wavefront variance at a cadence of 50 Hz across an uncorrected telescope sub-aperture. When combined with a suitable closed-loop adaptive optics system, the cMWS holds the potential to deliver an improvement in raw contrast of up to two orders of magnitude over the uncorrected QSS floor. Such a sensor would be eminently suitable for the direct imaging and spectroscopy of exoplanets with both existing and future instruments, including EPICS and METIS for the E-ELT.
We develop and calibrate a realistic model flame for hydrodynamical simulations of deflagrations in white dwarf (Type Ia) supernovae. Our flame model builds on the advection-diffusion-reaction model of Khokhlov and includes electron screening and Coulomb corrections to the equation of state in a self-consistent way. We calibrate this model flame--its energetics and timescales for energy release and neutronization--with self-heating reaction network calculations that include both these Coulomb effects and up-to-date weak interactions. The burned material evolves post-flame due to both weak interactions and hydrodynamic changes in density and temperature. We develop a scheme to follow the evolution, including neutronization, of the nuclear statistical equilibrium (NSE) state subsequent to the passage of the flame front. As a result, our model flame is suitable for deflagration simulations over a wide range of initial central densities and can track the temperature and electron fraction of the burned material through the explosion and into the expansion of the ejecta.
In 2007, Lam and Pylyavskyy found a combinatorial formula for the dual stable Grothendieck polynomials, which form the dual basis of the stable Grothendieck polynomials with respect to the Hall inner product. In 2016, Galashin, Grinberg, and Liu introduced refined dual stable Grothendieck polynomials by inserting an additional sequence of parameters into the combinatorial formula of Lam and Pylyavskyy. Grinberg conjectured a Jacobi--Trudi type formula for the refined dual stable Grothendieck polynomials. In this paper, this conjecture is proved using bijections of Lam and Pylyavskyy.
We develop a new perspective on research conducted through visualization design study that emphasizes design as a method of inquiry and the broad range of knowledge-contributions achieved through it as multiple, subjective, and socially constructed. From this interpretivist position we explore the nature of visualization design study and develop six criteria for rigor. We propose that rigor is established and judged according to the extent to which visualization design study research and its reporting are INFORMED, REFLEXIVE, ABUNDANT, PLAUSIBLE, RESONANT, and TRANSPARENT. This perspective and the criteria were constructed through a four-year engagement with the discourse around rigor and the nature of knowledge in social science, information systems, and design. We suggest methods from cognate disciplines that can support visualization researchers in meeting these criteria during the planning, execution, and reporting of design study. Through a series of deliberately provocative questions, we explore implications of this new perspective for design study research in visualization, concluding that as a discipline, visualization is not yet well positioned to embrace, nurture, and fully benefit from a rigorous, interpretivist approach to design study. The perspective and criteria we present are intended to stimulate dialogue and debate around the nature of visualization design study and the broader underpinnings of the discipline.
A finite or infinite matrix $A$ with rational entries (and only finitely many non-zero entries in each row) is called image partition regular if, whenever the natural numbers are finitely coloured, there is a vector $x$, with entries in the natural numbers, such that $Ax$ is monochromatic. Many of the classical results of Ramsey theory are naturally stated in terms of image partition regularity. Our aim in this paper is to investigate maximality questions for image partition regular matrices. When is it possible to add rows on to $A$ and remain image partition regular? When can one add rows but `nothing new is produced'? What about adding rows and also new variables? We prove some results about extensions of the most interesting infinite systems, and make several conjectures. Perhaps our most surprising positive result is a compatibility result for Milliken-Taylor systems, stating that (in many cases) one may adjoin one Milliken-Taylor system to a translate of another and remain image partition regular. This is in contrast to earlier results, which had suggested a strong inconsistency between different Milliken-Taylor systems. Our main tools are some algebraic properties of $\beta N$, the Stone-Cech compactification of the natural numbers.
We construct geometric examples of pseudomanifolds that satisfy the Witt condition for intersection homology Poincare duality with respect to certain fields but not others. We also compute the bordism theory of $K$-Witt spaces for an arbitrary field $K$, extending results of Siegel for $K=Q$.
The typical transverse momentum of a quark in the proton is a basic property of any QCD based model of nucleon structure. However, calculations in phenomenological models typically give rather small values of transverse momenta, which are difficult to reconcile with the larger values observed in high energy experiments such as Drell-Yan reactions and Semi-inclusive deep inelastic scattering. In this letter we calculate the leading twist transverse momentum dependent distribution functions (TMDs) using a generalization of the Adelaide group's relativistic formalism that has previously given good fits to the parton distributions. This enables us to examine the $k_{T}$ dependence of the TMDs in detail, and determine typical widths of these distributions. These are found to be significantly larger than those of previous calculations. We then use TMD factorization in order to evolve these distributions up to experimental scales where we can compare with data on $\langle k_{T} \rangle$ and $\langle k^{2}_{T} \rangle$. Our distributions agree well with this data.
To achieve smooth and safe guiding of a drone formation by a human operator, we propose a novel interaction strategy for human-swarm communication that combines impedance control and vibrotactile feedback. The presented approach takes into account the human hand velocity and changes the formation shape and dynamics accordingly using impedance interlinks simulated between quadrotors, which helps to achieve a natural swarm behavior. Several tactile patterns representing static and dynamic parameters of the swarm are proposed. The user feels the state of the swarm at the fingertips and receives valuable information to improve the controllability of the complex formation. A user study identified the patterns with high recognition rates. A flight experiment demonstrated the possibility of accurately navigating the formation in a cluttered environment using only tactile feedback. Subjects stated that the tactile sensation allows them to guide the drone formation through obstacles and makes human-swarm communication more interactive. The proposed technology can potentially have a strong impact on human-swarm interaction, providing a higher level of awareness during swarm navigation.
Using methods of effective field theory, we show that after resummation of Sudakov logarithms the spectral densities of interacting quark and gluon fields in ordinary quantum field theories such as QCD are virtually indistinguishable from those of "unparticles" of a hypothetical conformal sector coupled to the Standard Model, recently studied by Georgi. Unparticles are therefore less exotic than originally thought. Models in which a hidden sector weakly coupled to the Standard Model contains a QCD-like theory, which confines at some scale much below the characteristic energy of a given process, can give rise to signatures closely resembling those from unparticles.
Smart and agile drones are fast becoming ubiquitous at the edge of the cloud. The usage of these drones is constrained by their limited power and compute capability. In this paper, we present a Transfer Learning (TL) based approach to reduce the on-board computation required to train a deep neural network for autonomous navigation via Deep Reinforcement Learning for a target algorithmic performance. A library of 3D realistic meta-environments is manually designed using Unreal Gaming Engine and the network is trained end-to-end. These trained meta-weights are then used to initialize the network in a test environment, and only the last few fully connected layers are fine-tuned. Variation in drone dynamics and environmental characteristics is carried out to show the robustness of the approach. Using the NVIDIA GPU profiler, it was shown that the energy consumption and training latency are reduced by 3.7x and 1.8x respectively, without significant degradation in performance in terms of the average distance traveled before a crash, i.e., Mean Safe Flight (MSF). The approach is also tested in a real environment using a DJI Tello drone, and similar results were reported.
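As an illustration of the fine-tuning step just described, the sketch below loads meta-trained weights into a small policy network, freezes the convolutional feature extractor, and optimizes only the last fully connected layers. The network layout, the weight-file path, and the hyperparameters are placeholder assumptions for illustration, not the architecture or settings used in the paper.

```python
import torch
import torch.nn as nn

class NavPolicy(nn.Module):
    """Hypothetical navigation policy: conv feature extractor + FC head."""
    def __init__(self, n_actions=3):
        super().__init__()
        self.features = nn.Sequential(          # assumed 3x84x84 RGB input
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(              # only these layers get fine-tuned
            nn.Linear(32 * 18 * 18, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):
        return self.head(self.features(x))

policy = NavPolicy()
# Load meta-weights trained on the simulated meta-environments (path is illustrative).
policy.load_state_dict(torch.load("meta_weights.pt"))

# Freeze the feature extractor; fine-tune only the fully connected head.
for p in policy.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in policy.parameters() if p.requires_grad), lr=1e-4
)
```

Freezing the early layers keeps the number of trainable parameters small, which is the mechanism behind the reported reduction in on-board training cost.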
We give sharp conditions for global in time existence of gradient flow solutions to a Cahn-Hilliard-type equation, with backwards second order degenerate diffusion, in any dimension and for general initial data. Our equation is the 2-Wasserstein gradient flow of a free energy with two competing effects: the Dirichlet energy and the power-law internal energy. Homogeneity of the functionals reveals critical regimes that we analyse. Sharp conditions for global in time solutions, constructed via the minimising movement scheme, also known as the JKO scheme, are obtained. Furthermore, we study a system of two Cahn-Hilliard-type equations exhibiting an analogous gradient flow structure.
In this paper, we consider the visual scanning mechanism underpinning sensorimotor tasks, such as walking and driving, in dynamic environments. We exploit eye tracking data to offer two new measures of cognitive effort in the visual scanning behavior of virtual driving. By utilizing the retinal flow induced by fixation, the two measures are defined through the importance of grids in the viewing plane and the concept of information quantity, respectively. Psychophysical studies are conducted to demonstrate the effectiveness of the two proposed measures. Both measures show a significant correlation with pupil size change. Our results suggest that the quantitative exploitation of eye tracking data provides an effective approach for the evaluation of sensorimotor activities.
A limit variety is a variety that is minimal with respect to being non-finitely based. The two limit varieties of Marcel Jackson are the only known examples of limit varieties of aperiodic monoids. Our previous work had shown that there exists a limit subvariety of aperiodic monoids that is different from Marcel Jackson's limit varieties. In this paper, we introduce a new limit variety of aperiodic monoids.
We initiate the study of property testing problems concerning relations between permutations. In such problems, the input is a tuple $(\sigma_1,\dotsc,\sigma_d)$ of permutations on $\{1,\dotsc,n\}$, and one wishes to determine whether this tuple satisfies a certain system of relations $E$, or is far from every tuple that satisfies $E$. If this computational problem can be solved by querying only a small number of entries of the given permutations, we say that $E$ is testable. For example, when $d=2$ and $E$ consists of the single relation $\mathsf{XY=YX}$, this corresponds to testing whether $\sigma_1\sigma_2=\sigma_2\sigma_1$, where $\sigma_1\sigma_2$ and $\sigma_2\sigma_1$ denote composition of permutations. We define a collection of graphs, naturally associated with the system $E$, that encodes all the information relevant to the testability of $E$. We then prove two theorems that provide criteria for testability and non-testability in terms of expansion properties of these graphs. By virtue of a deep connection with group theory, both theorems are applicable to wide classes of systems of relations. In addition, we formulate the well-studied group-theoretic notion of stability in permutations as a special case of the testability notion above, interpret all previous works on stability as testability results, survey previous results on stability from a computational perspective, and describe many directions for future research on stability and testability.
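For the commutativity example mentioned above (the single relation XY = YX), the most naive query-based tester simply samples a few entries and compares the two compositions there. The sketch below is only that obvious check, included for orientation; it is not the testability criteria developed in the paper.

```python
import random

def looks_commuting(sigma1, sigma2, samples=50):
    """Query-based check of whether two 0-indexed permutations (given as lists)
    appear to commute: compare sigma1(sigma2(i)) with sigma2(sigma1(i)) at
    randomly sampled entries i."""
    n = len(sigma1)
    for _ in range(samples):
        i = random.randrange(n)
        if sigma1[sigma2[i]] != sigma2[sigma1[i]]:
            return False   # a witness that XY = YX is violated
    return True            # no violation seen among the sampled entries
```

A pair that is far from every commuting pair will, with high probability, be rejected by such sampling; characterising when an analogous guarantee holds for general systems of relations is the subject of the work above.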
We found an active state lasting for ~200 d in the AM CVn star NSV 1440 in 2022. During this state, the object reached a magnitude of 16.5, 2.0-2.5 mag above quiescence, and showed a number of superposed normal outbursts. Such an active state was probably brought about either by enhanced mass transfer from the secondary or by increased quiescent viscosity of the accretion disk. These possibilities are expected to be distinguished by observing the interval to the next superoutburst. We also found that the brightness and the course toward the end of the event were similar to the post-superoutburst fading tail in 2021. The mechanism producing the 2022 active state and post-superoutburst fading tails in AM CVn stars may be the same, and the present finding is expected to clarify the nature of these still poorly understood fading tails in AM CVn stars, and potentially of the corresponding phenomenon in hydrogen-rich WZ Sge stars. We also note that the faint, long "superoutbursts" in long-period AM CVn stars claimed in the past were not true outbursts powered by disk instability, but were more likely phenomena similar to the 2022 active state in NSV 1440.
In this paper we study the area minimizing problem in some kinds of conformal cones. This concept is a generalization of the cones in Euclidean spaces and the cylinders in product manifolds. We define a non-closed-minimal (NCM) condition for bounded domains. Under this assumption and other necessary conditions we establish the existence of bounded minimal graphs in mean convex conformal cones. Moreover, those minimal graphs are the solutions to the corresponding area minimizing problems. We can solve the area minimizing problem in non-mean convex translating conformal cones if these cones are contained in a larger mean convex conformal cone satisfying the NCM assumption. We give examples to illustrate that this assumption cannot be removed for our main results.
Using the renormalization group method, we improve the two-loop effective potential of the massive $\phi^4$ theory to obtain the next-next-to-leading logarithm correction in the $\overline{MS}$ scheme. Our result reproduces well the next-next-to-leading logarithm parts of the ordinary loop expansion result known up to the four-loop order.
Chiral perturbation theory is nowadays a well-established approach to incorporate the chiral constraints from QCD. Nevertheless, for systems involving one baryon, the power counting which dictates the chiral order of observables is not as simple and consensual as in the purely mesonic case. The heavy baryon approach, which relies on a non-relativistic expansion around the limit of an infinitely heavy baryon, recovers the usual power counting but destroys some analytic properties of the scattering amplitude. Some years ago, Becher and Leutwyler proposed a Lorentz-invariant formulation of chiral perturbation theory that maintains the required analytic properties, but at the expense of a less intuitive power counting. Aware of the shortcomings of the heavy baryon formalism, the S\~ao Paulo group derived the two-pion exchange component of the nucleon-nucleon potential in line with the works of Becher and Leutwyler. A striking result was that the long distance properties of the potential are determined by the specific low energy region of the pion-nucleon scattering amplitude where the heavy baryon expansion fails. In this talk I will discuss the origin of this failure and how it is reflected in the asymptotics of the nucleon-nucleon interaction. Some results for phase shifts and deuteron properties will be shown, followed by a comparison with the heavy baryon predictions.
We show that the minimization problem of any non-convex and non-lower semi-continuous function on a compact convex subset of a locally convex real topological vector space can be studied via an associated convex and lower semi-continuous function $\Gamma \left( h\right) $. This observation uses the notion of $\Gamma $-regularization as a key ingredient. As an application we obtain, on any locally convex real space, a generalization of the Lanford III-Robinson theorem which has only been proven for separable real Banach spaces. The latter is a characterization of subdifferentials of convex continuous functions.
Context. RW Aur A is a classical T Tauri star (CTTS) with an unusually rich emission line spectrum. In 2014 the star faded by ~ 3 magnitudes in the V band and went into a long-lasting minimum. In 2010 the star suffered from a similar fading, although less deep. These events in RW Aur A are very unusual among the CTTS, and have been attributed to occultations by passing dust clouds. Aims. We want to find out if any spectral changes took place after the last fading of RW Aur A with the intention to gather more information on the occulting body and the cause of the phenomenon. Methods. We collected spectra of the two components of RW Aur. Photometry was made before and during the minimum. Results. The overall spectral signatures reflecting emission from accretion flows from disk to star did not change after the fading. However, blue-shifted absorption components related to the stellar wind had increased in strength in certain resonance lines, and the profiles and strengths, but not fluxes, of forbidden lines had become drastically different. Conclusions. The extinction through the obscuring cloud is grey indicating the presence of large dust grains. At the same time, there are no traces of related absorbing gas. The cloud occults the star and the interior part of the stellar wind, but not the wind/jet further out. The dimming in 2014 was not accompanied by changes in the accretion flows at the stellar surface. There is evidence that the structure and velocity pattern of the stellar wind did change significantly. The dimmings could be related to passing condensations in a tidally disrupted disk, as proposed earlier, but we also speculate that large dust grains have been stirred up from the inclined disk into the line-of-sight through the interaction with an enhanced wind.
Achieving quantum-level control over electromagnetic waves, magnetisation dynamics, vibrations and heat is invaluable for many practical applications and is possible by exploiting the strong radiation-matter coupling. Most of the modern strong microwave photon-magnon coupling developments rely on the integration of metal-based microwave resonators with a magnetic material. However, it has recently been realised that all-dielectric resonators made of or containing magneto-insulating materials can operate as a standalone strongly-coupled system characterised by low dissipation losses and strong local microwave field enhancement. Here, after a brief overview of recent developments in the field, I discuss examples of such dielectric resonant systems and demonstrate their ability to operate as multiresonant antennas for light, microwaves, magnons, sound, vibrations and heat. This multiphysics behaviour opens up novel opportunities for the realisation of multiresonant coupling such as, for example, photon-magnon-phonon coupling. I also propose several novel systems in which strong photon-magnon coupling in dielectric antennas and similar structures is expected to extend the capability of existing devices or may provide an entirely new functionality. Examples of such systems include novel magnetofluidic devices, high-power microwave power generators, and hybrid devices exploiting the unique properties of electrical solitons.
Elliptic operators on stratified manifolds with any finite number of strata are considered. Under certain assumptions on the symbols of the operators, we obtain index formulas, which express the index as a sum of indices of elliptic operators on the strata.
To understand the origin of the magnetic fields in massive stars as well as their impact on stellar internal structure, evolution, and circumstellar environment, within the MiMeS project, we searched for magnetic objects among a large sample of massive stars, and built a sub-sample for in-depth follow-up studies required to test the models and theories of fossil field origins, magnetic wind confinement and magnetospheric properties, and magnetic star evolution. We obtained high-resolution spectropolarimetric observations of a large number of OB stars thanks to three large programs that have been allocated on the high-resolution spectropolarimeters ESPaDOnS, Narval, and the polarimetric module HARPSpol of the HARPS spectrograph. We report here on the methods and first analysis of the HARPSpol magnetic detections. We identified the magnetic stars using a multi-line analysis technique. Then, when possible, we monitored the new discoveries to derive their rotation periods, which are critical for follow-up and magnetic mapping studies. We also performed a first-look analysis of their spectra and identified obvious spectral anomalies (e.g., abundance peculiarities, Halpha emission), which are also of interest for future studies. In this paper, we focus on eight of the 11 stars in which we discovered or confirmed a magnetic field from the HARPSpol LP sample (the remaining three were published in a previous paper). Seven of the detections are in early-type Bp stars, while the last is in the Ap companion of a normal early B-type star. We report their obvious spectral and multiplicity properties, as well as our measurements of their longitudinal field strengths, and their rotation periods when we were able to derive them. We also discuss the presence or absence of Halpha emission with respect to the theory of centrifugally-supported magnetospheres. (Abridged)
A cylindrical GEM detector is under development to serve as an upgraded inner tracker at the BESIII spectrometer. It will consist of three layers of cylindrically-shaped triple GEMs surrounding the interaction point. The experiment is taking data at the e+e- collider BEPCII in Beijing (China) and the GEM tracker will be installed in 2018. Tests of the performance of triple GEMs in a strong magnetic field have been carried out with the muon beam available in the H4 line of the SPS (CERN), using both planar chambers and the first cylindrical prototype. Efficiencies and resolutions have been evaluated using different gains and gas mixtures, with and without magnetic field. The obtained efficiency is 97-98% for a single coordinate view, in many operational arrangements. The spatial resolution of the planar GEMs has been evaluated with two different algorithms for the position determination: the charge centroid and the micro time projection chamber (mu-TPC) methods. The two modes are complementary and are able to cope with the asymmetry of the electron avalanche when running in magnetic field, and with non-orthogonal incident tracks. With the charge centroid, a resolution below 100 micron has been reached without magnetic field and below 200 micron with a magnetic field up to 1 T. The mu-TPC mode was shown to improve on these results. In the first beam test with the cylindrical prototype, the detector showed very good stability under different voltage configurations and particle intensities. The resolution evaluation is in progress.
We re-analyze the Cepheid data used to infer the value of $H_0$ by calibrating SnIa. We do not enforce a universal value of the empirical Cepheid calibration parameters $R_W$ (Cepheid Wesenheit color-luminosity parameter) and $M_H^{W}$ (Cepheid Wesenheit H-band absolute magnitude). Instead, we allow for variation of either of these parameters for each individual galaxy. We also consider the case where these parameters have two universal values: one for low galactic distances $D<D_c$ and one for high galactic distances $D>D_c$ where $D_c$ is a critical transition distance. We find hints for a $3\sigma$ level mismatch between the low and high galactic distance parameter values. We then use the AIC and BIC criteria to compare and rank the following types of models: the Base model (universal values for $R_W$ and $M_H^{W}$, no parameter variation); Model I (individual fitted galactic $R_W$ with a universal fitted $M_H^{W}$); Model II (universal fixed $R_W$ with individual fitted galactic $M_H^{W}$); Model III (universal fitted $R_W$ with individual fitted galactic $M_H^{W}$); Model IV (two universal fitted $R_W$, near and far, with one universal fitted $M_H^{W}$); Model V (universal fitted $R_W$ with two universal fitted $M_H^{W}$, near and far); Model VI (two universal fitted $R_W$ with two universal fitted $M_H^{W}$, near and far). We find that the AIC and BIC criteria consistently favor Model IV instead of the commonly used Base model where no variation is allowed for the Cepheid empirical parameters. The best fit value of the SnIa absolute magnitude $M_B$ and of $H_0$ implied by the favored Model IV is consistent with the inverse distance ladder calibration based on the CMB sound horizon $H_0=67.4\pm 0.5\,km\,s^{-1}\,Mpc^{-1}$. Thus in the context of the favored Model IV the Hubble crisis is not present. This model may imply the presence of a fundamental physics transition taking place at a time more recent than $100\,Myrs$ ago.
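For readers unfamiliar with the model ranking used above, the sketch below shows the standard AIC and BIC formulas applied to a pair of hypothetical fits; the log-likelihoods, parameter counts, and data size are placeholders, not numbers from the analysis.

```python
import math

def aic(log_like, k):
    """Akaike information criterion: 2k - 2 ln L_max."""
    return 2 * k - 2 * log_like

def bic(log_like, k, n):
    """Bayesian information criterion: k ln n - 2 ln L_max."""
    return k * math.log(n) - 2 * log_like

# Placeholder comparison of a base model against a model with extra
# (near/far) calibration parameters; the numbers are illustrative only.
models = {"Base": (-1620.0, 5), "Model IV": (-1605.0, 7)}
n_data = 3000
for name, (ll, k) in models.items():
    print(name, "AIC =", round(aic(ll, k), 1), "BIC =", round(bic(ll, k, n_data), 1))
```

The model with the lower AIC/BIC is preferred; BIC penalizes the extra parameters more strongly as the number of data points grows.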
In this note, we provide a simple derivation of expressions for the restricted partition function and its polynomial part. Our proof relies on elementary algebra on rational functions and a lemma that expresses the polynomial part as an average of the partition function.
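For orientation, the restricted partition function referred to above is the standard counting function recalled below; these displayed definitions are the usual textbook setup and are not taken from the paper's derivation.

```latex
% p_A(n): the number of ways to write n as a nonnegative integer combination
% of the fixed positive integers A = {a_1, ..., a_k}, with generating function
\[
  \sum_{n \ge 0} p_A(n)\, x^n \;=\; \prod_{i=1}^{k} \frac{1}{1 - x^{a_i}}.
\]
% p_A(n) is a quasi-polynomial in n; its "polynomial part" is, roughly, the
% contribution of the pole at x = 1, i.e. the part that does not depend on
% the residue of n modulo lcm(a_1, ..., a_k).
```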
We propose a new file format named "H5MD" for storing molecular simulation data, such as trajectories of particle positions and velocities, along with thermodynamic observables that are monitored during the course of the simulation. H5MD files are HDF5 (Hierarchical Data Format) files with a specific hierarchy and naming scheme. Thus, H5MD inherits many benefits of HDF5, e.g., structured layout of multi-dimensional datasets, data compression, fast and parallel I/O, and portability across many programming languages and hardware platforms. H5MD files are self-contained and foster the reproducibility of scientific data and the interchange of data between researchers using different simulation programs and analysis software. In addition, the H5MD specification can serve for other kinds of data (e.g. experimental data) and is extensible to supplemental data, or may be part of an enclosing file structure.
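As a concrete illustration of working with such files, the sketch below writes a small trajectory to an HDF5 file with h5py using an H5MD-like hierarchy. The exact group and attribute names mandated by the H5MD specification should be taken from the specification itself, so the layout here is only indicative.

```python
import numpy as np
import h5py

n_frames, n_particles = 100, 64
positions = np.random.rand(n_frames, n_particles, 3)   # placeholder trajectory
steps = np.arange(n_frames, dtype=np.int64)

with h5py.File("trajectory.h5", "w") as f:
    # Metadata group (names follow an H5MD-like convention; check the spec for exact keys).
    meta = f.create_group("h5md")
    meta.attrs["version"] = [1, 1]

    # Per-particle, time-dependent data: one dataset for the sampled steps,
    # one for the values, stored with compression.
    pos = f.create_group("particles/all/position")
    pos.create_dataset("step", data=steps)
    pos.create_dataset("value", data=positions, compression="gzip")
```

Because the container is plain HDF5, the same file can be read back from any language with HDF5 bindings, which is the portability benefit described above.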
We compute the R-matrix which intertwines two dimensional evaluation representations with Drinfeld comultiplication for U_q(\widehat{sl}_2). This R-matrix contains terms proportional to the delta-function. We construct the algebra A(R) generated by the elements of the matrices L^\pm(z) with relations determined by R. In the category of highest weight representations, there is a Hopf algebra isomorphism between A(R) and an extension \overline{U}_q(\widehat{sl}_2) of Drinfeld's algebra.
Building on a specific formalization of analogical relationships of the form "A relates to B as C relates to D", we establish a connection between two important subfields of artificial intelligence, namely analogical reasoning and kernel-based machine learning. More specifically, we show that so-called analogical proportions are closely connected to kernel functions on pairs of objects. Based on this result, we introduce the analogy kernel, which can be seen as a measure of how strongly four objects are in analogical relationship. As an application, we consider the problem of object ranking in the realm of preference learning, for which we develop a new method based on support vector machines trained with the analogy kernel. Our first experimental results for data sets from different domains (sports, education, tourism, etc.) are promising and suggest that our approach is competitive with state-of-the-art algorithms in terms of predictive accuracy.
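To make the pairing idea concrete, the sketch below trains an SVM with a custom kernel defined on pairs of feature vectors, where each training instance is the concatenation of the two objects in a pair. The particular kernel used (an inner product of difference vectors) and the toy data are illustrative stand-ins, not the analogy kernel or the preference-learning setup of the paper.

```python
import numpy as np
from sklearn.svm import SVC

def pair_kernel(X, Y):
    """Kernel on pairs (a, b): each row is the concatenation [a, b].
    Here k((a, b), (c, d)) = <a - b, c - d>, an illustrative stand-in
    for a kernel measuring how two pairs relate to each other."""
    d = X.shape[1] // 2
    U = X[:, :d] - X[:, d:]
    V = Y[:, :d] - Y[:, d:]
    return U @ V.T

rng = np.random.default_rng(0)
d = 5
A, B = rng.normal(size=(200, d)), rng.normal(size=(200, d))
X = np.hstack([A, B])                                    # one row per object pair
y = (np.linalg.norm(A - B, axis=1) > 2.5).astype(int)    # toy labels

clf = SVC(kernel=pair_kernel).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Any positive semi-definite function of two pairs can be plugged in as the callable kernel, which is how a learned or hand-designed analogy measure would enter the SVM training.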
We are concerned with the inviscid limit of the Navier-Stokes equations to the Euler equations for compressible fluids in $\mathbb{R}^3$. Motivated by the Kolmogorov hypothesis (1941) for incompressible flow, we introduce a Kolmogorov-type hypothesis for barotropic flows, in which the density and the sonic speed normally vary significantly. We then observe that the compressible Kolmogorov-type hypothesis implies the uniform boundedness of some fractional derivatives of the weighted velocity and sonic speed in the space variables in $L^2$, which is independent of the viscosity coefficient $\mu>0$. It is shown that this key observation yields the equicontinuity in both space and time of the density in $L^\gamma$ and the momentum in $L^2$, as well as the uniform bound of the density in $L^{q_1}$ and the velocity in $L^{q_2}$ independent of $\mu>0$, for some fixed $q_1 >\gamma$ and $q_2 >2$, where $\gamma>1$ is the adiabatic exponent. These results lead to the strong convergence of solutions of the Navier-Stokes equations to a solution of the Euler equations for barotropic fluids in $\mathbb{R}^3$. Not only do we offer a framework for mathematical existence theories, but we also offer a framework for the interpretation of numerical solutions through the identification of a function space in which convergence should take place, with bounds that are independent of $\mu>0$, that is, in the high Reynolds number limit.
We apply a new method for determining the size of the broad emission-line region (BLR) in active galactic nuclei. This method relates the radius of the broad-line region of AGN to the soft X-ray luminosity and spectral index. Comparing the BLR distances calculated from our photoionization scaling model to the BLR distances determined by reverberation mapping shows that the scaling law agrees with the $R\sim L^{1/2}$ empirical relation. We investigate a complementary method of estimating the BLR distance, based on the Keplerian broadening of the emission lines and the central mass estimated from X-ray variability.
We demonstrate how the zitterbewegung charge oscillations can be detected through a charge conductance measurement in a three-terminal junction. By tuning the spin-orbit interaction strength or an external magnetic field the zitterbewegung period can be modulated, translating into complementary conductance oscillations in the two outgoing leads of the junction. The proposed experimental setup is within the reach of demonstrated technology and material parameters, and enables the observation of the so far elusive zitterbewegung phenomenon.
We investigated the possibility of constructing homogeneous and isotropic cosmological solutions in Weyl geometry. We derived the self-consistency condition which ensures the conformal invariance of the complete set of equations of motion. There is a special gauge choice of the conformal factor in which the Weyl vector vanishes. In this gauge we found new vacuum cosmological solutions that are absent in General Relativity. We also found a new solution in Weyl geometry for the radiation-dominated universe with a cosmological term, corresponding to a constant curvature scalar in our special gauge. The possible relation of our results to the understanding of both dark matter and dark energy is discussed.
Bethe's viewpoint on global energy problems is presented. Bethe claimed that nuclear power is a necessity for the future, and that nuclear energetics must be based on breeder reactors. He considered the non-proliferation of nuclear weapons to be the main problem for the long-range future of nuclear energetics. He saw the solution of this problem in heavy-water-moderated thermal breeders, using uranium-233, uranium-238 and thorium as fuel.
Software vulnerabilities, caused by unintentional flaws in source code, are a primary root cause of cyberattacks. Static analysis of source code has been widely used to detect these unintentional defects introduced by software developers. Large Language Models (LLMs) have demonstrated human-like conversational abilities due to their capacity to capture complex patterns in sequential data, such as natural languages. In this paper, we harness LLMs' capabilities to analyze source code and detect known vulnerabilities. To ensure the proposed vulnerability detection method is universal across multiple programming languages, we convert source code to LLVM IR and train LLMs on these intermediate representations. We conduct extensive experiments on various LLM architectures and compare their accuracy. Our comprehensive experiments on real-world and synthetic codes from NVD and SARD demonstrate high accuracy in identifying source code vulnerabilities.
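As an illustration of the preprocessing step described above (lowering source code to LLVM IR before feeding it to a language model), the sketch below invokes clang to emit textual IR for a C file and reads it back as plain text. The file names are placeholders, and the downstream tokenizer and model training are omitted, as they are not specified here.

```python
import subprocess
from pathlib import Path

def to_llvm_ir(c_file: str) -> str:
    """Lower a C source file to textual LLVM IR using clang (-S -emit-llvm)."""
    ll_file = Path(c_file).with_suffix(".ll")
    subprocess.run(
        ["clang", "-S", "-emit-llvm", "-O0", c_file, "-o", str(ll_file)],
        check=True,
    )
    return ll_file.read_text()

ir_text = to_llvm_ir("sample.c")   # placeholder input file
# The IR text would then be tokenized and passed to the language model
# (tokenizer and model are deliberately omitted from this sketch).
print(ir_text.splitlines()[:5])
```

Working on the IR rather than the original source is what makes the approach language-agnostic: any front end that emits LLVM IR feeds the same downstream model.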
The Sloan Digital Sky Survey has identified a total of 212 cataclysmic variables, most of which are fainter than 18th magnitude. This is the deepest and most populous homogeneous sample of cataclysmic variables to date, and we are undertaking a project to characterise this population. We have found that the SDSS sample is dominated by a great ``silent majority'' of old and faint CVs. We detect, for the first time, a population spike at the minimum period of 80 minutes which has been predicted by theoretical studies for over a decade.
In contrast to the abundant research focusing on large-scale models, progress in lightweight semantic segmentation appears to be advancing at a comparatively slower pace. However, existing compact methods often suffer from limited feature representation capability due to the shallowness of their networks. In this paper, we propose a novel lightweight segmentation architecture, called Multi-scale Feature Propagation Network (MFPNet), to address the dilemma. Specifically, we design a robust Encoder-Decoder structure featuring symmetrical residual blocks that consist of flexible bottleneck residual modules (BRMs) to explore deep and rich multi-scale semantic context. Furthermore, benefiting from their capacity to model latent long-range contextual relationships, we leverage Graph Convolutional Networks (GCNs) to facilitate multi-scale feature propagation between the BRM blocks. When evaluated on benchmark datasets, our proposed approach shows superior segmentation results.
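The sketch below gives one generic reading of a bottleneck residual module in PyTorch (1x1 reduce, 3x3 transform, 1x1 expand, plus an identity skip). It is included only to illustrate the building-block idea; the actual channel widths, dilations, and the GCN-based multi-scale propagation of MFPNet are not reproduced here.

```python
import torch
import torch.nn as nn

class BottleneckResidualModule(nn.Module):
    """Generic bottleneck residual block: 1x1 reduce -> 3x3 conv -> 1x1 expand,
    with a residual (identity) connection. Channel widths are illustrative."""
    def __init__(self, channels, reduction=4, dilation=1):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, 1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=dilation, dilation=dilation, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))

x = torch.randn(1, 64, 128, 256)   # N, C, H, W
print(BottleneckResidualModule(64)(x).shape)
```

The bottleneck keeps the parameter count low while the residual connection preserves gradient flow, which is why such blocks are a common choice for lightweight segmentation backbones.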
Waring's classical problem deals with expressing every natural number as a sum of g(k) k-th powers. Recently there has been considerable interest in similar questions for nonabelian groups, and simple groups in particular. Here the k-th power word is replaced by an arbitrary nontrivial group word w, and the goal is to express group elements as short products of values of w. We give a best possible and somewhat surprising solution for this Waring type problem for various finite simple groups, showing that a product of length two suffices to express all elements. We also show that the set of values of w is very large, improving various results obtained previously. Along the way we also obtain new results of independent interest on character values and class squares in symmetric groups. Our methods involve algebraic geometry, representation theory, probabilistic arguments, as well as three prime theorems from additive number theory (approximating Goldbach's Conjecture).
A new fabrication process is developed for growing Bi2Se3 topological insulators in the form of nanowires/nanobelts and ultra-thin films. It consists of two consecutive procedures: first, Bi2Se3 nanowires/nanobelts are deposited by standard catalyst-free vapour-solid deposition on different substrates positioned inside a quartz tube. Then, the Bi2Se3 stuck on the inner surface of the quartz tube is re-evaporated and deposited in the form of ultra-thin films on new substrates at temperatures below 100 {\deg}C, which is of relevance for flexible electronic applications. The method is new, quick, very inexpensive, easy to control and allows films with different thicknesses, down to one quintuple layer (QL), to be obtained during the same procedure. The composition and the crystal structure of both the nanowires/nanobelts and the thin films are analysed by different optical, electronic and structural techniques. For the films, scanning tunnelling spectroscopy shows that the Fermi level is positioned in the middle of the energy bandgap as a consequence of the achieved correct stoichiometry. Ultra-thin films, with thicknesses in the range 1-10 QLs deposited on n-doped Si substrates, show good rectifying properties suitable for their use as photodetectors in the ultraviolet-visible-near-infrared wavelength range.
Ti3SiC2 is a potential structural material for nuclear reactor applications. However, He irradiation effects in this material are not well understood, especially at high temperatures. Here, we compare the effects of He irradiation in Ti3SiC2 at room temperature (RT) and at 750 {\deg}C. Irradiation at 750 {\deg}C was found to lead to extremely elongated He bubbles that are concentrated in the nano-laminate layers of Ti3SiC2, whereas the overall crystal structure of the material remained intact. In contrast, at RT, the layered structure was significantly damaged and highly disordered after irradiation. Our study reveals that at elevated temperatures, the unique structure of Ti3SiC2 can accommodate large amounts of He atoms in the nano-laminate layer, without compromising the structural stability of the material. The structural and mechanical test results show that the irradiation-induced swelling and hardening at 750 {\deg}C are much smaller than those at RT. These results indicate that Ti3SiC2 has an excellent resistance to the accumulation of radiation-induced He impurities and a considerable tolerance to irradiation-induced degradation of mechanical properties at high temperatures.
A unified scheme for describing both spin and orbital motion in the symmetry-breaking chiral quark model is suggested. The analytic results for the spin and orbital angular momenta carried by different quark flavors in the nucleon are given. The quark spin reduction due to spin-flip in the chiral splitting processes is compensated by the increase of the orbital angular momentum carried by the quarks and antiquarks. The sum of the spin and orbital angular momenta in the nucleon is 1/2, if the gluons and other degrees of freedom are neglected. The same conclusion holds for other octet and decuplet baryons. Possible modifications and applications are briefly discussed.
We present an analysis of the distribution of structural properties for Milky Way-mass halos in the Millennium-II Simulation (MS-II). This simulation of structure formation within the standard LCDM cosmology contains thousands of Milky Way-mass halos and has sufficient resolution to properly resolve many subhalos per host. It thus provides a major improvement in the statistical power available to explore the distribution of internal structure for halos of this mass. In addition, the MS-II contains lower resolution versions of the Aquarius Project halos, allowing us to compare our results to simulations of six halos at a much higher resolution. We study the distributions of mass assembly histories, of subhalo mass functions and accretion times, and of merger and stripping histories for subhalos capable of impacting disks at the centers of halos. We show that subhalo abundances are not well-described by Poisson statistics at low mass, but rather are dominated by intrinsic scatter. Using the masses of subhalos at infall and the abundance-matching assumption, there is less than a 10% chance that a Milky Way halo with M_vir =10^12 M_sun will host two galaxies as bright as the Magellanic Clouds. This probability rises to ~25% for a halo with M_vir=2.5 x 10^12 M_sun. The statistics relevant for disk heating are very sensitive to the mass range that is considered relevant. Mergers with a ratio of infall mass to redshift-zero virial mass greater than 1:30 could well impact a central galactic disk and are a near inevitability since z=2, whereas only half of all halos have had a merger with a mass ratio greater than 1:10 over this same period.
We consider spacelike graphs $\Gamma_f$ of simple products $(M\times N, g\times -h)$ where $(M,g)$ and $(N,h)$ are Riemannian manifolds and $f:M\to N$ is a smooth map. Under the condition that the Cheeger constant of $M$ is zero, together with some condition on the second fundamental form at infinity, we conclude that if $\Gamma_f \subset M\times N$ has parallel mean curvature $H$ then $H=0$. This holds trivially if $M$ is closed. If $M$ is the $m$-hyperbolic space, then for any constant $c$ we describe an explicit foliation of $H^m\times R$ by hypersurfaces with constant mean curvature $c$.
Flash flooding is a significant societal problem, but related precipitation forecasts are often poor. To address this, one can try to use output from convection-parametrising (global) ensembles, post-processed to forecast at point-scale, or convection-resolving limited area ensembles. In this study, we combine both. First, we apply the "ecPoint-rainfall" post-processing to the ECMWF global ensemble. Then, we use 2.2km COSMO LAM ensemble output (centred on Italy), and also post-process it using a scale-selective neighbourhood approach to compensate for insufficient members. The two components then undergo lead-time-weighted blending, to create the final probabilistic 6h rainfall forecasts. Product creation for forecasters constituted the "Italy Flash Flood use case" within the EU-funded MISTRAL project and it will be a real-time open-access product. One year of verification shows that ecPoint is the most skilful ensemble product. The post-processed COSMO ensemble adds most value to summer convective events in the evening, when the global model has an underprediction bias. In two heavy rainfall case studies we observed underestimation of the largest point totals in the raw ECMWF ensemble, and overestimation in the raw COSMO ensemble. However, ecPoint increased the value and best highlighted the most affected areas, whilst post-processing of COSMO diminished the extremes by eradicating unreliable detail. The final merged products looked best from a user perspective and seemed to be the most skilful of all. Although our LAM post-processing does not implicitly include bias correction (a topic for further work), our study nonetheless provides a unique blueprint for successfully combining ensemble rainfall forecasts from global and LAM systems around the world. It also has important implications for forecast products as global ensembles move ever closer to having convection-permitting resolution.
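The lead-time-weighted blending step can be illustrated with a simple convex combination of the two post-processed probability forecasts, where the weight given to the limited-area ensemble decays with forecast lead time. The weighting function below is a made-up placeholder, not the scheme used in the operational product.

```python
import numpy as np

def blend(prob_global, prob_lam, lead_time_h, lam_cutoff_h=48.0):
    """Convex combination of two probabilistic 6h-rainfall forecasts.
    The LAM weight decays linearly to zero at an assumed cutoff lead time."""
    w_lam = max(0.0, 1.0 - lead_time_h / lam_cutoff_h)
    return w_lam * prob_lam + (1.0 - w_lam) * prob_global

# Placeholder probability fields (e.g. P(6h rainfall > 20 mm) on a grid).
p_glob = np.random.rand(100, 100)
p_lam = np.random.rand(100, 100)
blended = blend(p_glob, p_lam, lead_time_h=30.0)
print(blended.mean())
```

Giving the high-resolution LAM more weight at short lead times, where it is most reliable, and reverting to the global ensemble at longer ranges is the general intent of such a blend.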
Kirby and Thompson introduced the length of a trisection, and defined the length of a 4-manifold as the minimum length over all trisections of that 4-manifold. In this paper, we consider trisections whose Kirby-Thompson length is 2. Kirby and Thompson conjectured that a trisection of length 2 is a trisection of a 4-manifold of length 0. We prove this conjecture in this paper.