The classical Truncated Moment problem asks for necessary and sufficient conditions so that a linear functional $L$ on $\mathcal{P}_{d}$, the vector space of real $n$-variable polynomials of degree at most $d$, can be written as integration with respect to a positive Borel measure $\mu$ on $\mathbb{R}^n$. We work in a more general setting, where $L$ is a linear functional acting on a finite dimensional vector space $V$ of Borel-measurable functions defined on a $T_{1}$ topological space $S$. Using an iterative geometric construction, we associate to $L$ a subset of $S$ called the \textit{core variety}, $\mathcal{CV}(L)$. Our main result is that $L$ has a representing measure $\mu$ if and only if $\mathcal{CV}(L)$ is nonempty. In this case, $L$ has a finitely atomic representing measure, and the union of the supports of such measures is precisely $\mathcal{CV}(L)$. We also use the core variety to describe the facial decomposition of the cone of functionals in the dual space $V^{*}$ having representing measures. We prove a generalization of the Truncated Riesz-Haviland Theorem of Curto-Fialkow, which permits us to solve a generalized Truncated Moment Problem in terms of positive extensions of $L$. These results are adapted to derive a Riesz-Haviland Theorem for a generalized Full Moment Problem and to obtain a core variety theorem for the latter problem.
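Throughout, "representing measure" is meant in the standard sense, recalled here for convenience: a positive Borel measure $\mu$ on $S$ represents $L$ precisely when
\[
L(f)=\int_S f\,d\mu \qquad \text{for all } f\in V.
\]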
We find conditions which guarantee that a given flow on a closed smooth manifold admits a smooth Lyapunov one-form lying in a prescribed de Rham cohomology class. These conditions are formulated in terms of Schwartzman's asymptotic cycles of the flow.
Electromagnetic waves carry energy, linear momentum, and angular momentum. When light (or other electromagnetic radiation) interacts with material media, both energy and momentum are usually exchanged. The force and torque experienced by material bodies in their interactions with the electromagnetic field are such that the energy as well as the linear and angular momenta of the overall system (i.e., the system of field plus matter) are conserved. Radiation forces are now used routinely to trap and manipulate small objects such as glass or plastic micro-beads and biological cells, to drive micro- and nano-machines, and to contemplate interstellar travel with the aid of solar sails. We discuss the properties of the electromagnetic field that enable such wide-ranging applications.
First, dark matter is introduced. Next, the Dirac negative energy state is revisited: it corresponds to a negative matter with some new characteristics, chiefly that negative matter gravitates with itself but repels all positive matter. In the general case, positive and negative matter occupy two topologically separated regions, and negative matter is invisible. It is the simplest candidate for dark matter and can explain some characteristics of dark matter and dark energy. The recent phantom model of dark energy is essentially negative matter. We propose that in quantum fluctuations positive and negative matter are created at the same time, and we derive an inflationary cosmos created from nothing. The Higgs mechanism is possibly a product of positive and negative matter. Based on a basic axiom and the two foundational principles of negative matter, we investigate its predictions and possible theoretical tests, in particular the season effect. Negative matter should be a necessary development of Dirac's theory. Finally, we propose three basic laws of negative matter. The existence of four kinds of matter, namely positive, opposite, negative, and negative-opposite particles, would form a maximally symmetric world.
Providing an efficient strategy to navigate safely through unsignaled intersections is a difficult task that requires determining the intent of other drivers. We explore the effectiveness of Deep Reinforcement Learning for handling intersection problems. Using recent advances in Deep RL, we are able to learn policies that surpass the performance of a commonly used heuristic approach on several metrics, including task completion time and goal success rate, but that have limited ability to generalize. We then explore the system's ability to learn active sensing behaviors that enable safe navigation in the presence of occlusions. Our analysis provides insight into the intersection-handling problem: the solutions learned by the network point out several shortcomings of current rule-based methods, and the failures of our current deep reinforcement learning system point to future research directions.
IceTop, the surface component of the IceCube Neutrino Observatory at the South Pole, is an air shower array with an area of 1 km^2. The detector allows a detailed exploration of the mass composition of primary cosmic rays in the energy range from about 100 TeV to 1 EeV by exploiting the correlation between the shower energy measured in IceTop and the energy deposited by muons in the deep ice. In this paper we report on the technical design, construction and installation, the trigger and data acquisition systems, as well as the software framework for calibration, reconstruction and simulation. Finally, first experience from commissioning and operating the detector and its performance as an air shower detector are discussed.
We investigate the dependence of the steady-state properties of Schelling's segregation model on the agents' activation order. Our basic formalism is the Pollicott-Weiss version of Schelling's segregation model. Our main result modifies this baseline scenario by incorporating contagion in the decision to move: (pairs of) agents are connected by a second network, the agent influence network, and pair activation is specified by a random walk on this network. The schedulers considered choose the next pair nonadaptively. We complement this result with an example of an adaptive scheduler (even one that is quite fair) that is able to preclude maximal segregation. Thus scheduler nonadaptiveness seems to be required for the validity of the original result under arbitrary asynchronous scheduling. The analysis (and our result) are part of an adversarial scheduling approach that we advocate for evolutionary games and social simulations.
This paper addresses several unsettled issues associated with string creation in systems of orthogonal Dp-D(8-p) branes. The interaction between the branes can be understood in either the closed string or the open string picture. In the closed string picture it has been noted that the DBI action fails to capture an extra RR exchange between the branes. We demonstrate how this problem persists upon lifting to M-theory. These D-brane systems are analysed in the closed string picture by using gauge-fixed boundary states in a non-standard lightcone gauge, in which RR exchange can be analysed precisely. The missing piece in the DBI action also manifests itself in the open string picture as a mismatch between the Coleman-Weinberg potential obtained from the effective field theory and the corresponding open string calculation. We show that this difference can be reconciled by taking into account the superghosts in the (0+1)-dimensional effective theory of the chiral fermion that arises from gauge fixing the spontaneously broken world-line local supersymmetries.
Unit testing is an essential activity in software development for verifying the correctness of software components. However, manually writing unit tests is challenging and time-consuming. The emergence of Large Language Models (LLMs) offers a new direction for automating unit test generation. Existing research primarily focuses on closed-source LLMs (e.g., ChatGPT and CodeX) with fixed prompting strategies, leaving the capabilities of advanced open-source LLMs with various prompting settings unexplored. In particular, open-source LLMs offer advantages in data privacy protection and have demonstrated superior performance in some tasks. Moreover, effective prompting is crucial for maximizing LLMs' capabilities. In this paper, we conduct the first empirical study to fill this gap, based on 17 Java projects, five widely-used open-source LLMs with different structures and parameter sizes, and comprehensive evaluation metrics. Our findings highlight the significant influence of various prompt factors, show the performance of open-source LLMs compared to the commercial GPT-4 and the traditional Evosuite, and identify limitations in LLM-based unit test generation. We then derive a series of implications from our study to guide future research and practical use of LLM-based unit test generation.
In this paper, we report an important finding about nonconforming immersed finite element (IFE) methods that use integral values on edges as degrees of freedom for solving elliptic interface problems. We show that these IFE methods without penalties are not guaranteed to converge optimally if the tangential derivative of the exact solution and the jump of the coefficient are not zero on the interface. A nontrivial counterexample is also provided to support our theoretical analysis. To recover the optimal convergence rates, we develop a new nonconforming IFE method with additional terms locally on interface edges. The new method is parameter-free, which removes the limitation of the conventional partially penalized IFE method. We show that the IFE basis functions are unisolvent on arbitrary triangles, a case not previously considered in the literature. Furthermore, different from multipoint Taylor expansions, we derive the optimal approximation capabilities of both the Crouzeix-Raviart and the rotated-$Q_1$ IFE spaces via a unified approach which can handle the case of variable coefficients easily. Finally, optimal error estimates in both $H^1$- and $L^2$-norms are proved and confirmed with numerical experiments.
Scoring the driving performance of various drivers on a unified scale, based on how safely or economically they drive on their daily trips, is essential for the driver profiling task. Connected vehicles provide the opportunity to collect real-world driving data, which is advantageous for constructing scoring models. However, the lack of pre-labeled scores impedes the use of supervised regression models, and data privacy issues hinder traditional centralized learning on the cloud side for model training. To address these issues, an unsupervised scoring method is presented that requires no labels while still preserving fairness and objectiveness compared to subjective scoring strategies. Subsequently, a federated learning framework based on vehicle-cloud collaboration is proposed as a privacy-friendly alternative to centralized learning. This framework includes a consistent federated version of the scoring method to reduce the performance degradation of the global scoring model caused by the statistical heterogeneity of local data. Theoretical and experimental analysis demonstrates that our federated scoring model matches the utility of its centrally learned counterpart and is effective in evaluating driving performance.
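For illustration only, one round of a FedAvg-style vehicle-cloud aggregation might look as follows; this is a minimal sketch, and the weighting scheme and the `local_update` routine are placeholders rather than the paper's consistent federated method:

```python
import numpy as np

def federated_round(global_weights, vehicles, local_update):
    """One vehicle-cloud round: each vehicle refines the global scoring
    model on its private trip data, and the cloud averages the returned
    weights in proportion to local data size (plain FedAvg; a
    consistency-preserving variant would modify this aggregation)."""
    updates, sizes = [], []
    for vehicle in vehicles:
        w, n = local_update(global_weights, vehicle)  # trains on-device
        updates.append(w)
        sizes.append(n)
    coeffs = np.asarray(sizes, dtype=float)
    coeffs /= coeffs.sum()
    return sum(c * w for c, w in zip(coeffs, updates))
```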
Optical diffraction tomography (ODT) has emerged as an important label-free tool in biomedicine for measuring the three-dimensional (3D) structure of a biological sample. In this paper, we describe ODT using second-harmonic generation (SHG), a coherent nonlinear optical process with a strict symmetry selectivity that has several advantages over traditional fluorescence methods. We report the tomographic retrieval of the 3D second-order nonlinear optical susceptibility using two-dimensional holographic measurements of the SHG fields at different illumination angles and polarization states. The method is a generalization of conventional linear ODT to the nonlinear scenario. We demonstrate the method with a numerically simulated nanoparticle distribution and an experiment with muscle tissue fibers. Our results show that SHG ODT not only provides an effective contrast mechanism for label-free imaging but also, owing to the symmetry requirement, enables the visualization of properties that are not otherwise accessible.
We propose the first practical learned lossless image compression system, L3C, and show that it outperforms the popular engineered codecs PNG, WebP and JPEG 2000. At the core of our method is a fully parallelizable hierarchical probabilistic model for adaptive entropy coding which is optimized end-to-end for the compression task. In contrast to recent autoregressive discrete probabilistic models such as PixelCNN, our method i) models the image distribution jointly with learned auxiliary representations instead of exclusively modeling the image distribution in RGB space, and ii) only requires three forward passes to predict all pixel probabilities instead of one per pixel. As a result, L3C obtains over two orders of magnitude speedup in sampling compared to the fastest PixelCNN variant (Multiscale-PixelCNN). Furthermore, we find that learning the auxiliary representation is crucial and significantly outperforms predefined auxiliary representations such as an RGB pyramid.
Due to the rapid development of technologies for small unmanned aircraft systems (sUAS), the supply and demand market for sUAS is expanding globally. With a great number of sUAS ready to fly in civilian airspace, an sUAS traffic management system that can guarantee their safe and efficient operation is still absent. In this paper, we propose a control protocol design and analysis method for sUAS traffic management (UTM) which can safely manage a large number of sUAS. The benefits of our approach are twofold: at the top level, the effort of monitoring sUAS traffic (for authorities) and of control/planning for each sUAS (for operators/pilots) is greatly reduced under our framework; and at the low level, the behavior of each individual sUAS is guaranteed to follow the restrictions. Mathematical proofs and numerical simulations are presented to demonstrate the proposed method.
In a previous article for S&P magazine, we made a case for the new intellectual challenges in Internet-of-Things security research. In this article, we revisit our earlier observations and discuss a few results from the computer security community that tackle new issues. Using this sampling of recent work, we identify a few broad general themes for future work.
We introduce high-order dynamical decoupling strategies for open system adiabatic quantum computation. Our numerical results demonstrate that a judicious choice of high-order dynamical decoupling method, in conjunction with an encoding which allows computation to proceed alongside decoupling, can dramatically enhance the fidelity of adiabatic quantum computation in spite of decoherence.
The Virgo Environmental Survey Tracing Ionised Gas Emission (VESTIGE) is a blind narrow-band Halpha+[NII] imaging survey carried out with MegaCam at the Canada-France-Hawaii Telescope. The survey covers the whole Virgo cluster region from its core to one virial radius (104 deg^2). The sensitivity of the survey is f(Halpha) ~ 4 x 10^-17 erg sec^-1 cm^-2 (5 sigma detection limit) for point sources and Sigma(Halpha) ~ 2 x 10^-18 erg sec^-1 cm^-2 arcsec^-2 (1 sigma detection limit at 3 arcsec resolution) for extended sources, making VESTIGE the deepest and largest blind narrow-band survey of a nearby cluster. This paper presents the survey in all its technical aspects, including the survey design, the observing strategy, the achieved sensitivity in both the narrow-band Halpha+[NII] and the broad-band r filter used for the stellar continuum subtraction, the data reduction, calibration, and products, as well as its status after the first observing semester. We briefly describe the Halpha properties of galaxies located in a 4x1 deg^2 strip in the core of the cluster north of M87, where several extended tails of ionised gas are detected. This paper also lists the main scientific motivations of VESTIGE, which include the study of the effects of the environment on galaxy evolution, the fate of the stripped gas in cluster objects, the star formation process in nearby galaxies of different type and stellar mass, the determination of the Halpha luminosity function and of the Halpha scaling relations down to ~ 10^6 M_sun stellar mass objects, and the reconstruction of the dynamical structure of the Virgo cluster. This unique set of data will also be used to study the HII luminosity function in hundreds of galaxies, the diffuse Halpha+[NII] emission of the Milky Way at high Galactic latitude, and the properties of emission line galaxies at high redshift.
In our paper [Markl, Shnider: Drinfel'd Algebra Deformations and the Associahedra, IMRN 1994, no. 4, 169-176] we announced a construction of a cohomology controlling deformations of quasi-coassociative (or Drinfel'd) bialgebras. The full version of the paper will appear as [Markl, Shnider: Drinfel'd Algebra Deformations, Homotopy Comodules and the Associahedra] in Trans. Amer. Math. Soc. The construction in the paper was based on very explicit arguments using deep combinatorial properties of the associahedra. The present paper gives an alternative, general nonsense approach to the construction. So, we just prove the existence of such a cohomology without explicitly constructing it. This should be compared with the two approaches to the cohomology of associative algebras: we either describe explicitly the Hochschild complex and say "Behold! this is the cohomology" or we prove the existence of a projective resolution and define the cohomology as the derived functor.
We prove that every del Pezzo surface of degree two over a finite field is unirational, building on the work of Manin and an extension by Salgado, Testa, and V\'arilly-Alvarado, who had proved this for all but three surfaces. Over general fields of characteristic not equal to two, we state sufficient conditions for a del Pezzo surface of degree two to be unirational.
By capturing spectral data from a wide frequency range along with the spatial information, hyperspectral imaging (HSI) can detect minor differences in terms of temperature, moisture and chemical composition. Therefore, HSI has been successfully applied in various applications, including remote sensing for security and defense, precision agriculture for vegetation and crop monitoring, and quality control of food, drink, and pharmaceuticals. However, for condition monitoring and damage detection in carbon fibre reinforced polymer (CFRP), the use of HSI is a relatively untouched area, as existing non-destructive testing (NDT) techniques focus mainly on delivering information about the physical integrity of structures, not on material composition. To this end, HSI can provide a unique way to tackle this challenge. In this paper, with the use of a near-infrared HSI camera, applications of HSI for the non-destructive inspection of CFRP products are introduced, taking the EU H2020 FibreEUse project as the background. Technical challenges and solutions for three case studies are presented in detail, covering adhesive residue detection, surface damage detection, and cobot-based automated inspection. Experimental results have fully demonstrated the great potential of HSI and related vision techniques for NDT of CFRP, especially the potential to meet the needs of industrial manufacturing environments.
We calculate the nuclear dependence of direct photon production in hadron-nucleus collisions. In terms of a multiple scattering picture, we factorize the cross section for direct photon production into calculable short-distance partonic parts times multiparton correlation functions in nuclei. We present the hadron-nucleus cross section as $A^{\alpha}$ times the hadron-nucleon cross section. Using information on the multiparton correlation functions extracted from photon-nucleus experiments, we compute the value of $\alpha$ as a function of transverse momentum of the direct photon. We also compare our results with recent data from Fermilab experiment E706.
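With this parametrization, the nuclear-dependence exponent is read off pointwise from the ratio of the two cross sections:
\[
\frac{d\sigma^{hA}}{dp_T}=A^{\alpha(p_T)}\,\frac{d\sigma^{hN}}{dp_T}
\qquad\Longrightarrow\qquad
\alpha(p_T)=\frac{\ln\big(d\sigma^{hA}/d\sigma^{hN}\big)}{\ln A}.
\]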
In this work we present a review of the most popular depth-averaged models for simulating dry granular flows such as aerial avalanches. The classical Savage-Hutter model and recent models using a $\mu(I)$-rheology law are studied. The objective is firstly to point out the advantages of each model and secondly to understand how the hypotheses adopted in the derivation process influence the final system.
I revisit some classic publications on modularity, to show what problems its pioneers wanted to solve. These problems occur with spreadsheets too: to recognise them may help us avoid them.
We present VLT spectroscopic observations of 7 newly discovered galaxy groups at 0.3<z<0.7. The groups were selected from the Strong Lensing Legacy Survey (SL2S), a survey that consists of a systematic search for strong lensing systems in the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS). We give details about the target selection, spectroscopic observations and data reduction for the first release of confirmed SL2S groups. The dynamical analysis of the systems reveals that they are gravitationally bound structures, with at least 4 confirmed members and velocity dispersions between 300 and 800 km/s. Their virial masses are between 10^13 and 10^14 M_sun, so they can be classified as groups or low-mass clusters. Most of the systems are isolated groups, except for two that show evidence of an ongoing merger of two sub-structures. We find good agreement between the velocity dispersions estimated from the analysis of the kinematics of group galaxies and the weak lensing measurements, and conclude that the dynamics of baryonic matter is a good tracer of the total mass content in galaxy groups.
We investigate the influence of the driving mechanism on the hysteretic response of systems with athermal dynamics. In the framework of local mean-field theory at finite temperature (but neglecting thermally activated processes), we compare the rate-independent hysteresis loops obtained in the random field Ising model (RFIM) when controlling either the external magnetic field $H$ or the extensive magnetization $M$. Two distinct behaviors are observed, depending on disorder strength. At large disorder, the $H$-driven and $M$-driven protocols yield identical hysteresis loops in the thermodynamic limit. At low disorder, when the $H$-driven magnetization curve is discontinuous (due to the presence of a macroscopic avalanche), the $M$-driven loop is re-entrant, while the induced field exhibits strong intermittent fluctuations and is only weakly self-averaging. The relevance of these results to experimental observations in ferromagnetic materials, shape memory alloys, and other disordered systems is discussed.
Anticipating human motion depends on two factors: the past motion and the person's intention. While the first factor has been extensively utilized to forecast short sequences of human motion, the second remains elusive. In this work we approximate a person's intention via a symbolic representation, for example fine-grained action labels such as walking or sitting down. Forecasting a symbolic representation is much easier than forecasting the full body pose with its complex inter-dependencies. At the same time, knowing the future actions makes forecasting human motion easier. We exploit this connection by first anticipating symbolic labels and then generating human motion, conditioned on the human motion input sequence as well as on the forecast labels. This allows the model to anticipate motion changes many steps ahead and adapt the poses accordingly. We achieve state-of-the-art results on short-term as well as long-term human motion forecasting.
Even though many of the experiments leading to the standard model of particle physics were done at large accelerator laboratories in the US and at CERN, many exciting developments happened in smaller national facilities all over the world. In this report we highlight the history of accelerator facilities in Germany.
In the presence of large extra dimensions, the fundamental Planck scale can be much lower than the apparent four-dimensional Planck scale. In this setup, the weak gravity conjecture implies a much more stringent constraint on the UV cutoff for U(1) gauge theory in four dimensions. This new energy scale may be relevant to the LHC.
In this note we study the dynamics of a model recently introduced by one of us, which displays glassy phenomena in the absence of energy barriers. Using an adiabatic hypothesis we derive an equation for the evolution of the energy as a function of time that describes extremely well the glassy behaviour observed in Monte Carlo simulations.
Transformer-based sequence-to-sequence architectures, while achieving state-of-the-art results on a large number of NLP tasks, can still suffer from overfitting during training. In practice, this is usually countered either by applying regularization methods (e.g. dropout, L2-regularization) or by providing huge amounts of training data. Additionally, Transformer and other architectures are known to struggle when generating very long sequences. For example, in machine translation, the neural-based systems perform worse on very long sequences when compared to the preceding phrase-based translation approaches (Koehn and Knowles, 2017). We present results which suggest that the issue might also be in the mismatch between the length distributions of the training and validation data combined with the aforementioned tendency of the neural networks to overfit to the training data. We demonstrate on a simple string editing task and a machine translation task that the Transformer model performance drops significantly when facing sequences of length diverging from the length distribution in the training data. Additionally, we show that the observed drop in performance is due to the hypothesis length corresponding to the lengths seen by the model during training rather than the length of the input sequence.
A locally decodable code encodes n-bit strings x in m-bit codewords C(x), in such a way that one can recover any bit x_i from a corrupted codeword by querying only a few bits of that word. We use a quantum argument to prove that LDCs with 2 classical queries need exponential length: m=2^{Omega(n)}. Previously this was known only for linear codes (Goldreich et al. 02). Our proof shows that a 2-query LDC can be decoded with only 1 quantum query, and then proves an exponential lower bound for such 1-query locally quantum-decodable codes. We also show that q quantum queries allow more succinct LDCs than the best known LDCs with q classical queries. Finally, we give new classical lower bounds and quantum upper bounds for the setting of private information retrieval. In particular, we exhibit a quantum 2-server PIR scheme with O(n^{3/10}) qubits of communication, improving upon the O(n^{1/3}) bits of communication of the best known classical 2-server PIR.
We study the propagation of null rays and massless fields in a black hole fluctuating geometry. The metric fluctuations are induced by a small oscillating incoming flux of energy. The flux also induces black hole mass oscillations around its average value. We assume that the metric fluctuations are described by a statistical ensemble. The stochastic variables are the phases and the amplitudes of Fourier modes of the fluctuations. By averaging over these variables, we obtain an effective propagation for massless fields which is characterized by a critical length defined by the amplitude of the metric fluctuations: Smooth wave packets with respect to this length are not significantly affected when they are propagated forward in time. Concomitantly, we find that the asymptotic properties of Hawking radiation are not severely modified. However, backward propagated wave packets are dissipated by the metric fluctuations once their blue shifted frequency reaches the inverse critical length. All these properties bear many resemblances to those obtained in models for black hole radiation based on a modified dispersion relation. This strongly suggests that the physical origin of these models, which were introduced to confront the trans-Planckian problem, comes from the fluctuations of the black hole geometry.
This paper introduces the implementation of a steganography method called StegIbiza, which uses tempo modulation as a hidden message carrier. With the use of the Python scripting language, a bit string was encoded and decoded using WAV and MP3 files. Once the message was hidden in a music file, an internet radio station was created to evaluate broadcast possibilities. No dedicated music or signal processing equipment was used in this StegIbiza implementation.
Measurements at the RHIC and the LHC have observed a flavor dependence of single-hadron suppression, revealing the role played by quark masses in parton interactions with the quark-gluon plasma (QGP) medium. In this study, we explore the manifestation of quark mass effects and flavor dependence in jet observables. We approach this study using the LIDO transport model. Both elastic and medium-induced radiative processes are implemented for hard parton evolution in the medium. To guarantee energy-momentum conservation in the model for the study of full jet observables, we also include a component that mimics the energy-momentum transported by medium excitation. We first predict the heavy-jet (B-jet, D-jet) and inclusive-jet nuclear modification factor $R_{AA}$ in central nuclear collisions at both the RHIC and the LHC beam energies. We observe a flavor-dependent jet suppression as a function of jet transverse momentum, which can be tested by future precision measurements of heavy jets. We further investigate a novel observable that considers the angular correlation between two hard objects, a D-meson and a jet, which provides model constraints in addition to those imposed by inclusive measurements.
This paper discusses emerging operational challenges associated with the integration of solar photovoltaic (PV) in the All-Island power system (AIPS) of Ireland and Northern Ireland. These include the impact of solar PV on: (i) dispatch down levels; (ii) long-term frequency deviations; (iii) voltage magnitude variations; and (iv) operational demand variations. A case study based on actual data from the AIPS is used to analyze the above challenges. It is shown that despite its (still) relatively low penetration compared to wind power penetration, solar PV is challenging the real-time operation of the AIPS, e.g., maintaining frequency within operational limits. EirGrid and SONI, the transmission system operators (TSOs) of the AIPS, are working toward addressing all the above challenges.
Generative adversarial networks (GANs) offer an effective solution to the image-to-image translation problem, thereby allowing for new possibilities in medical imaging. They can translate images from one imaging modality to another at a low cost. For unpaired datasets, they rely mostly on cycle loss, which, despite its effectiveness in learning the underlying data distribution, can lead to a discrepancy between input and output data. The purpose of this work is to investigate the hypothesis that we can predict image quality based on its latent representation in the GAN's bottleneck. We achieve this by corrupting the latent representation with noise and generating multiple outputs. The degree of difference between them is interpreted as the strength of the representation: the more robust the latent representation, the fewer changes in the output image the corruption causes. Our results demonstrate that our proposed method has the ability to i) predict uncertain parts of synthesized images, and ii) identify samples that may not be reliable for downstream tasks, e.g., a liver segmentation task.
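A minimal sketch of the corruption-and-compare procedure described above, assuming the generator splits into an encoder and a decoder around the bottleneck; the function names, noise scale, and sample count are illustrative assumptions, not the paper's exact settings:

```python
import torch

def latent_uncertainty(encoder, decoder, image, n_samples=10, sigma=0.1):
    """Perturb the bottleneck latent code with Gaussian noise and measure
    how much the generated outputs disagree; a robust representation
    changes little under corruption, so a large spread flags unreliable
    regions or samples (hypothetical encoder/decoder interfaces)."""
    with torch.no_grad():
        z = encoder(image)                            # latent representation
        outputs = [decoder(z + sigma * torch.randn_like(z))
                   for _ in range(n_samples)]         # corrupted generations
        outputs = torch.stack(outputs)                # (n_samples, C, H, W)
    return outputs.std(dim=0)                         # per-pixel disagreement

# A per-image reliability score could then be, e.g., the mean of this map.
```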
We propose a novel transformer-style architecture called Global-Local Filter Network (GLFNet) for medical image segmentation and demonstrate its state-of-the-art performance. We replace the self-attention mechanism with a combination of global-local filter blocks to optimize model efficiency. The global filters extract features from the whole feature map, whereas the local filters are adaptively created as 4x4 patches of the same feature map and add restricted scale information. In particular, the feature extraction takes place in the frequency domain rather than the commonly used spatial (image) domain to facilitate faster computations. The fusion of information from both spatial and frequency spaces creates an efficient model with regard to complexity, required data and performance. We test GLFNet on three benchmark datasets, achieving state-of-the-art performance on all of them while being almost twice as efficient in terms of GFLOPs.
The matrix game, also known as the two-person zero-sum game, is a famous model in game theory, and there is well-established theory about it, such as von Neumann's minimax theorem. However, almost no literature has reported the relationship between eigenvalues/eigenvectors and the properties of matrix games. In this paper, we find such relations for some special matrices and try to extend some conclusions to general matrices.
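As background for the minimax theorem mentioned above, the value and an optimal mixed strategy of a matrix game can be computed by linear programming. The sketch below uses the standard textbook LP reduction; it illustrates the model itself, not the paper's eigenvalue analysis:

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(A):
    """Value and optimal mixed strategy for the row player (maximizer)
    of the zero-sum game with payoff matrix A, via the classical LP:
    shift payoffs positive, minimize sum(u) s.t. B^T u >= 1, u >= 0,
    then value = 1/sum(u) and strategy = u * value."""
    A = np.asarray(A, dtype=float)
    shift = 1.0 - A.min()                 # make all payoffs positive
    B = A + shift                         # game value shifts by `shift`
    m, n = B.shape
    res = linprog(c=np.ones(m), A_ub=-B.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m, method="highs")
    u = res.x
    value = 1.0 / u.sum()
    return value - shift, u * value

# Matching pennies: value 0, optimal strategy (0.5, 0.5).
print(solve_matrix_game([[1, -1], [-1, 1]]))
```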
We show that ultracold atoms can be controlled in multi-band optical lattices through spatially periodic Raman pulses for the investigation of a class of strongly correlated physics related to the Kondo problem. The underlying dynamics of this system is described by a spin-dependent fermionic or bosonic Kondo-Hubbard lattice model, even though the atomic collision interaction is spin-independent. We solve the bosonic Kondo-Hubbard lattice model through a mean-field approximation, and the result shows a clear phase transition from the ferromagnetic superfluid to the Kondo-singlet insulator at integer filling.
In this paper we study the representation theory of filtered algebras with commutative associated graded algebra whose spectrum has finitely many symplectic leaves. Examples are provided by the algebras of global sections of quantizations of symplectic resolutions, quantum Hamiltonian reductions, and spherical symplectic reflection algebras. We introduce the notion of holonomic modules for such algebras. We show that the generalized Bernstein inequality holds for simple modules and turns into an equality for holonomic simples provided the algebraic fundamental groups of all leaves are finite. Under the same assumption, we prove that the associated variety of a simple holonomic module is equi-dimensional. We also prove that, if the regular bimodule has finite length or if the algebra in question is a quantum Hamiltonian reduction, then any holonomic module has finite length. This allows us to reduce the Bernstein inequality for arbitrary modules to simple ones. We prove that the regular bimodule has finite length for the global sections of quantizations of symplectic resolutions and for Rational Cherednik algebras. The paper contains a joint appendix by the author and Etingof that motivates the definition of a holonomic module in the case of global sections of a quantization of a symplectic resolution.
Display technologies have evolved over the years, and it is critical to develop practical HDR capturing, processing, and display solutions to bring 3D technologies to the next level. Depth estimation of multi-exposure stereo image sequences is an essential task in the development of cost-effective 3D HDR video content. In this paper, we develop a novel deep architecture for multi-exposure stereo depth estimation. The proposed architecture has two novel components. First, the stereo matching technique used in traditional stereo depth estimation is revamped: for the stereo depth estimation component of our architecture, a mono-to-stereo transfer learning approach is deployed. The proposed formulation circumvents the cost volume construction requirement, which is replaced by a ResNet-based dual-encoder single-decoder CNN with different weights for feature fusion, and EfficientNet-based blocks are used to learn the disparity. Second, we combine disparity maps obtained from the stereo images at different exposure levels using a robust disparity feature fusion approach. The disparity maps obtained at different exposures are merged using weight maps calculated for different quality measures. The final predicted disparity map is more robust and retains the best features, preserving the depth discontinuities. The proposed CNN offers the flexibility to train using standard dynamic range stereo data or multi-exposure low dynamic range stereo sequences. In terms of performance, the proposed model surpasses state-of-the-art monocular and stereo depth estimation methods, both quantitatively and qualitatively, on the challenging Scene Flow and differently exposed Middlebury stereo datasets. The architecture performs exceedingly well on complex natural scenes, demonstrating its usefulness for diverse 3D HDR applications.
We investigate the possibility of achieving a slow signal field at the level of single photons inside nanofibers by exploiting stimulated Brillouin scattering, which involves a strong pump field and the vibrational modes of the waveguide. The slow signal is significantly amplified for a pump field with a frequency higher than that of the signal, and attenuated for a lower pump frequency. We introduce a configuration for obtaining a propagating slow signal without gain or loss and with a relatively wide bandwidth. This process involves two strong pump fields with frequencies both higher and lower than that of the signal, where the effects of signal amplification and attenuation compensate each other. We account for thermal fluctuations due to the scattering off thermal phonons and identify conditions under which thermal contributions to the signal field are negligible. The slowing of light through Brillouin optomechanics may serve as a vital tool for optical quantum information processing and quantum communications within nanophotonic structures.
We study the observation of a thin dust shell, radially freely falling into a Reissner-Nordstrom black hole, by an observer who is also freely and radially falling into this black hole. Several common paradoxes and fallacies peculiar to such problems are considered and resolved. The results of this analytical study are implemented in a numerical code that allows for calculating all related effects of this model. The numerical results are presented in a few synthesized videos, providing a colorful, quantitative and detailed description of the occurring astrophysical phenomena, both above and below the horizon.
This work demonstrates direct visual sensory-motor control using high-speed CNN inference via a SCAMP-5 Pixel Processor Array (PPA). We demonstrate how PPAs are able to efficiently bridge the gap between perception and action. A binary Convolutional Neural Network (CNN) is used for a classic rock, paper, scissors classification problem at over 8000 FPS. Control instructions are directly sent to a servo motor from the PPA according to the CNN's classification result without any other intermediate hardware.
We present the surface magnetic field conditions of the brightest pulsating RV Tauri star, R Sct. Our investigation is based on the longest spectropolarimetric survey ever performed on this variable star. The analysis of high resolution spectra and circular polarization data provides sharp information on the dynamics of the atmosphere and the surface magnetism, respectively. Our analysis shows that a surface magnetic field can be detected at different phases along a pulsation cycle, and that it may be related to the presence of a radiative shock wave periodically emerging out of the photosphere and propagating throughout the stellar atmosphere.
This paper is a compressed summary of some principal definitions and concepts in the approach to the black box algebra being developed by the authors. We suggest that black box algebra could be useful in cryptanalysis of homomorphic encryption schemes, and that homomorphic encryption is an area of research where cryptography and black box algebra may benefit from exchange of ideas.
We have examined the local structure of PMN-PT and PZN-PT solid solutions using density functional theory. We find that the directions and magnitudes of cation displacement can be explained by an interplay of cation-oxygen bonding, electrostatic dipole-dipole interactions and short-range direct and through oxygen Pb-B-cation repulsive interactions. We find that the Zn ions off-center in the PZN-PT system, which also enables larger Pb and Nb/Ti displacements. The off-centering behavior of Zn lessens Pb-B-cation repulsion, leading to a relaxor to ferroelectric and a rhombohedral to tetragonal phase transition at low PbTiO$_3$ content in the PZN-PT system. We also show that a simple quadratic relationship exists between Pb and B-cation displacements and the temperature maximum of dielectric constant, thus linking the enhanced displacements in PZN-PT systems with the higher transition temperatures.
The Morse potential is relatively close to the harmonic oscillator quantum system. Thus, following the idea used for the latter, we study the possibility of creating entanglement using squeezed coherent states of the Morse potential as an input field of a beam splitter. We measure the entanglement with the linear entropy for two types of such states, and we study the dependence on the coherence and squeezing parameters. The new results are linked with observations made on probability densities and uncertainty relations of those states. The dynamical evolution of the linear entropy is also explored.
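For reference, the entanglement quantifier used here is the standard linear entropy of the reduced state of one output mode of the beam splitter,
\[
S_L(\rho_A)=1-\mathrm{Tr}\,\rho_A^2,\qquad \rho_A=\mathrm{Tr}_B\,|\psi_{AB}\rangle\langle\psi_{AB}|,
\]
which vanishes exactly on product states and is strictly positive for entangled pure states $|\psi_{AB}\rangle$.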
Various methods of searching for supersymmetric dark matter are sensitive to WIMPs with different properties. One consequence of this is that the phenomenology of dark matter detection can vary dramatically across different supersymmetry breaking scenarios. In this paper, we consider the sensitivities to supersymmetric dark matter of different detection methods and techniques in a wide variety of supersymmetry breaking scenarios. We discuss the ability of various astrophysical experiments, such as direct detection experiments, gamma-ray satellites, neutrino telescopes, and positron and anti-proton cosmic ray experiments, to test various supersymmetry breaking scenarios. We also discuss what information can be revealed about supersymmetry breaking by combining results from complementary experiments. We place an emphasis on the differences between various experimental techniques.
We study $L^p$--$L^q$ estimates for the spectral projection operator $\Pi_\lambda$ associated to the Hermite operator $H=|x|^2-\Delta$ in $\mathbb R^d$. Here $\Pi_\lambda$ denotes the projection to the subspace spanned by the Hermite functions which are the eigenfunctions of $H$ with eigenvalue $\lambda$. Such estimates were previously available only for $q=p'$, equivalently with $p=2$ or $q=2$ (by a $TT^*$ argument), except for the estimates which are straightforward consequences of interpolation between those estimates. As shown in the works of Karadzhov, Thangavelu, and Koch and Tataru, the local and global estimates for $\Pi_\lambda$ are of different nature. In particular, $\Pi_\lambda$ exhibits complicated behaviors near the set $\sqrt\lambda\,\mathbb S^{d-1}$. Compared with the spectral projection operator associated to the Laplacian, the $L^p$--$L^q$ estimates for $\Pi_\lambda$ are not so well understood up to now for general $p,q$. In this paper we consider the $L^p$--$L^q$ estimate for $\Pi_\lambda$ in a general framework including the local and global estimates with $1\le p\le 2\le q\le \infty$ and undertake the work of characterizing the sharp bounds on $\Pi_\lambda$. We establish various new sharp estimates in extended ranges of $p,q$. First, we provide a complete characterization of the local estimate for $\Pi_\lambda$, which was first considered by Thangavelu. Second, for $d\ge5$, we prove the endpoint $L^2$--$L^{2(d+3)/(d+1)}$ estimate for $\Pi_\lambda$, which has been left open since the work of Koch and Tataru. Third, we extend the range of $p,q$ for which the operator $\Pi_\lambda$ is uniformly bounded from $L^p$ to $L^q$.
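For the reader's convenience, recall that in this setting the Hermite functions $\Phi_\alpha$, $\alpha\in\mathbb N_0^d$, satisfy $H\Phi_\alpha=(2|\alpha|+d)\Phi_\alpha$, so the spectrum of $H$ is $\{2k+d:k\in\mathbb N_0\}$ and
\[
\Pi_\lambda f=\sum_{2|\alpha|+d=\lambda}\langle f,\Phi_\alpha\rangle\,\Phi_\alpha .
\]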
The photoluminescence dynamics of a microscopic gas of indirect excitons trapped in coupled quantum wells is probed at very low bath temperature (approximately 350 mK). Our experiments reveal the nonlinear energy relaxation characteristics of indirect excitons. In particular, we observe that the exciton dynamics is strongly correlated with the screening of structural disorder by repulsive exciton-exciton interactions. For our experiments, where two-dimensional excitonic states are gradually defined, the distinctive enhancement of the exciton scattering rate towards the lowest energy states with increasing density does not unambiguously reveal quantum statistical effects such as Bose stimulation.
We review the role of two-photon exchange (TPE) in electron-hadron scattering, focusing in particular on hadronic frameworks suitable for describing the low and moderate Q^2 region relevant to most experimental studies. We discuss the effects of TPE on the extraction of nucleon form factors and their role in the resolution of the proton electric to magnetic form factor ratio puzzle. The implications of TPE on various other observables, including neutron form factors, electroproduction of resonances and pions, and nuclear form factors, are summarized. Measurements seeking to directly identify TPE effects, such as through the angular dependence of polarization measurements, nonlinear epsilon contributions to the cross sections, and via e+ p to e- p cross section ratios, are also outlined. In the weak sector, we describe the role of TPE and gamma-Z interference in parity-violating electron scattering, and assess their impact on the extraction of the strange form factors of the nucleon and the weak charge of the proton.
In this paper we study the long time dynamics of the solutions to the initial-boundary value problem for a scalar conservation law with a saturating nonlinear diffusion. After discussing the existence of a unique stationary solution and its asymptotic stability, we focus our attention on the phenomenon of 'metastability', whereby the time-dependent solution develops into a layered function in a relatively short time, and subsequently approaches a steady state in a very long time interval. Numerical simulations illustrate the results.
For the first time we introduce an error estimator for the numerical approximation of the equations describing the dynamics of sea ice. The idea of the estimator is to identify the different error contributions coming from spatial and temporal discretization as well as from the splitting in time of the ice momentum equations from the further parts of the coupled system. The novelty of the error estimator lies in the consideration of the splitting error, which turns out to be dominant with increasing mesh resolution. Errors are measured in user-specified functional outputs such as the total sea ice extent. The error estimator is based on the dual weighted residual method, which requires the solution of an additional dual problem to obtain sensitivity information. Estimated errors can be used to validate the accuracy of the solution and, more importantly, to reduce the discretization error by guiding an adaptive algorithm that optimally balances the mesh size and the time step size to increase the efficiency of the simulation.
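Schematically, in its simplest first-order and generic form (not yet specialized to the sea-ice system or to the splitting error), the dual weighted residual estimate reads
\[
J(u)-J(u_h)\;\approx\;\rho(u_h)(z-i_h z),\qquad \rho(u_h)(\varphi):=F(\varphi)-a(u_h)(\varphi),
\]
where $u$ and $u_h$ are the continuous and discrete solutions, $z$ solves the adjoint (dual) problem associated with the output functional $J$, and $i_h z$ is a discrete interpolant; the weight $z-i_h z$ localizes the residual contributions used to steer the adaptive algorithm.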
Motivations, emotions, and actions are inter-related essential factors in human activities. While motivations and emotions have long been considered central to exploring how people take actions, there has been relatively little research on analyzing the relationship between human mental states and actions. We present the first study that investigates the viability of modeling motivations, emotions, and actions in language-based human activities, named COMMA (Cognitive Framework of Human Activities). Guided by COMMA, we define three natural language processing tasks (emotion understanding, motivation understanding and conditioned action generation), and build a challenging dataset, Hail, by automatically extracting samples from Story Commonsense. Experimental results on NLP applications demonstrate the effectiveness of modeling this relationship. Furthermore, our models inspired by COMMA can better reveal the essential relationship among motivations, emotions and actions than existing methods.
The behavior of a massive scalar particle on the spacetime surrounding a monopole is studied from a quantum mechanical point of view. All the boundary conditions necessary to make the spatial portion of the wave operator self-adjoint are found, and their importance to the quantum interpretation of singularities is emphasized.
Superradiant scattering of linear spin $s=0,\pm 1,\pm 2$ fields on a Kerr black hole background is investigated in the time domain by numerically integrating the homogeneous Teukolsky master equation. The applied numerical setup has already been used in studying the long time evolution and tail behavior of electromagnetic and metric perturbations on a rotating black hole background [arXiv:1905.09082v3]. To have a clear setup, the initial data is chosen to be of compact support, while to optimize superradiance the frequency of the initial data is fine-tuned. Our most important finding is that the rate of superradiance strongly depends on the relative position of the (compact) support of the initial data and the ergoregion. When they are well-separated, only modest superradiance occurs (negligible in the case of $s=0$ scalar fields), whereas the amplification can become significant whenever the support of the initial data and the ergoregion overlap.
The Higgs mechanism describes the electroweak symmetry breaking in nature well. We consider the possibility that the microscopic origin of the Higgs field is UV physics of QCD. We construct a UV complete model of a higher dimensional Yang-Mills theory as a deformation of a deconstructed (2,0) theory in six dimensions, and couple the top and bottom (s)quarks to it. We see that the Higgs fields appear as magnetic degrees of freedom. The model can naturally explain the masses of the Higgs boson and the top quark. Rho-meson-like resonances with masses of around 1 TeV are predicted.
We consider an infinite spatial inhomogeneous random graph model with an integrable connection kernel that interpolates nicely between existing spatial random graph models. Key examples are versions of the weight-dependent random connection model, the infinite geometric inhomogeneous random graph, and the age-based random connection model. These infinite models arise as the local limit of the corresponding finite models, see \cite{LWC_SIRGs_2020}. For these models we identify the scaling of the \emph{local clustering} as a function of the degree of the root in different regimes in a unified way. We show that the scaling exhibits phase transitions as the interpolation parameter moves across different regimes. In addition to the scaling we also identify the leading constants of the clustering function. This allows us to draw conclusions on the geometry of a \emph{typical} triangle contributing to the clustering in the different regimes.
A classification is given of rank 3 group actions which are quasiprimitive but not primitive. There are two infinite families and a finite number of individual imprimitive examples. When combined with earlier work of Bannai, Kantor, Liebler, Liebeck and Saxl, this yields a classification of all quasiprimitive rank 3 permutation groups. Our classification is achieved by first classifying imprimitive almost simple permutation groups which induce a 2-transitive action on a block system and for which a block stabiliser acts 2-transitively on the block. We also determine those imprimitive rank 3 permutation groups $G$ such that the induced action on a block is almost simple and $G$ does not contain the full socle of the natural wreath product in which $G$ embeds.
We construct a Dirac-Born-Infeld (DBI) action coupled to a two-form field in four dimensional $\mathcal{N}=1$ supergravity. Our superconformal formulation of the action shows a universal way to construct it in various Poincar\'e supergravity formulations. We generalize the DBI action to that coupled to matter sector. We also discuss duality transformations of the DBI action, which are useful for phenomenological and cosmological applications.
We propose an experimentally accessible, objective measure for the macroscopicity of superposition states in mechanical quantum systems. Based on the observable consequences of a minimal, macrorealist extension of quantum mechanics, it allows one to quantify the degree of macroscopicity achieved in different experiments.
We present a Monte Carlo algorithm for the simulation of the all-order strong coupling expansion of the Z2 gauge theory. This random surface ensemble is equivalent to the standard formulation, but allows one to measure some quantities, like Polyakov loop correlators or excess free energies, with an accuracy that could not easily have been achieved with traditional simulation methods. One interesting application of the algorithm is the investigation of effective string theories.
The probability distribution of temperature of a blackbody can be determined from its power spectrum; this technique is called blackbody radiation inversion. In the present paper blackbody radiation inversion is applied to the spectrum of the Sun. The probability distribution of temperature and the mean temperature of the Sun are calculated without assuming a homogeneous temperature and without using the Stefan-Boltzmann law. Different properties of this distribution are characterized. This paper presents the very first mention and investigation of the distortions present within the Sun's spectrum.
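In a common formulation of blackbody radiation inversion (kernel conventions may differ slightly from the paper's), the measured power spectrum $W(\nu)$ and the temperature probability distribution $a(T)$ are related by the integral equation
\[
W(\nu)=\frac{2h\nu^{3}}{c^{2}}\int_{0}^{\infty}\frac{a(T)}{e^{h\nu/k_{B}T}-1}\,dT,
\]
which must be inverted for $a(T)$ given $W(\nu)$.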
Driven by network intrusion detection, we propose a MultiResolution Anomaly Detection (MRAD) method, which effectively utilizes the multiscale properties of Internet features and network anomalies. In this paper, several theoretical properties of the MRAD method are explored. A major new result is the mathematical formulation of the notion that a two-scale MRAD method has larger power than the average power of the detection methods based on the two individual scales. A test threshold is also developed. Comparisons between the MRAD method and other classical outlier detectors in time series are reported as well.
We give a Littlewood-Richardson type rule for expanding the product of a row-strict quasisymmetric Schur function and a symmetric Schur function in terms of row-strict quasisymmetric Schur functions. This expansion follows from several new properties of an insertion algorithm defined by Mason and Remmel (2010) which inserts a positive integer into a row-strict composition tableau.
We study the problem of learning individualized dose intervals using observational data. There are very few previous works on policy learning with continuous treatments, and all of them focus on recommending an optimal dose rather than an optimal dose interval. In this paper, we propose a new method to estimate such an optimal dose interval, named probability dose interval (PDI). The potential outcomes for doses in the PDI are guaranteed to be better than a pre-specified threshold with a given probability (e.g., 50%). The associated nonconvex optimization problem can be efficiently solved by the Difference-of-Convex functions (DC) algorithm. We prove that our estimated policy is consistent, and its risk converges to that of the best-in-class policy at a root-n rate. Numerical simulations show the advantage of the proposed method over outcome-modeling based benchmarks. We further demonstrate the performance of our method in determining individualized Hemoglobin A1c (HbA1c) control intervals for elderly patients with diabetes.
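For illustration, the generic difference-of-convex iteration underlying such an optimization might look as follows; this is a minimal sketch of the standard DCA linearize-and-solve scheme applied to a toy objective of our own, not the paper's actual PDI formulation:

```python
import numpy as np

def dca(solve_linearized, grad_h, x0, n_iter=100, tol=1e-8):
    """DC algorithm for minimizing f(x) = g(x) - h(x) with g, h convex:
    linearize h at the current iterate and solve the convex subproblem
        x_{k+1} = argmin_x g(x) - <grad_h(x_k), x>.
    `solve_linearized(v)` is assumed to return argmin_x g(x) - <v, x>."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x_new = solve_linearized(grad_h(x))   # convex subproblem
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy objective: f(x) = ||x||^2 - 2||x - c||, so g(x) = ||x||^2 and
# argmin_x g(x) - <v, x> = v / 2 (set the gradient 2x - v to zero).
c = np.array([1.0, -2.0])
grad_h = lambda x: 2 * (x - c) / max(np.linalg.norm(x - c), 1e-12)
x_star = dca(lambda v: v / 2, grad_h, x0=np.zeros(2))
```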
We study the formation of spin-1 symbiotic spinor solitons in quasi-one-dimensional (quasi-1D) and quasi-two-dimensional (quasi-2D) hyperfine spin $F=1$ ferromagnetic Bose-Einstein condensates (BECs). The symbiotic solitons necessarily have a repulsive intraspecies interaction and are bound due to an attractive interspecies interaction. Due to a collapse instability in higher dimensions, an additional spin-orbit (SO) coupling is necessary to stabilize a quasi-2D symbiotic spinor soliton. Although a quasi-1D symbiotic soliton has a simple Gaussian-type density distribution, a novel spatially periodic structure in density is found in quasi-2D symbiotic SO-coupled spinor solitons. For a weak SO coupling, the quasi-2D solitons are of the $(-1, 0, +1)$ or $(+1, 0, -1)$ type with intrinsic vorticity and multi-ring structure, for Rashba or Dresselhaus SO coupling, respectively, where the numbers in the parentheses are the angular momentum projections in the spin components $F_z = +1, 0, -1$, respectively. For a strong SO coupling, stripe and superlattice solitons, with a stripe and a square-lattice modulation in density, respectively, are found in addition to the multi-ring solitons. The stationary states were obtained by imaginary-time propagation of a mean-field model; the dynamical stability of the solitons was established by real-time propagation over a long period of time. The possibility of creating such a soliton by removing the trap of a confined spin-1 BEC in a laboratory is also demonstrated.
The autonomous control of unmanned aircraft is a highly safety-critical domain with great economic potential in a wide range of application areas, including logistics, agriculture, civil engineering, and disaster recovery. We report on the development of a dynamic monitoring framework for the DLR ARTIS (Autonomous Rotorcraft Testbed for Intelligent Systems) family of unmanned aircraft based on the formal specification language RTLola. RTLola is a stream-based specification language for real-time properties. An RTLola specification of hazardous situations and system failures is statically analyzed in terms of consistency and resource usage and then automatically translated into an FPGA-based monitor. Our approach leads to highly efficient, parallelized monitors with formal guarantees on the noninterference of the monitor with the normal operation of the autonomous system.
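To convey the flavor of stream-based monitoring in ordinary code (a Python analogue for intuition only; RTLola specifications are declarative, statically analyzed, and compiled to FPGA hardware, and the altitude stream and rate bound here are invented for illustration):

```python
def monitor(altitudes, max_rate=5.0):
    """Derive a rate stream from an input stream and emit a trigger stream."""
    prev = None
    for t, alt in altitudes:               # (timestamp, altitude) input stream
        if prev is not None:
            t0, a0 = prev
            rate = (alt - a0) / (t - t0)   # derived output stream
            yield t, abs(rate) > max_rate  # trigger: hazardous climb/descent rate
        prev = (t, alt)

samples = [(0.0, 100.0), (1.0, 103.0), (2.0, 112.0)]
print(list(monitor(samples)))  # [(1.0, False), (2.0, True)]
```

The monitor consumes input streams, computes derived streams, and raises triggers when a specified property is violated; static analysis of the specification is what yields the memory and timing guarantees needed for hardware deployment.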
We contribute to the program of proving lower bounds on the size of branching programs solving the Tree Evaluation Problem introduced by Cook et al. (2012). Proving a super-polynomial lower bound on the size of nondeterministic thrifty branching programs (NTBPs) would separate $NL$ from $P$ for thrifty models solving the Tree Evaluation Problem. First, we show that {\em read-once NTBPs} are equivalent to whole black-white pebbling algorithms, thus showing a tight lower bound (ignoring polynomial factors) for this model. We then introduce a weaker restriction of NTBPs called {\em bitwise independence}. The best known NTBPs (of size $O(k^{h/2+1})$) for the Tree Evaluation Problem, given by Cook et al. (2012), are bitwise independent. As our main result, we show that any bitwise independent NTBP solving $TEP_{2}^{h}(k)$ must have at least $\frac{1}{2}k^{h/2}$ states. Prior to this work, lower bounds were known for NTBPs only for fixed heights $h=2,3,4$ (see Cook et al. (2012)). We prove our results by associating a fractional black-white pebbling strategy with any bitwise independent NTBP solving the Tree Evaluation Problem. Such a connection was not known previously, even for fixed heights. Our main technique is the entropy method introduced by Jukna and Z{\'a}k (2001), originally in the context of proving lower bounds for read-once branching programs. We also show that the previous lower bounds given by Cook et al. (2012) for deterministic branching programs for the Tree Evaluation Problem can be obtained using this approach. Using this method, we also show tight lower bounds for any $k$-way deterministic branching program solving the Tree Evaluation Problem when the instances are restricted to have the same group operation in all internal nodes.
In this paper, we investigate a non-interacting scalar field cosmology with an arbitrary potential using the $f$-deviser method, which relies on the differentiability properties of the potential. Using this alternative mathematical approach, we present a unified dynamical system analysis at both the background and perturbation levels for a scalar field with an arbitrary potential. For illustration, we consider a monomial and a double exponential potential. These two classes of potentials capture the asymptotic behaviour of several classes of scalar field potentials and therefore provide the skeleton for the typical behaviour of arbitrary potentials. Moreover, we analyse the linear cosmological perturbations in the matterless case by considering three scalar perturbations: the Bardeen potentials, the comoving curvature perturbation, and the so-called Sasaki-Mukhanov variable (the scalar field perturbation in the uniform curvature gauge). Finally, an exhaustive dynamical system analysis for each scalar perturbation is presented, including the evolution of the Bardeen potentials in the presence of matter.
Motivated by experimental results on $\bar B\to D^{(*)}K^-K^{0}$, we use a factorization approach to study these decays. Two mechanisms concerning kaon pair production arise: current-produced (from vacuum) and transition (from the $B$ meson). The kaon pair in the $\bar B {}^0\to D^{(*)+}K^-K^0$ decays can be produced only by the vector current (current-produced), whose matrix element can be extracted from $e^+e^-\to K\bar K$ processes via isospin relations. The decay rates obtained this way are in good agreement with experiment. The $B^-\to D^{(*)0}K^-K^0$ decays involve both current-produced and transition processes. By using QCD counting rules and the measured $B^-\to D^{(*)0} K^- K^0$ decay rates, the measured decay spectra can be understood.
The astronomy community has at its disposal a large back catalog of public spectroscopic galaxy redshift surveys that can be used for the measurement of luminosity functions. Utilizing this back catalog together with new photometric surveys at maximum efficiency requires modeling the color selection bias imposed on the selection of target galaxies by flux limits at multiple wavelengths. The likelihood derived herein can address, in principle, all possible color selection biases through the use of a generalization of the luminosity function, $\Phi(L)$, over the space of all spectra: the spectro-luminosity functional, $\Psi[L_\nu]$. It is, therefore, the first estimator capable of simultaneously analyzing multiple redshift surveys in a consistent way. We also propose a new way of parametrizing the evolution of the classic Schechter function parameters, $L_\star$ and $\phi_\star$, that improves both the physical realism and statistical performance of the model. The techniques derived in this work will be used in an upcoming paper to measure the luminosity function of galaxies at the rest-frame wavelength of $2.4\operatorname{\mu m}$ using the Wide-field Infrared Survey Explorer (WISE).
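For reference, the classic Schechter form whose parameters are being re-parametrized reads (standard definition, not the paper's generalization $\Psi[L_\nu]$):
\[ \Phi(L)\,\mathrm{d}L \;=\; \phi_\star \left(\frac{L}{L_\star}\right)^{\alpha} e^{-L/L_\star}\, \frac{\mathrm{d}L}{L_\star}, \]
where $\alpha$ sets the faint-end slope, $L_\star$ the knee luminosity, and $\phi_\star$ the overall normalization.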
We revisit the possibilities of accommodating the experimental indications of lepton flavor universality violation in $b$-hadron decays in minimal scenarios in which the Standard Model is extended by a single $\mathcal{O}(1\,\mathrm{TeV})$ leptoquark state. To do so, we confront the most recent low-energy flavor physics constraints, including $R_{K^{(\ast)}}^\mathrm{exp}$ and $R_{D^{(\ast)}}^\mathrm{exp}$, with the bounds on the leptoquark masses and their couplings to quarks and leptons as inferred from direct searches at the LHC and studies of the large-$p_T$ tails of the $pp\to \ell\ell$ differential cross section. We find that none of the scalar leptoquarks with $m_\mathrm{LQ} \simeq 1\div 2$ TeV can accommodate the $B$-anomalies alone. Only the vector leptoquark, known as $U_1$, can provide a viable solution which, in the minimal setup, yields an interesting prediction, i.e. a lower bound on the lepton flavor violating $b\to s\mu^\pm\tau^\mp$ decay modes, such as $\mathcal{B}(B\to K\mu\tau) \gtrsim 0.7\times 10^{-7}$.
Volterra integral equations of the first kind with piecewise smooth kernels are considered. Such equations appear in the theory of optimal control of evolving systems. Existence theorems are proved. A method for constructing approximations to parametric families of solutions of such equations is suggested. The parametric family of solutions is constructed in terms of logarithmic-power asymptotics.
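The generic object of study has the form (a standard way of writing a first-kind Volterra equation with a piecewise smooth kernel; the specific jump curves used in the paper may differ):
\[ \int_{0}^{t} K(t,s)\, x(s)\, \mathrm{d}s \;=\; f(t), \qquad 0 \le t \le T, \]
where $K(t,s) = K_{i}(t,s)$ for $\alpha_{i-1}(t) < s < \alpha_{i}(t)$, $i = 1,\dots,n$, with smooth pieces $K_{i}$ and curves $0 = \alpha_{0}(t) < \alpha_{1}(t) < \dots < \alpha_{n}(t) = t$ partitioning the integration interval. The discontinuities of the kernel along these curves are what make the solution set a parametric family rather than a single function.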
Identifying the three-dimensional (3D) crystal-plane and strain-field distributions of nanocrystals is essential for optical, catalytic, and electronic applications. Here, we developed a methodology for visualizing the 3D information of chiral gold nanoparticles with concave gap structures by Bragg coherent X-ray diffraction imaging. The distribution of the high-Miller-index planes constituting the concave chiral gaps was precisely determined. The highly strained region adjacent to the chiral gaps was resolved and correlated with the 432-symmetric morphology of the nanoparticles, and the corresponding plasmonic properties were numerically predicted from the atomically defined structures. This approach can serve as a general characterization platform for visualizing the 3D crystallographic and strain distributions of nanoparticles, especially for applications where structural complexity and local heterogeneity are major determinants, as exemplified in plasmonics.
This paper deals with differential pencils possessing a term depending on the unknown function with a fixed argument. We deduce the so-called main equation, together with its fine structure, for the spectral problem. Then, according to the boundary conditions and the position of the argument, we distinguish two cases: degenerate and non-degenerate. For these two cases, the uniqueness of the inverse spectral problem is studied, and a constructive procedure for reconstructing the potentials, along with necessary and sufficient conditions for its solvability, is obtained.
Transfer learning refers to machine learning techniques that focus on acquiring knowledge from related tasks to improve generalization in the tasks of interest. In MRI, transfer learning is important for developing strategies that address the variation in MR images. Additionally, transfer learning makes it possible to re-use machine learning models that were trained on tasks related to the task of interest. Our goal is to identify research directions, gaps of knowledge, applications, and widely used strategies among the transfer learning approaches applied in MR brain imaging. We performed a systematic literature search for articles that applied transfer learning to MR brain imaging. We screened 433 studies, and we categorized and extracted relevant information, including task type, application, and machine learning methods. Furthermore, we closely examined brain MRI-specific transfer learning approaches and other methods that tackled privacy, unseen target domains, and unlabeled data. We found 129 articles that applied transfer learning to brain MRI tasks. The most frequent applications were dementia-related classification tasks and brain tumor segmentation. The majority of articles utilized transfer learning on convolutional neural networks (CNNs). Only a few approaches were clearly brain MRI specific or considered privacy issues, unseen target domains, or unlabeled data. We propose a new categorization to group specific, widely used approaches. There is increasing interest in transfer learning within brain MRI. Public datasets have contributed to the popularity of Alzheimer's diagnostics/prognostics and tumor segmentation. Likewise, the availability of pretrained CNNs has promoted their utilization. Finally, the majority of the surveyed studies did not examine in detail the interpretation of their strategies after applying transfer learning and did not compare with other approaches.
Ca$_{3}$Ru$_{2}$O$_{7}$ is a polar metal that belongs to the class of multiferroic magnetic materials. Here, tiny amounts of Fe doping on the Ru sites bring about dramatic changes in the electronic and magnetic properties and generate a complex H-T phase diagram. To date, not much is known about the ground state of such a system in the absence of a magnetic field. By performing muon-spin spectroscopy (${\mu}$SR) measurements on 5% Fe-doped Ca$_{3}$Ru$_{2}$O$_{7}$ single crystals, we investigate its electronic properties at a local level. Transverse-field ${\mu}$SR results indicate a very sharp normal-to-antiferromagnetic transition at T$_{N}$ = 79.7(1) K, with a width of only 1 K. Zero-field ${\mu}$SR measurements in the magnetically ordered state allow us to determine the local fields B$_{i}$ at the muon implantation sites. By symmetry, muons stopping close to the RuO$_{2}$ planes detect only the weak nuclear dipolar fields, while those stopping next to apical oxygens sense magnetic fields as high as 150 mT. In remarkable agreement with the nominal Fe doping, a $\sim$6% minority of these muons senses slightly lower fields, reflecting a local magnetic frustration induced by the iron ions. Finally, B$_{i}$ shows no significant changes across the metal-to-insulator transition close to 40 K. We ascribe this surprising lack of sensitivity to the presence of crystal twinning.
We observed asteroid (596) Scheila and its ejecta cloud using the Swift UV-optical telescope. We obtained photometry of the nucleus and the ejecta, and for the first time measured the asteroid's reflection spectrum between 290 and 500 nm. Our measurements indicate significant reddening at UV wavelengths (13% per 1000 {\AA}) and a possible broad, unidentified absorption feature around 380 nm. Our measurements indicate that the outburst has not permanently increased the asteroid's brightness. We did not detect any of the gases typically associated either with the hypervolatile activity thought to be responsible for cometary outbursts (CO+, CO2+) or with any volatiles excavated with the dust (OH, NH, CN, C2, C3). We estimate that 6 x 10^8 kg of dust was released with a high ejection velocity of 57 m/s (assuming 1 {\mu}m-sized particles). The asteroid is red in color, whereas the ejecta have the same color as the Sun; we suggest that the dust does not contain any ice. Based on our observations, we conclude that (596) Scheila was most likely impacted by another main belt asteroid less than 100 meters in diameter.
We exploit the parquet formalism to derive exact flow equations for the two-particle-reducible four-point vertices, the self-energy, and typical response functions, circumventing the reliance on higher-point vertices. This includes a concise, algebraic derivation of the multiloop flow equations, which have previously been obtained by diagrammatic considerations. Integrating the multiloop flow for a given input of the totally irreducible vertex is equivalent to solving the parquet equations with that input. Hence, one can tune systems from solvable limits to complicated situations by variation of one-particle parameters, staying at the fully self-consistent solution of the parquet equations throughout the flow. Furthermore, we use the resulting differential form of the Schwinger-Dyson equation for the self-energy to demonstrate one-particle conservation of the parquet approximation and to construct a conserving two-particle vertex via functional differentiation of the parquet self-energy. Our analysis gives a unified picture of the various many-body relations and exact renormalization group equations.
The second quantization of the quaternionic fermionic field is undertaken using the real Hilbert space approach to quaternionic quantum mechanics ($\mathbbm H$QM). The solution resolves an open problem of quaternionic quantum theory and lays the basis for the development of a quaternionic interaction theory.
We obtain relaxation times for field theories with Lifshitz scaling and with holographic duals Einstein-Maxwell-Dilaton gravity theories. This is done by computing quasinormal modes of a bulk scalar field in the presence of Lifshitz black branes. We determine the relation between relaxation time and dynamical exponent z, for various values of boundary dimension d and operator scaling dimension. It is found that for d>z+1, at zero momenta, the modes are non-overdamped, whereas for d<=z+1 the system is always overdamped. For d=z+1 and zero momenta, we present analytical results.
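For orientation, the link between quasinormal modes and relaxation used in such holographic computations is the standard one: linear perturbations of the dual thermal state decay as $e^{-i\omega t}$, so the longest-lived (lowest) quasinormal frequency $\omega_{0}$ sets the relaxation time,
\[ \tau \;\sim\; \frac{1}{|\,\mathrm{Im}\,\omega_{0}\,|} . \]
Overdamped modes are those with purely imaginary $\omega$, which decay without oscillating; the $d$ versus $z+1$ criterion above separates the two regimes.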
The goal of this paper is to provide a survey and application-focused atlas of collective behavior coordination algorithms for multi-agent systems. We survey the general family of collective behavior algorithms for multi-agent systems and classify them according to their underlying mathematical structure. In doing so, we aim to capture fundamental mathematical properties of the algorithms (e.g., scalability with respect to the number of agents and bandwidth use) and to show how the same algorithm or family of algorithms can be used for multiple tasks and applications. The resulting atlas has three objectives: 1. to act as a tutorial guide for practitioners in the selection of coordination algorithms for a given application; 2. to highlight how mathematically similar algorithms can be used for a variety of tasks, ranging from low-level control to high-level coordination; 3. to explore the state of the art in the field of control of multi-agent systems and identify areas for future research.
We present a new numerical scheme to study systems of non-convex, irregular, and punctured particles in an efficient manner. We employ this method to analyze regular packings of odd-shaped bodies, both from a nanoparticle perspective and from a computational geometry perspective. Besides determining close-packed structures for many shapes, we also discover a new denser configuration for truncated tetrahedra. Moreover, we consider recently synthesized nanoparticles and colloids, focusing on excluded-volume interactions, to show the applicability of our method to the investigation of their crystal structures and phase behavior. Extensions of the presented scheme include the incorporation of soft particle-particle interactions, the study of quasicrystalline systems, and random packings.
The use of random samples to approximate properties of geometric configurations has been an influential idea for both combinatorial and algorithmic purposes. This chapter considers two related notions---$\epsilon$-approximations and $\epsilon$-nets---that capture the most important quantitative properties that one would expect from a random sample with respect to an underlying geometric configuration.
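For concreteness, the two standard definitions (for a range space $(X,\mathcal{R})$ with finite $X$) are as follows. A subset $A \subseteq X$ is an $\epsilon$-approximation if
\[ \left| \frac{|A \cap R|}{|A|} - \frac{|R|}{|X|} \right| \;\le\; \epsilon \qquad \text{for every } R \in \mathcal{R}, \]
while $N \subseteq X$ is an $\epsilon$-net if $N \cap R \neq \emptyset$ for every $R \in \mathcal{R}$ with $|R| \ge \epsilon |X|$. An $\epsilon$-approximation preserves the relative measure of every range; an $\epsilon$-net only guarantees hitting the heavy ranges, so every $\epsilon$-approximation is, in particular, an $\epsilon'$-net for any $\epsilon' > \epsilon$.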
We explore the critical fluctuations near the chiral critical endpoint (CEP) in a chiral effective model and discuss possible signals of the CEP, recently explored experimentally in nuclear collisions. Particular attention is paid to the dependence of such signals on the location of the phase boundary and the CEP relative to the chemical freeze-out conditions in nuclear collisions. We argue that in effective models, standard freeze-out fits to heavy-ion data should not be used directly. Instead, the relevant quantities should be examined on lines in the phase diagram that are defined self-consistently within the framework of the model. We discuss possible choices for such an approach.
When learning a mapping from an input space to an output space, the assumption that the sample distribution of the training data is the same as that of the test data is often violated. Unsupervised domain shift methods adapt the learned function in order to correct for this shift. Previous work has focused on utilizing unlabeled samples from the target distribution. We consider the complementary problem in which the unlabeled samples are given post-mapping, i.e., we are given the outputs of the mapping of unknown samples from the shifted domain. Two other variants are also studied: the two-sided version, in which unlabeled samples are given from both the input and the output spaces, and the Domain Transfer problem, which was recently formalized. In all cases, we derive generalization bounds that employ discrepancy terms.
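The discrepancy terms referred to are presumably of the standard form introduced by Mansour, Mohri, and Rostamizadeh (2009), stated here as background (the paper's exact variants may differ): for a hypothesis class $\mathcal{H}$ and loss $\ell$,
\[ \mathrm{disc}_{\mathcal{H}}(\mathcal{D}_1, \mathcal{D}_2) \;=\; \sup_{h, h' \in \mathcal{H}} \Big| \mathbb{E}_{x \sim \mathcal{D}_1}\big[\ell(h(x), h'(x))\big] - \mathbb{E}_{x \sim \mathcal{D}_2}\big[\ell(h(x), h'(x))\big] \Big| . \]
Unlike generic distribution distances, this quantity is tailored to the hypothesis class, which is what makes it estimable from unlabeled samples and hence usable in generalization bounds for domain shift.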
A full phonon intensity cancellation is reported in a longitudinal polarized inelastic neutron scattering experiment performed on the magnetocaloric compound MnFe$_{4}$Si$_{3}$, a ferromagnet with $T_{Curie}$ $\approx$ 305 K. The TA[100] phonon polarized along the $c$-axis, measured from the Brillouin zone center $\textbf{G}$=(0, 0, 2), is observed only in one ($\sigma_{z}^{++}$) of the two non-spin-flip polarization channels and is absent in the other one ($\sigma_{z}^{--}$) at low temperatures. This effect disappears at higher temperatures, in the vicinity of $T_{Curie}$, where the phonon is measured in both channels, albeit with markedly different intensities. The effect is understood as originating from nuclear-magnetic interference between the nuclear one-phonon and the magnetovibrational one-phonon scattering cross-sections. The total cancellation reported is accidental, i.e. it does not correspond to a systematic effect, as established by measurements in different Brillouin zones.
The study of quantum thermal machines, and more generally of open quantum systems, often relies on master equations. Two approaches are mainly followed. On the one hand, there is the widely used, but often criticized, local approach, where machine sub-systems locally couple to thermal baths. On the other hand, in the more established global approach, thermal baths couple to global degrees of freedom of the machine. There has been debate as to which of these two conceptually different approaches should be used in situations out of thermal equilibrium. Here we compare the local and global approaches against an exact solution for a particular class of thermal machines. We consider thermodynamically relevant observables, such as heat currents, as well as the quantum state of the machine. Our results show that the use of a local master equation is generally well justified. In particular, for weak inter-system coupling, the local approach agrees with the exact solution, whereas the global approach fails for non-equilibrium situations. For intermediate coupling, the local and the global approach both agree with the exact solution and for strong coupling, the global approach is preferable. These results are backed by detailed derivations of the regimes of validity for the respective approaches.
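Schematically, for two coupled sub-systems with local jump operators $A_{1,2}$, the local approach attaches a dissipator to each sub-system separately (a generic textbook Lindblad form, not the specific machines treated in the paper):
\[ \dot{\rho} \;=\; -i\,[H_{1}+H_{2}+H_{\mathrm{int}},\,\rho] \;+\; \sum_{j=1,2} \gamma_{j} \Big( A_{j}\rho A_{j}^{\dagger} - \tfrac{1}{2}\{ A_{j}^{\dagger}A_{j}, \rho \} \Big), \]
whereas the global approach derives its jump operators from the eigenbasis of the full Hamiltonian $H_{1}+H_{2}+H_{\mathrm{int}}$. The comparison above is precisely about which choice better reproduces the exact dynamics in each coupling regime.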
This technical report describes the training of nomic-embed-text-v1, the first fully reproducible, open-source, open-weights, open-data, 8192 context length English text embedding model that outperforms both OpenAI Ada-002 and OpenAI text-embedding-3-small on short and long-context tasks. We release the training code and model weights under an Apache 2 license. In contrast with other open-source models, we release a training data loader with 235 million curated text pairs that allows for the full replication of nomic-embed-text-v1. You can find code and data to replicate the model at https://github.com/nomic-ai/contrastors
The fifth generation of mobile broadband is more than just an evolution to provide more mobile bandwidth, massive machine-type communications, and ultra-reliable and low-latency communications. It relies on a complex, dynamic, and heterogeneous environment that implies addressing numerous testing and security challenges. In this paper, we present 5Greplay, an open-source 5G network traffic fuzzer that enables the evaluation of 5G components by replaying and modifying 5G network traffic, creating and injecting network scenarios into a target that can be a 5G core service (e.g., AMF, SMF) or a RAN network (e.g., gNodeB). The tool provides the ability to alter network packets online or offline, in both the control and data planes, in a very flexible manner. An experimental evaluation conducted against open-source 5G platforms showed that the target services accept traffic altered by the tool, and that it can reach up to 9.56 Gbps using only one processor core to replay 5G traffic.
We calculated the longitudinal-acoustic-phonon-limited electron mobility of 14 two-dimensional semiconductors with composition MX$_2$, where M (= Mo, W, Sn, Hf, Zr and Pt) is the transition metal and X is S, Se or Te. We treated the scattering matrix elements within the deformation potential approximation. We found that, out of the 14 compounds, MoTe$_2$, HfSe$_2$ and HfTe$_2$ are promising in view of their possibly high mobility and finite band gap. The phonon-limited mobility can be above 2500 cm$^2$V$^{-1}$s$^{-1}$ at room temperature.
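The acoustic-phonon-limited mobility in 2D within the deformation potential approximation is commonly evaluated with an expression of the Takagi type (the standard form used in first-principles screenings of this kind; the paper's exact prefactors may differ):
\[ \mu_{2\mathrm{D}} \;=\; \frac{e \hbar^{3} C_{2\mathrm{D}}}{k_{B} T\, m^{*} m_{d}\, E_{1}^{2}}, \]
where $C_{2\mathrm{D}}$ is the 2D elastic modulus, $m^{*}$ the effective mass along the transport direction, $m_{d}$ the density-of-states mass, and $E_{1}$ the deformation potential constant. The $E_{1}^{-2}$ and $(m^{*} m_{d})^{-1}$ dependence explains why compounds with shallow band edges and weak electron-phonon coupling top such screenings.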
We survey discrete and continuous model-theoretic notions which have important connections to general topology. We present a self-contained exposition of several interactions between continuous logic and $C_p$-theory which have applications to a classification problem involving Banach spaces not containing $c_0$ or $l^p$, following recent results obtained by P. Casazza and J. Iovino for compact continuous logics. Using $C_p$-theoretic results involving Grothendieck spaces and double limit conditions, we extend their results to a broader family of logics, namely those with a first-countable weakly Grothendieck space of types. We pose $C_p$-theoretic problems which have model-theoretic implications.
We searched for effects of long-term atmospheric change in photometry employing archival data from the twin Gemini North and South Multi-Object Spectrographs (GMOS-N and GMOS-S). The whole GMOS imaging database, from 2003 through 2021, was compared against the all-sky Gaia object catalog, yielding ~10^6 Sloan r'-filter samples. These were combined with reported sky and meteorological conditions and compared with a simple model of the atmosphere plus cloud, together with simulated throughputs. One episode of exceptionally high extinction in 2009 is seen, as is a trend (similar at both sites) of about 2 mmag of worsening attenuation per decade. This is consistent with solar-radiance transmissivity records going back over six decades, aerosol density measurements, and a rise in global air temperature of more than 0.2 deg C per decade, which has implications for the calibration of historic datasets and future surveys.
The Discrete Light-Cone Quantization (DLCQ) of a supersymmetric SU(N) gauge theory in 1+1 dimensions is discussed, with particular emphasis given to the inclusion of all dynamical zero modes. Interestingly, the notorious `zero-mode problem' is now tractable because of special supersymmetric cancellations. In particular, we show that anomalous zero-mode contributions to the currents are absent, in contrast to what is observed in the non-supersymmetric case. We find that the supersymmetric partner of the gauge zero mode is the diagonal component of the fermion zero mode. An analysis of the vacuum structure is provided, and it is shown that the inclusion of zero modes is crucial for probing the phase properties of the vacua. In particular, we find that the ground state energy is zero and N-fold degenerate, and thus consistent with unbroken supersymmetry. We also show that the inclusion of zero modes for the light-cone supercharges leaves the supersymmetry algebra unchanged. Finally, we remark that the dependence of the light-cone Fock vacuum on the gauge zero mode is unchanged in the presence of matter fields.
For a graph G, let f_{ij} be the number of spanning rooted forests in which vertex j belongs to a tree rooted at i. In this paper, we show that for a path, the f_{ij}'s can be expressed as products of Fibonacci numbers; for a cycle, they are products of Fibonacci and Lucas numbers. The {\em doubly stochastic graph matrix} is the n-by-n matrix F=(f_{ij})/f, where f is the total number of spanning rooted forests of G and n is the number of vertices in G. F provides a proximity measure for graph vertices. By the matrix forest theorem, F^{-1}=I+L, where L is the Laplacian matrix of G. We show that for paths and the so-called T-caterpillars, some diagonal entries of F (which provide a measure of the self-connectivity of vertices) converge to \phi^{-1} or to 1-\phi^{-1}, where \phi is the golden ratio, as the number of vertices goes to infinity. Thereby, in the asymptotic, the corresponding vertices can be metaphorically considered as "golden introverts" and "golden extroverts," respectively. This metaphor is reinforced by a Markov chain interpretation of the doubly stochastic graph matrix, according to which F equals the overall transition matrix of a random walk with a random number of steps on G.
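A quick numerical check of the statements above via the matrix forest theorem (illustrative code assuming NumPy; the path length is an arbitrary choice):

```python
import numpy as np

def path_laplacian(n):
    """Laplacian matrix of the path graph P_n."""
    L = np.zeros((n, n))
    for i in range(n - 1):
        L[i, i] += 1; L[i + 1, i + 1] += 1
        L[i, i + 1] -= 1; L[i + 1, i] -= 1
    return L

n = 200
F = np.linalg.inv(np.eye(n) + path_laplacian(n))  # matrix forest theorem: F = (I+L)^{-1}
phi = (1 + np.sqrt(5)) / 2
print(F[0, 0], 1 / phi)                 # end-vertex self-connectivity vs. 1/phi ~ 0.618
print(np.allclose(F.sum(axis=1), 1.0))  # F is doubly stochastic: True
```

The row sums equal one because L annihilates the all-ones vector, so (I+L)**(-1) maps it to itself; symmetry of L then gives the column sums as well.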
Music generation with the aid of computers has recently grabbed the attention of many scientists in the area of artificial intelligence, and deep learning techniques have advanced sequence generation methods for this purpose. Yet, a challenging problem is how to evaluate music generated by a machine. In this paper, a methodology is developed based upon an interactive evolutionary optimization method, in which the scoring of the generated melodies is primarily performed by human expertise during training. This music quality scoring is modeled using a Bi-LSTM recurrent neural network. Melodies generated by a genetic algorithm are then evaluated using this Bi-LSTM network. The results of this mechanism clearly show that the proposed method is able to create pleasurable melodies with the desired styles and pieces. The method is also quite fast compared to state-of-the-art data-driven evolutionary systems.
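Schematically, the interplay described above looks like the following toy sketch (the melody encoding, genetic operators, and the stand-in `score` function are placeholder assumptions; in the paper, the trained Bi-LSTM critic plays the role of `score`):

```python
import random

PITCHES = list(range(60, 72))  # one MIDI octave as a toy melody alphabet

def score(melody):
    """Stand-in for the learned quality model (here: prefer small melodic steps)."""
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

def mutate(m, rate=0.1):
    return [random.choice(PITCHES) if random.random() < rate else p for p in m]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.choice(PITCHES) for _ in range(16)] for _ in range(50)]
for _ in range(100):
    pop.sort(key=score, reverse=True)
    parents = pop[:10]  # truncation selection
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(40)]

print(max(pop, key=score))  # best melody under the (learned) critic
```

Replacing the hand-written `score` with a network trained on human ratings is what turns this plain genetic algorithm into the interactive, human-in-the-loop system the abstract describes.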
In this paper, we find the energy-momentum distribution of stationary axisymmetric spacetimes in the context of teleparallel theory by using the M{\"o}ller prescription. The metric under consideration is the generalization of the Weyl metrics called the Lewis-Papapetrou metric. The class of stationary axisymmetric solutions of the Einstein field equations was studied by Galtsov to include the gravitational effect of an {\it external} source. Such spacetimes are also astrophysically important, as they describe the exterior of a body in equilibrium. The energy density turns out to be non-vanishing and well-defined, and the momentum is constant except along the $\theta$-direction. It is interesting to mention that the results reduce to those already available for the Weyl metrics when we take $\omega=0$.
It is shown that the spectrum of the asymmetric rotor can be realized quantum mechanically in terms of a system of interacting bosons. This is achieved in the SU(3) limit of the interacting boson model by considering higher-order interactions between the bosons. The spectrum corresponds to that of a rigid asymmetric rotor in the limit of infinite boson number.