An integral representation of an operator mean via the power means is obtained. As an application, we give an explicit condition on operator means under which the Ando-Hiai inequality holds.
We study semi-inclusive charmless decays $B \to \pi X$, where $X$ does not contain a charm (anti)quark. The mode $\bar B^0 \to \pi^- X$ turns out to be particularly useful for the determination of the CKM matrix element $|V_{ub}|$. We present the branching ratio (BR) of $\bar B^0 \to \pi^- X$ as a function of $|V_{ub}|$, with an estimate of the possible uncertainty. The BR is expected to be of order $10^{-4}$.
Aims. A simplified model of jet power from active galactic nuclei is proposed in which the relationship between jet power and disk luminosity is discussed by combining disk accretion with two mechanisms of extracting energy magnetically from a black hole accretion disk, i.e., the Blandford-Payne (BP) and the Blandford-Znajek (BZ) processes. Methods. By including the BP process in the conservation laws of mass, angular momentum and energy, we derive expressions for the BP power and disk luminosity, and the jet power is regarded as the sum of the BZ and BP powers. Results. We find that the disk radiation flux and luminosity decrease because a fraction of the accretion energy is channelled into the outflow/jet in the BP process. It is found that the dominant cooling mode of the accretion disk is determined mainly by how the poloidal magnetic field decreases with the cylindrical radius of the jet. By using the parameter space we found, which consists of the black hole spin and the self-similar index of the configuration of the poloidal magnetic field frozen in the disk, we were able to compare the relative importance of the following quantities related to the jet production: (1) the BP power versus the disk luminosity, (2) the BP power versus the BZ power, and (3) the jet power versus the disk luminosity. In addition, we fit the jet power and broad-line region luminosity of 11 flat-spectrum radio quasars (FSRQs) and 17 steep-spectrum radio quasars (SSRQs) based on our model.
The integrals defining the two-loop beta-function for the general renormalizable N=1 supersymmetric Yang--Mills theory, regularized by higher covariant derivatives, are investigated. It is shown that they are given by integrals of double total derivatives. These integrals are not equal to zero due to the appearance of delta-functions, which allow the two-loop integrals to be reduced to one-loop integrals that can be easily calculated. The result agrees with the exact NSVZ beta-function and with calculations made by different methods.
We present molecular line observations of 13CO and C18O J=3-2, CN N = 3 - 2, and CS J=7-6 lines in the protoplanetary disk around TW Hya at a high spatial resolution of ~9 au (angular resolution of 0.15''), using the Atacama Large Millimeter/Submillimeter Array. A possible gas gap is found in the deprojected radial intensity profile of the integrated C18O line around a disk radius of ~58 au, slightly beyond the location of the au-scale dust clump at ~52 au, which resembles predictions from hydrodynamic simulations of planet-disk interaction. In addition, we construct models for the physical and chemical structure of the TW Hya disk, taking account of the dust surface density profile obtained from high spatial resolution dust continuum observations. As a result, the observed flat radial profile of the CN line intensities is reproduced due to a high dust-to-gas surface density ratio inside ~20 au. Meanwhile, the CO isotopologue line intensities trace high temperature gas and increase rapidly inside a disk radius of ~30 au. A model with either CO gas depletion or depletion of gas-phase oxygen elemental abundance is required to reproduce the relatively weak CO isotopologue line intensities observed in the outer disk, consistent with previous atomic and molecular line observations towards the TW Hya disk. Further observations of line emission of carbon-bearing species, such as atomic carbon and HCN, with high spatial resolution would help to better constrain the distribution of elemental carbon abundance in the disk gas.
We prove that a countably compact space is monotonically retractable if and only if it has a full retractional skeleton. In particular, a compact space is monotonically retractable if and only if it is Corson. This gives an answer to a question of R. Rojas-Hern{\'a}ndez and V. V. Tkachuk. Further, we apply this result to characterize retractional skeletons using a topology on the space of continuous functions, thus answering a question of the first author and a related question of W. Kubi\'s.
In this paper, we consider the infinite-dimensional integration problem on weighted reproducing kernel Hilbert spaces with norms induced by an underlying function space decomposition of ANOVA-type. The weights model the relative importance of different groups of variables. We present new randomized multilevel algorithms to tackle this integration problem and prove upper bounds for their randomized error. Furthermore, we provide in this setting the first non-trivial lower error bounds for general randomized algorithms, which, in particular, may be adaptive or non-linear. These lower bounds show that our multilevel algorithms are optimal. Our analysis refines and extends the analysis provided in [F. J. Hickernell, T. M\"uller-Gronbach, B. Niu, K. Ritter, J. Complexity 26 (2010), 229-254], and our error bounds improve substantially on the error bounds presented there. As an illustrative example, we discuss the unanchored Sobolev space and employ randomized quasi-Monte Carlo multilevel algorithms based on scrambled polynomial lattice rules.
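To make the multilevel idea concrete, here is a minimal sketch of a generic randomized multilevel estimator in Python; the telescoping sum over level corrections is the standard construction, but the cost-based sample allocation and the `sample_level` interface are illustrative assumptions, and the paper's algorithms use scrambled polynomial lattice rules rather than plain i.i.d. sampling.

```python
# A minimal sketch of a randomized multilevel estimator (generic Monte
# Carlo version, not the paper's quasi-Monte Carlo construction).
import numpy as np

def multilevel_estimate(sample_level, costs, budget, L):
    """Estimate E[f] via the telescoping sum of corrections E[f_l - f_{l-1}].

    sample_level(l, n): n independent draws of f_l - f_{l-1}, with f_{-1} := 0.
    costs[l]: cost per draw at level l; budget: total number of draws.
    """
    # Crude allocation: more samples on cheap levels (a stand-in for the
    # variance/cost balancing that yields the optimal rates in the paper).
    weights = 1.0 / np.sqrt(np.asarray(costs[: L + 1], dtype=float))
    n_l = np.maximum(1, (budget * weights / weights.sum()).astype(int))
    return sum(sample_level(l, n_l[l]).mean() for l in range(L + 1))
```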
We consider the simplest model for the $T$-linear growth of resistivity in metals. It is shown that the so-called "Planckian" limit for the temperature-dependent relaxation rate of electrons follows from a certain procedure for representing experimental data on resistivity and, in this sense, is a kind of delusion.
A linear program with linear complementarity constraints (LPCC) requires the minimization of a linear objective over a set of linear constraints together with additional linear complementarity constraints. This class has emerged as a modeling paradigm for a broad collection of problems, including bilevel programs, Stackelberg games, inverse quadratic programs, and problems involving equilibrium constraints. The presence of the complementarity constraints results in a nonconvex optimization problem. We develop a branch-and-cut algorithm to find a global optimum for this class of optimization problems, where we branch directly on complementarities. We develop branching rules and feasibility recovery procedures and demonstrate their computational effectiveness in a comparison with CPLEX. The implementation builds on CPLEX through the use of callback routines. The computational results show that our approach is a strong alternative to constructing an integer programming formulation using big-$M$ terms to represent bounds for variables, with testing conducted on general LPCCs as well as on instances generated from bilevel programs with convex quadratic lower level problems.
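As an illustration of branching directly on complementarities, the core idea named above, the following Python sketch solves a small LPCC by recursive branch-and-bound over LP relaxations; it omits the paper's cuts, branching rules, and feasibility recovery, and the SciPy solver is a stand-in for CPLEX.

```python
# A minimal sketch of branch-and-bound that branches directly on violated
# complementarities for min c'x s.t. A x <= b, x >= 0, x_i * x_j = 0 for
# (i, j) in comps. Illustrative only; the paper's method adds cuts and
# feasibility recovery on top of this idea via CPLEX callbacks.
from scipy.optimize import linprog

def solve_lpcc(c, A, b, comps, fixed=frozenset(), best=float("inf")):
    # LP relaxation of this node, with the fixed variables forced to zero.
    bounds = [(0, 0) if i in fixed else (0, None) for i in range(len(c))]
    lp = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
    if not lp.success or lp.fun >= best:      # infeasible node, or pruned
        return best
    x = lp.x
    viol = next(((i, j) for i, j in comps
                 if x[i] > 1e-8 and x[j] > 1e-8), None)
    if viol is None:                          # all complementarities satisfied
        return min(best, lp.fun)
    i, j = viol                               # branch: force x_i = 0 or x_j = 0
    best = solve_lpcc(c, A, b, comps, fixed | {i}, best)
    return solve_lpcc(c, A, b, comps, fixed | {j}, best)
```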
We theoretically study the orbital effects of a parallel magnetic field applied to a disordered superconducting film. We find that the field reduces the phase stiffness and leads to strong quantum phase fluctuations driving the system into insulating behavior. This microscopic model shows that the critical field decreases with the sheet resistance, in agreement with recent experimental results. The predictions of this model can be used to discriminate between spin and orbital effects. We find that the experiments conducted by A. Johansson \textit{et al.} are more consistent with the orbital mechanism.
In this paper, we consider the product-limit quantile estimator of an unknown quantile function under a censored dependent model. This is a parallel problem to the estimation of the unknown distribution function by the product-limit estimator under the same model. Simultaneous strong Gaussian approximations of the product-limit process and the product-limit quantile process are constructed with rate $O((\log n)^{-\lambda})$ for some $\lambda>0$. The strong Gaussian approximation of the product-limit process is then applied to derive laws of the iterated logarithm for the product-limit process.
Background and Purpose: Various 'positive-contrast' neurographic methods have been investigated for imaging the extracranial course of the facial nerve. However, nerve visibility can be inconsistent with these sequences and may depend on the composition of the parotid gland, limiting consistent identification. To address this, we describe and evaluate a 'negative-contrast' method for imaging of the extracranial facial nerve using three-dimensional variable flip angle turbo spin echo (VFA-TSE) imaging. We investigate strategies for further optimization, including parotid-specific VFA-TSE optimization and the use of gadolinium-based contrast agent (GBCA). Materials and Methods: 6 healthy volunteers and 10 patients with parotid tumors underwent VFA-TSE and double echo steady state (DESS) imaging of the extracranial facial nerve at 3T. The main trunk, divisions and branches of the extracranial facial nerve were manually segmented by three radiologists, enabling CNR and Hausdorff distance computation and confidence scoring. CNR, Hausdorff distance and confidence scores were compared between sequences and between pre- and post-contrast imaging to evaluate the effect of GBCA. Results: CNR, Hausdorff distances and confidence scores were superior for VFA-TSE compared to DESS imaging. GBCA administration produced a further increase in CNR of nerve against parotid and improved differentiation of nerve from tumor. Conclusion: Imaging of the extracranial facial nerve with VFA-TSE depicts the nerve as a low signal structure ('black nerve') against the high signal parotid parenchyma ('white parotid') and outperforms positive-contrast DESS imaging in terms of CNR, segmentation consistency and confidence. GBCA further increases negative contrast and improves differentiation of nerve from tumor.
Simplified models provide a useful way to study the impacts of a small number of new particles on experimental observables and the interplay of those observables, without the need to construct an underlying theory. In this study, we perform global fits of simplified dark matter models with GAMBIT using an up-to-date set of likelihoods for indirect detection, direct detection and collider searches. We investigate models in which a scalar or fermionic dark matter candidate couples to quarks via an s-channel vector mediator. Large parts of parameter space survive for each model. In the case of Dirac or Majorana fermion dark matter, excesses in LHC monojet searches and relic density limits tend to prefer the resonance region, where the dark matter has approximately half the mass of the mediator. A combination of vector and axial-vector couplings to the Dirac candidate also leads to competing constraints from direct detection and unitarity violation.
We introduce and study the classical and quantum mechanics of certain non-hyperbolic maps on the unit square. These maps are modifications of the usual baker's map, and their behaviour ranges from chaotic motion on the whole measure to chaos on a set of measure zero. Thus we have called these maps ``lazy baker maps.'' The aim of introducing these maps is to provide the simplest models of systems with a mixed phase space, in which there are both regular and chaotic motions. We find that, despite the obviously contrived nature of these maps, they provide a good model for the study of the quantum mechanics of such systems. We notice the effect of a classically chaotic fractal set of measure zero on the corresponding quantum maps, which leads to a transition in the spectral statistics. Some periodic orbits belonging to this fractal set are seen to scar several eigenfunctions.
I consider some promising future directions for quantum information theory that could influence the development of 21st century physics. Advances in the theory of the distinguishability of superoperators may lead to new strategies for improving the precision of quantum-limited measurements. A better grasp of the properties of multi-partite quantum entanglement may lead to deeper understanding of strongly-coupled dynamics in quantum many-body systems, quantum field theory, and quantum gravity.
Discussion of "Harold Jeffreys's Theory of Probability revisited," by Christian Robert, Nicolas Chopin, and Judith Rousseau, for Statistical Science [arXiv:0804.3173]
We analytically evaluate the Renyi entropies for the two dimensional free boson CFT. The CFT is considered to be compactified on a circle and at finite temperature. The Renyi entropies S_n are evaluated for a single interval using the two-point function of bosonic twist fields on a torus. For the case of the compact boson, the sum over the classical saddle points results in the Riemann-Siegel theta function associated with the A_{n-1} lattice. We then study the Renyi entropies in the decompactification regime. We show that in the limit when the size of the interval becomes the size of the spatial circle, the entanglement entropy reduces to the thermal entropy of free bosons on a circle. We then set up a systematic high temperature expansion of the Renyi entropies and evaluate the finite size corrections for free bosons. Finally, we compare these finite size corrections, both for the free boson CFT and the free fermion CFT, with the one-loop corrections obtained from bulk three dimensional handlebody spacetimes which have higher genus Riemann surfaces as their boundary. One-loop corrections in these geometries are entirely determined by quantum numbers of the excitations present in the bulk. This implies that the leading finite size corrections, obtained from one-loop determinants of the Chern-Simons gauge field and the Dirac field in the dual geometry, should reproduce those of the free boson and the free fermion CFT respectively. By evaluating these corrections both in the bulk and in the CFT explicitly, we show that this expectation is indeed true.
Let $\mathrm{X(n)}$ be Ravenel's Thom spectrum over $\Omega \mathrm{SU}(n)$. We say a spectrum $E$ has chromatic defect $n$ if $n$ is the smallest positive integer such that $E\otimes \mathrm{X(n)}$ is complex orientable. We compute the chromatic defect of various examples of interest: finite spectra, the Real Johnson--Wilson theories $\mathrm{ER(n)}$, the fixed points $\mathrm{EO}_n(G)$ of Morava $E$-theories with respect to a finite subgroup $G$ of the Morava stabilizer group, and the connective image-of-$J$ spectrum $\mathrm{j}$. Having finite chromatic defect is closely related to the existence of analogues of the classical Wood splitting $\mathrm{ko}\otimes C(\eta)\simeq \mathrm{ku}$. We show that such splittings exist in quite wide generality for fp spectra $E$. When $E$ participates in such a splitting, $E$ admits a $\mathbb Z$-indexed Adams--Novikov tower, which may be used to deduce differentials in the Adams--Novikov spectral sequence of $E$.
In solving the problem of finding a temperature distribution which, at zero temperature, corresponds to superfluidity, i.e., to nonzero energy, the author tried to quantize free energy. This was done on the basis of supersecondary quantization, whose special case is the usual second quantization for bosons, and with the help of which new representations of the Schr\"odinger equation were obtained. The supersecondary quantization allowed the author to construct a variational method whose zeroth approximation is given by the Hartree-Fock and Bogolyubov-BCSch variational principles. This method works especially well in the case of a small number of particles. The new quantization and the variational method are of general character and can be used in quantum field theory.
This note presents a procedure for constructing a higher dimensional sphere map from a lower dimensional one and gives an explicit formula for a smooth sphere map with a given degree. As an application, a new proof of a generalized Poincare-Hopf theorem, called the Morse index formula, is also presented.
We study generalized fixed-point equations over idempotent semirings and provide an efficient algorithm for detecting whether a sequence of Kleene iterations stabilizes after a finite number of steps. Previously known approaches considered only bounded semirings where there are no infinite descending chains. The main novelty of our work is that we deal with semirings without the boundedness restriction. Our study is motivated by several applications from interprocedural dataflow analysis. We demonstrate how the reachability problem for weighted pushdown automata can be reduced to solving equations in the framework mentioned above, and we describe a few applications to demonstrate its usability.
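A minimal sketch of the underlying Kleene iteration, here over the tropical (min, +) semiring; detecting stabilization for unbounded semirings, the paper's actual contribution, requires more than the plain fixed-point test shown here.

```python
# A minimal sketch of Kleene iteration x_{k+1} = f(x_k) from the semiring
# zero, with stabilization detected by a plain fixed-point test.
def kleene(f, bottom, max_steps=10_000):
    x = bottom
    for _ in range(max_steps):
        y = f(x)
        if y == x:                       # reached the least fixed point
            return x
        x = y
    raise RuntimeError("no stabilization within max_steps")

# Example over the tropical (min, +) semiring: shortest distances from "s".
INF = float("inf")
nodes = {"s", "a", "t"}
edges = {("s", "a"): 1.0, ("a", "t"): 2.0, ("s", "t"): 5.0}

def step(d):
    return {v: min([0.0 if v == "s" else INF] +
                   [d[u] + w for (u, v2), w in edges.items() if v2 == v])
            for v in nodes}

print(kleene(step, {v: INF for v in nodes}))  # {'s': 0.0, 'a': 1.0, 't': 3.0}
```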
Image generation using generative AI is rapidly becoming a major new source of visual media, with billions of AI generated images created using diffusion models such as Stable Diffusion and Midjourney over the last few years. In this paper we collect and analyse over 3 million prompts and the images they generate. Using natural language processing, topic analysis and visualisation methods we aim to understand collectively how people are using text prompts, the impact of these systems on artists, and more broadly on the visual cultures they promote. Our study shows that prompting focuses largely on surface aesthetics, reinforcing cultural norms, popular conventional representations and imagery. We also find that many users focus on popular topics (such as making colouring books, fantasy art, or Christmas cards), suggesting that the dominant use for the systems analysed is recreational rather than artistic.
Vector quantized diffusion (VQ-Diffusion) is a powerful generative model for text-to-image synthesis, but it can still generate low-quality samples or images weakly correlated with the text input. We find these issues are mainly due to a flawed sampling strategy. In this paper, we propose two important techniques to further improve the sample quality of VQ-Diffusion. 1) We explore classifier-free guidance sampling for discrete denoising diffusion models and propose a more general and effective implementation of classifier-free guidance. 2) We present a high-quality inference strategy to alleviate the joint distribution issue in VQ-Diffusion. Finally, we conduct experiments on various datasets to validate their effectiveness and show that the improved VQ-Diffusion outperforms the vanilla version by large margins. We achieve an 8.44 FID score on MSCOCO, surpassing VQ-Diffusion by 5.42 FID points. When trained on ImageNet, we dramatically improve the FID score from 11.89 to 4.83, demonstrating the superiority of our proposed techniques.
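For readers unfamiliar with classifier-free guidance, the sketch below shows the standard logit-mixing step for a discrete denoising model; the `model(x_t, t, cond)` interface, the `null_cond` placeholder, and the guidance scale are illustrative assumptions, and the paper's "more general and effective" implementation differs in detail.

```python
# A minimal sketch of classifier-free guidance for a discrete denoising
# diffusion model: mix conditional and unconditional logits at each step.
def guided_logits(model, x_t, t, cond, null_cond, scale=2.0):
    logits_cond = model(x_t, t, cond)         # conditioned on the text prompt
    logits_uncond = model(x_t, t, null_cond)  # learned "empty prompt" branch
    # Equivalent to (1 + scale) * cond - scale * uncond.
    return logits_uncond + (1.0 + scale) * (logits_cond - logits_uncond)
```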
Roughly speaking, a conic bundle is a surface, fibered over a curve, such that the fibers are conics (not necessarily smooth). We define stability for conic bundles and construct a moduli space. We prove that (after fixing some invariants) these moduli spaces are irreducible (under some conditions). Conic bundles can be thought of as generalizations of orthogonal bundles on curves. We show that in this particular case our definition of stability agrees with the definition of stability for orthogonal bundles. Finally, in an appendix by I. Mundet i Riera, a Hitchin-Kobayashi correspondence is stated for conic bundles.
We present an investigation of coherent backscattering of light that is multiply scattered by a photonic crystal, using a broad-band technique. The results significantly extend previous backscattering measurements on photonic crystals by simultaneously accessing a large frequency and angular range. Backscatter cones around the stop gap are successfully modelled with diffusion theory for a random medium. Strong variations of the apparent mean free path and the cone enhancement are observed around the stop band. The variations of the mean free path are described by a semi-empirical three-gap model including band structure effects on the internal reflection and penetration depth. A good match between theory and experiment is obtained without the need for additional contributions of group velocity or density of states. We argue that the cone enhancement reveals additional information on directional transport properties that are otherwise averaged out in diffuse multiple scattering.
Given a graph G, we construct a convex polytope whose face poset is based on marked subgraphs of G. Dubbed the graph multiplihedron, we provide a realization using integer coordinates. Not only does this yield a natural generalization of the multiplihedron, but features of this polytope appear in works related to quilted disks, bordered Riemann surfaces, and operadic structures. Certain examples of graph multiplihedra are related to Minkowski sums of simplices and cubes and others to the permutohedron.
Babies born with low and very low birthweights -- i.e., birthweights below 2,500 and 1,500 grams, respectively -- have an increased risk of complications compared to other babies, and the proportion of babies with a low birthweight is a common metric used when evaluating public health in a population. While many factors increase the risk of a baby having a low birthweight, many can be linked to the mother's socioeconomic status, which in turn contributes to large racial disparities in the incidence of low weight births. Here, we employ Bayesian statistical models to analyze the proportion of babies with low birthweight in Pennsylvania counties by race/ethnicity. Due to the small number of births -- and low weight births -- in many Pennsylvania counties when stratified by race/ethnicity, our methods must walk a fine line. On one hand, leveraging spatial structure can help improve the precision of our estimates. On the other hand, we must be cautious to avoid letting the model overwhelm the information in the data and produce spurious conclusions. As such, we first develop a framework by which we can measure (and control) the informativeness of our spatial model. After demonstrating the properties of our framework via simulation, we analyze the low birthweight data from Pennsylvania and examine the extent to which the commonly used conditional autoregressive model can lead to oversmoothing. We then reanalyze the data using our proposed framework and highlight its ability to detect (or not detect) evidence of racial disparities in the incidence of low birthweight.
We show that the recently introduced logarithmic metrics used to predict disease arrival times on complex networks are approximations of more general network-based measures derived from random-walk theory. Using the daily air-traffic transportation data, we perform numerical experiments to compare the infection arrival time with this alternative metric that is obtained by accounting for multiple walks instead of only the most probable path. The comparison with direct simulations of arrival times reveals a higher correlation compared to the shortest path approach used previously. In addition, our method allows us to connect fundamental observables in epidemic spreading with the cumulant generating function of the hitting time for a Markov chain. Our results provide a general and computationally efficient approach to the problem using only algebraic methods.
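The random-walk quantity underlying this metric can be illustrated with a short sketch: the vector of mean hitting times to a target node solves a linear system obtained by deleting the target's row and column from the transition matrix (the paper works with the full cumulant generating function of the hitting time, not just its mean, so this is only the simplest instance).

```python
# A minimal sketch of the random-walk quantity behind the metric: mean
# hitting times to a target node j solve (I - Q) m = 1, where Q is the
# transition matrix P with the target's row and column removed.
import numpy as np

def mean_hitting_times(P, j):
    """P: row-stochastic transition matrix; j: target node index.
    Assumes j is reachable from every node, so I - Q is invertible."""
    n = P.shape[0]
    keep = [i for i in range(n) if i != j]
    Q = P[np.ix_(keep, keep)]                 # walk restricted to non-target nodes
    m = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    return dict(zip(keep, m))                 # expected number of steps to hit j
```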
MAXI J1535-571 is a Galactic black hole candidate X-ray binary that was discovered going into outburst in 2017 September. In this paper, we present comprehensive radio monitoring of this system using the Australia Telescope Compact Array (ATCA), as well as the MeerKAT radio observatory, showing the evolution of the radio jet during its outburst. Our radio observations show the early rise and subsequent quenching of the compact jet as the outburst brightened and then evolved towards the soft state. We constrain the compact jet quenching factor to be more than 3.5 orders of magnitude. We also detected and tracked (for 303 days) a discrete, relativistically-moving jet knot that was launched from the system. From the motion of the apparently superluminal knot, we constrain the jet inclination (at the time of ejection) and speed to $\leq 45^{\circ}$ and $\geq0.69$c, respectively. Extrapolating its motion back in time, our results suggest that the jet knot was ejected close in time to the transition from the hard intermediate state to soft intermediate state. The launching event also occurred contemporaneously with a short increase in X-ray count rate, a rapid drop in the strength of the X-ray variability, and a change in the type-C quasi-periodic oscillation (QPO) frequency that occurs $>$2.5 days before the first appearance of a possible type-B QPO.
This document is one of the deliverable reports created for the ESCAPE project. ESCAPE stands for Energy-efficient Scalable Algorithms for Weather Prediction at Exascale. The project develops world-class, extreme-scale computing capabilities for European operational numerical weather prediction and future climate models. This is done by identifying Weather & Climate dwarfs which are key patterns in terms of computation and communication (in the spirit of the Berkeley dwarfs). These dwarfs are then optimised for different hardware architectures (single and multi-node) and alternative algorithms are explored. Performance portability is addressed through the use of domain specific languages. Atlas has been presented in deliverable D1.3. With this deliverable D2.3, a first version of the Atlas software libraries is publicly released with a permissive open-source license. The software is freely available for download, and contains a user guide and installation instructions. The Atlas libraries have been carefully designed with the user's perspective in mind. Even though Atlas is mainly coded in C++, an equivalent Fortran interface is presented without additional runtime overhead. The Fortran interfaces are provided to accommodate existing NWP and climate models that typically consist of Fortran subroutines. The mixed Fortran/C++ design enhances interoperability between NWP and climate models and novel data management techniques. Atlas provides interoperability with accelerator hardware, and can serve as foundation to support higher level abstractions as used in domain specific languages.
Dynamics of complex social systems has often been described in the framework of temporal networks, where links are considered to exist only at the moment of interaction between nodes. Such interaction patterns are not only driven by internal interaction mechanisms, but also affected by environmental changes. To investigate the impact of the environmental changes on the dynamics of temporal networks, we analyze several face-to-face interaction datasets using the multiscale entropy (MSE) method to find that the observed temporal correlations can be categorized according to the environmental similarity of datasets such as classes and break times in schools. By devising and studying a temporal network model considering a periodically changing environment as well as a preferential activation mechanism, we numerically show that our model could successfully reproduce various empirical results by the MSE method in terms of multiscale temporal correlations. Our results demonstrate that the environmental changes can play an important role in shaping the dynamics of temporal networks when the interactions between nodes are influenced by the environment of the systems.
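For reference, a compact sketch of the MSE procedure used in the analysis: coarse-grain the series at each scale and compute sample entropy; the per-scale tolerance and the pair counting here are simplified relative to standard implementations.

```python
# A minimal sketch of the multiscale entropy (MSE) procedure: coarse-grain
# the series at each scale, then compute sample entropy on the result.
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)                  # simplification: tolerance per scale
    def pairs(mm):                       # ordered template pairs within tol
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=-1)
        return np.sum(d <= tol) - len(t)      # exclude self-matches
    b, a = pairs(m), pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def mse(x, scales=range(1, 11)):
    x = np.asarray(x, dtype=float)
    return [sample_entropy(x[: len(x) // s * s].reshape(-1, s).mean(axis=1))
            for s in scales]
```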
We underline some differences between the geometric aspect of Berezin's approach to quantization on homogeneous K\"ahler manifolds and Bergman's construction for bounded domains in $\mathbb{C}^n$. We construct explicitly the Bergman representative coordinates for the Siegel-Jacobi disk $\mathcal{D}^J_1$, which is a partially bounded manifold whose points belong to $\mathbb{C}\times\mathcal{D}_1$, where $\mathcal{D}_1$ denotes the Siegel disk. The Bergman representative coordinates on $\mathcal{D}^J_1$ are globally defined, the Siegel-Jacobi disk is a normal K\"ahler homogeneous Lu Qi-Keng manifold, whose representative manifold is the Siegel-Jacobi disk itself.
We suggest a new experiment sensitive to a possible difference between the amount of CP violation as measured on the surface of the Earth and in a lower gravity environment. Our proposed experiment is model independent and could yield a $5\sigma$ measurement within tens of days, indicating a dependence of the level of CP violation in the neutral kaon system on the local gravitational potential.
We explore the possible observational signatures of different types of kink modes (horizontal and vertical oscillations in their fundamental mode and second harmonic) that may arise in coronal loops, with the aim of determining how well the individual modes can be uniquely identified from time series of images. A simple, purely geometrical model is constructed to describe the different types of kink-mode oscillations. These are then `observed' from a given direction. In particular, we employ the 3D geometrical parameters of 14 TRACE loops of transverse oscillations to try to identify the correct observed wave mode. We find that for many combinations of viewing and loop geometry it is not straightforward to distinguish between at least two types of kink modes just using time series of images. We also considered Doppler signatures and find that these can help obtain unique identifications of the oscillation modes when employed in combination with imaging. We then compare the modeled spatial signatures with the observations of 14 TRACE loops. We find that out of three oscillations previously identified as fundamental horizontal mode oscillations, two cases appear to be fundamental vertical mode oscillations (but possibly combined with the fundamental horizontal mode), and one case appears to be a combination of the fundamental vertical and horizontal modes, while in three cases it is not possible to clearly distinguish between the fundamental mode and the second-harmonic of the horizontal oscillation. In five other cases it is not possible to clearly distinguish between a fundamental horizontal mode and the second-harmonic of a vertical mode.
We propose a new architecture for the learning of predictive spatio-temporal motion models from data alone. Our approach, dubbed the Dropout Autoencoder LSTM, is capable of synthesizing natural-looking motion sequences over long time horizons without catastrophic drift or motion degradation. The model consists of two components: a 3-layer recurrent neural network to model temporal aspects and a novel autoencoder that is trained to implicitly recover the spatial structure of the human skeleton by randomly removing information about joints during training time. This Dropout Autoencoder (D-AE) is then used to filter each predicted pose of the LSTM, reducing the accumulation of error and hence drift over time. Furthermore, we propose new evaluation protocols to assess the quality of synthetic motion sequences, even when no ground truth data exists. The proposed protocols can be used to assess generated sequences of arbitrary length. Finally, we evaluate our proposed method on two of the largest motion-capture datasets available to date and show that our model outperforms the state-of-the-art on a variety of actions, including cyclic and acyclic motion, and that it can produce natural-looking sequences over longer time horizons than previous methods.
Affective computing is a field of study that focuses on developing systems and technologies that can understand, interpret, and respond to human emotions. Speech Emotion Recognition (SER), in particular, has received considerable attention from researchers in the recent past. However, in many cases the publicly available datasets used for training and evaluation are scarce and imbalanced across the emotion labels. In this work, we focus on building a balanced corpus from these publicly available datasets by combining them as well as employing various speech data augmentation techniques. Furthermore, we experiment with different architectures for speech emotion recognition. Our best system, a multi-modal speech- and text-based model, achieves a performance of UA (Unweighted Accuracy) + WA (Weighted Accuracy) of 157.57, compared to the baseline performance of 119.66.
In this paper we describe the process of collection, transcription, and annotation of recordings of spontaneous speech samples from Turkish-German bilinguals, and the compilation of a corpus called TuGeBiC. Participants in the study were adult Turkish-German bilinguals living in Germany or Turkey at the time of recording in the first half of the 1990s. The data were manually tokenised and normalised, and all proper names (names of participants and places mentioned in the conversations) were replaced with pseudonyms. Token-level automatic language identification was performed, which made it possible to establish the proportions of words from each language. The corpus is roughly balanced between both languages. We also present quantitative information about the number of code-switches, and give examples of different types of code-switching found in the data. The resulting corpus has been made freely available to the research community.
We explore the sparticle mass spectrum in light of the muon g-2 anomaly and the little hierarchy problem in a class of gauge mediated supersymmetry breaking models, in which the messenger fields transform in the adjoint representation of the Standard Model gauge symmetry. To avoid unacceptably light right-handed slepton masses, the Standard Model is supplemented by an additional U(1)_B-L gauge symmetry. A non-zero U(1)_B-L D-term leads to an additional contribution to the soft supersymmetry breaking mass terms, which makes the right-handed slepton masses compatible with the current experimental bounds. We show that in the framework with Lambda_{3}<0 and mu < 0, the muon g-2 anomaly and the observed 125 GeV Higgs boson mass can be simultaneously accommodated. The slepton masses in this case are predicted to lie in the few hundred GeV range, which can be tested at the LHC. Despite the heavy colored spectrum, the little hierarchy problem in this model can be ameliorated, and the electroweak fine-tuning parameter can be as low as 10 or so.
Bimetric gravity is an interesting alternative to standard GR given its potential to provide a concrete theoretical framework for a ghost-free massive gravity theory. Here we investigate a class of bimetric gravity models for their cosmological implications. We study the background expansion as well as the growth of matter perturbations at linear and second order. We use low-redshift observations from SnIa (Pantheon+ and SH0ES), Baryon Acoustic Oscillations (BAO), growth ($f\sigma_{8}$) measurements, and the measurement from the Megamaser Cosmology Project to constrain the bimetric model. We find that the bimetric models are consistent with the present data alongside the $\Lambda$CDM model. We reconstruct the ``effective dark energy equation of state'' ($\omega_{de}$) and ``skewness'' ($S_{3}$) parameters for the bimetric model from the observational constraints and show that the current low-redshift data allow significant deviations in the $\omega_{de}$ and $S_{3}$ parameters with respect to the $\Lambda$CDM behaviour. We also look at the ISW effect via galaxy-temperature correlations and find that the best-fit bimetric model behaves similarly to $\Lambda$CDM in this regard.
Bibliometrics such as the number of papers and times cited are often used to compare researchers against specific criteria. The criteria, however, differ across research domains and are set by empirical laws. Moreover, there are arguments that the simple sum of metric values works to the advantage of senior researchers. Therefore, this paper attempts to construct features from time-series bibliometric data and then classify researchers according to those features. In detail, time-series patterns are extracted from bibliographic data sets, and a model to classify whether researchers are "distinguished" or not is created by a machine learning technique. The experiments achieved an F-measure of 80.0% in the classification of 114 researchers in two research domains based on the data sets of the Japan Science and Technology Agency and Elsevier's Scopus. In the future, we will conduct verification on a larger number of researchers in several domains, and then use the method to discover "distinguished" researchers who are not widely known.
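A minimal sketch of such a pipeline, with illustrative features (not the paper's): summary statistics of each researcher's yearly paper and citation series feed a standard classifier.

```python
# A minimal sketch: summarize yearly bibliometric time series into feature
# rows, then train a standard classifier. Feature choices are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def features(papers_per_year, cites_per_year):
    p = np.asarray(papers_per_year, dtype=float)
    c = np.asarray(cites_per_year, dtype=float)
    slope = np.polyfit(np.arange(len(p)), p, 1)[0]      # publication trend
    peak_pos = np.argmax(c) / max(len(c) - 1, 1)        # when citations peaked
    return [p.sum(), c.sum(), slope, c.max(), peak_pos]

# X: one feature row per researcher; y: 1 = "distinguished", 0 = otherwise.
# f1 = cross_val_score(RandomForestClassifier(), X, y, scoring="f1").mean()
```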
The limitation of the detection rate of standard bakelite resistive plate chambers (RPCs) used as muon detectors in the LHC experiments has prevented the use of such detectors in the high-rate regions of both the CMS and ATLAS detectors. One alternative to these detectors is RPCs made with low-resistivity glass plates ($10^{10}\,{\rm \Omega\cdot cm}$); a beam test at DESY has shown that such detectors can operate at a few thousand Hz/cm$^2$ with high efficiency (>90%).
The design of a building requires an architect to balance a wide range of constraints: aesthetic, geometric, usability, lighting, safety, etc. At the same time, there are often a multiplicity of diverse designs that can meet these constraints equally well. Architects must use their skills and artistic vision to explore these rich but highly constrained design spaces. A number of computer-aided design tools use automation to provide useful analytical data and optimal designs with respect to certain fitness criteria. However, this automation can come at the expense of a designer's creative control. We propose uDOME, a user-in-the-loop system for computer-aided design exploration that balances automation and control by efficiently exploring, analyzing, and filtering the space of environment layouts to better inform an architect's decision-making. At each design iteration, uDOME provides a set of diverse designs which satisfy user-defined constraints and optimality criteria within a user-defined parameterization of the design space. The user then selects a design and performs a similar optimization with the same or different parameters and objectives. This exploration process can be repeated as many times as the designer wishes. Our user studies indicate that uDOME, with its diversity-based approach, improves the efficiency and effectiveness of even novice users with minimal training, without compromising the quality of their designs.
The Spallation Neutron Source (SNS), located at Oak Ridge National Laboratory in the United States, will be coming online over the next few years. In addition to producing fluxes of high-intensity neutrons, the interaction of the proton beam with the liquid mercury target produces copious pions. The pi+ and the subsequent mu+ decay at rest, providing a neutrino beam comprising numu, nue, and anti-numu components. This neutrino beam is ideal for high-precision neutrino experiments. OscSNS is a proposed multi-purpose experiment that will perform a search for light sterile neutrinos, search for beyond-the-Standard-Model interactions using neutrino oscillations, and provide tests of Standard Model predictions through world-record precision neutrino cross section measurements. OscSNS plans to submit a full proposal for funding in 2009.
QCD sum-rules are used to calculate the $\hat\rho(1^{-+})\to\pi\eta, \pi\eta'$ decay widths of the exotic hybrid in two different $\eta$-$\eta'$ mixing schemes. In the conventional flavour octet-singlet mixing scheme, the decay widths are both found to be small, while in the recently-proposed quark mixing scheme, the decay width $\Gamma_{\hat\rho\to\eta\pi}\approx 250\,{\rm MeV}$ is large compared with the decay width $\Gamma_{\hat\rho\to\eta^\prime\pi}\approx 20\,{\rm MeV}$. These results provide some insight into $\eta$-$\eta'$ mixing and hybrid decay features.
We construct an ultraviolet completion of the Standard Model that contains an infinite sequence of Hypercolor gauge groups, so that the whole gauge group of the theory is $... \otimes SU(5)\otimes SU(4) \otimes SU(3) \otimes SU(2) \otimes U(1)$. Here SU(4) is the Technicolor group of the Farhi-Susskind model. The breakdown of chiral symmetry due to the Technicolor gives rise to finite $W$ and $Z$ boson masses in the usual way. The other Hypercolor groups are not confining. We suggest the hypothesis that the fermion masses are not related in any way to the technicolor gauge group. We suppose that the fermion mass formation mechanism is related to energies much higher than the technicolor scale. Formally, the fermion masses appear in our model as an external input. In the construction of the theory we make essential use of the requirement that it possesses an additional discrete symmetry $\cal Z$, which is the continuation of the $Z_6$ symmetry of the Standard Model. It is found that there exists a choice of the hypercharges of the fermions such that the chiral anomaly is absent while the symmetry $\cal Z$ is preserved.
Features of the angular distributions of accelerated atomic projectiles at grazing angles of incidence on a crystal surface are studied by computer simulation. The interaction between the projectiles and the crystal-lattice atoms and the atomic structure of the crystal surface are calculated by means of the electron density functional method. The angular distributions of scattered projectiles are simulated by taking into account their interaction with several atomic layers in the crystal lattice and atomic thermal displacements. Good agreement between the calculated results and known experimental data is achieved. We also demonstrate the possibility of reconstructing the ion-atom dynamic interaction potential from the dependence of the rainbow scattering angle on the total kinetic energy of nitrogen atoms at grazing incidence on the surface of an aluminum crystal.
Procedural text understanding requires machines to reason about entity states within the dynamical narratives. Current procedural text understanding approaches are commonly \textbf{entity-wise}, which separately track each entity and independently predict different states of each entity. Such an entity-wise paradigm does not consider the interaction between entities and their states. In this paper, we propose a new \textbf{scene-wise} paradigm for procedural text understanding, which jointly tracks states of all entities in a scene-by-scene manner. Based on this paradigm, we propose \textbf{S}cene \textbf{G}raph \textbf{R}easoner (\textbf{SGR}), which introduces a series of dynamically evolving scene graphs to jointly formulate the evolution of entities, states and their associations throughout the narrative. In this way, the deep interactions between all entities and states can be jointly captured and simultaneously derived from scene graphs. Experiments show that SGR not only achieves the new state-of-the-art performance but also significantly accelerates the speed of reasoning.
The generalised Gegenbauer functions of fractional degree (GGF-Fs), denoted by ${}^{r\!}G^{(\lambda)}_\nu(x)$ (right GGF-Fs) and ${}^{l}G^{(\lambda)}_\nu(x)$ (left GGF-Fs) with $x\in (-1,1),$ $\lambda>-1/2$ and real $\nu\ge 0,$ are special functions (usually non-polynomials), which are defined from the hypergeometric representation of the classical Gegenbauer polynomial by allowing the integer degree to be a real fractional degree. Remarkably, the GGF-Fs become indispensable for optimal error estimates of polynomial approximation to singular functions, and have intimate relations with several families of nonstandard basis functions recently introduced for solving fractional differential equations. However, some properties of GGF-Fs, which are important pieces for the analysis and applications, are unknown or underexplored. The purposes of this paper are twofold. The first is to show that for $\lambda,\nu>0$ and $x=\cos\theta$ with $\theta\in (0,\pi),$ \begin{equation*} (\sin \theta)^{\lambda}\,{}^{r\!}G_\nu^{(\lambda)}(\cos \theta)= \frac{2^\lambda\Gamma(\lambda+1/2)}{\sqrt{\pi}\,(\nu+\lambda)^{\lambda}}\, \cos\big((\nu+\lambda)\theta- \lambda\pi/2\big) +{\mathcal R}_\nu^{(\lambda)}(\theta), \end{equation*} and to derive the precise expression of the "residual" term ${\mathcal R}_\nu^{(\lambda)}(\theta).$ With this at our disposal, we obtain bounds of GGF-Fs uniform in $\nu.$ Under an appropriate weight function, the bounds are uniform for $\theta\in [0,\pi]$ as well. Moreover, we can study the asymptotics of GGF-Fs with large fractional degree $\nu.$ The second is to present miscellaneous properties of GGF-Fs for better understanding of this family of useful special functions.
Chiral properties of positive and negative parity nucleons, $N$ and $N^*$, are studied from the viewpoint of chiral symmetry. Two possible ways to assign chiral transformations to the negative parity nucleon are considered. Using linear sigma models based on the two chiral realizations, theoretical as well as phenomenological consequences of the two different assignments are investigated. We find that the nucleon mass in the chirally restored phase is the key quantity determining the meson-nucleon couplings and the axial charges of the nucleons. We also discuss the role of chiral symmetry breaking in the mass splitting of $N$ and $N^*$ in the two sigma models.
We discuss anharmonicity of the multi-octupole-phonon states in $^{208}$Pb based on a covariant density functional theory, by fully taking into account the interplay between the quadrupole and the octupole degrees of freedom. Our results indicate the existence of a large anharmonicity in the transition strengths, even though the excitation energies are similar to those in the harmonic limit. We also show that the quadrupole-shape fluctuation significantly enhances the fragmentation of the two-octupole-phonon states in $^{208}$Pb. Using those transition strengths as inputs to coupled channels calculations, we then discuss the fusion reaction of $^{16}$O+$^{208}$Pb at energies around the Coulomb barrier. We show that the anharmonicity of the octupole vibrational excitation considerably improves previous coupled-channels calculations in the harmonic oscillator limit, significantly reducing the height of the main peak in the fusion barrier distribution.
We establish an improved version of the Moser-Trudinger inequality in the hyperbolic space $\mathbb H^n$, $n\geq 2$. Namely, we prove the following result: for any $0 \leq \lambda < \left(\frac{n-1}n\right)^n$, we have $$ \sup_{\substack{u\in C_0^\infty(\mathbb H^n)\\ \int_{\mathbb H^n} |\nabla_g u|_g^n \,d\mathrm{Vol}_g -\lambda \int_{\mathbb H^n} |u|^n \,d\mathrm{Vol}_g \leq 1}} \int_{\mathbb H^n} \Phi_n(\alpha_n |u|^{\frac{n}{n-1}})\, d\mathrm{Vol}_g < \infty, $$ where $\alpha_n = n \omega_{n-1}^{\frac1{n-1}}$, $\omega_{n-1}$ denotes the surface area of the unit sphere in $\mathbb R^n$, and $\Phi_n(t) = e^t -\sum_{j=0}^{n-2}\frac{t^j}{j!}$. This improves the Moser-Trudinger inequality in hyperbolic spaces obtained recently by Mancini and Sandeep, by Mancini, Sandeep and Tintarev, and by Adimurthi and Tintarev. In the limiting case $\lambda =(\frac{n-1}n)^n$, we prove a Moser-Trudinger inequality with exact growth in $\mathbb H^n$: $$ \sup_{\substack{u\in C_0^\infty(\mathbb H^n)\\ \int_{\mathbb H^n} |\nabla_g u|_g^n \,d\mathrm{Vol}_g -(\frac{n-1}n)^n \int_{\mathbb H^n} |u|^n \,d\mathrm{Vol}_g \leq 1}} \frac{1}{\int_{\mathbb H^n} |u|^n \,d\mathrm{Vol}_g}\int_{\mathbb H^n} \frac{\Phi_n(\alpha_n |u|^{\frac{n}{n-1}})}{(1+ |u|)^{\frac n{n-1}}} \,d\mathrm{Vol}_g < \infty. $$ This improves the Moser-Trudinger inequality with exact growth in $\mathbb H^n$ established by Lu and Tang.
We calculate the Curie temperatures of the layered ferromagnets chromium tri-iodide (CrI3), chromium tri-bromide (CrBr3), and chromium germanium tri-telluride (CrGeTe3), and the Neel temperature of the layered anti-ferromagnet iron di-chloride (FeCl2), using first-principles density functional theory calculations and Monte-Carlo simulations. We develop a computational method to model the magnetic interactions in layered magnetic materials and calculate their critical temperature. We provide a unified method to obtain the magnetic exchange parameters (J) for an effective Heisenberg Hamiltonian from first-principles, taking into account both the magnetic anisotropy and the out-of-plane interactions. We obtain the magnetic phase change behavior, in particular the critical temperature, from the susceptibility and the specific heat, calculated using the three-dimensional Monte-Carlo (Metropolis) algorithm. The calculated Curie temperatures for the ferromagnetic materials (CrI3, CrBr3 and CrGeTe3) match very well with experimental values. We show that the interlayer interaction in bulk CrI3 with R3 stacking is significantly stronger than with C2/m stacking, in line with experimental observations. We show that the strong interlayer interaction in R3 CrI3 results in a competition between the in-plane and the out-of-plane magnetic easy axes. Finally, we calculate the Neel temperature of FeCl2 to be 47 ± 8 K, and show that the magnetic phase transition in FeCl2 occurs in two steps, with a high-temperature intralayer ferromagnetic phase transition and a low-temperature interlayer anti-ferromagnetic phase transition.
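The Monte-Carlo step described above can be sketched as follows for an isotropic classical Heisenberg model on a cubic lattice; the paper's Hamiltonian additionally includes anisotropy and distinct interlayer couplings, which this toy version omits.

```python
# A minimal sketch of a Metropolis sweep for a classical Heisenberg model
# H = -J * sum_<ij> S_i . S_j on an L x L x L lattice with periodic
# boundaries (isotropic nearest-neighbor J only; k_B = 1).
import numpy as np

def random_unit_vectors(*shape):
    v = np.random.normal(size=(*shape, 3))
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def metropolis_sweep(S, J, T):
    L = S.shape[0]
    for _ in range(L**3):
        i, j, k = np.random.randint(L, size=3)
        new = random_unit_vectors()          # proposed spin direction
        nbrs = sum(S[(i + di) % L, (j + dj) % L, (k + dk) % L]
                   for di, dj, dk in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                      (0, -1, 0), (0, 0, 1), (0, 0, -1)])
        dE = -J * np.dot(new - S[i, j, k], nbrs)
        if dE <= 0 or np.random.rand() < np.exp(-dE / T):
            S[i, j, k] = new                 # accept the move

S = random_unit_vectors(8, 8, 8)             # 8^3 lattice of unit spins
metropolis_sweep(S, J=1.0, T=1.5)
```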
We present the exact solution of Baxter's three-color problem on a random planar graph, using the random-matrix formulation of the problem given by B. Eynard and C. Kristjansen. We find that the number of three-colorings per vertex of an infinite random graph is 0.9843.
Electron-positron pair production is considered in the relativistic collision of a nucleus and an anti-nucleus, in which both leptons are created in bound states of the corresponding nucleus-lepton system. Compared to free and bound-free pair production, this process is shown to display a qualitatively different dependence on both the impact energy and the charges of the colliding particles. Interestingly, at high impact energies the cross section for this process is found to be larger than that for the analogous atomic process of non-radiative electron capture, although the latter does not involve the creation of new particles.
The vortex is a universal and significant phenomenon that has been known for centuries. However, creating vortices down to the atomic limit has remained elusive because the characteristic length needed to support a vortex is usually much larger than the atomic scale. Very recently, it was demonstrated that intervalley scattering induced by a single carbon defect in graphene leads to phase winding over a closed path surrounding the defect. Motivated by this, we demonstrate, in this Letter, that single carbon defects at the A and B sublattices of graphene can be regarded as pseudospin-mediated atomic-scale vortices with angular momenta l = +2 and -2, respectively. Quantum interference measurements of the interacting vortices indicate that the vortices cancel each other, resulting in zero total angular momentum, in the |A| = |B| case, and that they show aggregate chirality and angular momenta similar to a single vortex of the majority type in the |A| not equal to |B| case, where |A| (|B|) is the number of vortices with angular momenta l = +2 (l = -2).
Recent advancements in Large Language Models have transformed ML/AI development, necessitating a reevaluation of AutoML principles for Retrieval-Augmented Generation (RAG) systems. To address the challenges of hyper-parameter optimization and online adaptation in RAG, we propose the AutoRAG-HP framework, which formulates hyper-parameter tuning as an online multi-armed bandit (MAB) problem and introduces a novel two-level Hierarchical MAB (Hier-MAB) method for efficient exploration of large search spaces. We conduct extensive experiments on tuning hyper-parameters, such as the number of top-k retrieved documents, the prompt compression ratio, and embedding methods, using the ALCE-ASQA and Natural Questions datasets. Our evaluation based on jointly optimizing all three hyper-parameters demonstrates that MAB-based online learning methods can achieve Recall@5 $\approx 0.8$ for scenarios with prominent gradients in the search space, using only $\sim20\%$ of the LLM API calls required by the Grid Search approach. Additionally, the proposed Hier-MAB approach outperforms other baselines in more challenging optimization scenarios. The code will be made available at https://aka.ms/autorag.
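To illustrate the bandit formulation, here is a single-level UCB1 sketch over a grid of RAG hyper-parameter configurations; the paper's Hier-MAB instead nests a high-level bandit (which hyper-parameter to tune) above low-level bandits (its values), and `evaluate` is an assumed interface returning, e.g., Recall@5 for one online trial.

```python
# A minimal sketch of hyper-parameter tuning as a multi-armed bandit,
# using plain UCB1 over a flat grid of configurations.
import math

def ucb1(arms, evaluate, budget):
    n = {a: 0 for a in arms}          # pulls per arm
    s = {a: 0.0 for a in arms}        # summed rewards (e.g., Recall@5)
    for t in range(1, budget + 1):
        # Pull each arm once first, then follow the UCB1 index.
        a = next((a for a in arms if n[a] == 0), None) or max(
            arms, key=lambda a: s[a] / n[a] + math.sqrt(2 * math.log(t) / n[a]))
        r = evaluate(a)               # one RAG run with this configuration
        n[a] += 1
        s[a] += r
    return max(arms, key=lambda a: s[a] / max(n[a], 1))

# e.g. arms = [(k, ratio) for k in (1, 3, 5, 10) for ratio in (0.5, 0.75, 1.0)]
```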
The chiral QED$_3$--Gross-Neveu-Yukawa (QED$_3$-GNY) theory is a $2+1$-dimensional U(1) gauge theory with $N_f$ flavors of four-component Dirac fermions coupled to a scalar field. For $N_f=1$, the specific chiral Ising QED$_3$-GNY model has recently been conjectured to be dual to the deconfined quantum critical point that describes the Néel--valence-bond-solid transition of frustrated quantum magnets on the square lattice. We study the universal critical behaviors of the chiral QED$_3$-GNY model in $d=4-\epsilon$ dimensions for arbitrary $N_f$. We calculate the boson anomalous dimensions, the inverse correlation length exponent, as well as the scaling dimensions of the nonsinglet fermion bilinear in the chiral QED$_3$-GNY model. The Padé estimates for the exponents are obtained in the chiral Ising-, XY- and Heisenberg-QED$_3$-GNY universality classes respectively. We also establish the general condition for supersymmetric criticality of the ungauged QED$_3$-GNY model. For the conjectured duality between the chiral QED$_3$-GNY critical point and the deconfined quantum critical point, we find that the inverse correlation length exponent has a lower boundary $\nu^{-1}>0.75$, beyond which the Ising-QED$_3$-GNY--$\mathbb{C}$P$^1$ duality may hold.
In this invited response we answer all comments by Engelen and Hansen [arXiv:2207.07844]. We point out that the superfluid and superconductive properties of H(0) have been published previously. We explain some differences between covalently bonded molecules and the molecules in the ultradense matter H(0) form, and explain some aspects of the energetics of H(0) molecules during Coulomb explosions. We point out that the experimental spectra shown in our publication are not ion time-of-flight spectra but neutral time-of-flight spectra with the peak width given by the internal energetics and not by experimental factors. We point out that no phase diagram has been measured for H(0). Further we point out that a Rydberg state is a hydrogenic state and thus that all hydrogen atom states are Rydberg states. That Rydberg states always have large principal quantum numbers is a complete misunderstanding. We point out that a QM description of H(0) is published. We point out that the internuclear distances in p(0), D(0) and pD(0) have been measured by rotational spectroscopy in two publications for three different spin states. They are measured in the pm range with fm precision.
We propose an improved analytical model for the horizon-absorbed gravitational-wave energy flux of a small body in circular orbit in the equatorial plane of a Kerr black hole. Post-Newtonian (PN) theory provides an analytical description of the multipolar components of the absorption flux through Taylor expansions in the orbital frequency. Building on previous work, we construct a mode-by-mode factorization of the absorbed flux whose Taylor expansion agrees with current PN results. This factorized form significantly improves the agreement with numerical results obtained with a frequency-domain Teukolsky code, which evolves through a sequence of circular orbits up to the photon orbit. We perform the comparison between model and numerical data for dimensionless Kerr spins $-0.99 \leq q \leq 0.99$ and for frequencies up to the light ring of the Kerr black hole. Our proposed model enforces the presence of a zero in the flux at an orbital frequency equal to the frequency of the horizon, as predicted by perturbation theory. It also reproduces the expected divergence of the flux close to the light ring. Neither of these features is captured by the Taylor-expanded PN flux. Our proposed absorption flux can also help improve models for the inspiral, merger, and ringdown of small mass-ratio binary systems.
The quality of output from large language models (LLMs), particularly in machine translation (MT), is closely tied to the quality of in-context examples (ICEs) provided along with the query, i.e., the text to translate. The effectiveness of these ICEs is influenced by various factors, such as the domain of the source text, the order in which the ICEs are presented, the number of these examples, and the prompt templates used. Naturally, selecting the most impactful ICEs depends on understanding how these affect the resulting translation quality, which ultimately relies on translation references or human judgment. This paper presents a novel methodology for in-context learning (ICL) that relies on a search algorithm guided by domain-specific quality estimation (QE). Leveraging the XGLM model, our methodology estimates the resulting translation quality without the need for translation references, selecting effective ICEs for MT to maximize translation quality. Our results demonstrate significant improvements over existing ICL methods and higher translation performance compared to fine-tuning a pre-trained language model (PLM), specifically mBART-50.
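A minimal sketch of QE-guided example selection in the spirit described above: grow the in-context example list greedily, keeping a candidate only if a reference-free QE score of the resulting translation improves. Here `translate` and `qe_score` are assumed interfaces, and the paper's search algorithm may differ from this greedy variant.

```python
# A minimal sketch of selecting in-context examples (ICEs) for MT by a
# greedy search guided by reference-free quality estimation (QE).
def select_ices(query, pool, translate, qe_score, max_ices=8):
    """query: source text; pool: candidate (source, target) example pairs.
    translate(query, ices) -> hypothesis; qe_score(query, hyp) -> float."""
    chosen = []
    best = qe_score(query, translate(query, chosen))   # zero-shot baseline
    for cand in pool:
        trial = chosen + [cand]
        score = qe_score(query, translate(query, trial))
        if score > best:                               # keep only if QE improves
            chosen, best = trial, score
        if len(chosen) == max_ices:
            break
    return chosen
```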
Continuous analogs of orthogonal polynomials on the circle are solutions of a canonical system of differential equations, introduced and studied by M. G. Krein and recently generalized to matrix systems by L. A. Sakhnovich. We prove that the continuous analog of the adjoint polynomials converges in the upper half-plane in the case of L^2 coefficients, but in general the limit can be defined only up to a constant multiple even when the coefficients are in L^p for any p>2, the spectral measure is absolutely continuous and the Szego-Kolmogorov-Krein condition is satisfied. Thus we point out that Krein's and Sakhnovich's papers contain an inaccuracy, which does not undermine known implications from these results.
The critical regime of the charge exchange (CE) manganite spin glass Eu$_{0.5}$Ba$_{0.5}$MnO$_3$ is investigated using linear and nonlinear magnetic susceptibility, and the divergence of the third-order susceptibility ($\chi_3$) signifying the onset of a conventional freezing transition is experimentally demonstrated. The divergence in $\chi_3$, dynamical scaling of the linear susceptibility, and relevant scaling equations are used to determine the critical exponents associated with this freezing transition, the values of which match well with the 3D Ising universality class. The magnetic field dependence of the spin glass response function is used to estimate the spin correlation length, which is seen to be larger than the charge/orbital correlation length reported in this system.
Unsupervised Domain Adaptation (UDA) aims at improving the generalization capability of a model trained on a source domain to perform well on a target domain for which no labeled data is available. In this paper, we consider the semantic segmentation of urban scenes and we propose an approach to adapt a deep neural network trained on synthetic data to real scenes, addressing the domain shift between the two different data distributions. We introduce a novel UDA framework where a standard supervised loss on labeled synthetic data is supported by an adversarial module and a self-training strategy aiming at aligning the two domain distributions. The adversarial module is driven by a pair of fully convolutional discriminators dealing with different domains: the first discriminates between ground truth and generated maps, while the second discriminates between segmentation maps coming from synthetic and real-world data. The self-training module exploits the confidence estimated by the discriminators on unlabeled data to select the regions used to reinforce the learning process. Furthermore, the confidence is thresholded with an adaptive mechanism based on the per-class overall confidence. Experimental results prove the effectiveness of the proposed strategy in adapting a segmentation network trained on synthetic datasets like GTA5 and SYNTHIA to real-world datasets like Cityscapes and Mapillary.
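As an illustration of the adaptive thresholding step, the following numpy sketch keeps a target-domain pixel for self-training only when its confidence exceeds a fraction of that class's overall mean confidence; the fraction `alpha` and the mean-based aggregation are assumptions, not the paper's exact mechanism.

```python
# Minimal numpy sketch of per-class adaptive confidence thresholding for
# self-training; `alpha` and the aggregation rule are assumptions.
import numpy as np

def select_pseudo_labels(conf: np.ndarray,   # (H, W) discriminator confidence
                         pred: np.ndarray,   # (H, W) predicted class ids
                         num_classes: int,
                         alpha: float = 0.9) -> np.ndarray:
    """Return a boolean mask of pixels whose pseudo-labels are trusted."""
    mask = np.zeros_like(pred, dtype=bool)
    for c in range(num_classes):
        cls = pred == c
        if cls.any():
            thr = alpha * conf[cls].mean()   # threshold adapts to per-class confidence
            mask |= cls & (conf >= thr)
    return mask

conf = np.random.default_rng(0).uniform(size=(4, 4))
pred = np.random.default_rng(1).integers(0, 3, size=(4, 4))
print(select_pseudo_labels(conf, pred, num_classes=3))
```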
Towards characterizing the optimization landscape of games, this paper analyzes the stability of gradient-based dynamics near fixed points of two-player continuous games. We introduce the quadratic numerical range as a method to characterize the spectrum of game dynamics and prove the robustness of equilibria to variations in learning rates. By decomposing the game Jacobian into symmetric and skew-symmetric components, we assess the contribution of a vector field's potential and rotational components to the stability of differential Nash equilibria. Our results show that in zero-sum games, all Nash equilibria are stable and robust; in potential games, all stable points are Nash equilibria. For general-sum games, we provide a sufficient condition for instability. We conclude with a numerical example in which learning with timescale separation results in faster convergence.
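The following toy numpy illustration (not code from the paper) performs the symmetric/skew-symmetric split and a simple spectral stability check, under the assumed convention that the local dynamics are $\dot{x} = -Jx$, so stability corresponds to all eigenvalues of $J$ having positive real part.

```python
# Toy illustration of decomposing a game Jacobian into potential (symmetric)
# and rotational (skew-symmetric) parts, with a spectral stability check.
import numpy as np

def split_jacobian(J: np.ndarray):
    S = 0.5 * (J + J.T)      # potential component
    A = 0.5 * (J - J.T)      # rotational component
    return S, A

def is_linearly_stable(J: np.ndarray) -> bool:
    # x' = -J x is locally stable when all eigenvalues of J have positive real part
    return bool(np.all(np.linalg.eigvals(J).real > 0))

# A zero-sum bilinear game yields a purely rotational Jacobian (marginal case:
# eigenvalues are +/- i, so the strict test below returns False).
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
S, A = split_jacobian(J)
print(S, A, is_linearly_stable(J), sep="\n")
```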
We study deep neural networks for classification of images with quality distortions. We first show that networks fine-tuned on distorted data greatly outperform the original networks when tested on distorted data. However, fine-tuned networks perform poorly on quality distortions that they have not been trained for. We propose a mixture of experts ensemble method that is robust to different types of distortions. The "experts" in our model are trained on a particular type of distortion. The output of the model is a weighted sum of the expert models, where the weights are determined by a separate gating network. The gating network is trained to predict optimal weights for a particular distortion type and level. During testing, the network is blind to the distortion level and type, yet can still assign appropriate weights to the expert models. We additionally investigate weight sharing methods for the mixture model and show that improved performance can be achieved with a large reduction in the number of unique network parameters.
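A toy numpy sketch of the gated mixture described above; `experts` and `gate` are placeholders for networks trained per distortion type and for the gating network, not the paper's models.

```python
# Toy sketch of the distortion-robust mixture of experts; the gate sees only
# the (possibly distorted) input, never the distortion type or level.
import numpy as np

def moe_predict(x: np.ndarray, experts, gate) -> np.ndarray:
    """Weighted sum of expert outputs, with weights from the gating network."""
    w = gate(x)                               # (n_experts,) weights summing to 1
    outs = np.stack([e(x) for e in experts])  # (n_experts, n_classes)
    return (w[:, None] * outs).sum(axis=0)

experts = [lambda x, b=b: np.eye(3)[b] for b in range(3)]  # trivial stand-ins
gate = lambda x: np.array([0.2, 0.5, 0.3])
print(moe_predict(np.zeros(4), experts, gate))             # [0.2 0.5 0.3]
```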
Cancer has relational information residing at varying scales, modalities, and resolutions of the acquired data, such as radiology, pathology, genomics, proteomics, and clinical records. Integrating diverse data types can improve the accuracy and reliability of cancer diagnosis and treatment. There can be disease-related information that is too subtle for humans or existing technological tools to discern visually. Traditional methods typically focus on partial or unimodal information about biological systems at individual scales and fail to encapsulate the complete spectrum of the heterogeneous nature of data. Deep neural networks have facilitated the development of sophisticated multimodal data fusion approaches that can extract and integrate relevant information from multiple sources. Recent deep learning frameworks such as Graph Neural Networks (GNNs) and Transformers have shown remarkable success in multimodal learning. This review article provides an in-depth analysis of the state-of-the-art in GNNs and Transformers for multimodal data fusion in oncology settings, highlighting notable research studies and their findings. We also discuss the foundations of multimodal learning, inherent challenges, and opportunities for integrative learning in oncology. By examining the current state and potential future developments of multimodal data integration in oncology, we aim to demonstrate the promising role that multimodal neural networks can play in cancer prevention, early detection, and treatment through informed oncology practices in personalized settings.
We show that, for Finsler spaces with cubic metric, Landsberg spaces are Berwaldian. Also, for decomposable metrics, we determine specific conditions for a space with cubic metric to be of Berwald type, thus refining the result in [6].
There are p heterogeneous objects to be assigned to n competing agents (n > p) each with unit demand. It is required to design a Groves mechanism for this assignment problem satisfying weak budget balance, individual rationality, and minimizing the budget imbalance. This calls for designing an appropriate rebate function. Our main result is an impossibility theorem which rules out linear rebate functions with non-zero efficiency in heterogeneous object assignment. Motivated by this theorem, we explore two approaches to get around this impossibility. In the first approach, we show that linear rebate functions with non-zero efficiency are possible when the valuations for the objects are correlated. In the second approach, we show that rebate functions with non-zero efficiency are possible if linearity is relaxed.
Magnetic-field control of fundamental optical properties is a crucial challenge in the engineering of multifunctional microdevices. Van der Waals (vdW) magnets, which retain a magnetic order even in atomically thin layers, offer a promising platform for hosting exotic magneto-optical functionalities owing to their strong spin-charge coupling. Here, we demonstrate that a giant optical anisotropy can be controlled by magnetic fields in the vdW magnet FePS$_3$. The giant linear dichroism ($\sim$11%), observed below $T_{\text{N}}\!\sim\!120$ K, is nearly fully suppressed in a wide energy range from 1.6 to 2.0 eV, following the collapse of the zigzag magnetic order above 40 T. This remarkable phenomenon can be explained as a result of symmetry changes due to the spin order, enabling minority electrons of Fe$^{2+}$ to hop in a honeycomb lattice. The modification of spin-order symmetry by external fields provides a novel route for controllable anisotropic optical micro-devices.
In this study, we present an original method for reconstructing the potential of interparticle interaction from statistically averaged structural data, namely, the radial distribution function of particles in a many-particle system. This method belongs to the family of machine learning methods and is implemented through the differential evolution algorithm. As demonstrated for the Lennard-Jones liquid taken as an example, there is no one-to-one correspondence between the structure and the potential of interparticle interaction of a many-particle disordered system at a given thermodynamic state. Namely, a whole family of Mie potentials determined by two parameters $p_{1}$ and $p_{2}$, related to each other according to a certain rule, can properly reproduce the unique structure of the Lennard-Jones liquid at a given thermodynamic state. It is noteworthy that this family of potentials also correctly reproduces the transport properties of the Lennard-Jones liquid (in particular, the self-diffusion coefficient) over a temperature range, as well as the dynamic structure factor, which is one of the key characteristics of the collective dynamics of particles.
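A hedged Python sketch of the inversion idea: fit the Mie exponents so that a model-predicted RDF matches a target RDF, using SciPy's differential evolution. The `rdf_from_potential` stand-in below is a crude low-density proxy (a real study would run a simulation per trial potential), so only the workflow, not the physics, is faithful.

```python
# Sketch: recover Mie exponents (p1, p2) from a target RDF via differential
# evolution; rdf_from_potential is a placeholder for an MD/MC simulation.
import numpy as np
from scipy.optimize import differential_evolution

r = np.linspace(0.8, 3.0, 200)

def mie(r, p1, p2, eps=1.0, sigma=1.0):
    C = (p1 / (p1 - p2)) * (p1 / p2) ** (p2 / (p1 - p2))   # standard Mie prefactor
    return C * eps * ((sigma / r) ** p1 - (sigma / r) ** p2)

def rdf_from_potential(p1, p2, beta=1.0):
    return np.exp(-beta * mie(r, p1, p2))   # low-density proxy: g(r) ~ exp(-beta U)

g_target = rdf_from_potential(12.0, 6.0)    # Lennard-Jones is Mie(12, 6)

def loss(params):
    p1, p2 = params
    if p1 <= p2:                            # keep repulsion steeper than attraction
        return 1e6
    return float(np.mean((rdf_from_potential(p1, p2) - g_target) ** 2))

res = differential_evolution(loss, bounds=[(7.0, 20.0), (3.0, 7.0)], seed=0)
print(res.x)  # many (p1, p2) pairs fit nearly as well, echoing the non-uniqueness
```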
We derive the modifications introduced by extra spatial dimensions beyond the four-dimensional spacetime on the macroscopic properties of neutron stars, which in turn affect the gravitational wave spectrum of their binaries. It turns out that the mass-radius relation of the neutron stars, and their tidal deformability, are affected non-trivially by the presence of extra dimensions, and can be used to constrain parameters associated with those dimensions. Implications for I-Love-Q universality relations are also discussed and utilized to obtain a constraint on one such parameter. Importantly, we show, for the first time, that measurements of the component masses and tidal deformabilities of the binary neutron star system GW170817 constrain the brane tension in the single brane-world model of Randall and Sundrum to be greater than $35.1~\textrm{GeV}^{4}$. This work opens up the possibility of making such a constraint more robust by improving the modelling of binaries on the brane in the future.
Recent studies have suggested that the stability of peer-to-peer networks may rely on persistent peers, who dwell on the network after they obtain the entire file. In the absence of such peers, one piece becomes extremely rare in the network, which leads to instability. Technological developments, however, are poised to reduce the incidence of persistent peers, giving rise to a need for a protocol that guarantees stability with non-persistent peers. We propose a novel peer-to-peer protocol, the group suppression protocol, to ensure the stability of peer-to-peer networks under the scenario that all the peers adopt non-persistent behavior. Using a suitable Lyapunov potential function, the group suppression protocol is proven to be stable when the file is broken into two pieces, and detailed experiments demonstrate the stability of the protocol for an arbitrary number of pieces. We define and simulate a decentralized version of this protocol for practical applications. Straightforward incorporation of the group suppression protocol into BitTorrent while retaining most of BitTorrent's core mechanisms is also presented. Subsequent simulations show that under certain assumptions, BitTorrent with the official protocol cannot escape from the missing piece syndrome, but BitTorrent with group suppression does.
Chern-Simons gauge theory for compact semisimple groups is analyzed from a perturbation theory point of view. The general form of the perturbative series expansion of a Wilson line is presented in terms of the Casimir operators of the gauge group. From this expansion new numerical knot invariants are obtained. These knot invariants turn out to be of finite type (Vassiliev invariants), and to possess an integral representation. Using known results about Jones, HOMFLY, Kauffman and Akutsu-Wadati polynomial invariants these new knot invariants are computed up to type six for all prime knots up to six crossings. Our results suggest that these knot invariants can be normalized in such a way that they are integer-valued.
In this paper, first, we review a straightforward analytical technique based on the image impedance concept for designing traditional microwave microstrip coupled-line filters using distributed elements. In the introduced approach, we characterize a quarter-wave coupled-line section, and then these discrete sections can be connected in series to synthesize the final desired frequency response. Next, we use a novel open-stub based technique to suppress spurious harmonic frequencies. Finally, using the proposed technique, we design and simulate a bandpass filter (BPF). The simulation results confirm the usefulness of the proposed technique.
Localization methods have produced explicit expressions for the sphere partition functions of (2,2) superconformal field theories. The mirror symmetry conjecture predicts an IR duality between pairs of Abelian gauged linear sigma models, a class of which describe families of Calabi-Yau manifolds realizable as complete intersections in toric varieties. We investigate this prediction for the sphere partition functions and find agreement between that of a model and its mirror up to the scheme-dependent ambiguities inherent in the definitions of these quantities.
We build a cartesian closed category, called Cho, based on event structures. It allows an interpretation of higher-order stateful concurrent programs that is refined and precise: on the one hand it is conservative with respect to standard Hyland-Ong games when interpreting purely functional programs as innocent strategies, while on the other hand it is much more expressive. The interpretation of programs compositionally constructs a representation of their execution that exhibits causal dependencies and remembers the points of non-deterministic branching. The construction is in two stages. First, we build a compact closed category Tcg. It is a variant of Rideau and Winskel's category CG, with the difference that games and strategies in Tcg are equipped with symmetry to express that certain events are essentially the same. This is analogous to the underlying category of AJM games enriching simple games with equivalence relations on plays. Building on this category, we construct the cartesian closed category Cho as having as objects the standard arenas of Hyland-Ong games, with strategies, represented by certain event structures, playing on games with symmetry obtained as expanded forms of these arenas. To illustrate these constructions and shed operational light on them, we interpret (a close variant of) Idealized Parallel Algol in Cho.
Graph representation learning has become a hot research topic due to its powerful nonlinear fitting capability in extracting representative node embeddings. However, for sequential data such as speech signals, most traditional methods merely focus on the static graph created within a sequence and largely overlook the intrinsic evolving patterns of these data. This may reduce the efficiency of graph representation learning for sequential data. For this reason, we propose an adaptive graph representation learning method based on dynamically evolved graphs, which are consecutively constructed on a series of subsequences segmented by a sliding window. In doing so, both local and global context information within a long sequence can be captured better. Moreover, we introduce a weighted approach to update the node representation rather than the conventional average one, where the weights are calculated by a novel matrix computation based on the degree of neighboring nodes. Finally, we construct a learnable graph convolutional layer that combines the graph structure loss and classification loss to optimize the graph structure. To verify the effectiveness of the proposed method, we conducted experiments for speech emotion recognition on the IEMOCAP and RAVDESS datasets. Experimental results show that the proposed method outperforms the latest (non-)graph-based models.
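A minimal numpy sketch of a degree-weighted neighbor aggregation in the spirit of the update described above; the exact weighting rule here (neighbor $j$ weighted by its degree, then row-normalized) is an assumption, not the paper's formula.

```python
# Sketch: degree-weighted node update replacing a plain neighbor average.
import numpy as np

def weighted_update(H: np.ndarray, A: np.ndarray) -> np.ndarray:
    """H: (N, d) node features; A: (N, N) 0/1 adjacency of one windowed graph.
    Each neighbor contributes in proportion to its degree instead of uniformly."""
    deg = A.sum(axis=1)                                    # node degrees
    W = A * deg[None, :]                                   # weight neighbor j by deg(j)
    W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    return W @ H                                           # degree-weighted mean

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
print(weighted_update(np.eye(3), A))
```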
We present experimental and numerical studies of broad-area semiconductor lasers with chaotic ray dynamics. The emission intensity distributions at the cavity boundaries are measured and compared to ray tracing simulations and numerical calculations of the passive cavity modes. We study two different cavity geometries, a D-cavity and a stadium, both of which feature fully chaotic ray dynamics. While the far-field distributions exhibit fairly homogeneous emission in all directions, the emission intensity distributions at the cavity boundary are highly inhomogeneous, reflecting the non-uniform intensity distributions inside the cavities. The excellent agreement between experiments and simulations demonstrates that the intensity distributions of wave-chaotic semiconductor lasers are primarily determined by the cavity geometry. This is in contrast to conventional Fabry-Perot broad-area lasers for which the intensity distributions are to a large degree determined by the nonlinear interaction of the lasing modes with the semiconductor gain medium.
The motion of gravitational axion-like particles (ALPs) around a Kerr black hole is analyzed, paying attention to resonances and the distribution of spectral radiation. We first discuss the computation of $\sqrt{g}\,\tilde{R}_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$ and its implications for Pontryagin's theorem, and then carry out a detailed analysis of the Teukolsky master equation. We show that this system exhibits resonance when $\omega \gtrsim \mu$, where $\mu$ is the mass of the ALP. A skew-normal distribution can approximate the energy distribution, and we can calculate the mean lifetime of the resonance for black holes with masses between 100 and 1000 $M_{\odot}$. This range corresponds to a duration between $10^{-1}$ s and $10^{41}$ s, the observation range used in LIGO data.
This paper proposes a novel differentiable architecture search method by formulating it into a distribution learning problem. We treat the continuously relaxed architecture mixing weight as random variables, modeled by a Dirichlet distribution. With recently developed pathwise derivatives, the Dirichlet parameters can be easily optimized with a gradient-based optimizer in an end-to-end manner. This formulation improves the generalization ability and induces stochasticity that naturally encourages exploration in the search space. Furthermore, to alleviate the large memory consumption of differentiable NAS, we propose a simple yet effective progressive learning scheme that enables searching directly on large-scale tasks, eliminating the gap between the search and evaluation phases. Extensive experiments demonstrate the effectiveness of our method. Specifically, we obtain a test error of 2.46% on CIFAR-10 and of 23.7% on ImageNet under the mobile setting. On NAS-Bench-201, we also achieve state-of-the-art results on all three datasets and provide insights for the effective design of neural architecture search algorithms.
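A minimal PyTorch sketch of the core mechanism: architecture mixing weights are sampled from a learnable Dirichlet via the pathwise-differentiable `rsample`, so gradients flow back into the concentration parameters. The surrogate objective and all names are illustrative, not the paper's code.

```python
# Sketch: learnable Dirichlet over op-mixing weights, optimized through rsample.
import torch
from torch.distributions import Dirichlet

n_ops = 4
log_conc = torch.zeros(n_ops, requires_grad=True)   # concentration in log-space
opt = torch.optim.Adam([log_conc], lr=0.1)

op_scores = torch.tensor([0.1, 0.5, 0.2, 0.9])      # stand-in per-op utilities

for _ in range(100):
    alpha = log_conc.exp()                          # positive concentration
    w = Dirichlet(alpha).rsample()                  # pathwise-differentiable sample
    loss = -(w * op_scores).sum()                   # toy surrogate objective
    opt.zero_grad()
    loss.backward()                                 # gradients flow through rsample
    opt.step()

print(Dirichlet(log_conc.exp()).mean)               # mass shifts toward the best op
```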
One of the important consequences of the no-force condition for BPS states is the existence of stable static multi-center solutions, at least in ungauged supergravities. This observation has been at the heart of many developments in brane physics, including the construction of intersecting branes and reduced symmetry D-brane configurations corresponding to the Coulomb branch of the gauge theory. However the search for multi-center solutions to gauged supergravities has proven rather elusive. Because of the background curvature, it appears such solutions cannot be static. Nevertheless even allowing for time dependence, general multi-center solutions to gauged supergravity have yet to be constructed. In this letter we investigate the construction of such solutions for the case of D=5, N=2 gauged supergravity coupled to an arbitrary number of vector multiplets. Formally, we find a family of time dependent multi-center black hole solutions which are easily generalized to the case of AdS supergravities in general dimensions. While these are not true solutions, as they have a complex metric and gauge potential, they may be related to a Wick rotated theory or to a theory where the coupling is taken to be imaginary. These solutions thus provide a partial realization of true multi-center black-holes in gauged supergravities.
Instructors and researchers think "thinking like a physicist" is important for students' professional development. However, precise definitions and observational markers remain elusive. We reinterpret popular beliefs inventories in physics to indicate what physicists think "thinking like a physicist" entails. Through discourse analysis of upper-division students' speech in natural settings, we show that students may appropriate or resist these elements. We identify a new element in the physicist speech genre: brief, embedded, spontaneous metacognitive talk (BESM talk). BESM talk communicates students' in-the-moment enacted expectations about physics as a technical field and a cultural endeavor. Students use BESM talk to position themselves as physicists or non-physicists. Students also use BESM talk to communicate their expectations in four ways: understanding, confusion, spotting inconsistencies, and generalized expectations.
Convolutional neural networks (CNNs) have achieved significant popularity, but their computational and memory intensity poses challenges for resource-constrained computing systems, particularly given the prerequisite of real-time performance. To relieve this burden, model compression has become an important research focus. Many approaches, such as quantization, pruning, early exit, and knowledge distillation, have demonstrated the effect of reducing redundancy in neural networks. Upon closer examination, it becomes apparent that each approach capitalizes on its unique features to compress the neural network, and they can also exhibit complementary behavior when combined. To explore the interactions and reap the benefits from the complementary features, we propose the Chain of Compression, which works on the combinational sequence to apply these common techniques to compress the neural network. Validated on image-based regression and classification networks across different datasets, our proposed Chain of Compression can reduce the computation cost by 100-1000 times with negligible accuracy loss compared with the baseline model.
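As a purely conceptual sketch (not the paper's framework), such a chain can be expressed as an ordered list of model-to-model transformations; the two stand-in steps below are placeholders for real quantization and pruning passes from an actual toolchain.

```python
# Conceptual sketch of a "chain" of compression steps applied in sequence.
from typing import Callable, List

def chain_of_compression(model, steps: List[Callable]):
    """Apply compression techniques in order; each step maps model -> model."""
    for step in steps:
        model = step(model)
    return model

# Illustrative stand-ins (a real pipeline would wrap quantization, pruning,
# early exit, and knowledge distillation from an actual framework):
prune    = lambda m: {**m, "params": int(m["params"] * 0.5)}
quantize = lambda m: {**m, "bits": 8}

print(chain_of_compression({"params": 1_000_000, "bits": 32}, [prune, quantize]))
```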
We describe the $C^*$-algebra of an $E$-unitary or strongly 0-$E$-unitary inverse semigroup as the partial crossed product of a commutative $C^*$-algebra by the maximal group image of the inverse semigroup. We give a similar result for the $C^*$-algebra of the tight groupoid of an inverse semigroup. We also study conditions on a groupoid $C^*$-algebra to be Morita equivalent to a full crossed product of a commutative $C^*$-algebra with an inverse semigroup, generalizing results of Khoshkam and Skandalis for crossed products with groups.
We investigate the privacy of two approaches to (biometric) template protection: Helper Data Systems and Sparse Ternary Coding with Ambiguization. In particular, we focus on a privacy property that is often overlooked, namely how much leakage exists about one specific binary property of one component of the feature vector. Such a property is, e.g., the sign of the component or an indicator that a threshold is exceeded. We provide evidence that both approaches are able to protect such sensitive binary variables, and we discuss how the system parameters need to be set.
We consider a recently proposed nonlinear Schroedinger equation exhibiting soliton-like solutions of the power-law form $e_q^{i(kx-\omega t)}$, involving the $q$-exponential function which naturally emerges within nonextensive thermostatistics [$e_q^z \equiv [1+(1-q)z]^{1/(1-q)}$, with $e_1^z=e^z$]. Since these basic solutions behave like free particles, obeying $p=\hbar k$, $E=\hbar \omega$ and $E=p^2/2m$ ($1 \le q<2$), it is relevant to investigate how they change under the effect of uniform acceleration, thus providing the first steps towards the application of the aforementioned nonlinear equation to the study of physical scenarios beyond free-particle dynamics. We first investigate the behaviour of the power-law solutions under Galilean transformations and discuss the ensuing Doppler-like effects. We then consider constant acceleration, obtaining new solutions that can be equivalently regarded as describing a free particle viewed from a uniformly accelerated reference frame (with acceleration $a$) or a particle moving under a constant force $-ma$. The latter interpretation naturally leads to the evolution equation $i\hbar \frac{\partial}{\partial t}(\frac{\Phi}{\Phi_0}) = - \frac{1}{2-q}\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} [(\frac{\Phi}{\Phi_0})^{2-q}] + V(x)(\frac{\Phi}{\Phi_0})^{q}$ with $V(x)=m\,a\,x$. Remarkably enough, the potential $V$ couples to $\Phi^q$, instead of coupling to $\Phi$, as happens in the familiar linear case ($q=1$).
In a paper by Sapounakis, Tasoulas, and Tsikouras \cite{stt}, the authors count the number of occurrences of patterns of length four in Dyck paths. In this paper we specialize in one direction and generalize in another: we only count ballot paths that avoid a given pattern, where a ballot path stays weakly above the diagonal $y=x$, starts at the origin, and takes steps from the set $\{\uparrow ,\to \}=\{u,r\}$. A pattern is a finite string made from the same step set; it is also a path. Notice that a ballot path ending at a point on the diagonal is a Dyck path.
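A brute-force enumeration makes the object of study concrete. The Python check below (my own illustration, not from the paper) counts ballot paths of a given length that avoid a pattern, treating an occurrence as a consecutive substring of the step word, consistent with a pattern being a string of steps.

```python
# Exhaustive count of ballot paths (steps 'u' and 'r') of length n that stay
# weakly above the diagonal and avoid a given pattern as a factor.
from itertools import product

def count_ballot_avoiding(n: int, pattern: str) -> int:
    total = 0
    for steps in product("ur", repeat=n):
        word = "".join(steps)
        height, ok = 0, True
        for s in word:
            height += 1 if s == "u" else -1
            if height < 0:          # dipped below the diagonal y = x
                ok = False
                break
        if ok and pattern not in word:
            total += 1
    return total

print(count_ballot_avoiding(6, "uu"))  # small sanity check by enumeration
```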
It is argued that it is far more cost effective to carry out some projects with medium-sized dedicated zenith telescopes rather than large steerable telescopes, freeing the latter to carry out projects that truly need them. I show that the large number of objects observed with a surveying 4-m zenith telescope allows one to carry out cosmological projects at low redshifts. Examining two case studies, I show first that a variability survey would obtain light curves for several thousands of type Ia supernovae per year up to z=1 and easily discriminate among competing cosmological models. Finally, I discuss a second case study, consisting of a spectrophotometric survey carried out with interference filters, showing its power to discriminate among cosmological models and to study the large-scale distribution of galaxies in the Universe.
A new approach to understanding evolution [Val09], namely viewing it through the lens of computation, has already started yielding new insights, e.g., natural selection under sexual reproduction can be interpreted as the Multiplicative Weight Update (MWU) Algorithm in coordination games played among genes [CLPV14]. Using this machinery, we study the role of mutation in changing environments in the presence of sexual reproduction. Following [WVA05], we model changing environments via a Markov chain, with the states representing environments, each with its own fitness matrix. In this setting, we show that in the absence of mutation, the population goes extinct, but in the presence of mutation, the population survives with positive probability. On the way to proving the above theorem, we need to establish some facts about dynamics in games. We provide the first, to our knowledge, polynomial convergence bound for noisy MWU in a coordination game. Finally, we also show that in static environments, sexual evolution with mutation converges, for any level of mutation.
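A minimal numpy sketch of the (linearized) MWU dynamic in a two-player coordination game with a shared payoff matrix; this is a toy illustration of the update rule, not the paper's population-genetics setup, and the step size `eps` is an assumption.

```python
# Sketch: multiplicative weight update in a two-player coordination game.
import numpy as np

def mwu_step(x: np.ndarray, y: np.ndarray, A: np.ndarray, eps: float = 0.1):
    """Each strategy's frequency is reweighted by its expected payoff."""
    px = x * (1 + eps * (A @ y))       # payoffs to row strategies
    py = y * (1 + eps * (A.T @ x))     # payoffs to column strategies
    return px / px.sum(), py / py.sum()

A = np.array([[1.0, 0.2], [0.2, 1.0]])     # shared (coordination) payoff matrix
x = y = np.array([0.6, 0.4])
for _ in range(200):
    x, y = mwu_step(x, y, A)
print(x, y)                                 # converges to a pure coordination point
```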
The mode-fluctuation distribution $P(W)$ is studied for chaotic as well as for non-chaotic quantum billiards. This statistic is discussed in the broader framework of the $E(k,L)$ functions, i.e., the probability of finding $k$ energy levels in a randomly chosen interval of length $L$; the distribution of $n(L)$, where $n(L)$ is the number of levels in such an interval; and their cumulants $c_k(L)$. It is demonstrated that the cumulants provide a possible measure for the distinction between chaotic and non-chaotic systems. The vanishing of the normalized cumulants $C_k$, $k\geq 3$, implies a Gaussian behaviour of $P(W)$, which is realized in the case of chaotic systems, whereas non-chaotic systems display non-vanishing values for these cumulants leading to a non-Gaussian behaviour of $P(W)$. For some integrable systems there exist rigorous proofs of the non-Gaussian behaviour which are also discussed. Our numerical results and the rigorous results for integrable systems suggest that a clear fingerprint of chaotic systems is provided by a Gaussian distribution of the mode-fluctuation distribution $P(W)$.
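A hedged Python sketch (not the paper's code) of how one might estimate cumulants of the level count $n(L)$ from a spectrum; the normalization $C_k = c_k / c_2^{k/2}$ used here is one common choice and is an assumption on my part.

```python
# Sketch: estimate normalized cumulants C_3, C_4 of n(L) from a level sequence.
import numpy as np
from scipy.stats import kstat   # unbiased k-statistics (cumulant estimators)

def normalized_cumulants(levels: np.ndarray, L: float,
                         samples: int = 2000, seed: int = 0) -> dict:
    rng = np.random.default_rng(seed)
    starts = rng.uniform(levels.min(), levels.max() - L, samples)
    counts = np.array([((levels >= a) & (levels < a + L)).sum() for a in starts])
    c2 = kstat(counts, 2)
    return {k: kstat(counts, k) / c2 ** (k / 2) for k in (3, 4)}

# Poisson-like (uncorrelated) levels give visibly non-zero C_3, C_4,
# in line with non-Gaussian P(W) for non-chaotic systems:
levels = np.sort(np.random.default_rng(1).uniform(0, 1000, 1000))
print(normalized_cumulants(levels, L=5.0))
```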
Parallel acquisition systems arise in various applications in order to moderate problems caused by insufficient measurements in single-sensor systems. These systems allow simultaneous data acquisition in multiple sensors, thus alleviating such problems by providing more overall measurements. In this work we consider the combination of compressed sensing with parallel acquisition. We establish the theoretical improvements of such systems by providing recovery guarantees for which, subject to appropriate conditions, the number of measurements required per sensor decreases linearly with the total number of sensors. Throughout, we consider two different sampling scenarios -- distinct (corresponding to independent sampling in each sensor) and identical (corresponding to dependent sampling between sensors) -- and a general mathematical framework that allows for a wide range of sensing matrices (e.g., subgaussian random matrices, subsampled isometries, random convolutions and random Toeplitz matrices). We also consider not just the standard sparse signal model, but also the so-called sparse-in-levels signal model, which includes both sparse and distributed signals and sparse and clustered signals. As our results show, optimal recovery guarantees for both distinct and identical sampling are possible under much broader conditions on the so-called sensor profile matrices (which characterize environmental conditions between a source and the sensors) for the sparse-in-levels model than for the sparse model. To verify our recovery guarantees we provide numerical results showing phase transitions for a number of different multi-sensor environments.
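An illustrative numpy sketch of the distinct-sampling measurement model, with each sensor c observing y_c = A_c H_c x for its own sensing matrix A_c and sensor-profile matrix H_c. The diagonal profiles, all sizes, and the minimal OMP solver are my own stand-ins for the recovery algorithms whose guarantees the paper analyzes.

```python
# Sketch: stacked multi-sensor compressed sensing with per-sensor profiles.
import numpy as np

rng = np.random.default_rng(1)
n, m, C, k = 128, 16, 8, 5                 # signal dim, rows/sensor, sensors, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

blocks = []
for _ in range(C):                          # independent (distinct) sampling per sensor
    H = np.diag(rng.uniform(0.5, 1.5, n))   # toy diagonal sensor profile
    A = rng.standard_normal((m, n)) / np.sqrt(C * m)
    blocks.append(A @ H)
Phi = np.vstack(blocks)                     # stacked multi-sensor operator
y = Phi @ x

def omp(Phi, y, k):
    """Minimal orthogonal matching pursuit, for illustration only."""
    residual, support, coef = y.copy(), [], np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

print(np.linalg.norm(omp(Phi, y, k) - x))   # small: C*m rows in total suffice
```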
We present 850 and 450 micron observations of the dense regions within the Auriga-California molecular cloud using SCUBA-2 as part of the JCMT Gould Belt Legacy Survey to identify candidate protostellar objects, measure the masses of their circumstellar material (disk and envelope), and compare the star formation to that in the Orion A molecular cloud. We identify 59 candidate protostars based on the presence of compact submillimeter emission, complementing these observations with existing Herschel/SPIRE maps. Of our candidate protostars, 24 are associated with young stellar objects (YSOs) in the Spitzer and Herschel/PACS catalogs of 166 and 60 YSOs, respectively (177 unique), confirming their protostellar nature. The remaining 35 candidate protostars are in regions, particularly around LkHα 101, where the background cloud emission is too bright to verify or rule out the presence of the compact 70 micron emission that is expected for a protostellar source. We keep these candidate protostars in our sample but note that they may indeed be prestellar in nature. Our observations are sensitive to the high end of the mass distribution in Auriga-Cal. We find that the disparity between the richness of infrared star forming objects in Orion A and the sparsity in Auriga-Cal extends to the submillimeter, suggesting that the relative star formation rates have not varied over the Class II lifetime and that Auriga-Cal will maintain a lower star formation efficiency.
I show that the characteristic diffusion timescale and the gamma-ray escape timescale of Type Ia supernova (SN Ia) ejecta are related to each other through the time when the bolometric luminosity, $L_{\rm bol}$, intersects the instantaneous radioactive decay luminosity, $L_\gamma$, for the second time after the light-curve peak. Analytical arguments, numerical radiation-transport calculations, and observational tests show that $L_{\rm bol}$ generally intersects $L_\gamma$ at roughly $1.7$ times the characteristic diffusion timescale of the ejecta. This relation implies that the gamma-ray escape timescale is typically 2.7 times the diffusion timescale, and also that the bolometric luminosity 15 days after the peak, $L_{\rm bol}(t_{15})$, must be close to the instantaneous decay luminosity at that time, $L_\gamma (t_{15})$. With the employed calculations and observations, the accuracy of $L_{\rm bol}=L_\gamma$ at $t=t_{15}$ is found to be comparable to that of the simple version of "Arnett's rule" ($L_{\rm bol}=L_\gamma$ at $t=t_{\rm peak}$). This relation aids the interpretation of SN Ia light curves and may also be applicable to general hydrogen-free explosion scenarios powered by other central engines.
Persuasion and argumentation are possibly among the most complex examples of the interplay between multiple human subjects. With the advent of the Internet, online forums provide wide platforms for people to share their opinions and reasonings around various diverse topics. In this work, we attempt to model persuasive interaction between users on Reddit, a popular online discussion forum. We propose a deep LSTM model to classify whether a conversation leads to a successful persuasion or not, and use this model to predict whether a certain chain of arguments can lead to persuasion. While learning persuasion dynamics, our model tends to identify argument facets implicitly, using an attention mechanism. We also propose a semi-supervised approach to extract argumentative components from discussion threads. Both these models provide useful insight into how people engage in argumentation on online discussion forums.
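A hedged PyTorch sketch of an LSTM-with-attention classifier over a chain of argument embeddings; the layer sizes, the attention form, and all names are assumptions for illustration, not the paper's architecture.

```python
# Sketch: LSTM over conversation turns with attention, predicting persuasion success.
import torch
import torch.nn as nn

class PersuasionLSTM(nn.Module):
    def __init__(self, emb_dim=128, hid=64):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hid, batch_first=True)
        self.attn = nn.Linear(hid, 1)        # scores each turn in the thread
        self.out = nn.Linear(hid, 1)         # persuaded vs. not persuaded

    def forward(self, x):                    # x: (batch, turns, emb_dim)
        h, _ = self.lstm(x)                  # (batch, turns, hid)
        a = torch.softmax(self.attn(h).squeeze(-1), dim=1)   # attention over turns
        ctx = (a.unsqueeze(-1) * h).sum(dim=1)               # weighted summary
        return torch.sigmoid(self.out(ctx)).squeeze(-1)      # P(success)

model = PersuasionLSTM()
print(model(torch.randn(2, 10, 128)).shape)  # torch.Size([2])
```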
Activity of an inhibitory neuron with delayed feedback is considered in the framework of point stochastic processes. The neuron receives excitatory input impulses from a Poisson stream and inhibitory impulses from the feedback line with a delay. We investigate here how the presence of inhibitory feedback affects the output firing statistics. Using the binding neuron (BN) as a model, we derive analytically the exact expressions for the output interspike interval (ISI) probability density, the mean output ISI, and the coefficient of variation as functions of the model's parameters for the case of threshold 2. Using the leaky integrate-and-fire (LIF) model, as well as the BN model with higher thresholds, these statistical quantities are found numerically. In contrast to the previously studied situation of no feedback, the ISI probability densities found here both for the BN and the LIF neuron become bimodal and have a discontinuity of jump type. Nevertheless, the presence of inhibitory delayed feedback was not found to affect substantially the output ISI coefficient of variation, which ranges between 0.5 and 1. It is concluded that the introduction of delayed inhibitory feedback can radically change neuronal output firing statistics. These statistics are also distinct from what was found previously (Vidybida & Kravchuk, 2009) by a similar method for an excitatory neuron with delayed feedback.
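A toy Python simulation in the spirit of the setup above: an LIF neuron driven by a Poisson stream of excitatory impulses, with each output spike fed back as an inhibitory impulse after a fixed delay. All parameter values are illustrative, not taken from the paper.

```python
# Toy LIF with delayed inhibitory self-feedback; estimates the ISI mean and CV.
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-4, 50.0                  # time step and total duration [s]
tau, vth = 20e-3, 1.0               # membrane time constant [s], firing threshold
w_exc, w_inh, delay = 0.35, 0.35, 8e-3
rate = 200.0                        # Poisson input rate [Hz]

v, spikes, feedback = 0.0, [], []
for k in range(int(T / dt)):
    t = k * dt
    v *= np.exp(-dt / tau)                       # leak
    if rng.random() < rate * dt:                 # excitatory Poisson impulse
        v += w_exc
    while feedback and feedback[0] <= t:         # inhibitory impulse from the line
        feedback.pop(0)
        v = max(v - w_inh, 0.0)
    if v >= vth:                                 # output spike: reset and feed back
        spikes.append(t)
        feedback.append(t + delay)
        v = 0.0

isi = np.diff(spikes)
print(f"mean ISI = {isi.mean():.4f} s, CV = {isi.std() / isi.mean():.2f}")
```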
The structural properties of Er-doped AlNO epilayers grown by radio frequency magnetron sputtering were studied by Extended X-ray Absorption Fine Structure (EXAFS) spectra recorded at the Er L$_3$ edge. The analysis revealed that Er substitutes for Al in all the studied samples and that the increase in Er concentration from 0.5 to 3.6 at.% is not accompanied by the formation of ErN, Er$_2$O$_3$ or Er clusters. Simultaneously recorded X-ray Absorption Near Edge Structure (XANES) spectra verify that the bonding configuration of Er is similar in all studied samples. The Er-N distance is constant at 2.18-2.19 {\AA}, i.e. approximately 15% larger than the Al-N bond length, revealing that the introduction of Er in the cation sublattice causes considerable local distortion. The Debye-Waller factor, which measures the static disorder, of the second nearest shell of Al neighbors has a local minimum for the sample containing 1% Er, which coincides with the highest photoluminescence efficiency of the sample set.
We report initial measurements on our first MoAu Transition Edge Sensors (TESs). The TESs, formed from a bilayer of 40 nm of Mo and 106 nm of Au, showed transition temperatures of about 320 mK, higher than identical TESs with a MoCu bilayer, which is consistent with a reduced electron transmission coefficient between the bilayer films. We report measurements of the thermal conductance in the 200 nm thick silicon nitride (SiN$_x$) support structures at this temperature, the TES dynamic behaviour, and current noise measurements.
We present a quantitative study of strain correlations in quiescent supercooled liquids and glasses. Recent two-dimensional computer simulations and experiments indicate that even supercooled liquids exhibit long-lived, long-range strain correlations. Here we investigate this issue in three dimensions via experiments on hard-sphere colloids and molecular dynamics simulations of a glass-forming binary Lennard-Jones mixture. Both in the glassy state and in the supercooled regime, strain correlations are found to decay with a $1/r^{3}$ power-law behavior, reminiscent of elastic fields around an inclusion. Moreover, theoretical predictions on the time dependence of the correlation amplitude are in line with the results obtained from experiments and simulations. It is argued that the size of the domain which exhibits a "solid-like" cooperative strain pattern in a supercooled liquid is determined by the product of the speed of sound and the structural relaxation time. While this length is of the order of nanometers in the normal liquid state, it grows to macroscale when approaching the glass transition.
We present an analysis of a Suzaku observation of the link region between the galaxy clusters A399 and A401. We obtained the metallicity of the intracluster medium (ICM) up to the cluster virial radii for the first time. We determined the metallicity where the virial radii of the two clusters cross each other (~2 Mpc away from their centers) and found that it is comparable to that in their inner regions (~0.2 Zsun). It is unlikely that the uniformity of metallicity up to the virial radii is due to mixing caused by a cluster collision. Since the ram pressure is too small to strip the interstellar medium of galaxies around the virial radius of a cluster, the fairly high metallicity that we found there indicates that the metals in the ICM are not transported from member galaxies by ram-pressure stripping. Instead, the uniformity suggests that the proto-cluster region was extensively polluted with metals by extremely powerful outflows (superwinds) from galaxies before the clusters formed. We also searched for the oxygen emission from the warm--hot intergalactic medium in that region and obtained a strict upper limit on the hydrogen density ($n_{\rm H} < 4.1 \times 10^{-5}$ cm$^{-3}$).
In this paper, we prove a necessary and sufficient condition for the Tracy-Widom law of Wigner matrices. Consider $N \times N$ symmetric Wigner matrices $H$ with $H_{ij} = N^{-1/2} x_{ij}$, whose upper-right entries $x_{ij}$ $(1\le i< j\le N)$ are i.i.d. random variables with distribution $\mu$ and whose diagonal entries $x_{ii}$ $(1\le i\le N)$ are i.i.d. random variables with distribution $\widetilde{\mu}$. The means of $\mu$ and $\widetilde{\mu}$ are zero, the variance of $\mu$ is 1, and the variance of $\widetilde{\mu}$ is finite. We prove that the Tracy-Widom law holds if and only if $\lim_{s\to \infty}s^4\,\mathbb{P}(|x_{12}| \ge s)=0$. The same criterion holds for Hermitian Wigner matrices.