Recent work has found that adversarially-robust deep networks used for image classification are more interpretable: their feature attributions tend to be sharper, and are more concentrated on the objects associated with the image's ground-truth class. We show that smooth decision boundaries play an important role in this enhanced interpretability, as the model's input gradients around data points will more closely align with boundaries' normal vectors when they are smooth. Thus, because robust models have smoother boundaries, the results of gradient-based attribution methods, like Integrated Gradients and DeepLift, will capture more accurate information about nearby decision boundaries. This understanding of robust interpretability leads to our second contribution: \emph{boundary attributions}, which aggregate information about the normal vectors of local decision boundaries to explain a classification outcome. We show that by leveraging the key factors underpinning robust interpretability, boundary attributions produce sharper, more concentrated visual explanations -- even on non-robust models. An example implementation can be found at \url{https://github.com/zifanw/boundary}.
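As a toy illustration of the gradient-boundary alignment this abstract relies on, the following sketch (ours, not the authors' released code) computes a boundary-style attribution for a two-class linear model, where the boundary normal and the input gradient of the logit margin coincide exactly:

```python
import numpy as np

# Toy two-class linear model: logits = W @ x + b.  Its decision boundary
# {x : (W[0]-W[1]) @ x + (b[0]-b[1]) = 0} has normal n = W[0] - W[1],
# and the input gradient of the logit margin equals n everywhere -- the
# exact alignment that, per the abstract, holds only approximately for
# smooth nonlinear models.
W = np.array([[2.0, -1.0], [0.5, 1.5]])
b = np.array([0.0, 0.2])

def margin(x):
    """Logit difference between class 0 and class 1."""
    return (W[0] - W[1]) @ x + (b[0] - b[1])

def margin_grad(x):
    """Input gradient of the margin (constant for a linear model)."""
    return W[0] - W[1]

def boundary_attribution(x):
    """Project x onto the nearest boundary point, then attribute each
    feature by (x - x_boundary) * n -- a minimal stand-in for the
    boundary attributions described in the abstract."""
    n = margin_grad(x)
    x_b = x - (margin(x) / (n @ n)) * n   # closest point on the boundary
    return (x - x_b) * n

x = np.array([1.0, 2.0])
attr = boundary_attribution(x)
# Completeness-style check: the attributions sum to the margin itself.
print(attr, attr.sum(), margin(x))
```

For a nonlinear network the boundary point would instead be found by a search (e.g. a minimal adversarial perturbation), with the normal taken from the gradient at that point.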
This short note describes a connection between algorithmic dimensions of individual points and classical pointwise dimensions of measures.
It was conjectured by Ohba, and proved by Noel, Reed and Wu that $k$-chromatic graphs $G$ with $|V(G)| \le 2k+1$ are chromatic-choosable. This upper bound on $|V(G)|$ is tight: if $k$ is even, then $K_{3 \star (k/2+1), 1 \star (k/2-1)}$ and $K_{4, 2 \star (k-1)}$ are $k$-chromatic graphs with $2 k+2$ vertices that are not chromatic-choosable. It was proved in [arXiv:2201.02060] that these are the only non-$k$-choosable complete $k$-partite graphs with $2k+2$ vertices. For $G =K_{3 \star (k/2+1), 1 \star (k/2-1)}$ or $K_{4, 2 \star (k-1)}$, a bad list assignment of $G$ is a $k$-list assignment $L$ of $G$ such that $G$ is not $L$-colourable. Bad list assignments for $G=K_{4, 2 \star (k-1)}$ were characterized in [Discrete Mathematics 244 (2002), 55-66]. In this paper, we first give a simpler proof of this result, and then we characterize bad list assignments for $G=K_{3 \star (k/2+1), 1 \star (k/2-1)}$. Using these results, we characterize all non-$k$-choosable (non-complete) $k$-partite graphs with $2k+2$ vertices.
In this paper, we introduce the Interpretable Cross-Examination Technique (ICE-T), a novel approach that leverages structured multi-prompt techniques with Large Language Models (LLMs) to improve classification performance over zero-shot and few-shot methods. In domains where interpretability is crucial, such as medicine and law, standard models often fall short due to their "black-box" nature. ICE-T addresses these limitations by using a series of generated prompts that allow an LLM to approach the problem from multiple directions. The responses from the LLM are then converted into numerical feature vectors and processed by a traditional classifier. This method not only maintains high interpretability but also allows for smaller, less capable models to achieve or exceed the performance of larger, more advanced models under zero-shot conditions. We demonstrate the effectiveness of ICE-T across a diverse set of data sources, including medical records and legal documents, consistently surpassing the zero-shot baseline in terms of classification metrics such as F1 scores. Our results indicate that ICE-T can be used for improving both the performance and transparency of AI applications in complex decision-making environments.
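A minimal sketch of the ICE-T pipeline described above, with a hypothetical `ask_llm` stand-in for the real LLM call and a nearest-centroid model standing in for the "traditional classifier"; the prompts, keyword heuristic, and data are invented for illustration only:

```python
import numpy as np

# Hypothetical cross-examination prompts (not from the paper).
PROMPTS = [
    "Does the record mention chest pain?",
    "Is the patient older than 65?",
    "Is a follow-up visit scheduled?",
]

def ask_llm(prompt, document):
    # Mock stand-in for an LLM call: answers "yes" iff the last word
    # of the prompt appears in the document.
    keyword = prompt.lower().split()[-1].strip("?")
    return "yes" if keyword in document.lower() else "no"

def featurize(document):
    """Cross-examine the document with every prompt and map the
    yes/no answers to a numeric feature vector."""
    return np.array([1.0 if ask_llm(p, document) == "yes" else 0.0
                     for p in PROMPTS])

# Tiny synthetic training set for the downstream classifier.
train_docs = ["chest pain, age 70", "routine checkup, follow-up scheduled"]
labels = np.array([1, 0])

X = np.stack([featurize(d) for d in train_docs])
centroids = {y: X[labels == y].mean(axis=0) for y in set(labels.tolist())}

def classify(document):
    # Nearest-centroid decision on the prompt-derived feature vector.
    f = featurize(document)
    return min(centroids, key=lambda y: np.linalg.norm(f - centroids[y]))

print(classify("patient reports chest pain"))
```

In the actual method the feature vectors would feed any conventional interpretable classifier (e.g. logistic regression or gradient-boosted trees), with each feature traceable back to its generating prompt.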
We present spectral calibration equations for determining mafic silicate composition of near-Earth asteroid (25143) Itokawa from visible/near-infrared spectra measured using the Near Infrared Spectrometer (NIRS), on board the Japanese Hayabusa spacecraft. Itokawa was the target of the Hayabusa sample return mission and has a surface composition similar to LL-type ordinary chondrites. Existing laboratory spectral calibrations use a spectral wavelength range that is wider (0.75-2.5 microns) than that of the NIRS instrument (0.85-2.1 microns), making them unfit for interpreting the Hayabusa spectral data currently archived in the Planetary Data System. We used laboratory-measured near-infrared reflectance spectra of ordinary (H, L and LL) chondrites from the study of Dunn et al. (2010), which we resampled to the NIRS wavelength range. Using spectral parameters extracted from these resampled spectra we established a relationship between band parameters and their mafic silicate composition (olivine and low-Ca pyroxene). We found a correlation >90% between mafic silicate composition (fayalite and forsterite mol. %) estimated by our spectral method and X-ray diffraction (XRD) measured values. To test the validity of the newly derived equations we blind tested them using nine laboratory-measured spectra of L and LL type chondrites with known composition. We found that the absolute difference between the measured and computed values is in the range 0.1 to 1.6 mol. %. Our study suggests that the derived calibration is robust and can be applied to Hayabusa NIRS data despite its limited spectral range. We applied the derived equations to a subset of uncalibrated NIRS spectra and the derived fayalite and ferrosilite values are consistent with Itokawa having an LL chondrite-type surface composition.
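The calibration idea can be sketched as a simple least-squares fit between a spectral band parameter and composition; the data points and fitted coefficients below are synthetic, not the Dunn et al. (2010) values:

```python
import numpy as np

# Illustrative calibration: fit a linear relation between a band
# parameter (e.g. Band I center, in microns) and fayalite content
# (mol. %), then apply it to a new measurement.  These data points are
# synthetic, for illustration only.
band_center = np.array([0.92, 0.95, 0.98, 1.00, 1.03])   # microns
fayalite    = np.array([16.0, 21.0, 25.5, 28.0, 31.5])   # mol. %

slope, intercept = np.polyfit(band_center, fayalite, 1)

def predict_fayalite(center_um):
    """Apply the fitted calibration to a measured band center."""
    return slope * center_um + intercept

fa = predict_fayalite(0.97)
print(round(fa, 1))
```

The real calibration uses several band parameters (centers, areas, band area ratio) resampled to the NIRS range, but the fit-then-apply structure is the same.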
We study the structure of SU(5) F-theory GUT models that engineer additional U(1) symmetries. These are highly constrained by a set of relations observed by Dudas and Palti (DP) that originate from the physics of 4D anomaly cancellation. Using the DP relations, we find a general tension between unification and the suppression of dimension 5 proton decay when one or more U(1)'s are PQ symmetries and hypercharge flux is used to break the SU(5) GUT group. We then specialize to spectral cover models, whose global completions in F-theory we know how to construct. In that setting, we provide a technical derivation of the DP relations, construct spectral covers that yield all possible solutions to them, and provide a complete survey of spectral cover models for SU(5) GUTs that exhibit two U(1) symmetries.
In a photonic realization of qubits, the implementation of quantum logic is rather difficult due to the extremely weak interaction at the few-photon level. On the other hand, in these systems interference is available to process the quantum states. We formalize the use of interference by the definition of a simple class of operations which includes linear optical elements, auxiliary states and conditional operations. We investigate an important subclass of these tools, namely linear optical elements and auxiliary modes in the vacuum state. For these tools, we are able to extend a previous qualitative result, a no-go theorem for a perfect Bell state analyzer on two qubits in polarization entanglement, by a quantitative statement. We show that within this subclass it is not possible to discriminate unambiguously between the four equiprobable Bell states with a probability higher than 50%.
For a solar flare occurring on 2010 November 3, we present observations using several SDO/AIA extreme-ultraviolet (EUV) passbands of an erupting flux rope followed by inflows sweeping into a current sheet region. The inflows are soon followed by outflows appearing to originate from near the termination point of the inflowing motion - an observation in line with standard magnetic reconnection models. We measure average inflow plane-of-sky speeds to range from ~150-690 km/s with the initial, high-temperature inflows being the fastest. Using the inflow speeds and a range of Alfven speeds, we estimate the Alfvenic Mach number which appears to decrease with time. We also provide inflow and outflow times with respect to RHESSI count rates and find that the fast, high-temperature inflows occur simultaneously with a peak in the RHESSI thermal lightcurve. Five candidate inflow-outflow pairs are identified with no more than a minute delay between detections. The inflow speeds of these pairs are measured to be 10^2 km/s with outflow speeds ranging from 10^2-10^3 km/s - indicating acceleration during the reconnection process. The fastest of these outflows are in the form of apparently traveling density enhancements along the legs of the loops rather than the loop apexes themselves. These flows could either be accelerated plasma, shocks, or waves prompted by reconnection. The measurements presented here show an order of magnitude difference between the retraction speeds of the loops and the speed of the density enhancements within the loops - presumably exiting the reconnection site.
We study the annihilation of topological solitons in the simplest setting: a one-dimensional ferromagnet with an easy axis. We develop an effective theory of the annihilation process in terms of four collective coordinates: two zero modes of the translational and rotational symmetries $Z$ and $\Phi$, representing the average position and azimuthal angle of the two solitons, and two conserved momenta $\zeta$ and $\varphi$, representing the relative distance and twist. Comparison with micromagnetic simulations shows that our approach captures well the essential physics of the process.
We have systematically investigated the optical conductivity spectra of La$_{2-2x}$Sr$_{1+2x}$Mn$_{2}$O$_{7}$ ($0.3 \leqslant x \leqslant 0.5$). We find that just above the magnetic ordering temperatures, the optical gap shows an enhancement up to $\sim 0.3$ eV near $x=0.4$. Based on an $x$-dependent comparison of the nesting vector of the hypothetical Fermi surface and the superlattice wave-vector, we suggest that the peculiar $x$-dependence of the optical gap can be understood in terms of charge and lattice correlations enhanced by the charge density wave instability in the nested Fermi surface.
We use a single-channel scattering formalism to derive general expressions for the Andreev bound-state energies and resulting current--phase relationship in a one-dimensional semiconductor-based SNS-junction, including arbitrarily oriented effective spin--orbit and Zeeman fields and taking into account disorder in the junction by including a single scatterer with transmission probability $0 \leq T^2 \leq 1$, arbitrarily located in the normal region. We first corroborate our results by comparing them to the known analytic limiting-case expressions. Then we simplify our general result in several additional limits, including the case of the scatterer being at a specific location and the cases of small and large spin--orbit fields compared to the Zeeman splitting, assuming low transparency ($T\ll 1$). We believe that our results could be helpful for disentangling the main spin-mixing processes in experiments on low-dimensional semiconductor-based SNS-junctions.
We provide a minimal alternative gauged U(1)$_{\rm B-L}$ model in which three right-handed neutrinos with charges of (5,-4,-4) and only one B-L Higgs field with charge 1 are introduced. Consistent active neutrino masses and mixings can be obtained if a $Z_2$ symmetry on two of the right-handed neutrinos is introduced. The model predicts two heavy degenerate right-handed neutrinos, which may realize a resonant leptogenesis scenario, and one relatively light sterile neutrino, which is a good dark matter candidate.
Since decoupling in the early universe in helicity states, primordial neutrinos propagating in astrophysical magnetic fields precess and undergo helicity changes. In view of the XENON1T experiment possibly finding a large magnetic moment of solar neutrinos, we estimate the helicity flipping for relic neutrinos in both cosmic and galactic magnetic fields. The flipping probability is sensitive both to the neutrino magnetic moment and the structure of the magnetic fields, thus potentially a probe of the fields. As we find, even a magnetic moment well below that suggested by XENON1T could significantly affect relic neutrino helicities and their detection rate via inverse tritium beta decay.
In a rotating black hole background surrounded by dark matter, we investigated the super-radiant phenomenon of a massive scalar field and its associated instability. Using the method of asymptotic matching, we computed the amplification factor of scalar wave scattering to assess the strength of super-radiance. We discussed the influence of the dark matter density on the amplification factor in this black hole background. Our result indicates that the presence of dark matter has a suppressive influence on black hole super-radiance. We also computed the net extracted energy to further support this result. Finally, we analyzed the super-radiant instability caused by the massive scalar field using the black hole bomb mechanism and found that the presence of dark matter has no influence on the super-radiant instability condition.
Despite exciting progress in pre-training for visual-linguistic (VL) representations, very few aspire to a small VL model. In this paper, we study knowledge distillation (KD) to effectively compress a transformer-based large VL model into a small VL model. The major challenge arises from the inconsistent regional visual tokens extracted from different detectors of Teacher and Student, resulting in the misalignment of hidden representations and attention distributions. To address the problem, we retrain and adapt the Teacher by using the same region proposals from Student's detector while the features are from Teacher's own object detector. With aligned network inputs, the adapted Teacher is capable of transferring the knowledge through the intermediate representations. Specifically, we use the mean square error loss to mimic the attention distribution inside the transformer block and present a token-wise noise contrastive loss to align the hidden state by contrasting with negative representations stored in a sample queue. As a result, we show that our proposed distillation significantly improves the performance of small VL models on image captioning and visual question answering tasks. It reaches 120.8 in CIDEr score on COCO captioning, an improvement of 5.1 over its non-distilled counterpart; and an accuracy of 69.8 on VQA 2.0, a 0.8 gain from the baseline. Our extensive experiments and ablations confirm the effectiveness of VL distillation in both pre-training and fine-tuning stages.
RX J0123.4-7321 is a well-established Be star X-ray binary system (BeXRB) in the Small Magellanic Cloud (SMC). Like many such systems, the variable X-ray emission is driven by the underlying behaviour of the mass-donor Be star. Previous work has shown that the optical and X-ray emission were characterised by regular outbursts at the proposed binary period of 119 d. However, around February 2008 the optical behaviour changed substantially, with the previously regular optical outbursts ending. Reported here are new optical (OGLE) and X-ray (Swift) observations covering the period after 2008 which suggest an almost total circumstellar disc loss followed by a gradual recovery. This indicates the probable transition of a Be star to a B star, and back again. However, at the time of the most recent OGLE data (March 2020) the characteristic periodic outbursts had yet to return to their earlier state, indicating that the disc still had some re-building yet to complete.
Multimodal Variational Autoencoders (VAEs) represent a promising group of generative models that facilitate the construction of a tractable posterior within the latent space, given multiple modalities. Daunhawer et al. (2022) demonstrate that as the number of modalities increases, the generative quality of each modality declines. In this study, we explore an alternative approach to enhance the generative performance of multimodal VAEs by jointly modeling the latent space of unimodal VAEs using score-based models (SBMs). The role of the SBM is to enforce multimodal coherence by learning the correlation among the latent variables. Consequently, our model combines the superior generative quality of unimodal VAEs with coherent integration across different modalities.
Within the closed time path formalism a general nonperturbative expression is derived which resums through the Bethe-Salpeter equation all leading order contributions to the shear viscosity in hot scalar field theory. Using a previously derived generalized fluctuation-dissipation theorem for nonlinear response functions in the real-time formalism, it is shown that the Bethe-Salpeter equation decouples in the so-called (r,a) basis. The general result is applied to scalar field theory with pure lambda*phi**4 and mixed g*phi**3+lambda*phi**4 interactions. In both cases our calculation confirms the leading order expression for the shear viscosity previously obtained in the imaginary time formalism.
We present the first-order coherent exclusive exponentiation (CEEX) scheme, with full control over spin polarization for all fermions. In particular, it is applicable to the difficult case of narrow resonances. The resulting spin amplitudes and differential distributions are given in a form ready for implementation in a Monte Carlo event generator. The initial-final state interferences are under control. The way is open to the use of exact amplitudes for two and more hard photons, using Weyl-spinor techniques, without giving up the advantages of exclusive exponentiation of the Yennie-Frautschi-Suura type.
The first globally convergent numerical method for a Coefficient Inverse Problem (CIP) for the Riemannian Radiative Transfer Equation (RRTE) is constructed. This is a version of the so-called \textquotedblleft convexification\textquotedblright{} method, which has been pursued by this research group for a number of years for some other CIPs for PDEs. Those PDEs are significantly different from the RRTE. The presence of the Carleman Weight Function (CWF) in the numerical scheme is the key element which ensures the global convergence. Convergence analysis is presented along with the results of numerical experiments, which confirm the theory. RRTE governs the propagation of photons in the diffuse medium in the case when they propagate along geodesic lines between their collisions. Geodesic lines are generated by the spatially variable dielectric constant of the medium.
Recently, Dotsenko and Tamaroff have shown that a morphism $T\longrightarrow S$ of monads over a category $\mathscr C$ satisfies the PBW-property if and only if it makes $S$ into a free right $T$-module. We consider an adjunction $\Psi=(G,F)$ between categories $\mathscr C$, $\mathscr D$, a monad $S$ on $\mathscr C$ and a monad $T$ on $\mathscr D$. We show that a morphism $\phi:(\mathscr C,S)\longrightarrow (\mathscr D,T)$ that is well behaved with respect to the adjunction $\Psi$ has a PBW-property if and only if it makes $S$ satisfy a certain freeness condition with respect to $T$-modules with values in $\mathscr C$.
This paper presents an elementary and direct proof of the Fundamental Theorem of Algebra, via the Bolzano-Weierstrass Theorem on minima and the Binomial Formula, that avoids: any root extraction other than the one used to define the modulus function over the complex plane; trigonometry; differentiation; integration; series; arguments by induction; and arguments using epsilons and deltas.
Onsager's reciprocity relations for the coefficients of transport equations are now 87 years old. Sometimes these relations are called the Fourth Law of Thermodynamics. Among other things, they provide an effective criterion for the existence of local equilibrium and of microscopic reversibility. Since the beginning of the century, Onsager's relations have seen a revival in the field of spincaloritronics. There the relations are very helpful in judging the utility of modern devices for electronic data processing.
Diabetic retinopathy is the most important complication of diabetes. Early diagnosis of retinal lesions helps to avoid visual loss or blindness. Due to the high resolution of fundus photography and the small size of lesion regions, applying existing methods, such as U-Nets, to perform segmentation is very challenging. Although downsampling the input images could simplify the problem, it loses detailed information. Conducting patch-level analysis helps reach fine-scale segmentation, yet it usually leads to misjudgment due to the lack of context information. In this paper, we propose an efficient network that combines the two, being aware of local details while also making full use of context perceptions. This is implemented by integrating the decoder parts of a global-level U-Net and a patch-level one. The two streams are jointly optimized, ensuring that they enhance each other. Experimental results demonstrate that our new framework significantly outperforms existing patch-based and global-based methods, especially when the lesion regions are scattered and small.
Sound source localization is crucial in acoustic sensing and monitoring-related applications. In this paper, we conduct a comprehensive analysis of the improvement in sound source localization obtained by combining the directions of arrival (DOAs) with their derivatives, which quantify the changes in the positions of sources over time. This study uses the SALSA-Lite feature with a convolutional recurrent neural network (CRNN) model for predicting DOAs and their first-order derivatives. An update rule is introduced to combine the predicted DOAs with the estimated derivatives to obtain the final DOAs. The experimental validation is done using the TAU-NIGENS Spatial Sound Events (TNSSE) 2021 dataset. We compare the performance of the network predicting DOAs with derivatives vs. the one predicting only DOAs at low SNR levels. The results show that combining the derivatives with the DOAs improves the localization accuracy of moving sources.
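The paper's exact update rule is not reproduced here; the following complementary-filter sketch (the blending weight `alpha` and time step are our assumptions) shows one plausible way to combine predicted DOAs with their derivatives:

```python
import numpy as np

def fuse_doa(doa_pred, doa_prev, doa_deriv, dt=0.1, alpha=0.7):
    """Blend the network's current DOA prediction with the previous
    estimate propagated by the predicted derivative.

    doa_pred  -- DOA predicted at the current frame (degrees)
    doa_prev  -- fused DOA from the previous frame (degrees)
    doa_deriv -- predicted first-order derivative (deg/s)
    """
    propagated = doa_prev + doa_deriv * dt
    return alpha * doa_pred + (1.0 - alpha) * propagated

# Toy trajectory: a source moving at +10 deg/s, with noisy frame-wise
# DOA predictions (noise values are arbitrary).
true_doa = np.arange(0.0, 1.0, 0.1) * 10.0          # 10 frames
noisy_pred = true_doa + np.array([1.5, -2.0, 1.0, -1.5, 2.0,
                                  -1.0, 1.2, -0.8, 0.5, -1.3])
fused = [noisy_pred[0]]
for t in range(1, len(true_doa)):
    fused.append(fuse_doa(noisy_pred[t], fused[-1], 10.0))

err_raw = np.mean(np.abs(noisy_pred - true_doa))
err_fused = np.mean(np.abs(np.array(fused) - true_doa))
print(err_raw, err_fused)
```

On this toy trajectory the fused estimate has lower mean absolute error than the raw frame-wise predictions, which is the qualitative effect the paper reports for moving sources.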
Ambi-polar metrics, defined so as to allow the signature to change from +4 to -4 across hypersurfaces, are a mainstay in the construction of BPS microstate geometries. This paper elucidates the cohomology of these spaces so as to greatly simplify the construction of infinite families of fluctuating harmonic magnetic fluxes. It is argued that such fluxes should come from scalar, harmonic pre-potentials whose source loci are holomorphic divisors. This insight is obtained by exploring the Kahler structure of ambi-polar Gibbons-Hawking spaces, and it is shown that differentiating the pre-potentials with respect to Kahler moduli yields solutions to the BPS equations for the electric potentials sourced by the magnetic fluxes. This suggests that harmonic analysis on ambi-polar spaces has a novel and extremely rich structure that is deeply intertwined with the BPS equations. We illustrate our results using a family of two-centered solutions.
In this paper, we review the literature on statistical long-range correlation in DNA sequences. We examine the current evidence for these correlations, and conclude that a mixture of many length scales (including some relatively long ones) in DNA sequences is responsible for the observed 1/f-like spectral component. We note the complexity of the correlation structure in DNA sequences. The observed complexity often makes it hard, or impossible, to decompose the sequence into a few statistically stationary regions. We suggest that, based on the complexity of DNA sequences, a fruitful approach to understand long-range correlation is to model duplication, and other rearrangement processes, in DNA sequences. One model, called ``expansion-modification system'', contains only point duplication and point mutation. Though simplistic, this model is able to generate sequences with 1/f spectra. We emphasize the importance of DNA duplication in its contribution to the observed long-range correlation in DNA sequences.
There are many ways to express similar things in text, which makes evaluating natural language generation (NLG) systems difficult. Compounding this difficulty is the need to assess varying quality criteria depending on the deployment setting. While the landscape of NLG evaluation has been well-mapped, practitioners' goals, assumptions, and constraints -- which inform decisions about what, when, and how to evaluate -- are often partially or implicitly stated, or not stated at all. Combining a formative semi-structured interview study of NLG practitioners (N=18) with a survey study of a broader sample of practitioners (N=61), we surface goals, community practices, assumptions, and constraints that shape NLG evaluations, examining their implications and how they embody ethical considerations.
We study permutability properties of matrix semigroups over commutative bipotent semirings (of which the best-known example is the tropical semiring). We prove that every such semigroup is weakly permutable (a result previously stated in the literature, but with an erroneous proof) and then proceed to study in depth the question of when they are strongly permutable (which turns out to depend heavily on the semiring). Along the way we classify monogenic bipotent semirings and describe all isomorphisms between truncated tropical semirings.
Suppose $\mathfrak{g}=\mathfrak{g}_{\bar 0}+\mathfrak{g}_{\bar 1}$ is a Lie superalgebra of queer type or periplectic type over an algebraically closed field $\textbf{k}$ of characteristic $p>2$. In this article, we initiate the investigation of modular representations of periplectic Lie superalgebras and then verify the first super Kac-Weisfeiler conjecture on the maximal dimensions of irreducible modules for $\mathfrak{g}$, proposed by the second-named author in [Shu], where the conjecture is targeted at all finite-dimensional restricted Lie superalgebras over $\textbf{k}$ and has already been proved for basic classical Lie superalgebras and completely solvable restricted Lie superalgebras.
Federated learning (FL) allows multiple parties (distributed devices) to train a machine learning model without sharing raw data. How to effectively and efficiently utilize the resources on devices and the central server is a highly interesting yet challenging problem. In this paper, we propose an efficient split federated learning algorithm (ESFL) to take full advantage of the powerful computing capabilities at a central server under a split federated learning framework with heterogeneous end devices (EDs). By splitting the model into different submodels between the server and EDs, our approach jointly optimizes user-side workload and server-side computing resource allocation by considering users' heterogeneity. We formulate the whole optimization problem as a mixed-integer non-linear program, which is an NP-hard problem, and develop an iterative approach to obtain an approximate solution efficiently. Extensive simulations have been conducted to validate the significantly increased efficiency of our ESFL approach compared with standard federated learning, split learning, and splitfed learning.
The issue of how to create open-ended evolution in an artificial system is one of the open problems in artificial life. This paper examines two of the factors that have some bearing on this issue, using the Tierra artificial life system. {\em Parsimony pressure} is a tendency to penalise more complex organisms by the extra cost needed to reproduce longer genotypes, encouraging simplification to happen. In Tierra, parsimony is controlled by the \verb+SlicePow+ parameter. When full parsimony is selected, evolution optimises the ancestral organism to produce extremely simple organisms. With parsimony completely relaxed, organisms grow larger, but not more complex. They fill up with ``junk''. This paper looks at scanning a range of \verb+SlicePow+ from 0.9 to 1 to see if there is an optimal value for generating complexity. Tierra (along with most ALife systems) uses pseudo-random number generators. Algorithms can never create information, only destroy it. So the total complexity of the Tierra system is bounded by the initial complexity, implying that the individual organism complexity is bounded. Biological systems, however, have plenty of sources of randomness, ultimately dependent on quantum randomness, so do not have this complexity limit. Sources of real random numbers exist for computers, called {\em entropy gatherers} -- this paper reports on the effect of exchanging Tierra's pseudo-random number generator for an entropy gatherer.
The layered metamagnet CrSBr offers a rich interplay between magnetic, optical and electrical properties that can be extended down to the two-dimensional (2D) limit. Despite the extensive research regarding the long-range magnetic order in magnetic van der Waals materials, short-range correlations have been only loosely investigated. By using Small-Angle Neutron Scattering (SANS) we show the formation of short-range magnetic regions in CrSBr with correlation lengths that increase upon cooling up to ca. 3 nm at the antiferromagnetic ordering temperature (TN ~ 140 K). Interestingly, these ferromagnetic correlations start developing below 200 K, i.e., well above TN. Below TN, these correlations rapidly decrease and are negligible at low temperatures. The experimental results are well-reproduced by an effective spin Hamiltonian, which pinpoints that the short-range correlations in CrSBr are intrinsic down to the monolayer limit, and rules out the appearance of any frustrated phase in CrSBr at low temperatures within our experimental window between 2 and 200 nm. Overall, our results are compatible with a spin freezing scenario of the magnetic fluctuations in CrSBr and highlight SANS as a powerful technique for characterizing the rich physical phenomenology beyond the long-range order paradigm offered by van der Waals magnets.
This paper explores how Generative Adversarial Networks (GANs) learn representations of phonological phenomena. We analyze how GANs encode contrastive and non-contrastive nasality in French and English vowels by applying the ciwGAN architecture (Begus 2021a). Begus claims that ciwGAN encodes linguistically meaningful representations with categorical variables in its latent space, and that manipulating the latent variables shows almost one-to-one control of the phonological features in ciwGAN's generated outputs. However, our results show an interactive effect of latent variables on the features in the generated outputs, which suggests the learned representations in neural networks are different from the phonological representations proposed by linguists. On the other hand, ciwGAN is able to distinguish contrastive and non-contrastive features in English and French by encoding them differently. Comparing the performance of GANs learning from different languages results in a better understanding of what language-specific features contribute to developing language-specific phonological representations. We also discuss the role of training data frequencies in phonological feature learning.
In this paper, we study the persistence and remaining regularity of KAM invariant torus under sufficiently small perturbations of a Hamiltonian function together with its derivatives, in sense of finite smoothness with modulus of continuity, as a generalization of classical H\"{o}lder continuous circumstances. To achieve this goal, we extend the Jackson approximation theorem to the case of modulus of continuity, and establish a corresponding regularity theorem adapting to the new iterative scheme. Via these tools, we establish a KAM theorem with sharp differentiability hypotheses, which asserts that the persistent torus keeps prescribed universal Diophantine frequency unchanged and reaches the regularity for persistent KAM torus beyond H\"older's type.
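For the reader's convenience, a hedged recap of the standard definitions behind the abstract (our notation, not necessarily the paper's):

```latex
% A modulus of continuity is a nondecreasing function
% $\varpi:[0,1]\to[0,\infty)$ with $\lim_{t\to 0^+}\varpi(t)=0$;
% a function $f$ has regularity of class $C^{k,\varpi}$ if its
% $k$-th derivatives satisfy
\[
  \bigl| D^{k} f(x) - D^{k} f(y) \bigr| \;\le\; C\, \varpi\bigl(|x-y|\bigr),
\]
% and the classical H\"older scale $C^{k,\alpha}$ is recovered as the
% special case
\[
  \varpi(t) = t^{\alpha}, \qquad 0 < \alpha \le 1 .
\]
```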
In 1974 Hulse and Taylor discovered the binary pulsar. At that time Prof. Dyson was visiting the Max Planck Institute for Physics at Munich, where I was also working. He initiated a number of discussions on this object. During them it occurred to me that this system could be used to test geodetic precession in Einstein's theory, which, even after years of work by the Stanford gyroscope experiment, had remained a challenge. I showed some preliminary calculations to Prof. Dyson and he encouraged me to do a more refined job. To be applicable to the binary pulsar, one needed to generalise the general relativistic calculations beyond the so-called test particle assumption. Barker and O'Connell had obtained such a result from analysing the gravitational interactions of spin-half Dirac fermions in linearized spin-2 theories of gravitation. With C.F. Cho I produced a purely classical calculation, using Schwinger's source theory. Boerner, Ehlers and Rudolf confirmed this result with their general relativistic calculations shortly after. With V. Radhakrishnan, I gave a detailed model for the pulse width and polarization sweep as a means of observing this effect. All throughout, Prof. Dyson was supportive, reading the manuscripts and offering critical comments. In 2005, coincidentally the centennial of the Annus Mirabilis (1905), Hotan, Bailes and Ord observed this effect in the binary pulsar J1141-6545.
Artificial Intelligence (AI), a cornerstone of 21st-century technology, has seen remarkable growth in China. In this paper, we examine China's AI development process, demonstrating that it is characterized by rapid learning and differentiation, surpassing the export-oriented growth propelled by Foreign Direct Investment seen in earlier Asian industrializers. Our data indicate that China currently leads the USA in the volume of AI-related research papers. However, when we delve into the quality of these papers based on specific metrics, the USA retains a slight edge. Nevertheless, the pace and scale of China's AI development remain noteworthy. We attribute China's accelerated AI progress to several factors, including global trends favoring open access to algorithms and research papers, contributions from China's broad diaspora and returnees, and relatively lax data protection policies. As part of this research, we have developed a novel measure for gauging China's imitation of US research. Our analysis shows that by 2018, the time lag between China and the USA in addressing AI research topics had evaporated. This finding suggests that China has effectively bridged a significant knowledge gap and could potentially be setting out on an independent research trajectory. While this study compares China and the USA exclusively, it's important to note that research collaborations between these two nations have resulted in more highly cited work than those produced by either country independently. This underscores the power of international cooperation in driving scientific progress in AI.
We present a theoretical study of the spectra produced by optical-radio-frequency double resonance devices, in which resonant linearly polarized light is used in the optical pumping and detection processes. We extend previous work by presenting algebraic results which are valid for atomic states with arbitrary angular momenta, arbitrary rf intensities, and arbitrary geometries. The only restriction made is the assumption of low light intensity. The results are discussed in view of their use in optical magnetometers.
It is shown that an arbitrary static, spherically symmetric metric can be presented as an exact solution of a scalar-tensor theory (STT) of gravity with certain nonminimal coupling function $f(\phi)$ and potential $U(\phi)$. The scalar field in this representation can change its nature from canonical to phantom on certain coordinate spheres. This representation, however, is valid in general not in the full range of the radial coordinate but only piecewise. Two examples of STT representations are discussed: for the Reissner-Nordstr\"om metric and for the Simpson-Visser regularization of the Schwarzschild metric (the so-called black bounce space-time).
Gene mutation prediction in hepatocellular carcinoma (HCC) is of great diagnostic and prognostic value for personalized treatments and precision medicine. In this paper, we tackle this problem with multi-instance multi-label learning to address difficulties such as label correlations and label representations. Furthermore, an effective oversampling strategy is applied to handle data imbalance. Experimental results have shown the superiority of the proposed approach.
The Goos-H{\"a}nchen (GH) shift is a particular optical phenomenon in which a finite-width light beam undergoing total internal reflection at the interface of a medium acquires a lateral shift of the reflected beam within the plane of incidence. Although the GH shift in optics has been widely observed experimentally, its generalization to relativistic quantum mechanics has remained incomplete owing to Klein's paradox. Recently, Wang solved Klein's paradox by adopting different solutions of Dirac's equation with a step potential in the corresponding energy regions \href{https://dx.doi.org/10.1088/2399-6528/abd340}{[J. Phys. Commun. {\bf 4}, 125010 (2020)]}. In light of Wang's method, we calculate the GH shift for Dirac fermions under relativistic conditions when they are incident obliquely on a three-dimensional infinite potential barrier. Furthermore, we find that the relativistic quantum GH shift can be negative, which differs from the non-relativistic case.
From the worldsheet perspective, the superpotential on a D-brane wrapping internal cycles of a Calabi-Yau manifold is given as a generating functional for disk correlation functions. On the other hand, from the geometric point of view, D-brane superpotentials are captured by certain chain integrals. In this work, we explicitly show for branes wrapping internal 2-cycles how these two different approaches are related. More specifically, from the worldsheet point of view, D-branes at the Landau-Ginzburg point have a convenient description in terms of matrix factorizations. We use a formula derived by Kapustin and Li to explicitly evaluate disk correlators for families of D2-branes. On the geometry side, we then construct a three-chain whose period gives rise to the effective superpotential and show that the two expressions coincide. Finally, as an explicit example, we choose a particular compact Calabi-Yau hypersurface and compute the effective D2-brane superpotential in different branches of the open moduli space, in both geometric and worldsheet approaches.
In crowd counting, each training image contains multiple people, where each person is annotated by a dot. Existing crowd counting methods need to use a Gaussian to smooth each annotated dot or to estimate the likelihood of every pixel given the annotated point. In this paper, we show that imposing Gaussians to annotations hurts generalization performance. Instead, we propose to use Distribution Matching for crowd COUNTing (DM-Count). In DM-Count, we use Optimal Transport (OT) to measure the similarity between the normalized predicted density map and the normalized ground truth density map. To stabilize OT computation, we include a Total Variation loss in our model. We show that the generalization error bound of DM-Count is tighter than that of the Gaussian smoothed methods. In terms of Mean Absolute Error, DM-Count outperforms the previous state-of-the-art methods by a large margin on two large-scale counting datasets, UCF-QNRF and NWPU, and achieves the state-of-the-art results on the ShanghaiTech and UCF-CC50 datasets. DM-Count reduced the error of the state-of-the-art published result by approximately 16%. Code is available at https://github.com/cvlab-stonybrook/DM-Count.
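As a loose illustration of the loss structure described above—an absolute count term, an Optimal Transport term between normalized density maps, and a Total Variation term—here is a minimal 1-D NumPy sketch. It is not the authors' implementation: it uses entropic (Sinkhorn) regularization for tractability, and all function names and parameter values are hypothetical.

```python
import numpy as np

def sinkhorn_ot(a, b, cost, eps=0.1, n_iters=200):
    """Entropic-regularized OT cost between histograms a, b via Sinkhorn scaling."""
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):         # alternate marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # transport plan
    return float(np.sum(P * cost))

def dm_count_style_loss(pred, gt, coords, lam_tv=0.1):
    """Counting loss: |count error| + OT between normalized maps + TV term."""
    count_loss = abs(pred.sum() - gt.sum())
    p = pred / (pred.sum() + 1e-12)  # normalized predicted density
    q = gt / (gt.sum() + 1e-12)      # normalized ground-truth density
    cost = (coords[:, None] - coords[None, :]) ** 2  # squared 1-D distances
    ot = sinkhorn_ot(p, q, cost)
    tv = 0.5 * np.abs(p - q).sum()
    return count_loss + ot + lam_tv * tv
```

A prediction that matches the annotated dots incurs a much smaller loss than one with the same total count but displaced mass, which is the behavior the OT term is meant to enforce.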
Sap exudation is the process whereby trees such as sugar (Acer saccharum) and red maple (Acer rubrum) generate unusually high positive stem pressure in response to repeated cycles of freeze and thaw. This elevated xylem pressure permits the sap to be harvested over a period of several weeks and hence is a major factor in the viability of the maple syrup industry. The extensive literature on sap exudation documents competing hypotheses regarding the physical and biological mechanisms that drive positive pressure generation in maple, but to date relatively little effort has been expended on devising mathematical models for the exudation process. In this paper, we utilize an existing model of Graf et al. [J. Roy. Soc. Interface 12:20150665, 2015] that describes heat and mass transport within the multiphase gas-liquid-ice mixture in the porous xylem tissue. The model captures the inherent multiscale nature of xylem transport by including phase change and osmotic transport in wood cells on the microscale, which is coupled to heat transport through the tree stem on the macroscale. A parametric study based on simulations with synthetic temperature data identifies the model parameters that have greatest impact on stem pressure build-up. Measured daily temperature fluctuations are then used as model inputs and the resulting simulated pressures are compared directly with experimental measurements taken from mature red and sugar maple stems during the sap harvest season. The results demonstrate that our multiscale freeze-thaw model reproduces realistic exudation behavior, thereby providing novel insights into the specific physical mechanisms that dominate positive pressure generation in maple trees.
We present the results from a monitoring campaign of the Narrow-Line Seyfert 1 galaxy PG 1211+143. The object was monitored with ground-based facilities (UBVRI photometry; from February to July, 2007) and with Swift (X-ray photometry/spectroscopy and UV/Optical photometry; between March and May, 2007). We found PG 1211+143 in a historical low X-ray flux state at the beginning of the Swift monitoring campaign in March 2007. The light curves show that, while violently variable in X-rays, the quasar exhibits little variation in the optical/UV bands. The X-ray spectrum in the low state is similar to those of other Narrow-Line Seyfert 1 galaxies during their low states and can be explained by a strong partial covering absorber or by X-ray reflection off the disk. With the current data set, however, it is not possible to distinguish between the two scenarios. The interband cross-correlation functions indicate a possible reprocessing of the X-rays into the longer wavelengths, consistent with the idea of a thin accretion disk powering the quasar. The time lags between the X-ray and the optical/UV light curves, ranging from ~2 to ~18 days for the different wavebands, scale approximately as ~lambda^(4/3), but appear to be somewhat larger than expected for this object, taking into account its accretion disk parameters. Possible implications for the location of the X-ray irradiating source are discussed.
In this paper, we present fast computational algorithms for the Jacobi sums of orders $l^2$ and $2l^{2}$ with odd prime $l$, formulating them in terms of the minimum number of cyclotomic numbers of the corresponding orders. We also implement two additional algorithms to validate these formulae; these algorithms are also useful for demonstrating the minimality of the number of cyclotomic numbers required.
The concept of a symplectic structure first appeared in the works of Lagrange on the so-called "method of variation of the constants". These works are presented, together with those of Poisson, who first defined the composition law called today the "Poisson bracket". The method of variation of the constants is presented using today's mathematical concepts and notations.
We derive and analyse the full set of equations of motion for non-extreme static black holes (including examples with the spatial curvatures k=-1 and k=0) in D=5 N=2 gauged supergravity by employing the techniques of "very special geometry". These solutions turn out to differ from those in the ungauged supergravity only in the non-extremality function, which has an additional term (proportional to the gauge coupling g), responsible for the appearance of naked singularities in the BPS-saturated limit. We derive an explicit solution for the STU model of gauged supergravity which is incidentally also a solution of D=5 N=4 and N=8 gauged supergravity. This solution is specified by three charges, the asymptotic negative cosmological constant (minimum of the potential) and a non-extremality parameter. While its BPS-saturated limit has a naked singularity, we find a lower bound on the non-extremality parameter (or equivalently on the ADM mass) for which the non-extreme solutions are regular. When this bound is saturated the extreme (non-supersymmetric) solution has zero Hawking temperature and finite entropy. Analogous qualitative features are expected to emerge for black hole solutions in D=4 gauged supergravity as well.
Introducing Modern Physics represents an increasingly urgent need, towards which physics education concentrates many efforts. In order to contribute to this attempt, at the Department of Mathematics and Physics of Roma Tre University in Rome we focused on the possibility of treating General Relativity (GR) at high school level. We started with an interactive activity addressed to students that exploits the rubber sheet analogy (RSA) to show various phenomena related to gravity using the concept of space-time. Then, having verified its effectiveness, we began to include it among the initiatives the Department carries out for high school teacher professional development, with the explicit aim of making teachers capable of carrying out the activity autonomously in their classrooms. In this paper, we analyse the teacher-training approach we implemented and all the materials developed.
The underlying physics of giant radio halos and mini halos in galaxy clusters is still an open question, which becomes more pressing with the growing number of detections. In this paper, we explore the possibility that radio-emitting electrons are generated in hadronic cosmic ray (CR) proton interactions with ambient thermal protons of the intra-cluster medium. Our CR model derives from cosmological hydrodynamical simulations of cluster formation and additionally accounts for CR transport in the form of CR streaming and diffusion. This opens the possibility of changing the radio halo luminosity by more than an order of magnitude on a dynamical time scale. We build a mock galaxy cluster catalog from the large MultiDark N-body LCDM simulation by adopting a phenomenological gas density model for each cluster based on X-ray measurements that matches Sunyaev-Zel'dovich (SZ) and X-ray scaling relations and luminosity function. Using magnetic field strength estimates from Faraday rotation measure studies, our model successfully reproduces the observed surface brightness profiles of giant radio halos (Coma, A2163) as well as radio mini-halos (Perseus, Ophiuchus), while obeying upper limits on the gamma-ray emission in these clusters. Our model is also able to simultaneously reproduce the observed bimodality of radio-loud and radio-quiet clusters at the same L_X as well as the unimodal distribution of radio-halo luminosity versus the SZ flux Y; thereby suggesting a physical solution to this apparent contradiction. For a plausible fraction of 10% radio-loud clusters, our model matches the NVSS radio-halo luminosity function. Constructing an analytical radio-halo luminosity function, we demonstrate the unique prospects for low-frequency radio surveys (such as the LOFAR Tier 1 survey) to detect ~3500 radio halos back to redshift two and to probe the underlying physics of radio halos. [abridged]
While questions of a functional localization of consciousness in the brain have been the subject of myriad studies, the idea of a temporal access code as a specific brain mechanism for consciousness has remained a neglected possibility. Dresp-Langley and Durup (2009, 2012) proposed a theoretical approach in terms of a temporal access mechanism for consciousness based on its two universally recognized properties. Consciousness is limited in processing capacity and described by a unique processing stream across a single dimension, which is time. The time ordering function of conscious states is highlighted and neurobiological theories of the temporal brain activities likely to underlie such function are discussed, and the properties of the code model are then introduced. Spatial information is integrated into provisory topological maps at non-conscious levels through adaptive resonant matching, but does not form part of the temporal access code as such. The latter, de-correlated from the spatial code, operates without any need for firing synchrony on the sole basis of temporal coincidence probabilities in dedicated resonant circuits through progressively non-arbitrary selection of specific temporal activity patterns in the continuously developing brain.
We present a Heralded Photon Source based only on linear optics and weak coherent states. By time-tuning a Hong-Ou-Mandel interferometer fed with frequency-displaced coherent states, the output photons can be synchronously heralded following sub-Poissonian statistics, as indicated by the second-order correlation function ($g^{(2)}(0)=0.556$). The absence of phase-matching restrictions makes the source widely tunable, with 100-nm spectral tunability across the telecom bands. The technique presents a yield comparable to state-of-the-art sources based on spontaneous parametric down-conversion, with high coherence and compatibility with fiber-optic quantum communication.
In this paper, we present a deep reinforcement learning method for a quadcopter to bypass obstacles on its flight path. In previous studies, the algorithm controlled only the quadcopter's forward direction. In this letter, we use two functions to control the quadcopter. The first is a navigation function, which computes a coordinate point and finds the straight path to the goal. The second is a collision-avoidance function, implemented with a deep Q-network model. Both functions output a turning angle, and the agent combines the two outputs to steer. In addition, the deep Q-network can make the quadcopter fly up or down to bypass obstacles and arrive at the goal. Our experimental results show that the collision rate is 14% after 500 flights. Based on this work, we will train on more complex scenes and transfer the model to a real quadcopter.
The mating pathway in \emph{Saccharomyces cerevisiae} is one of the best understood signal transduction pathways in eukaryotes. It transmits the mating signal from the plasma membrane into the nucleus through the G-protein coupled receptor and the mitogen-activated protein kinase (MAPK) cascade. Based on the current understanding of the mating pathway, we construct a system of ordinary differential equations to describe the process. Our model is consistent with a wide range of experiments, indicating that it captures some main characteristics of the signal transduction along the pathway. Investigation with the model reveals that the shuttling of the scaffold protein and the dephosphorylation of kinases involved in the MAPK cascade cooperate to regulate the response upon pheromone induction and to help preserve the fidelity of the mating signaling. We explored factors affecting the dose-response curves of this pathway and found that both negative feedback and the concentrations of the proteins involved in the MAPK cascade play a crucial role. Contrary to some other MAPK systems where signaling sensitivity is amplified successively along the cascade, here the mating signal is transmitted through the cascade in an almost linear fashion.
Atomically thin two-dimensional (2D) van der Waals semiconductors are promising candidate materials for post-silicon electronics. However, it remains challenging to attain completely uniform monolayer semiconductor wafers free of over-grown islands. Here, we report the observation of the energy funneling effect and ambient photodelamination phenomenon in inhomogeneous few-layer WS$_2$ flakes under low illumination fluences down to several nW/$\mu$m$^{2}$ and its potential as a non-invasive post-etching strategy for selectively stripping the local excessive overlying islands. Photoluminescent tracking of the photoetching traces reveals relatively fast etching rates around $0.3-0.8\,\mu$m/min at varied temperatures and an activation energy of $1.7\,$eV. Using crystallographic and electronic characterization, we also confirm the non-invasive nature of the low-power photodelamination and the highly preserved lattice quality in the as-etched monolayer products, featuring an average density of atomic defects (ca. $4.2\times 10^{13}\,$cm$^{-2}$) comparable to pristine flakes and a high electron mobility up to $80\,$cm$^{2}\cdot$V$^{-1}\cdot$s$^{-1}$ at room temperature. This approach opens a non-invasive photoetching route for thickness uniformity management in 2D van der Waals semiconductor wafers for electronic applications.
A symmetry-preserving Poincar\'e-covariant quark+diquark Faddeev equation treatment of the nucleon is used to deliver parameter-free predictions for the nucleon's axial and induced pseudoscalar form factors, $G_A$ and $G_P$, respectively. The result for $G_A$ can reliably be represented by a dipole form factor characterised by an axial charge $g_A=G_A(0)=1.25(3)$ and a mass-scale $M_A = 1.23(3) m_N$, where $m_N$ is the nucleon mass; and regarding $G_P$, the induced pseudoscalar charge $g_p^\ast = 8.80(23)$, the ratio $g_p^\ast/g_A = 7.04(22)$, and the pion pole dominance Ansatz is found to provide a reliable estimate of the directly computed result. The ratio of flavour-separated quark axial charges is also calculated: $g_A^d/g_A^u=-0.16(2)$. This value expresses a marked suppression of the size of the $d$-quark component relative to that found in nonrelativistic quark models and owes to the presence of strong diquark correlations in the nucleon Faddeev wave function -- both scalar and axial-vector, with the scalar diquark being dominant. The predicted form for $G_A$ should provide a sound foundation for analyses of the neutrino-nucleus and antineutrino-nucleus cross-sections that are relevant to modern accelerator neutrino experiments.
A theory of tunneling through a quantum dot is presented which enables us to study the combined effects of Coulomb blockade and the discrete energy spectrum of the dot. The expression for the tunneling current is derived from the Keldysh Green's function method and is shown to automatically satisfy the conservation of DC current at both junctions.
We study operators obtained by coupling an $n \times n$ random matrix from one of the Gaussian ensembles to the discrete Laplacian. We find the joint distribution of the eigenvalues and resonances of such operators. This is one of the possible mathematical models for quantum scattering in a complex physical system with one semi-infinite lead attached.
We present a novel approach to study eigenvalues of deformed random matrices. This approach applies to many deformed Gaussian matrix models; two such models are studied in detail: the deformed GOE and the spiked population model.
We introduce the notion of abelian solutions of KP equations and show that all of them are algebro-geometric.
We introduce a broad class of fractal jet observables that recursively probe the collective properties of hadrons produced in jet fragmentation. To describe these collinear-unsafe observables, we generalize the formalism of fragmentation functions, which are important objects in QCD for calculating cross sections involving identified final-state hadrons. Fragmentation functions are fundamentally nonperturbative, but have a calculable renormalization group evolution. Unlike ordinary fragmentation functions, generalized fragmentation functions exhibit nonlinear evolution, since fractal observables involve correlated subsets of hadrons within a jet. Some special cases of generalized fragmentation functions are reviewed, including jet charge and track functions. We then consider fractal jet observables that are based on hierarchical clustering trees, where the nonlinear evolution equations also exhibit tree-like structure at leading order. We develop a numeric code for performing this evolution and study its phenomenological implications. As an application, we present examples of fractal jet observables that are useful in discriminating quark jets from gluon jets.
We present the new parton distribution functions (PDFs) from the CTEQ-TEA collaboration, obtained using a wide variety of high-precision Large Hadron Collider (LHC) data, in addition to the combined HERA I+II deep-inelastic scattering data set, along with the data sets present in the CT14 global QCD analysis. New LHC measurements in single-inclusive jet production with the full rapidity coverage, as well as production of Drell-Yan pairs, top-quark pairs, and high-$p_T$ $Z$ bosons, are included to achieve the greatest sensitivity to the PDFs. The parton distributions are determined at NLO and NNLO, with each of these PDFs accompanied by error sets determined using the Hessian method. Fast PDF survey techniques, based on the Hessian representation and the Lagrange Multiplier method, are used to quantify the preference of each data set to quantities such as $\alpha_s(m_Z)$, and the gluon and strange quark distributions. We designate the main resulting PDF set as CT18. The ATLAS 7 TeV precision $W/Z$ data are not included in CT18, due to their tension with other data sets in the global fit. Alternate PDF sets are generated including the ATLAS precision 7 TeV $W/Z$ data (CT18A), a new scale choice for low-$x$ DIS data (CT18X), or all of the above with a slightly higher choice for the charm mass (CT18Z). Theoretical calculations of standard candle cross sections at the LHC (such as the $gg$ fusion Higgs boson cross section) are presented.
Network control theory can be used to model how one should steer the brain between different states by driving a specific region with an input. The energy needed to control a network is often used to quantify its controllability, and controlling brain networks requires different amounts of energy depending on the selected input region. We use the theory of how input node placement affects the longest control chain (LCC) in the controllability of brain networks to study the role of the architecture of white matter fibers in the required control energy. We show that the energy needed to control human brain networks is related to the LCC, i.e., the longest distance between the input region and other regions in the network. We find that regions that control brain networks with lower energy have small LCCs. These regions align with areas that can steer the brain around the state space smoothly. By contrast, regions that need higher energy to move the brain toward different target states have larger LCCs. We also investigate the role of the number of paths between regions in the control energy. Our results show that the more paths between regions, the lower the energy needed to control brain networks. We evaluate the number of paths by counting specific motifs in brain networks, since determining all paths in a graph is a difficult problem.
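The control energy discussed above can be illustrated with the standard minimum-energy formula from linear control theory: for a discrete-time network $x_{t+1} = A x_t + B u_t$ driven at a single input node, the minimum input energy to reach a target state from the origin is $x^\top W_T^{+} x$, where $W_T$ is the finite-horizon controllability Gramian. A small self-contained sketch (not the authors' code; the toy network and parameters are hypothetical):

```python
import numpy as np

def control_energy(A, input_node, x_target, T=20):
    """Minimum input energy to steer a discrete-time LTI network from 0 to
    x_target in T steps with a single input node: E = x^T W_T^+ x,
    where W_T = sum_{t<T} A^t B B^T (A^t)^T is the controllability Gramian."""
    n = A.shape[0]
    B = np.zeros((n, 1))
    B[input_node] = 1.0
    W = np.zeros((n, n))
    Ak = np.eye(n)            # A^t, accumulated iteratively
    for _ in range(T):
        W += Ak @ B @ B.T @ Ak.T
        Ak = A @ Ak
    return float(x_target @ np.linalg.pinv(W) @ x_target)
```

On a 4-node path network (scaled for stability), driving an end node makes the whole chain reachable; allowing a longer horizon can only decrease the required energy, since the Gramian grows monotonically with $T$.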
Let ${\bf M}=(M_1,\ldots, M_k)$ be a tuple of real $d\times d$ matrices. Under certain irreducibility assumptions, we give checkable criteria for deciding whether ${\bf M}$ possesses the following property: there exist two constants $\lambda\in {\Bbb R}$ and $C>0$ such that for any $n\in {\Bbb N}$ and any $i_1, \ldots, i_n \in \{1,\ldots, k\}$, either $M_{i_1} \cdots M_{i_n}={\bf 0}$ or $C^{-1} e^{\lambda n} \leq \| M_{i_1} \cdots M_{i_n} \| \leq C e^{\lambda n}$, where $\|\cdot\|$ is a matrix norm. The proof is based on symbolic dynamics and the thermodynamic formalism for matrix products. As applications, we are able to check the absolute continuity of a class of overlapping self-similar measures on ${\Bbb R}$, the absolute continuity of certain self-affine measures in ${\Bbb R}^d$ and the dimensional regularity of a class of sofic affine-invariant sets in the plane.
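The norm-growth property in question—every nonzero product of length $n$ having norm within constant factors of $e^{\lambda n}$—can be checked numerically for small tuples by brute-force enumeration. Below is a sketch (my own illustration, not from the paper), using a tuple of scaled rotations for which the property holds exactly with $\lambda = \ln 2$ and $C = 1$:

```python
import numpy as np
from itertools import product

def norm_growth_bounds(mats, n_max=6):
    """For each product length n = 1..n_max, return (min, max) of the
    spectral norms of all nonzero products M_{i1} ... M_{in}."""
    bounds = []
    for n in range(1, n_max + 1):
        norms = []
        for idx in product(range(len(mats)), repeat=n):
            P = np.eye(mats[0].shape[0])
            for i in idx:
                P = P @ mats[i]
            nrm = np.linalg.norm(P, 2)   # spectral norm
            if nrm > 1e-12:              # skip zero products
                norms.append(nrm)
        bounds.append((min(norms), max(norms)))
    return bounds
```

For $M_1 = 2 I$ and $M_2 = 2 R_{\pi/4}$ (a scaled rotation), every length-$n$ product is $2^n$ times an orthogonal matrix, so the min and max norms both equal $2^n$; a tuple violating the property would instead show the ratio max/min growing with $n$.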
We reconsider the ellipsoidal-collapse model and extend it in two ways: We modify the treatment of the external gravitational shear field, introducing a hybrid model in between linear and non-linear evolution, and we introduce a virialisation criterion derived from the tensor virial theorem to replace the ad-hoc criterion employed so far. We compute the collapse parameters delta_c and Delta_v and find that they increase with ellipticity e and decrease with prolaticity p. We marginalise them over the appropriate distribution of e and p and show the marginalised results as functions of halo mass and virialisation redshift. While the hybrid model for the external shear gives results very similar to those obtained from the non-linear model, ellipsoidal collapse changes the collapse parameters typically by (20...50)%, in a way increasing with decreasing halo mass and decreasing virialisation redshift. We qualitatively confirm the dependence on mass and virialisation redshift of a fitting formula for delta_c, but find noticeable quantitative differences in particular at low mass and high redshift. The derived mass function is in good agreement with mass functions recently proposed in the literature.
We report the discovery of two new invariants for three-qubit states which, similarly to the 3-tangle, are invariant under local unitary transformations and permutations of the parties. These quantities have a direct interpretation in terms of the anisotropy of pairwise spin correlations. Applications include a universal ordering of pairwise quantum correlation measures for pure three-qubit states; tradeoff relations for anisotropy, 3-tangle and Bell nonlocality; strong monogamy relations for Bell inequalities, Einstein-Podolsky-Rosen steering inequalities, geometric discord and fidelity of remote state preparation (including results for arbitrary three-party states); and a statistical and reference-frame-independent form of quantum secret sharing.
We present a method of discovering governing differential equations from data without the need to specify a priori the terms to appear in the equation. The input to our method is a dataset (or ensemble of datasets) corresponding to a particular solution (or ensemble of particular solutions) of a differential equation. The output is a human-readable differential equation with parameters calibrated to the individual particular solutions provided. The key to our method is to learn differentiable models of the data that subsequently serve as inputs to a genetic programming algorithm in which graphs specify computation over arbitrary compositions of functions, parameters, and (potentially differential) operators on functions. Differential operators are composed and evaluated using recursive application of automatic differentiation, allowing our algorithm to explore arbitrary compositions of operators without the need for human intervention. We also demonstrate an active learning process to identify and remedy deficiencies in the proposed governing equations.
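A full genetic-programming system over learned differentiable models is beyond a short example, but the core loop—comparing numerical derivatives of the data against candidate terms and calibrating parameters to the particular solution—can be illustrated with a much-simplified sparse least-squares surrogate (a hypothetical sketch in the spirit of the method, not the authors' algorithm):

```python
import numpy as np

def discover_ode(t, x, threshold=0.05):
    """Fit dx/dt = sum_k c_k * theta_k(x) by least squares over a small
    candidate library, then zero out small coefficients (sparsity step)."""
    dxdt = np.gradient(x, t)                              # numerical derivative
    library = np.column_stack([np.ones_like(x), x, x**2])  # candidates [1, x, x^2]
    names = ["1", "x", "x^2"]
    coef, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
    coef[np.abs(coef) < threshold] = 0.0                  # drop negligible terms
    terms = [f"{c:+.2f}*{n}" for c, n in zip(coef, names) if c != 0.0]
    return coef, "dx/dt = " + " ".join(terms)
```

Applied to data sampled from $x(t) = e^{-2t}$, this recovers $dx/dt \approx -2x$ with the spurious constant and quadratic terms suppressed; the genetic-programming approach in the abstract generalizes this idea to arbitrary compositions of functions and differential operators.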
We study the efficiency of sequential first-price item auctions at (subgame perfect) equilibrium. This auction format has recently attracted much attention, with previous work establishing positive results for unit-demand valuations and negative results for submodular valuations. This leaves a large gap in our understanding between these valuation classes. In this work we resolve this gap on the negative side. In particular, we show that even in the very restricted case in which each bidder has either an additive valuation or a unit-demand valuation, there exist instances in which the inefficiency at equilibrium grows linearly with the minimum of the number of items and the number of bidders. Moreover, these inefficient equilibria persist even under iterated elimination of weakly dominated strategies. Our main result implies linear inefficiency for many natural settings, including auctions with gross substitute valuations, capacitated valuations, budget-additive valuations, and additive valuations with hard budget constraints on the payments. Another implication is that the inefficiency in sequential auctions is driven by the maximum number of items contained in any player's optimal set, and this is tight. For capacitated valuations, our results imply a lower bound that equals the maximum capacity of any bidder, which is tight following the upper-bound technique established by Paes Leme et al. \cite{PaesLeme2012}.
Using a perturbation expansion of Maxwell's equations, the amplitude equation is derived for nonlinear TM and TE surface plasmon waves supported by graphene. The equation describes the interplay between in-plane beam diffraction and nonlinearity due to intensity-induced corrections to the graphene conductivity and the susceptibility of the dielectrics. For strongly localized TM plasmons, graphene is found to provide the dominant contribution to the overall nonlinearity. In contrast, the nonlinear response of the substrate and cladding dielectrics can become dominant for weakly localized TE plasmons.
In this paper we obtain a global characterization of the dynamics of even solutions to the one-dimensional nonlinear Klein-Gordon (NLKG) equation on the line with focusing nonlinearity |u|^{p-1}u, p>5, provided their energy exceeds that of the ground state only slightly. The method is the same as in the three-dimensional case arXiv:1005.4894, the major difference being in the construction of the center-stable manifold. The difficulty there lies with the weak dispersive decay of the one-dimensional NLKG equation. In order to address this specific issue, we establish local dispersive estimates for the perturbed linear Klein-Gordon equation, similar to those of Mizumachi arXiv:math/0605031. The essential ingredient for the latter class of estimates is the absence of a threshold resonance of the linearized operator.
Heavy quark symmetry predicts the value of $B \rightarrow D$ and $B \rightarrow D^*$ transition matrix elements of the current $\bar c \gamma_\mu (1 - \gamma_5)b$, at zero recoil (where in the rest frame of the $B$ the $D$ or $D^*$ is also at rest). We use chiral perturbation theory to compute the leading corrections to these predictions which are generated at low momentum, below the chiral symmetry breaking scale.
We consider a chain of atoms that are bound together by a harmonic force. Spin-1/2 electrons that move between neighboring chain sites (H\"uckel model) induce a lattice dimerization at half band filling (Peierls effect). We supplement the H\"uckel model with a local Hubbard interaction and a long-range Ohno potential, and calculate the average bond-length, dimerization, and optical phonon frequencies for finite straight and zig-zag chains using the density-matrix renormalization group (DMRG) method. We check our numerical approach against analytic results for the H\"uckel model. The Hubbard interaction mildly affects the average bond length but substantially enhances the dimerization and increases the optical phonon frequencies whereas, for moderate Coulomb parameters, the long-range Ohno interaction plays no role.
We develop a general approach to the almost sure central limit theorem for quasi-continuous vectorial martingales, and we establish a quadratic extension of this theorem while specifying rates of convergence. As an application of this result, we study the problem of estimating the variance of a process with stationary and independent increments.
We present the calculation of the atmospheric neutrino fluxes with an interaction model named JAM, which is used in PHITS (Particle and Heavy-Ion Transport code System). The JAM interaction model agrees with the HARP experiment a little better than DPMJET-III. After some modifications, it reproduces the muon flux below 1~GeV/c at balloon altitudes better than the modified-DPMJET-III which we used for the calculation of atmospheric neutrino flux in previous works. Some improvements in the calculation of atmospheric neutrino flux are also reported.
In recent years, language models have drastically grown in size, and the abilities of these models have been shown to improve with scale. The majority of recent scaling-law studies have focused on high-compute, high-parameter-count settings, leaving the question of when these abilities begin to emerge largely unanswered. In this paper, we investigate whether the effects of pre-training can be observed when the problem size is reduced, modeling a smaller, reduced-vocabulary language. We show the benefits of pre-training with a masked language modeling (MLM) objective in models as small as 1.25M parameters, and establish a strong correlation between pre-training perplexity and downstream performance (GLUE benchmark). We examine downscaling effects, extending scaling laws to models as small as ~1M parameters. At this scale, we observe a break of the power law for compute-optimal models and show that the MLM loss does not scale smoothly with compute cost (FLOPs) below $2.2 \times 10^{15}$ FLOPs. We also find that adding layers does not always benefit downstream performance.
We study a twisted Hubbard tube modeling the [CrAs] structure of quasi-one-dimensional superconductors A2Cr3As3 (A = K, Rb, Cs). The molecular-orbital bands emerging from the quasi-degenerate atomic orbitals are exactly solved. An effective Hamiltonian is derived for a region where three partially filled bands intersect the Fermi energy. The deduced local interactions among these active bands show a significant reduction compared to the original atomic interactions. The resulting three-channel Luttinger liquid shows various interaction-induced instabilities including two kinds of spin-triplet superconducting instabilities due to gapless spin excitations, with one of them being superseded by the spin-density-wave phase in the intermediate Hund's coupling regime. The implications of these results for the alkali chromium arsenides are discussed.
The winds of cool luminous AGB stars are commonly assumed to be driven by radiative acceleration of dust grains which form in the extended atmospheres produced by pulsation-induced shock waves. The dust particles gain momentum by absorption or scattering of stellar photons, and they drag along the surrounding gas particles through collisions, triggering an outflow. This scenario, here referred to as Pulsation-Enhanced Dust-DRiven Outflow (PEDDRO), has passed a range of critical observational tests as models have developed from empirical and qualitative to increasingly self-consistent and quantitative. A reliable theory of mass loss is an essential piece in the bigger picture of stellar and galactic chemical evolution, and central for determining the contribution of AGB stars to the dust budget of galaxies. In this review, I discuss the current understanding of wind acceleration and indicate areas where further efforts by theorists and observers are needed.
The challenge of open-vocabulary recognition lies in the fact that the model has no knowledge of the new categories it is applied to. Existing works have proposed different methods to embed category cues into the model, \eg, through few-shot fine-tuning, or by providing category names or textual descriptions to Vision-Language Models. Fine-tuning is time-consuming and degrades the generalization capability. Textual descriptions can be ambiguous and fail to depict visual details. This paper tackles open-vocabulary recognition from a different perspective by referring to multi-modal clues composed of textual descriptions and exemplar images. Our method, named OVMR, adopts two innovative components to pursue a more robust category-cue embedding. A multi-modal classifier is first generated by dynamically complementing textual descriptions with image exemplars. A preference-based refinement module is then applied to fuse uni-modal and multi-modal classifiers, aiming to alleviate issues of low-quality exemplar images or textual descriptions. The proposed OVMR is a plug-and-play module, and works well with exemplar images randomly crawled from the Internet. Extensive experiments have demonstrated the promising performance of OVMR, \eg, it outperforms existing methods across various scenarios and setups. Codes are publicly available at \href{https://github.com/Zehong-Ma/OVMR}{https://github.com/Zehong-Ma/OVMR}.
The order type of a point set in $R^d$ maps each $(d{+}1)$-tuple of points to its orientation (e.g., clockwise or counterclockwise in $R^2$). Two point sets $X$ and $Y$ have the same order type if there exists a mapping $f$ from $X$ to $Y$ for which every $(d{+}1)$-tuple $(a_1,a_2,\ldots,a_{d+1})$ of $X$ and the corresponding tuple $(f(a_1),f(a_2),\ldots,f(a_{d+1}))$ in $Y$ have the same orientation. In this paper we investigate the complexity of determining whether two point sets have the same order type. We provide an $O(n^d)$ algorithm for this task, thereby improving upon the $O(n^{\lfloor{3d/2}\rfloor})$ algorithm of Goodman and Pollack (1983). The algorithm uses only order type queries and also works for abstract order types (or acyclic oriented matroids). Our algorithm is optimal, both in the abstract setting and for realizable point sets if the algorithm only uses order type queries.
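The orientation predicate at the heart of order types is computable as the sign of a $d \times d$ determinant, which also yields a brute-force same-order-type check for a fixed mapping. A minimal Python sketch (the function names are ours, and the brute-force check enumerates all $(d{+}1)$-tuples; it is not the paper's $O(n^d)$ algorithm):

```python
import numpy as np
from itertools import combinations

def orientation(points):
    """Orientation of a (d+1)-tuple of points in R^d: the sign of the
    determinant of the d vectors from the first point to the others
    (+1 = counterclockwise in R^2, -1 = clockwise, 0 = degenerate)."""
    p = np.asarray(points, dtype=float)
    d = p.shape[1]
    assert p.shape[0] == d + 1, "need d+1 points in R^d"
    return int(np.sign(np.linalg.det(p[1:] - p[0])))

def same_order_type(X, Y):
    """Check whether the mapping X[i] -> Y[i] preserves the orientation of
    every (d+1)-tuple (brute force, for illustration only)."""
    d = len(X[0])
    return all(
        orientation([X[i] for i in idx]) == orientation([Y[i] for i in idx])
        for idx in combinations(range(len(X)), d + 1)
    )
```

For instance, a uniformly scaled copy of a planar point set has the same order type under the identity mapping, while a reflected copy does not.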
Quantum fluctuations of massless scalar fields represented by quantum fluctuations of the quasiparticle vacuum in a zero-temperature dilute Bose-Einstein condensate may well provide the first experimental arena for measuring the Casimir force of a field other than the electromagnetic field. This would constitute a real Casimir force measurement - due to quantum fluctuations - in contrast to thermal fluctuation effects. We develop a multidimensional cut-off technique for calculating the Casimir energy of massless scalar fields in $d$-dimensional rectangular spaces with $q$ large dimensions and $d-q$ dimensions of length $L$ and generalize the technique to arbitrary lengths. We explicitly evaluate the multidimensional remainder and express it in a form that converges exponentially fast. Together with the compact analytical formulas we derive, the numerical results are exact and easy to obtain. Most importantly, we show that the division between analytical and remainder is not arbitrary but has a natural physical interpretation. The analytical part can be viewed as the sum of individual parallel plate energies and the remainder as an interaction energy. In a separate procedure, via results from number theory, we express some odd-dimensional homogeneous Epstein zeta functions as products of one-dimensional sums plus a tiny remainder and calculate from them the Casimir energy via zeta function regularization.
Manga is a world-popular comic form that originated in Japan, typically employing black-and-white stroke lines and geometric exaggeration to describe humans' appearances, poses, and actions. In this paper, we propose MangaGAN, the first method based on Generative Adversarial Networks (GANs) for unpaired photo-to-manga translation. Inspired by how experienced manga artists draw manga, MangaGAN generates the geometric features of a manga face with a designed GAN model and delicately translates each facial region into the manga domain with a tailored multi-GAN architecture. For training MangaGAN, we construct a new dataset collected from a popular manga work, containing manga facial features, landmarks, bodies, and so on. Moreover, to produce high-quality manga faces, we further propose a structural smoothing loss to smooth stroke lines and avoid noisy pixels, and a similarity-preserving module to improve the similarity between the photo and manga domains. Extensive experiments show that MangaGAN can produce high-quality manga faces which preserve both facial similarity and a popular manga style, and that it outperforms other related state-of-the-art methods.
In this paper, we study biharmonic Riemannian submersions $\pi:M^2\times\mathbb{R}\to (N^2,h)$ from a product manifold onto a surface and obtain some local characterizations of such biharmonic maps. Our results show that when the target surface is flat, a proper biharmonic Riemannian submersion $\pi:M^2\times\mathbb{R}\to (N^2,h)$ is locally a projection of a special twisted product, and when the target surface is non-flat, $\pi$ is locally a special map between two warped product spaces with a warping function that solves a single ODE. As a by-product, we also prove that there is a unique proper biharmonic Riemannian submersion $H^2\times \mathbb{R}\to \mathbb{R}^2$ given by the projection of a warped product.
This is Chapter 24 in the "AutoMathA" handbook. Finite automata have been used effectively in recent years to define infinite groups. The two main lines of research have as their most representative objects the class of automatic groups (including word-hyperbolic groups as a particular case) and automata groups (singled out among the more general self-similar groups). The first approach implements in the language of automata some tight constraints on the geometry of the group's Cayley graph, building strange, beautiful bridges between far-off domains. Automata are used to define a normal form for group elements, and to monitor the fundamental group operations. The second approach features groups acting in a finitely constrained manner on a regular rooted tree. Automata define sequential permutations of the tree, and represent the group elements themselves. The choice of particular classes of automata has often provided groups with exotic behaviour which have revolutionized our perception of infinite finitely generated groups.
Differential cross sections for dijet photoproduction and this process in association with a leading neutron, e+ + p -> e+ + jet + jet + X (+ n), have been measured with the ZEUS detector at HERA using an integrated luminosity of 40 pb-1. The fraction of dijet events with a leading neutron was studied as a function of different jet and event variables. Single- and double-differential cross sections are presented as a function of the longitudinal fraction of the proton momentum carried by the leading neutron, xL, and of its transverse momentum squared, pT^2. The dijet data are compared to inclusive DIS and photoproduction results; they are all consistent with a simple pion-exchange model. The neutron yield as a function of xL was found to depend only on the fraction of the proton beam energy going into the forward region, independent of the hard process. No firm conclusion can be drawn on the presence of rescattering effects.
Modeling correlation (and covariance) matrices can be challenging due to the positive-definiteness constraint and potential high-dimensionality. Our approach is to decompose the covariance matrix into the correlation and variance matrices and propose a novel Bayesian framework based on modeling the correlations as products of unit vectors. By specifying a wide range of distributions on a sphere (e.g. the squared-Dirichlet distribution), the proposed approach induces flexible prior distributions for covariance matrices (that go beyond the commonly used inverse-Wishart prior). For modeling real-life spatio-temporal processes with complex dependence structures, we extend our method to dynamic cases and introduce unit-vector Gaussian process priors in order to capture the evolution of correlation among components of a multivariate time series. To handle the intractability of the resulting posterior, we introduce the adaptive $\Delta$-Spherical Hamiltonian Monte Carlo. We demonstrate the validity and flexibility of our proposed framework in a simulation study of periodic processes and an analysis of rat's local field potential activity in a complex sequence memory task.
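The core construction — correlations modeled as inner products of unit vectors — can be sketched directly. In the sketch below, a plain Gaussian-normalized sampler stands in for the paper's squared-Dirichlet and unit-vector Gaussian-process priors; the point is only that unit diagonal and positive semi-definiteness hold automatically:

```python
import numpy as np

def correlation_from_unit_vectors(U):
    """Rows of U are unit vectors u_1, ..., u_p; the induced correlation
    matrix has entries C[i, j] = <u_i, u_j>.  Positive semi-definiteness
    and a unit diagonal are automatic by construction."""
    return U @ U.T

rng = np.random.default_rng(42)
p, D = 4, 5                       # number of variables, sphere dimension
V = rng.normal(size=(p, D))       # stand-in for a draw from a sphere prior
U = V / np.linalg.norm(V, axis=1, keepdims=True)   # project rows to sphere
C = correlation_from_unit_vectors(U)
```

Placing a distribution on the unit vectors therefore induces a distribution on valid correlation matrices without any explicit positive-definiteness constraint.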
Model inversion, whose goal is to recover training data from a pre-trained model, has been recently proved feasible. However, existing inversion methods usually suffer from the mode collapse problem, where the synthesized instances are highly similar to each other and thus show limited effectiveness for downstream tasks, such as knowledge distillation. In this paper, we propose Contrastive Model Inversion~(CMI), where the data diversity is explicitly modeled as an optimizable objective, to alleviate the mode collapse issue. Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination. To this end, we introduce in CMI a contrastive learning objective that encourages the synthesizing instances to be distinguishable from the already synthesized ones in previous batches. Experiments of pre-trained models on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI not only generates more visually plausible instances than the state of the arts, but also achieves significantly superior performance when the generated data are used for knowledge distillation. Code is available at \url{https://github.com/zju-vipa/DataFree}.
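The diversity objective can be illustrated with an InfoNCE-style loss computed against a memory bank of previously synthesized embeddings. The NumPy sketch below is our simplified stand-in (each sample is treated as its own positive), not the exact loss used in CMI:

```python
import numpy as np

def info_nce_diversity(z_new, z_memory, tau=0.1):
    """InfoNCE-style diversity loss: each newly synthesized embedding should
    be distinguishable from the bank of embeddings synthesized in previous
    batches.  Lower loss means the new instances are far from the bank.
    z_new: (B, d) current batch; z_memory: (M, d) negatives."""
    z_new = z_new / np.linalg.norm(z_new, axis=1, keepdims=True)
    z_memory = z_memory / np.linalg.norm(z_memory, axis=1, keepdims=True)
    pos = np.exp(1.0 / tau)                         # self-similarity positive
    neg = np.exp(z_new @ z_memory.T / tau).sum(axis=1)
    return float(np.mean(-np.log(pos / (pos + neg))))
```

Minimizing such a loss during synthesis penalizes instances that collapse onto earlier ones, which is the mechanism the paper uses against mode collapse.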
The rapid development of large language models (LLMs) has significantly improved the generation of fluent and convincing text, raising concerns about their misuse on social media platforms. We present a methodology using Twitter datasets to examine the generative capabilities of four LLMs: Llama 3, Mistral, Qwen2, and GPT4o. We evaluate 7B and 8B parameter base-instruction models of the three open-source LLMs and validate the impact of further fine-tuning and "uncensored" versions. Our findings show that "uncensored" models with additional in-domain fine-tuning dramatically reduce the effectiveness of automated detection methods. This study addresses a gap by exploring smaller open-source models and the effects of "uncensoring," providing insights into how fine-tuning and content moderation influence machine-generated text detection.
The metric (resp. edge metric or mixed metric) dimension of a graph $G$, is the cardinality of the smallest ordered set of vertices that uniquely recognizes all the pairs of distinct vertices (resp. edges, or vertices and edges) of $G$ by using a vector of distances to this set. In this note we show two unexpected results on hypercube graphs. First, we show that the metric and edge metric dimension of $Q_d$ differ by only one for every integer $d$. In particular, if $d$ is odd, then the metric and edge metric dimensions of $Q_d$ are equal. Second, we prove that the metric and mixed metric dimensions of the hypercube $Q_d$ are equal for every $d \ge 3$. We conclude the paper by conjecturing that all these three types of metric dimensions of $Q_d$ are equal when $d$ is large enough.
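The defining property — a vertex set whose distance vectors separate all vertices — can be checked by brute force on tiny hypercubes. A sketch (exponential in $d$, for illustration only; the function name is ours):

```python
from itertools import combinations, product

def hypercube_metric_dimension(d):
    """Smallest k such that some k-set S of vertices of Q_d resolves all
    vertices, i.e. the vector of Hamming distances to S is distinct for
    every vertex.  Brute force over all subsets -- only feasible for tiny d."""
    V = list(product([0, 1], repeat=d))
    dist = lambda u, v: sum(a != b for a, b in zip(u, v))
    for k in range(1, len(V) + 1):
        for S in combinations(V, k):
            signatures = {tuple(dist(v, s) for s in S) for v in V}
            if len(signatures) == len(V):   # all vertices separated
                return k
```

The edge and mixed variants differ only in which objects the distance vectors must separate.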
Given two binary trees on $N$ labeled leaves, the quartet distance between the trees is the number of disagreeing quartets. By permuting the leaves at random, the expected quartet distance between the two trees is $\frac{2}{3}\binom{N}{4}$. However, no strongly explicit construction reaching this bound asymptotically was known. We consider complete, balanced binary trees on $N=2^n$ leaves, labeled by $n$-bit sequences. Ordering the leaves in one tree by the prefix order, and in the other tree by the suffix order, we show that the resulting quartet distance is $\left(\frac{2}{3} + o(1)\right)\binom{N}{4}$, and that it always exceeds the $\frac{2}{3}\binom{N}{4}$ bound.
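In a complete binary tree whose leaves carry $n$-bit labels, the leaf-to-leaf distance is $2(n - \mathrm{lcp})$, so induced quartet topologies — and hence the prefix/suffix quartet distance — can be brute-forced for small $n$. This is our own naive $O(N^4)$ enumeration for illustration, not the paper's analysis:

```python
from itertools import combinations
from math import comb

def lcp(s, t):
    """Length of the longest common prefix of two equal-length strings."""
    n = 0
    while n < len(s) and s[n] == t[n]:
        n += 1
    return n

def quartet_topology(labels, quartet):
    """Induced quartet split in the complete binary tree whose leaves are
    placed by the given bit-string labels: leaf distance is 2*(n - lcp),
    and the pairing with the smallest distance sum is the split (always
    resolved, since the tree is binary)."""
    a, b, c, d = quartet
    n = len(labels[a])
    dist = lambda x, y: 2 * (n - lcp(labels[x], labels[y]))
    splits = [((a, b), (c, d)), ((a, c), (b, d)), ((a, d), (b, c))]
    return min(splits, key=lambda s: dist(*s[0]) + dist(*s[1]))

def prefix_suffix_quartet_distance(n):
    """Quartet distance between the prefix-ordered and suffix-ordered
    complete binary trees on N = 2^n leaves (brute force over quartets)."""
    N = 2 ** n
    pre = [format(i, f"0{n}b") for i in range(N)]
    suf = [s[::-1] for s in pre]   # reversed labels: suffixes become prefixes
    return sum(
        quartet_topology(pre, q) != quartet_topology(suf, q)
        for q in combinations(range(N), 4)
    )
```

For example, the quartet of leaves labeled 000, 001, 110, 111 pairs {000, 001} against {110, 111} in the prefix tree but {000, 110} against {001, 111} in the suffix tree, so it disagrees.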
We have studied numerically the evolution of an adiabatic quantum computer in the presence of a Markovian ohmic environment by considering Ising spin glass systems with up to 20 qubits independently coupled to this environment via two conjugate degrees of freedom. The required computation time is demonstrated to be of the same order as that for an isolated system and is not limited by the single-qubit decoherence time $T_2^*$, even when the minimum gap is much smaller than the temperature and decoherence-induced level broadening. For small minimum gap, the system can be described by an effective two-state model coupled only longitudinally to environment.
In the framework of the theoretical model of the phase transition of binary solutions into spatially inhomogeneous states proposed earlier by the authors [1], which takes into account nonlinear effects, the role of the term cubic in concentration in the expansion of the free energy is studied. It is shown that the cubic term contributes to the stabilization of the homogeneous state. The existence of two inhomogeneous phases in an isotropic medium, considered in [1], proves to be possible only at half the concentration of the solution. The contribution of inhomogeneity effects to thermodynamic quantities is calculated. Phase transitions from the homogeneous state and between inhomogeneous phases are second-order phase transitions.
We consider the effective surface motion of a particle that intermittently unbinds from a planar surface and performs bulk excursions. Based on a random walk approach we derive the diffusion equations for surface and bulk diffusion including the surface-bulk coupling. From these exact dynamic equations we analytically obtain the propagator of the effective surface motion. This approach allows us to deduce a superdiffusive, Cauchy-type behavior on the surface, together with exact cutoffs limiting the Cauchy form. Moreover we study the long-time dynamics for the surface motion.
Determinant formulas are presented for: a certain positive semidefinite, hermitian matrix; the loss value of multilinear regression; the multiple linear regression coefficient.
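One classical identity of this kind expresses the squared multiple correlation coefficient through determinants of the sample correlation matrix: $1 - R^2 = \det(C)/\det(C_{xx})$, where $C$ includes the response and $C_{xx}$ is the predictor submatrix. The sketch below verifies it numerically (this is a standard textbook identity offered for context, not necessarily the note's exact formulas):

```python
import numpy as np

# Sample data: y regressed on three predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200)

# R^2 via determinants of the sample correlation matrix (y placed last):
# 1 - R^2 = det(C) / det(C_xx).
C = np.corrcoef(np.column_stack([X, y]), rowvar=False)
r2_det = 1.0 - np.linalg.det(C) / np.linalg.det(C[:3, :3])

# R^2 via ordinary least squares with an intercept (centered variables).
Xc, yc = X - X.mean(axis=0), y - y.mean()
beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
r2_ols = 1.0 - np.sum((yc - Xc @ beta) ** 2) / np.sum(yc ** 2)
```

The two computations agree to machine precision, since the determinant ratio is an exact algebraic identity for the sample correlations.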
Permutation codes are a class of structured vector quantizers with a computationally-simple encoding procedure based on sorting the scalar components. Using a codebook comprising several permutation codes as subcodes preserves the simplicity of encoding while increasing the number of rate-distortion operating points, improving the convex hull of operating points, and increasing design complexity. We show that when the subcodes are designed with the same composition, optimization of the codebook reduces to a lower-dimensional vector quantizer design within a single cone. Heuristics for reducing design complexity are presented, including an optimization of the rate allocation in a shape-gain vector quantizer with gain-dependent wrapped spherical shape codebook.
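The sorting-based encoder can be sketched in a few lines: for a Variant-I permutation code, the nearest codeword is obtained by matching the sorted template to the sorted input components (the function name is ours):

```python
import numpy as np

def permutation_encode(x, mu):
    """Variant-I permutation-code encoder: the codebook is all permutations
    of the template vector mu, and the nearest codeword (in Euclidean
    distance) is found by sorting -- the largest template values go to the
    positions of the largest components of x.  O(n log n) instead of
    searching all n! codewords."""
    template = np.sort(np.asarray(mu, dtype=float))[::-1]   # descending
    y = np.empty_like(template)
    y[np.argsort(-np.asarray(x, dtype=float))] = template
    return y
```

With several compositions as subcodes, one would run this encoder per subcode and keep the best, which is what preserves encoding simplicity while enlarging the codebook.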
We develop the concept of fractal metamaterials, which consist of arrays of nano- and micron-sized rings containing Josephson junctions that play the role of "atoms" in such artificial materials. We show that if some of the junctions have $\pi$-shifts in their Josephson phases, then the "atoms" become magnetic and their arrays can have tuned positive or negative permeability. Each individual "$\pi$-ring" - a Josephson ring with one $\pi$-junction - can be in one of two energetically degenerate magnetic states in which the supercurrent flows in the clockwise or counter-clockwise direction. This results in magnetic moments that point downwards or upwards, respectively. The value of the total magnetization of such a metamaterial may display fractal features. We describe the magnetic properties of such superconducting metamaterials, including the magnetic field distribution in them (i.e., in the network made up of these rings). We also describe the way the magnetic flux penetrates the Josephson network and how it depends strongly on the geometry of the system.
The particle-based ellipsoidal statistical Bhatnagar-Gross-Krook (ESBGK) model is extended to diatomic molecules and compared with the Direct Simulation Monte Carlo (DSMC) method. For this, an efficient method is developed that optionally allows the handling of quantized vibrational energies. The proposed method is verified with a gas in an adiabatic box relaxing from a non-equilibrium state to equilibrium. It is shown that the analytical Landau-Teller expression as well as DSMC results agree very well with the new method. Furthermore, the method is compared with DSMC results and experimental measurements of a hypersonic flow around a 70$^\circ$ blunted cone. It is shown that the ESBGK model compares very well with the DSMC results while saving up to a factor of $\approx 35.8$ in CPU time for this low-Knudsen-number case.
In this article, we study the existence and multiplicity of non-negative solutions of the following $p$-fractional equation: $$ \left\{\begin{array}{ll} \displaystyle -2\int_{\mathbb{R}^n}\frac{|u(y)-u(x)|^{p-2}(u(y)-u(x))}{|x-y|^{n+p\alpha}}\, dy = \lambda h(x)|u|^{q-1}u + b(x)|u|^{r-1}u & \text{in } \Omega,\\[2mm] u \geq 0 \ \text{in}\ \Omega, \quad u\in W^{\alpha,p}(\mathbb{R}^n), & \\[2mm] u = 0 & \text{on } \mathbb{R}^n\setminus \Omega, \end{array}\right. $$ where $\Omega$ is a bounded domain in $\mathbb{R}^n$, $p\geq 2$, $n> p\alpha$, $\alpha\in(0,1)$, $0< q<p-1 <r < \frac{np}{n-p\alpha}-1$, $\lambda>0$, and $h$, $b$ are sign-changing smooth functions. We show the existence of solutions by minimization on a suitable subset of the Nehari manifold using fibering maps. We find that there exists $\lambda_0$ such that for $\lambda\in (0,\lambda_0)$, the problem has at least two solutions.
We study the four-terminal resistance fluctuations of mesoscopic samples near the transition between the $\nu=2$ and the $\nu=1$ quantum Hall states. We observe near-perfect correlations between the fluctuations of the longitudinal and Hall components of the resistance. These correlated fluctuations appear in a magnetic-field range for which the two-terminal resistance of the samples is quantized. We discuss these findings in light of edge-state transport models of the quantum Hall effect. We also show that our results lead to an ambiguity in the determination of the width of quantum Hall transitions.
Recommending items to potentially interested users has been an important commercial task that faces two main challenges: accuracy and explainability. While most collaborative filtering models rely on statistical computations on a large scale of interaction data between users and items and can achieve high performance, they often lack clear explanatory power. We propose UIPC-MF, a prototype-based matrix factorization method for explainable collaborative filtering recommendations. In UIPC-MF, both users and items are associated with sets of prototypes, capturing general collaborative attributes. To enhance explainability, UIPC-MF learns connection weights that reflect the associative relations between user and item prototypes for recommendations. UIPC-MF outperforms other prototype-based baseline methods in terms of Hit Ratio and Normalized Discounted Cumulative Gain on three datasets, while also providing better transparency.
We demonstrate the use of a variational method to determine a quantitative lower bound on the rate of convergence of Markov Chain Monte Carlo (MCMC) algorithms as a function of the target density and proposal density. The bound relies on approximating the second largest eigenvalue in the spectrum of the MCMC operator using a variational principle, and the approach is applicable to problems with continuous state spaces. We apply the method to one-dimensional examples with Gaussian and quartic target densities, and we contrast the performance of the Random Walk Metropolis-Hastings (RWMH) algorithm with a ``smart'' variant that incorporates gradient information into the trial moves. We find that the variational method agrees quite closely with numerical simulations. We also see that the smart MCMC algorithm often fails to converge geometrically in the tails of the target density except in the simplest case we examine, and even then care must be taken to choose the appropriate scaling of the deterministic and random parts of the proposed moves. These observations call into question the utility of smart MCMC in more complex problems. Finally, we apply the same method to approximate the rate of convergence in multidimensional Gaussian problems with and without importance sampling. There we demonstrate the necessity of importance sampling for target densities which depend on variables with a wide range of scales.
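On a discretized state space, the quantity in question — the second-largest eigenvalue modulus of the MCMC operator, which governs the geometric convergence rate — can be computed exactly, making such bounds easy to sanity-check. A toy sketch for RWMH on a discretized Gaussian target (the grid and the $\pm 1$ proposal are our own illustrative choices):

```python
import numpy as np

def rwmh_transition_matrix(log_pi, n_states, step=1):
    """Random Walk Metropolis-Hastings transition matrix on states 0..n-1
    with a symmetric +/-step proposal; rejected or out-of-range proposals
    contribute to the diagonal (the chain stays put)."""
    P = np.zeros((n_states, n_states))
    for i in range(n_states):
        for j in (i - step, i + step):
            if 0 <= j < n_states:
                P[i, j] = 0.5 * min(1.0, np.exp(log_pi(j) - log_pi(i)))
        P[i, i] = 1.0 - P[i].sum()
    return P

# Discretized standard Gaussian target on a grid.
n = 51
grid = np.linspace(-5, 5, n)
log_pi = lambda i: -0.5 * grid[i] ** 2
P = rwmh_transition_matrix(log_pi, n)

lam = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
rate = lam[1]   # second-largest modulus: the geometric convergence factor
```

The largest eigenvalue is 1 (stationarity), and the gap $1 - \texttt{rate}$ is the quantity a variational principle would bound from below.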