To improve the usability of a revision history, change untangling, which reconstructs the history so that the changes in each commit belong to a single intentional task, is important. Although several untangling approaches based on clustering fine-grained editing operations on source code exist, they often produce results that are unsuitable for developers, and manual tailoring of the results is necessary. In this paper, we propose ChangeBeadsThreader (CBT), an interactive environment for splitting and merging change clusters to support the manual tailoring of untangled changes. CBT provides two features: 1) a two-dimensional space that visualizes the fine-grained change history to help users find clusters to be merged and 2) an augmented diff view that enables users to check the consistency of the changes in a specific cluster to find those to be split. These features allow users to easily tailor automatically untangled changes.
The linear Navier-Stokes equations in three dimensions are given by $u_{it}(x,t)-\rho \triangle u_i(x,t)-p_{x_i}(x,t)=w_i(x,t)$, $\operatorname{div}\,\textbf{u}(x,t)=0$, $i=1,2,3$, with initial and boundary conditions $\textbf{u}|_{(t=0)\cup\partial\Omega}=0$. The Green function of the Dirichlet problem $\textbf{u}|_{(t=0)\cup\partial\Omega}=0$ for the equation $u_{it}(x,t)-\rho\triangle u_i(x,t)=f_i(x,t)$ is represented as $G(x,t;\xi,\tau)=Z(x,t;\xi,\tau)+V(x,t;\xi,\tau)$, where $Z(x,t;\xi,\tau)=\frac{1}{8\pi^{3/2}(t-\tau)^{3/2}}\, e^{-\frac{(x_1-\xi_1)^2+(x_2-\xi_2)^2+(x_3-\xi_3)^2}{4(t-\tau)}}$ is the fundamental solution of this equation and $V(x,t;\xi,\tau)$ is a smooth function of the variables $(x,t;\xi,\tau)$. The construction of the function $G(x,t;\xi,\tau)$ is presented in the book [1, p. 106]. Using the Green function, we represent the Navier-Stokes equations as $u_i(x,t)=\int_0^t\int_{\Omega}\big(Z(x,t;\xi,\tau)+V(x,t;\xi,\tau)\big)\frac{\partial p(\xi,\tau)}{\partial \xi_i}\,d\xi\, d\tau +\int_0^t\int_{\Omega}G(x,t;\xi,\tau)\,w_i(\xi,\tau)\,d\xi\, d\tau$, while $\operatorname{div}\,\textbf{u}(x,t)=\sum_{i=1}^3 \frac{\partial u_i(x,t)}{\partial x_i}=0$. Using these equations and the property $\frac{\partial Z(x,t;\xi,\tau)}{\partial x_i}=-\frac{\partial Z(x,t;\xi,\tau)}{\partial \xi_i}$ of the fundamental solution $Z(x,t;\xi,\tau)$, we obtain an integral equation for the unknown pressure $p(x,t)$. From this integral equation we derive the explicit expression for the pressure: $p(x,t)=-\frac{d}{dt}\triangle^{-1}\ast\int_0^t\int_{\Omega}\sum_{i=1}^3 \frac{\partial G(x,t;\xi,\tau)}{\partial x_i}\,w_i(\xi,\tau)\,d\xi\, d\tau+\rho\int_0^t\int_{\Omega}\sum_{i=1}^3\frac{\partial G(x,t;\xi,\tau)}{\partial x_i}\,w_i(\xi,\tau)\,d\xi\, d\tau$. From this formula the estimate $\int_0^t\sum_{i=1}^3\Big\|\frac{\partial p(x,\tau)}{\partial x_i}\Big\|_{L_2(\Omega)}^2\, d\tau<c\int_0^t\sum_{i=1}^3\|w_i(x,\tau)\|_{L_2(\Omega)}^2\, d\tau$ follows.
We find it is common for consumers who are not in financial distress to make credit card payments at or close to the minimum. This pattern is difficult to reconcile with economic factors but can be explained by the minimum payment information presented to consumers acting as an anchor that weighs payments down. Building on Stewart (2009), we conduct a hypothetical credit card payment experiment to test an intervention to de-anchor payment choices. The intervention effectively stops consumers from selecting payments at the contractual minimum. It also increases their average payments and shifts the distribution of payments. By de-anchoring choices from the minimum, consumers increasingly choose the full payment amount, which appears to act as a target payment for consumers. We innovate by linking the experimental responses to survey responses on financial distress and to actual credit card payment behaviours. We find that the intervention increases payments mainly among less financially-distressed consumers. We are also able to evaluate the potential external validity of our experiment and find that hypothetical responses are closely related to consumers' actual credit card payments.
This is the construction proposal for the STAR Event Plane Detector (EPD). It discusses design considerations, simulations, and physics motivations for the device. It also covers other important details such as connector construction, cost and schedule, and the radiation hardness of the epoxy. This proposal was submitted to STAR in May 2016 and approved shortly thereafter. The device was subsequently constructed and installed into STAR. The design evolved somewhat between the proposal and the final construction, but this document contains useful details not found elsewhere. A manuscript detailing the device as constructed has been posted at arXiv:1912.05243.
Thanks to its capability of classifying complex phenomena without explicit modeling, deep learning (DL) has been demonstrated to be a key enabler of Wireless Signal Classification (WSC). Although DL can achieve a very high accuracy under certain conditions, recent research has unveiled that the wireless channel can disrupt the features learned by the DL model during training, thus drastically reducing the classification performance in real-world live settings. Since retraining classifiers is cumbersome after deployment, existing work has leveraged carefully-tailored Finite Impulse Response (FIR) filters that, when applied at the transmitter's side, can restore the features that are lost because of the channel action, i.e., waveform synthesis. However, these approaches compute FIRs using offline optimization strategies, which limits their efficacy in highly-dynamic channel settings. In this paper, we improve the state of the art by proposing Chares, a Deep Reinforcement Learning (DRL)-based framework for channel-resilient adaptive waveform synthesis. Chares adapts to new and unseen channel conditions by optimally computing the FIRs in real time through DRL. Chares is a DRL agent whose architecture is based upon Twin Delayed Deep Deterministic Policy Gradients (TD3), which requires minimal feedback from the receiver and explores a continuous action space. Chares has been extensively evaluated on two well-known datasets. We have also evaluated the real-time latency of Chares with an implementation on a field-programmable gate array (FPGA). Results show that Chares increases the accuracy by up to 4.1x with respect to no waveform synthesis and by 1.9x with respect to existing work, and can compute new actions within 41 µs.
In this paper we consider the problem of packing a symplectic manifold with integral Lagrangian tori, that is, Lagrangian tori whose area homomorphisms take only integer values. We prove that the Clifford torus in $S^2 \times S^2$ is a maximal integral packing, in the sense that any other integral Lagrangian torus must intersect it. In the other direction, we show that in any symplectic polydisk $P(a,b)$ with $a,b>2$, there is at least one integral Lagrangian torus in the complement of the collection of standard product integral Lagrangian tori.
We present a chemical-composition analysis of 77 red-giant stars in Omega Centauri. We have measured abundances for carbon and nitrogen, and combined our results with abundances of O, Na, La, and Fe that we determined in our previous work. Our aim is to better understand the peculiar chemical-enrichment history of this cluster, by studying how the total C+N+O content varies among the different-metallicity stellar groups, and among stars at different places along the Na-O anticorrelation. We find the (anti)correlations among the light elements that would be expected on theoretical grounds for matter that has been nuclearly processed via high-temperature proton captures. The overall [(C+N+O)/Fe] increases by 0.5 dex from [Fe/H] = -2.0 to [Fe/H] = -0.9. Our results provide insight into the chemical-enrichment history of the cluster, and the measured CNO variations provide important corrections for estimating the relative ages of the different stellar populations.
In this paper, we propose an ensemble learning algorithm named \textit{bagged $k$-distance for mode-based clustering} (\textit{BDMBC}) by putting forward a new measure called the \textit{probability of localized level sets} (\textit{PLLS}), which enables us to find all clusters for varying densities with a global threshold. On the theoretical side, we show that with a properly chosen number of nearest neighbors $k_D$ in the bagged $k$-distance, the sub-sample size $s$, the bagging rounds $B$, and the number of nearest neighbors $k_L$ for the localized level sets, BDMBC can achieve optimal convergence rates for mode estimation. It turns out that with a relatively small $B$, the sub-sample size $s$ can be much smaller than the number of training data $n$ at each bagging round, and the number of nearest neighbors $k_D$ can be reduced simultaneously. Moreover, we establish optimal convergence results for the level set estimation of the PLLS in terms of Hausdorff distance, which reveal that BDMBC can find localized level sets for varying densities and thus enjoys local adaptivity. On the practical side, we conduct numerical experiments to empirically verify the effectiveness of BDMBC for mode estimation and level set estimation, which demonstrate the promising accuracy and efficiency of our proposed algorithm.
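To make the bagged $k$-distance concrete, the following Python sketch reflects our reading of the abstract: subsample $s$ points, record each point's distance to its $k_D$-th nearest neighbor within the subsample, and average over $B$ bagging rounds. All function names and default values here are our own illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def bagged_k_distance(X, k_D=5, s=200, B=20, seed=0):
    """Average distance from each point in X to its k_D-th nearest
    neighbor, measured against B random subsamples of size s; small
    values indicate high local density (a sketch, our reading)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    acc = np.zeros(n)
    for _ in range(B):
        idx = rng.choice(n, size=min(s, n), replace=False)
        nn = NearestNeighbors(n_neighbors=k_D).fit(X[idx])
        dists, _ = nn.kneighbors(X)   # distances to the k_D neighbors
        acc += dists[:, -1]           # keep the k_D-th neighbor distance
    return acc / B
```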
We introduce in this article a new method to estimate the minimum distance of codes from algebraic surfaces. This lower bound is generic, i.e. can be applied to any surface, and turns out to be ``liftable'' under finite morphisms, paving the way toward the construction of good codes from towers of surfaces. In the same direction, we establish a criterion for a surface with a fixed finite set of closed points $\mathcal P$ to have an infinite tower of $\ell$--\'etale covers in which $\mathcal P$ splits totally. We conclude by stating several open problems. In particular, we relate the existence of asymptotically good codes from general type surfaces with a very ample canonical class to the behaviour of their number of rational points with respect to their $K^2$ and coherent Euler characteristic.
In this note, with the help of the boundary classification of diffusions, we derive a criterion of the convergence of perpetual integral functionals of transient real-valued diffusions. In the particular case of transient Bessel processes, we note that this criterion agrees with the one obtained via Jeulin's convergence lemma.
We calculate the leading power corrections to the decay rates, distributions and hadronic spectral moments in rare inclusive $B \to X_s \ell^+ \ell^-$ decays in the standard model, using the heavy quark expansion (HQE) in $1/m_b$ and a phenomenological model implementing the Fermi motion effects of the b-quark bound in the B-hadron. We include next-to-leading order perturbative QCD corrections and work out the dependence of the spectra, decay rates and hadronic moments on the model parameters in both the HQE and the Fermi motion model. In the latter, we take into account long-distance effects via $B \to X_s +(J/\psi, \psi^\prime,...) \to X_s \ell^+ \ell^- $ with a vector meson dominance ansatz and study the influence of kinematical cuts in the dilepton and hadronic invariant masses on branching ratios, hadron spectra and hadronic moments. We present leading logarithmic QCD corrections to the $b \to s \gamma\gamma$ amplitude. The perturbative-QCD-improved $B_{s}\to\gamma\gamma$ branching ratio is given in the standard model, including our estimate of long-distance effects via $B_s \to \phi \gamma \to \gamma \gamma$ and $B_s \to \phi \psi \to \phi \gamma \to \gamma \gamma$ decays. The uncertainties due to the renormalization scale and the parameters of the HQE-inspired bound state model are worked out.
The prevalent use of Large Language Models (LLMs) has necessitated studying their mental models, yielding noteworthy theoretical and practical implications. Current research has demonstrated that state-of-the-art LLMs, such as ChatGPT, exhibit certain theory-of-mind capabilities and possess relatively stable Big Five and/or MBTI personality traits. In addition, cognitive process features form an essential component of these mental models. Research in cultural psychology has indicated significant differences in the cognitive processes of Eastern and Western people when processing information and making judgments. While Westerners predominantly exhibit analytical thinking that isolates things from their environment to analyze their nature independently, Easterners often showcase holistic thinking, emphasizing relationships and adopting a global viewpoint. In our research, we probed the cultural cognitive traits of ChatGPT. We employed two scales that directly measure the cognitive process: the Analysis-Holism Scale (AHS) and the Triadic Categorization Task (TCT). Additionally, we used two scales that investigate the value differences shaped by cultural thinking: the Dialectical Self Scale (DSS) and the Self-Construal Scale (SCS). In the cognitive process tests (AHS/TCT), ChatGPT consistently tends towards Eastern holistic thinking, but in value judgments (DSS/SCS), ChatGPT does not significantly lean towards the East or the West. We suggest that this result can be attributed to both the training paradigm and the training data in LLM development. We discuss the potential value of this finding for AI research and directions for future research.
The nonlinear evolution of the m=1 internal kink mode is studied numerically in a setting where the tokamak core plasma is surrounded by a turbulent region with low magnetic shear. As a starting point we choose configurations with three nearby q=1 surfaces, where triple tearing modes (TTMs) with high poloidal mode numbers m are unstable. While the amplitudes are still small, the fast-growing high-m TTMs enhance the growth of the m=1 instability. This is interpreted as a fast sawtooth trigger mechanism. The TTMs lead to a partial collapse, leaving behind a turbulent belt with q ~= 1 around the unreconnected core plasma. Although full reconnection can occur if the core displacement grows large enough, it is shown that the turbulence may actively prevent further reconnection. This is qualitatively similar to experimentally observed partial sawtooth crashes with post-cursor oscillations due to a saturated internal kink.
We calculate the spectrum of the scattered light from quantum degenerate atomic gases obeying Bose-Einstein statistics. The atoms are assumed to occupy two ground states which are optically coupled through a common excited state by two low-intensity off-resonant light beams. In the presence of a Bose condensate in both ground states, the atoms may exhibit light-induced oscillations between the two condensates analogous to the Josephson effect. The spectrum of the scattered light is calculated in the limit of a low oscillation frequency. In the spectrum we are able to observe qualitative features depending on the phase difference between the macroscopic wave functions of the two condensates. Thus, our optical scheme could possibly be used as an experimental realization of the spontaneous breakdown of the U(1) gauge symmetry in Bose-Einstein condensation.
We solve the Euclidean Einstein equations with non-Abelian gauge fields of sufficiently large symmetry in various dimensions. In higher-dimensional spaces, we find the solutions which are similar to so-called scalar wormholes. In four-dimensional space-time, we find singular wormhole solutions with infinite Euclidean action. Wormhole solutions in the three-dimensional Einstein-Yang-Mills theory with a Chern-Simons term are also constructed.
We reinvestigate the recently discovered bifurcation phase transition in Causal Dynamical Triangulations (CDT) and provide further evidence that it is a higher order transition. We also investigate the impact of introducing matter in the form of massless scalar fields to CDT. We discuss the impact of scalar fields on the measured spatial volumes and fluctuation profiles in addition to analysing how the scalar fields influence the position of the bifurcation transition.
Most research into anti-phishing defence assumes that the mal-actor is attempting to harvest end-users' personally identifiable information or login credentials and, hence, focuses on detecting phishing websites. The defences for this type of attack are usually activated after the end-user clicks on a link, at which point the link is checked. This is known as after-the-click detection. However, more sophisticated phishing attacks (such as spear-phishing and whaling) are rarely designed to get the end-user to visit a website. Instead, they attempt to get the end-user to perform some other action, for example, transferring money from their bank account to the mal-actor's account. These attacks are rarer, and before-the-click defence has been investigated less than after-the-click defence. To better integrate and contextualize these studies in the overall anti-phishing research, this paper presents a systematic literature review of proposed anti-phishing defences. From a total of 6330 papers, 21 primary studies and 335 secondary studies were identified and examined. The current research was grouped into six primary categories: blocklist/allowlist, heuristics, content, visual, artificial intelligence/machine learning, and proactive, with an additional category of "other" for detection techniques that do not fit into any of the primary categories. The paper then discusses the performance and suitability of using these techniques for detecting phishing emails before the end-user even reads the email. Finally, it suggests some promising areas for further research.
Using the full data sample collected with the Belle detector at the KEKB asymmetric-energy $e^+e^-$ collider, we present measurements of CP violation in charm decays. The $D^0-\bar{D}^0$ mixing parameter $y_{CP}$ and the indirect CP violation parameter $A_{\Gamma}$ in $D^0\rightarrow h^+h^-$ decays are reported, where $h$ denotes $K$ or $\pi$. The preliminary results are $y_{CP}=(1.11\pm0.22\pm0.11)\%$ and $A_{\Gamma}=(-0.03\pm0.20\pm0.08)\%$. We also report searches for CP violation in $D^0\rightarrow h^+h^-$ and $D^+\rightarrow K^0_S K^+$ decays. No evidence for CP violation in $D^0\rightarrow h^+h^-$ is observed, with $A^{KK}_{CP}=(-0.32\pm0.21\pm0.09)\%$ and $A^{\pi\pi}_{CP}=(+0.55\pm0.36\pm0.09)\%$. The CP asymmetry difference between $D^0\rightarrow K^+K^-$ and $D^0\rightarrow\pi^+\pi^-$ decays is measured to be $\Delta A^{hh}_{CP}=(-0.87\pm0.41\pm0.06)\%$. The CP asymmetry in $D^+\rightarrow K^0_S K^+$ decay is measured to be $(-0.25\pm0.28\pm0.14)\%$. After subtracting CP violation due to $K^0-\bar{K}^0$ mixing, the CP asymmetry in $D^+\rightarrow\bar{K}^0 K^+$ decay is found to be $(+0.08\pm0.28\pm0.14)\%$.
The nature of subproton scale fluctuations in the solar wind is an open question, partly because two similar types of electromagnetic turbulence can occur: kinetic Alfven turbulence and whistler turbulence. These two possibilities, however, have one key qualitative difference: whistler turbulence, unlike kinetic Alfven turbulence, has negligible power in density fluctuations. In this Letter, we present new observational data, as well as analytical and numerical results, to investigate this difference. The results show, for the first time, that the fluctuations well below the proton scale are predominantly kinetic Alfven turbulence, and, if present at all, the whistler fluctuations make up only a small fraction of the total energy.
Although neural sequence-to-sequence models have been successfully applied to semantic parsing, they fail at compositional generalization, i.e., they are unable to systematically generalize to unseen compositions of seen components. Motivated by traditional semantic parsing where compositionality is explicitly accounted for by symbolic grammars, we propose a new decoding framework that preserves the expressivity and generality of sequence-to-sequence models while featuring lexicon-style alignments and disentangled information processing. Specifically, we decompose decoding into two phases where an input utterance is first tagged with semantic symbols representing the meaning of individual words, and then a sequence-to-sequence model is used to predict the final meaning representation conditioning on the utterance and the predicted tag sequence. Experimental results on three semantic parsing datasets show that the proposed approach consistently improves compositional generalization across model architectures, domains, and semantic formalisms.
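As an illustration of the two-phase decoding just described, here is a minimal Python sketch; `tagger` and `seq2seq` stand in for trained models, and the example tags are hypothetical rather than taken from the paper.

```python
def two_phase_parse(utterance_tokens, tagger, seq2seq):
    # Phase 1: lexicon-style alignment, one semantic symbol per word,
    # e.g. ["STATE", "O", "RIVER"] (hypothetical tag inventory).
    tags = tagger(utterance_tokens)
    # Phase 2: a seq2seq model predicts the final meaning representation,
    # conditioning on both the utterance and the predicted tag sequence.
    return seq2seq(utterance_tokens, tags)
```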
We demonstrate that in photonic gap antennas composed of an epsilon-near-zero (ENZ) layer embedded within a high-index dielectric, hybrid modes emerge from the strong coupling between the ENZ thin film and the photonic modes of the dielectric antenna. These hybrid modes show giant electric field enhancements, large enhancements of the far-field spontaneous emission rate and a unidirectional radiation response. We analyze both parent and hybrid modes using quasinormal mode theory and find that the hybridization can be well understood using a coupled oscillator model. Under plane wave illumination, hybrid ENZ antennas can concentrate light with an electric field amplitude $\sim$100 times higher than that of the incident wave, which places them on par with the best plasmonic antennas. In addition, the far-field spontaneous emission rate of a dipole embedded at the antenna hotspot reaches up to $\sim$2300 times that in free space, with nearly perfect unidirectional emission.
Particle therapy is an established method to treat deep-seated tumours using accelerator-produced ion beams. For treatment planning, precise knowledge of the relative stopping power (RSP) within the patient is vital. Conversion errors from x-ray computed tomography (CT) measurements to RSP introduce uncertainties in the applied dose distribution. Using a proton computed tomography (pCT) system to measure the RSP directly could potentially increase the accuracy of treatment planning. A pCT demonstrator, consisting of double-sided silicon strip detectors (DSSD) as a tracker and plastic scintillator slabs coupled to silicon photomultipliers (SiPM) as a range telescope, was developed. After a significant hardware upgrade of the range telescope, a 3D tomogram of an aluminium stair phantom was recorded at the MedAustron facility in Wiener Neustadt, Austria. In total, 80 projections with $6.5\times10^{5}$ primary events were acquired and used for the reconstruction of the RSP distribution in the phantom. After applying a straight-line approximation for the particle path inside the phantom, the most probable value (MPV) of the RSP distribution could be measured with an accuracy of 0.59%. The RSP resolution inside the phantom was only 9.3% due to the limited number of projections and measured events per projection.
Nonlinear Schr\"odinger equation (with the Schwarzian initial data) is important in nonlinear optics, Bose condensation and in the theory of strongly correlated electrons. The asymptotic solutions in the region $x/t={\cal O}(1)$, $t\to\infty$, can be represented as a double series in $t^{-1}$ and $\ln t$. Our current purpose is the description of the asymptotics of the coefficients of the series.
We predict hyper-entanglement generation during binary scattering of mesoscopic bound states: solitary waves in Bose-Einstein condensates containing thousands of identical bosons. The underlying many-body Hamiltonian must not be integrable, and the pre-collision quantum state of the solitons must be fragmented. Under these conditions, we show with pure-state quantum field simulations that, for realistic parameters, the post-collision state will be hyper-entangled in spatial degrees of freedom and in atom number within solitons. The effect links aspects of nonlinear systems and quantum coherence, and the entangled post-collision state challenges present entanglement criteria for identical particles. Our results are based on simulations of colliding quantum solitons in a quintic interaction model beyond the mean field, using the truncated Wigner approximation.
Observations by the Large Area Telescope (LAT) on the \textit{Fermi} mission of diffuse $\gamma$-rays in a mid-latitude region in the third quadrant (Galactic longitude $l$ from $200^{\circ}$ to $260^{\circ}$ and latitude $|b|$ from $22^{\circ}$ to $60^{\circ}$) are reported. The region contains no known large molecular cloud and most of the atomic hydrogen is within 1 kpc of the solar system. The contributions of $\gamma$-ray point sources and inverse Compton scattering are estimated and subtracted. The residual $\gamma$-ray intensity exhibits a linear correlation with the atomic gas column density in energy from 100 MeV to 10 GeV. The measured integrated $\gamma$-ray emissivity is $(1.63 \pm 0.05) \times 10^{-26}~\mathrm{photons\,s^{-1}\,sr^{-1}\,H\text{-}atom^{-1}}$ and $(0.66 \pm 0.02) \times 10^{-26}~\mathrm{photons\,s^{-1}\,sr^{-1}\,H\text{-}atom^{-1}}$ above 100 MeV and above 300 MeV, respectively, with an additional systematic error of $\sim 10\%$. The differential emissivity in 100 MeV--10 GeV agrees, at the 10\% level, with calculations based on cosmic-ray spectra consistent with those directly measured. The results obtained indicate that cosmic-ray nuclei spectra within 1 kpc of the solar system in the regions studied are close, within $\sim 10\%$, to the local interstellar spectra inferred from direct measurements at the Earth.
The TREC 2017 Common Core Track aimed at gathering a diverse set of participating runs and building a new test collection using advanced pooling methods. In this paper, we describe the participation of the IlpsUvA team at the TREC 2017 Common Core Track. We submitted runs created using two methods to the track: (1) BOIR uses Bayesian optimization to automatically optimize retrieval model hyperparameters. (2) NVSM is a latent vector space model where representations of documents and query terms are learned from scratch in an unsupervised manner. We find that BOIR is able to optimize hyperparameters so as to find a system that performs competitively amongst track participants. NVSM provides rankings that are diverse, as it was amongst the top automated unsupervised runs that provided the most unique relevant documents.
From pressure and surface dilation measurements, we show that a solid-liquid-type transition occurs at low excitation frequencies in vertically vibrated granular layers. This transition precedes subharmonic bifurcations from a flat surface to standing wave patterns, indicating that these waves are in fact associated with the fluid-like behavior of the layer. In the limit of high excitation frequencies, we show that a new kind of subharmonic wave can be distinguished. These waves do not involve any lateral transfer of grains within the layer and correspond to excitations for which the layer slightly bends, alternately in time and space. These bending waves have very low amplitude, and we observe them in a vibrated two-dimensional layer of photoelastic particles.
The overall aim of the software industry is to ensure the delivery of high-quality software to the end user. Ensuring high quality requires testing: testing verifies that software meets user specifications and requirements. However, the field of software testing has a number of underlying issues, such as the effective generation of test cases and the prioritisation of test cases, which need to be tackled. These issues place demands on the effort, time and cost of testing. Different techniques and methodologies have been proposed to address these issues. The use of evolutionary algorithms for automatic test generation has been an area of interest for many researchers, and the Genetic Algorithm (GA) is one such evolutionary algorithm. In this paper, we present a survey of GA approaches for addressing the various issues encountered during software testing.
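For readers new to the idea, here is a small, self-contained Python sketch of a GA searching for a test input that reaches a target branch; the fitness function and every parameter are illustrative placeholders, not drawn from any particular surveyed paper.

```python
import random

def branch_distance(candidate):
    # Toy fitness: distance to covering the branch "if x == 42".
    return abs(candidate - 42)

def evolve(pop_size=50, generations=100, mutation_rate=0.1):
    population = [random.randint(-1000, 1000) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=branch_distance)       # selection pressure
        parents = population[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                   # crossover: midpoint
            if random.random() < mutation_rate:
                child += random.randint(-10, 10)   # mutation
            children.append(child)
        population = parents + children
    return min(population, key=branch_distance)    # best test input found
```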
Explainable artificial intelligence has rapidly emerged since lawmakers began requiring interpretable models for safety-critical domains. Concept-based neural networks have arisen as explainable-by-design methods, as they leverage human-understandable symbols (i.e. concepts) to predict class memberships. However, most of these approaches focus on identifying the most relevant concepts but do not provide concise, formal explanations of how such concepts are leveraged by the classifier to make predictions. In this paper, we propose a novel end-to-end differentiable approach that enables the extraction of logic explanations from neural networks using the formalism of First-Order Logic. The method relies on an entropy-based criterion which automatically identifies the most relevant concepts. We consider four different case studies to demonstrate that: (i) this entropy-based criterion enables the distillation of concise logic explanations in safety-critical domains, from clinical data to computer vision; (ii) the proposed approach outperforms state-of-the-art white-box models in terms of classification accuracy and matches black-box performance.
An analytical model is derived for the probability of failure (P-fail) to spatially acquire an optical link with a jittering search beam. The analytical model accounts for an arbitrary jitter spectrum and considers the associated correlations between jitter excursions on adjacent tracks of the search spiral. An expression of P-fail in terms of basic transcendental functions is found by linearizing the exact analytical model with respect to the correlation strength. Predictions from the models indicate a strong decrease of P-fail with increasing correlation strength, which is found to be in excellent agreement with results from Monte Carlo simulations. The dependency of P-fail on track width and scan speed is investigated, confirming previous assumptions on the impact of correlations. Expressions and applicable constraints are derived for the limits of full and no correlations, and the optimal track width to minimize the acquisition time is computed for a range of scan speeds. The model is applicable to optical terminals equipped with a fast beam steering mirror, as often found for optical communication missions in space.
Scanning tunneling spectroscopy was performed on single crystals of superconducting 2H-NbSe2, at 300 mK and in a magnetic field of up to 5 T applied parallel to the ab-plane. This novel field geometry allows the quasiparticle density-of-states spectrum to be measured under finite superfluid momentum, while avoiding contributions from the vortex-core bound states. At zero field, we observed a fully-gapped conductance spectrum with both gap-edge peaks and sub-gap kinks. These spectral features show a systematic evolution with the applied field: the kinks close in while the peaks move apart in low fields, and the zero-bias conductance has a two-sloped behavior over the entire field range, though dipping anomalously at 0.7 T. Our data were analyzed with recent theoretical models for quasiparticle tunneling into a current-carrying superconductor, and yielded distinct evidence for multiple superconducting gaps coming from various Fermi-surface sheets of different topologies, as well as possible implications for the origin of the coexisting charge-density-wave order.
In many areas of physics, the Kramers-Kronig (KK) relations are used to extract information about the real part of the optical response of a medium from its imaginary counterpart. In this paper we discuss an alternative but mathematically equivalent approach based on the Hilbert transform. We apply the Hilbert transform to transmission spectra to find the group and refractive indices of a Cs vapor, and thereby demonstrate how the Hilbert transform allows indirect measurement of the refractive index, group index and group delay whilst avoiding the use of complicated experimental setups.
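As a toy illustration of that approach (not the paper's data or code), the Python sketch below applies scipy's Hilbert transform to a synthetic Lorentzian absorption line to obtain the dispersion and group-index line shapes; overall scale factors and the exact detuning-to-frequency conversion are omitted.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic Lorentzian absorption line standing in for a measured
# transmission spectrum (illustrative only, not the Cs data).
omega = np.linspace(-50.0, 50.0, 4001)   # detuning, arbitrary units
alpha = 1.0 / (1.0 + omega**2)           # absorption line shape

# scipy's hilbert() returns the analytic signal x + i*H(x); its
# imaginary part is the Hilbert transform, which plays the role of
# the KK integral linking absorption to dispersion.
n_minus_1 = np.imag(hilbert(alpha))      # refractive index - 1, up to scale

# Group index n_g ~ n + omega * dn/domega (numerical derivative).
n_group = 1.0 + n_minus_1 + omega * np.gradient(n_minus_1, omega)
```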
In this paper we develop new methods for the study of generalized normal homogeneous Riemannian manifolds. In particular, we obtain a complete classification of generalized normal homogeneous Riemannian metrics on spheres. We prove that for any connected (almost effective) compact Lie group $G$ acting transitively on $S^n$, the family of $G$-invariant Riemannian metrics on $S^n$ contains generalized normal homogeneous but not normal homogeneous metrics if and only if this family depends on more than one parameter. Any such family (which exists only for $n=2k+1$) contains a metric $g_{\mathrm{can}}$ of constant sectional curvature 1 on $S^n$. We also prove that $(S^{2k+1}, g_{\mathrm{can}})$ is Clifford-Wolf homogeneous, and therefore generalized normal homogeneous, with respect to $G$ (except for the groups $G=SU(k+1)$ with odd $k+1$). The space of unit Killing vector fields on $(S^{2k+1}, g_{\mathrm{can}})$ from the Lie algebra $\mathfrak{g}$ of the Lie group $G$ is described as a certain symmetric space (except for the case $G=U(k+1)$, where one obtains the union of all complex Grassmannians in $\mathbb{C}^{k+1}$).
A seal for classical information is simply impossible, since classical information can be copied any number of times. Based on quantum information, especially the quantum no-cloning theorem, one might hope to construct a perfect quantum seal. However, it has been shown that a perfect quantum seal is impossible, and the success probability is bounded. In this paper, we show how to exceed the optimal bound by using TCF (Trapdoor Claw-Free) functions, which can be constructed based on the LWE assumption. Hence the scheme is post-quantum secure.
In this work, the relation between input-to-state stability and integral input-to-state stability is studied for linear infinite-dimensional systems with an unbounded control operator. Although a special focus is laid on the case $L^{\infty}$, general function spaces are considered for the inputs. We show that integral input-to-state stability can be characterized in terms of input-to-state stability with respect to Orlicz spaces. Since we consider linear systems, the results can also be formulated in terms of admissibility. For parabolic diagonal systems with scalar inputs, both stability notions with respect to $L^\infty$ are equivalent.
A crossover between logarithmic and exponential temperature dependence of the conductance (weak and strong localization) has been observed in ultrathin films of metals deposited onto substrates held at liquid helium temperatures. The resistance at the crossover is well defined by the onset of a nearly linear dependence of conductance on thickness at fixed temperature in a sequence of in situ evaporated films. The results of a finite size scaling analysis treating thickness as a control parameter suggest the existence of a T=0 quantum critical point which we suggest is a charge, or electron glass melting transition.
This paper proposes a strategy for detecting the presence of a gravito-magnetic field due to the rotation of the galactic dark halo. Visible matter in galaxies rotates, and dark matter, supposed to form a halo incorporating the baryonic matter, rotates as well, since it interacts gravitationally with the rest. Pursuing the same line of reasoning, dark matter should produce all gravitational effects predicted by general relativity, including a gravito-magnetic field. I discuss a possible strategy for measuring that field. The idea recovers the old Sagnac effect and proposes to use a triangle having three Lagrange points of the Sun-Earth pair at its vertices. The asymmetry in the times of flight along the loop in opposite directions is proportional to the galactic gravito-magnetic field.
Classical T Tauri stars (CTTS) are young (< 10 Myr), cool stars that actively accrete matter from a disk. They show strong, broad and asymmetric atomic FUV emission lines. Neither the width nor the line profile is understood. Likely, different mechanisms influence the line profile; the best candidates are accretion, winds and stellar activity. We monitored the C IV 1548/1550 Å doublet in the nearby, bright CTTS TW Hya with the Hubble Space Telescope Cosmic Origins Spectrograph (HST/COS) to correlate it with i) the cool wind, as seen in COS NUV Mg II line profiles, ii) the photometric period from joint ground-based monitoring, iii) the accretion rate as determined from the UV continuum, and iv) the H-alpha line profile from independent ground-based observations. The observations span 10 orbits distributed over a few weeks to cover the typical time scales of stellar rotation, accretion and winds. Here we describe a model with intrinsically asymmetric C IV lines.
In unparticle dark matter (unmatter) models the equation of state of the unmatter is given by $p=\rho/(2d_U+1)$, where $d_U$ is the scaling factor. Unmatter with such an equation of state would have a significant impact on the expansion history of the universe. Using type Ia supernovae (SNIa), the baryon acoustic oscillation (BAO) measurements and the shift parameter of the cosmic microwave background (CMB) to place constraints on such unmatter models, we find that if only the SNIa data are used the constraints are weak. However, with the BAO and CMB shift parameter data added, strong constraints can be obtained. For the $\Lambda$UDM model, in which unmatter is the sole dark matter, we find that $d_U > 60$ at 95% C.L. For comparison, in most unparticle physics models it is assumed that $d_U<2$. For the $\Lambda$CUDM model, in which unmatter co-exists with cold dark matter, we find that unmatter can make up at most a few percent of the total cosmic density if $d_U<10$; thus it cannot be the major component of dark matter.
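A quick worked consequence of the quoted bound (our arithmetic, using the equation of state above): the $\Lambda$UDM limit $d_U>60$ implies $w=p/\rho=1/(2d_U+1)<1/(2\cdot 60+1)=1/121\approx 0.008$, i.e. the unmatter would have to be almost pressureless, closely mimicking cold dark matter.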
Suppose agents can exert costly effort that creates nonrival, heterogeneous benefits for each other. At each possible outcome, a weighted, directed network describing marginal externalities is defined. We show that Pareto efficient outcomes are those at which the largest eigenvalue of the network is 1. An important set of efficient solutions, Lindahl outcomes, are characterized by contributions being proportional to agents' eigenvector centralities in the network. The outcomes we focus on are motivated by negotiations. We apply the results to identify who is essential for Pareto improvements, how to efficiently subdivide negotiations, and whom to optimally add to a team.
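To make the eigenvalue condition concrete, here is a small numerical sketch with a hypothetical two-agent benefits matrix (all numbers are ours): the spectral radius is exactly 1, so the outcome would be classified as Pareto efficient, and the Perron eigenvector gives the Lindahl-style contribution shares.

```python
import numpy as np

# Hypothetical benefits network: B[i, j] is the marginal benefit that
# agent j's effort confers on agent i at some outcome (numbers ours).
B = np.array([[0.0, 2.0],
              [0.5, 0.0]])

eigvals, eigvecs = np.linalg.eig(B)
k = np.argmax(eigvals.real)
print("largest eigenvalue:", eigvals.real[k])  # 1.0 -> Pareto efficient

# Lindahl outcome: contributions proportional to eigenvector centrality.
v = np.abs(eigvecs[:, k].real)
print("contribution shares:", v / v.sum())     # [2/3, 1/3]
```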
The CLEO-c experiment, running at charm threshold, has measured many charmed meson properties. Here I summarize results on leptonic and semileptonic decays of D mesons, as well as measurements of hadronic decay strong phases that are relevant to the extraction of the CKM angle gamma from B decays.
The impact of atomic parity violation experiments on determination of the weak mixing parameter $\sin^2 \theta$ and the Peskin-Takeuchi parameters $S$ and $T$ is reassessed in the light of recent electroweak measurements at LEP, SLAC, and Fermilab. Since the weak charge $Q_W$ provides unique information on $S$, its determination with a factor of four better accuracy than present levels can have a noticeable effect on global fits. However, the measurement of $\Delta Q_W / Q_W$ for two different isotopes provides primarily information on $\sin^2 \theta$. To specify this quantity to an accuracy of $\pm 0.0004$, comparable to that now provided by other electroweak experiments, one would have to determine $\Delta Q_W/Q_W$ in cesium to about 0.1\% of its value, with comparable demands for other nuclei. The relative merits of absolute measurements of $Q_W$ and isotope ratios for discovering effects of new gauge bosons are noted briefly.
We review the theory and phenomenology of heavy-quark symmetry, exclusive weak decays of B mesons, inclusive decay rates and lifetimes of b hadrons.
In assistive robots, compliant actuators are key components in establishing safe and satisfactory physical human-robot interaction (pHRI). The performance of compliant actuators largely depends on the stiffness of the elastic element. Generally, low stiffness is desirable to achieve low impedance, high-fidelity force control and safe pHRI, while high stiffness is required to ensure sufficient force bandwidth and output force. These requirements, however, are contradictory and often vary according to different tasks and conditions. In order to address the contradiction in stiffness selection and improve adaptability to different applications, we develop a reconfigurable rotary series elastic actuator with nonlinear stiffness (RRSEAns) for assistive robots. In this paper, an accurate model of the reconfigurable rotary series elastic element (RSEE) is presented and the adjusting principles are investigated, followed by detailed analysis and experimental validation. The RRSEAns can provide a wide range of stiffness from 0.095 Nm/deg to 2.33 Nm/deg, and different stiffness profiles can be obtained with respect to different configurations of the reconfigurable RSEE. The overall performance of the RRSEAns is verified by experiments on frequency response, torque control and pHRI, and is adequate for most applications in assistive robots. Specifically, the root-mean-square (RMS) error of the interaction torque is as low as 0.07 Nm in transparent/human-in-charge mode, demonstrating the advantages of the RRSEAns in pHRI.
Radiation exposure in positron emission tomography (PET) imaging limits its usage in studies of radiation-sensitive populations, e.g., pregnant women, children, and adults who require longitudinal imaging. Reducing the PET radiotracer dose or acquisition time reduces photon counts, which can deteriorate image quality. Recent deep-neural-network (DNN) based methods for image-to-image translation enable the mapping of low-quality PET images (acquired using a substantially reduced dose), coupled with the associated magnetic resonance imaging (MRI) images, to high-quality PET images. However, such DNN methods focus on applications involving test data that closely match the statistical characteristics of the training data and give little attention to evaluating the performance of these DNNs on new out-of-distribution (OOD) acquisitions. We propose a novel DNN formulation that models (i) the underlying sinogram-based physics of the PET imaging system and (ii) the uncertainty in the DNN output through the per-voxel heteroscedasticity of the residuals between the predicted and the high-quality reference images. Our sinogram-based uncertainty-aware DNN framework, namely suDNN, estimates a standard-dose PET image using multimodal input in the form of (i) a low-dose/low-count PET image and (ii) the corresponding multi-contrast MRI images, leading to improved robustness of suDNN to OOD acquisitions. Results on in vivo simultaneous PET-MRI, and various forms of OOD data in PET-MRI, show the benefits of suDNN over the current state of the art, quantitatively and qualitatively.
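The per-voxel heteroscedasticity idea can be illustrated with the standard Gaussian negative log-likelihood used for aleatoric uncertainty; this is a generic PyTorch sketch, not necessarily the exact loss used in suDNN.

```python
import torch

def heteroscedastic_nll(mu, log_var, target):
    """Per-voxel Gaussian NLL: the network outputs a predicted image `mu`
    and a per-voxel log-variance `log_var`, so voxels with large predicted
    variance (high uncertainty) contribute less to the squared-error term
    while being penalized through the log-variance term."""
    return torch.mean(
        0.5 * torch.exp(-log_var) * (target - mu) ** 2 + 0.5 * log_var
    )
```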
Common meadows are fields expanded with a total inverse function. Division by zero produces an additional value denoted with "a" that propagates through all operations of the meadow signature (this additional value can be interpreted as an error element). We provide a basis theorem for so-called common cancellation meadows of characteristic zero, that is, common meadows of characteristic zero that admit a certain cancellation law.
A self-consistent non-minimal non-Abelian Einstein-Yang-Mills model, containing three phenomenological coupling constants, is formulated. The ansatz of a vanishing Yang-Mills induction is considered as a particular case of the self-duality requirement for the gauge field. Such an ansatz is shown to allow obtaining an exact solution of the self-consistent set of equations when the space-time has a constant curvature. An example describing a pure magnetic gauge field in the de Sitter cosmological model is discussed in detail.
A classic result by Bass says that the class of all projective modules is covering, if and only if it is closed under direct limits. Enochs extended the if-part by showing that every class of modules $\mathcal C$, which is precovering and closed under direct limits, is covering, and asked whether the converse is true. We employ the tools developed in [18] and give a positive answer when $\mathcal C = \mathcal A$, or $\mathcal C$ is the class of all locally $\mathcal A ^{\leq \omega}$-free modules, where $\mathcal A$ is any class of modules fitting in a cotorsion pair $(\mathcal A, \mathcal B)$ such that $\mathcal B$ is closed under direct limits. This setting includes all cotorsion pairs and classes of locally free modules arising in (infinite-dimensional) tilting theory. We also consider two particular applications: to pure-semisimple rings, and artin algebras of infinite representation type.
Unsupervised anomalous sound detection aims to detect unknown abnormal sounds of machines from normal sounds. However, the state-of-the-art approaches are not always stable and perform dramatically differently even for machines of the same type, making it impractical for general applications. This paper proposes a spectral-temporal fusion based self-supervised method to model the feature of the normal sound, which improves the stability and performance consistency in detection of anomalous sounds from individual machines, even of the same type. Experiments on the DCASE 2020 Challenge Task 2 dataset show that the proposed method achieved 81.39\%, 83.48\%, 98.22\% and 98.83\% in terms of the minimum AUC (worst-case detection performance amongst individuals) in four types of real machines (fan, pump, slider and valve), respectively, giving 31.79\%, 17.78\%, 10.42\% and 21.13\% improvement compared to the state-of-the-art method, i.e., Glow\_Aff. Moreover, the proposed method has improved AUC (average performance of individuals) for all the types of machines in the dataset. The source codes are available at https://github.com/liuyoude/STgram_MFN
In this paper we analyze the so-called Parisian ruin probability, which arises when the surplus process stays below zero longer than a fixed amount of time $\zeta>0$. We focus on a general spectrally negative L\'{e}vy insurance risk process. For this class of processes we identify an expression for the ruin probability in terms of quantities that can potentially be calculated explicitly in many models. We find its Cram\'{e}r-type and convolution-equivalent asymptotics as the reserves tend to infinity. Finally, we analyze a few explicit examples.
We show a general method to solve the 2+1 dimensional dilatonic Maxwell-Einstein equations with a positive or negative cosmological constant. All the physical solutions are listed under the assumptions that they are static, rotationally symmetric, and have a nonzero magnetic field and a nonzero dilaton field. In contrast to the magnetic solution without a dilaton field, some of the present solutions with a dilaton field possess a horizon.
Recently, the Ihara zeta function for finite graphs was extended to infinite graphs by Clair and Chinta et al. In this paper, we obtain the same expressions by an approach different from their analytical method. Our new approach is to take a suitable limit of a sequence of finite graphs via the Konno-Sato theorem. This theorem is related to explicit formulas for the characteristic polynomials of the evolution matrix of the Grover walk. The Grover walk is one of the most well-investigated quantum walks, the quantum counterparts of classical random walks. We call the relation between the Grover walk and the zeta function based on the Konno-Sato theorem the "Grover/Zeta Correspondence" here.
Titanium dioxide (TiO$_2$) plays a central role in the study of artificial photosynthesis, owing to its ability to perform photocatalytic water splitting. Despite over four decades of intense research efforts in this area, there is still some debate over the nature of the first water monolayer on the technologically-relevant anatase TiO$_2$ (101) surface. In this work we use first-principles calculations to reverse-engineer the experimental high-resolution X-ray photoelectron spectra measured for this surface in [Walle et al., J. Phys. Chem. C 115, 9545 (2011)], and find evidence supporting the existence of a mix of dissociated and molecular water in the first monolayer. Using both semilocal and hybrid functional calculations we revise the current understanding of the adsorption energetics by showing that the energetic cost of water dissociation is reduced via the formation of a hydrogen-bonded hydroxyl-water complex. We also show that such a complex can provide an explanation of an unusual superstructure observed in high-resolution scanning tunneling microscopy experiments.
In this paper, we initiate the study of a triple $(X,\Delta,D)$ which consists of a pair $(X,\Delta)$ and a polarizing pseudoeffective divisor $D$. The adjoint asymptotic multiplier ideal sheaf $\mathcal{J}(X,\Delta;\lVert D \rVert)$ associated to the triple gives a simultaneous generalization of the multiplier ideal sheaf $\mathcal{J}(D)$ and asymptotic multiplier ideal sheaf $\mathcal{J}(\lVert D \rVert)$. We describe the closed set defined by the ideal sheaf $\mathcal{J}(X,\Delta;\lVert D \rVert)$ in terms of the minimal model program. We also characterize the case where $\mathcal{J}(X,\Delta;\lVert D \rVert)=\mathcal{O}_X$. Lastly, we also prove a Nadel type vanishing theorem of cohomology using $\mathcal{J}(X,\Delta;\lVert D \rVert)$.
Despite having the simplest atomic structure, bulk FeSe has an observed electronic structure with the largest deviation from the band theory predictions among all Fe-based superconductors and exhibits a low-temperature nematic electronic state without intervening magnetic order. We show that the Fe-Fe interatomic Coulomb repulsion $V$ offers a natural explanation for the puzzling electron correlation effects in FeSe superconductors. It produces a strongly renormalized low-energy band structure where the van Hove singularity sits remarkably close to the Fermi level in the high-temperature electron liquid phase, as observed experimentally. This proximity enables the quantum fluctuations in $V$ to induce a rotational symmetry breaking electronic bond order in the $d$-wave channel. We argue that this emergent low-temperature $d$-wave bond nematic state, different from the commonly discussed ferro-orbital order and spin-nematicity, has been observed recently by several angle-resolved photoemission experiments detecting the lifting of the band degeneracies at high symmetry points in the Brillouin zone. We present a symmetry analysis of the space group and identify the hidden antiunitary $T$-symmetry that protects the band degeneracy and the electronic order/interaction that can break the symmetry and lift the degeneracy. We show that the $d$-wave nematic bond order, together with the spin-orbit coupling, provides a unique explanation of the temperature dependence, momentum-space anisotropy, and domain effects observed experimentally. We discuss the implications of our findings for the structural transition, the absence of magnetic order, and the intricate competition between nematicity and superconductivity in FeSe superconductors.
In this paper, we study the homology of the coloring complex and the cyclic coloring complex of a complete $k$-uniform hypergraph. We show that the coloring complex of a complete $k$-uniform hypergraph is shellable, and we determine the rank of its unique nontrivial homology group in terms of its chromatic polynomial. We also show that the dimension of the $(n-k-1)^{st}$ homology group of the cyclic coloring complex of a complete $k$-uniform hypergraph is given by a binomial coefficient. Further, we discuss a complex whose $r$-faces consist of all ordered set partitions $[B_1, \ldots, B_{r+2}]$ where none of the $B_i$ contain a hyperedge of the complete $k$-uniform hypergraph $H$ and where $1 \in B_1$. It is shown that the dimensions of the homology groups of this complex are given by binomial coefficients. As a consequence, this result gives the dimensions of the multilinear parts of the cyclic homology groups of $\mathbb{C}[x_1, \ldots, x_n]/\{x_{i_1} \cdots x_{i_k} \mid \{i_{1}, \ldots, i_{k}\} \text{ is a hyperedge of } H\}$.
Using high-resolution hydrodynamical simulations of galaxy clusters, we study the interaction between the brightest cluster galaxy, its supermassive black hole (BH) and the intracluster medium (ICM). We create initial conditions for which the ICM is in hydrostatic equilibrium within the gravitational potential from the galaxy and an NFW dark matter halo. Two free parameters associated with the thermodynamic profiles determine the cluster gas fraction and the central temperature, where the latter can be used to create cool-core or non-cool-core systems. Our simulations include radiative cooling, star formation, BH accretion, and stellar and active galactic nucleus (AGN) feedback. Even though the energy of AGN feedback is injected thermally and isotropically, it leads to anisotropic outflows and buoyantly rising bubbles. We find that the BH accretion rate (BHAR) is highly variable and only correlates strongly with the star formation rate (SFR) and the ICM when it is averaged over more than $1~\rm Myr$. We generally find good agreement with the theoretical precipitation framework. In $10^{13}~\rm M_\odot$ haloes, AGN feedback quenches the central galaxy and converts cool-core systems into non-cool-core systems. In contrast, higher-mass, cool-core clusters evolve cyclically. Episodes of high BHAR raise the entropy of the ICM out to the radius where the ratio of the cooling time and the local dynamical time $t_{\rm cool}/t_{\rm dyn} > 10$, thus suppressing condensation and, after a delay, the BHAR. The corresponding reduction in AGN feedback allows the ICM to cool and become unstable to precipitation, thus initiating a new episode of high SFR and BHAR.
In [10] the third author of this paper presented two conjectures on the additive decomposability of the sequence of ``smooth'' (or ``friable'') numbers. Elsholtz and Harper [4] proved (by using sieve methods) the second (less demanding) conjecture. The goal of this paper is to extend and sharpen their result in three directions by using a different approach (based on the theory of $S$-unit equations).
In this Letter we report on the results of our search for photons from a U(1) gauge factor in the hidden sector of the full theory. With our experimental setup we look for a signal in the spectrum of a HPGe detector arising from the photoelectric-like absorption of hidden photons emitted from the Sun on germanium atoms inside the detector. The main ingredient of the theory used in our analysis, a severely constrained kinetic mixing of the two U(1) gauge factors with massive hidden photons, entails both photon oscillations into hidden states and a minuscule coupling of hidden photons to visible matter; our experimental setup has been designed to observe the latter. On the theoretical side, full account was taken of the effects of refraction and damping of photons while propagating in the Sun's interior as well as in the detector. We exclude hidden photons with kinetic couplings $\chi > (2.2\times10^{-13} - 3\times10^{-7})$ in the mass region $0.2~\mathrm{eV} < m_{\gamma'} < 30~\mathrm{keV}$. Our constraints on the mixing parameter $\chi$ in the mass region from 20 eV up to 15 keV prove even slightly better than those obtained recently using data from the CAST experiment, albeit still somewhat weaker than those obtained from solar and HB star lifetime arguments.
We devise a novel neural network-based universal denoiser for the finite-input, general-output (FIGO) channel. Based on the assumption of known noisy channel densities, which is realistic in many practical scenarios, we train the network such that it can denoise as well as the best sliding-window denoiser for any given underlying clean source data. Our algorithm, dubbed Generalized CUDE (Gen-CUDE), enjoys several desirable properties: it can be trained in an unsupervised manner (solely based on the noisy observation data), has much smaller computational complexity than the previously developed universal denoiser for the same setting, and has a much tighter upper bound on the denoising performance, obtained by a theoretical analysis. In our experiments, we show that this tighter upper bound is also realized in practice, with Gen-CUDE achieving much better denoising results than other strong baselines for both synthetic and real underlying clean sequences.
We study maximal clades in random phylogenetic trees with the Yule-Harding model or, equivalently, in binary search trees. We use probabilistic methods to reprove and extend earlier results on moment asymptotics and asymptotic normality. In particular, we give an explanation of the curious phenomenon observed by Drmota, Fuchs and Lee (2014) that asymptotic normality holds, but one should normalize using half the variance.
Pulsar positions can be measured with high precision using both pulsar timing methods and very-long-baseline interferometry (VLBI). Pulsar timing positions are referenced to a solar-system ephemeris, whereas VLBI positions are referenced to distant quasars. Here we compare pulsar positions from published VLBI measurements with those obtained from pulsar timing data from the Nanshan and Parkes radio telescopes in order to relate the two reference frames. We find that the timing positions differ significantly from the VLBI positions (and also differ between different ephemerides). A statistically significant change in the obliquity of the ecliptic of $2.16\pm0.33$\,mas is found for the JPL ephemeris DE405, but no significant rotation is found in subsequent JPL ephemerides. The accuracy with which we can relate the two frames is limited by the current uncertainties in the VLBI reference source positions and in matching the pulsars to their reference sources. Not only do the timing positions depend on the ephemeris used in computing them, but also different segments of the timing data lead to varying position estimates. These variations are mostly common to all ephemerides, but slight changes are seen at the 10\,$\mu$as level between ephemerides.
Estimating the structure of directed acyclic graphs (DAGs, also known as Bayesian networks) is a challenging problem since the search space of DAGs is combinatorial and scales superexponentially with the number of nodes. Existing approaches rely on various local heuristics for enforcing the acyclicity constraint. In this paper, we introduce a fundamentally different strategy: We formulate the structure learning problem as a purely \emph{continuous} optimization problem over real matrices that avoids this combinatorial constraint entirely. This is achieved by a novel characterization of acyclicity that is not only smooth but also exact. The resulting problem can be efficiently solved by standard numerical algorithms, which also makes implementation effortless. The proposed method outperforms existing ones, without imposing any structural assumptions on the graph such as bounded treewidth or in-degree. Code implementing the proposed algorithm is open-source and publicly available at https://github.com/xunzheng/notears.
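To make the key idea concrete: the smooth, exact acyclicity characterization that NOTEARS uses is, to our reading of the paper, the trace-exponential function $h(W) = \mathrm{tr}(e^{W \circ W}) - d$, which vanishes exactly when the weighted adjacency matrix $W$ contains no directed cycles. Below is a minimal sketch of this function (ours, for illustration; the repository linked above contains the authoritative implementation).

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W: np.ndarray) -> float:
    """Smooth acyclicity measure h(W) = tr(exp(W * W)) - d; it is zero
    if and only if the weighted adjacency matrix W encodes a DAG."""
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)

# A 3-node chain (a DAG) versus a graph containing a 2-cycle.
W_dag = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
W_cyc = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 0.]])
print(acyclicity(W_dag))  # ~0: no directed cycles
print(acyclicity(W_cyc))  # > 0: the 2-cycle is penalized
```

In the continuous program, $h(W) = 0$ is imposed as an equality constraint, so a standard solver can search over real matrices while steering the solution toward DAGs.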
The CMS Collaboration has released the results of its search for supersymmetry, by applying an alphaT method to 1.1/fb of data at 7 TeV. The null result excludes (at 95% CL) a low-mass region of the Constrained MSSM's parameter space that was previously favored by other experiments. Additionally, the negative result of the XENON100 dark matter search has excluded (at 90% CL) values of the spin-independent scattering cross section sigma^SI_p as low as 10^-8 pb. We incorporate these improved experimental constraints into a global Bayesian fit of the Constrained MSSM by constructing approximate likelihood functions. In the case of the alphaT limit, we simulate the detector efficiency for the CMS alphaT 1.1/fb analysis and validate our method against the official 95% CL contour. We identify the 68% and 95% credible posterior regions of the CMSSM parameters, and also find the best-fit point. We find that the credible regions change considerably once a likelihood from alphaT is included; in particular, the narrow light Higgs resonance region becomes excluded, but the focus point/horizontal branch region remains allowed at the 1sigma level. Adding the limit from XENON100 has a weaker additional effect, in part due to large uncertainties in evaluating sigma^SI_p, which we include in a conservative way, although we find that it reduces the posterior probability of the focus point region to the 2sigma level. The new regions of high posterior probability favor squarks lighter than the gluino and all but one of the Higgs bosons heavy. The dark matter neutralino mass is found in the range 250 GeV <~ m_Chi1 <~ 343 GeV (at 1sigma) while, as the result of improved limits from the LHC, the favored range of sigma^SI_p is pushed down to values below 10^{-9} pb. We highlight the tension between (g-2)_mu and BR(b -> s gamma), which is exacerbated by including the alphaT limit; each constraint favors a different region of the CMSSM's mass parameters.
The microcanonical description is characterized by the presence of an internal symmetry closely related to the dynamical origin of this ensemble: reparametrization invariance. Such a symmetry makes possible the development of a non-Riemannian geometric formulation within the microcanonical description of an isolated system, which leads to an unexpected generalization of the Gibbs canonical ensemble and of the classical fluctuation theory for open systems (where the inverse temperature and the total energy E behave as complementary thermodynamical quantities), to the improvement of Monte Carlo simulations based on the canonical ensemble, as well as to a reconsideration of any classification scheme of phase transitions based on the concavity of the microcanonical entropy.
In this work, we investigate the bosonic chiral string in the sectorized interpretation, computing its spectrum, kinetic action and $3$-point amplitudes. As expected, the bosonic ambitwistor string is recovered in the tensionless limit. We also consider an extension of the bosonic model with current algebras. In that case, we compute the effective action and show that it is essentially the same as the action of the mass-deformed $(DF)^{2}$ theory found by Johansson and Nohle. Aspects which might seem somewhat contrived in the original construction --- such as the inclusion of a scalar transforming in some real representation of the gauge group --- are shown to follow very naturally from the worldsheet formulation of the theory.
The (homogeneous) Essentially Isolated Determinantal Variety is a natural generalization of the generic determinantal variety, and is a fundamental example for the study of non-isolated singularities. In this paper we study characteristic classes on these varieties. We give explicit formulas for their Chern-Schwartz-MacPherson classes and Chern-Mather classes via standard Schubert calculus. As corollaries we obtain formulas for their (generic) sectional Euler characteristics, characteristic cycles and polar classes.
In this article we study a nonlocal Nambu--Jona-Lasinio (nNJL) model with a Gaussian regulator in the presence of a uniform magnetic field. We take a mixed approach to the incorporation of temperature in the model, considering aspects of both the real and imaginary time formalisms. We include confinement in the model through the quasiparticle interpretation of the poles of the propagator. The effect of the magnetic field on the deconfinement phase transition is then studied. It is found that, as with chiral symmetry restoration, magnetic catalysis occurs for the deconfinement phase transition. It is also found that the magnetic field enhances the thermodynamical instability of the system. We work in the weak field limit, i.e. $(eB)<5m_\pi^2$. At this level there is no splitting of the critical temperatures for the chiral and deconfinement phase transitions.
We say that a $q$-ary length $n$ code is \emph{non-overlapping} if the set of non-trivial prefixes of codewords and the set of non-trivial suffixes of codewords are disjoint. These codes were first studied by Levenshtein in 1964, motivated by applications in synchronisation. More recently these codes were independently invented (under the name \emph{cross-bifix-free} codes) by Baji\'c and Stojanovi\'c. We provide a simple construction for a class of non-overlapping codes which has optimal cardinality whenever $n$ divides $q$. Moreover, for all parameters $n$ and $q$ we show that a code from this class is close to optimal, in the sense that it has cardinality within a constant factor of an upper bound due to Levenshtein from 1970. Previous constructions have cardinality within a constant factor of the upper bound only when $q$ is fixed. Chee, Kiah, Purkayastha and Wang showed that a $q$-ary length $n$ non-overlapping code contains at most $q^n/(2n-1)$ codewords; this bound is weaker than the Levenshtein bound. Their proof appealed to the application in synchronisation: we provide a direct combinatorial argument to establish the bound of Chee \emph{et al}. We also consider codes of short length, finding the leading term of the maximal cardinality of a non-overlapping code when $n$ is fixed and $q\rightarrow \infty$. The largest cardinality of non-overlapping codes of lengths $3$ or less is determined exactly.
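The defining property is easy to check mechanically. The sketch below (our illustration; the exhaustive search is exponential and only usable for very small $n$ and $q$) tests the prefix/suffix condition directly and finds a largest non-overlapping code for tiny parameters.

```python
from itertools import product

def is_non_overlapping(code):
    """Check the definition: no non-trivial prefix of a codeword may equal
    a non-trivial suffix of a (possibly identical) codeword."""
    n = len(next(iter(code)))
    prefixes = {w[:k] for w in code for k in range(1, n)}
    suffixes = {w[n - k:] for w in code for k in range(1, n)}
    return prefixes.isdisjoint(suffixes)

def max_non_overlapping(n, q):
    """Brute-force a maximum-size non-overlapping code (tiny n, q only)."""
    words = list(product(range(q), repeat=n))
    best = set()
    def grow(i, current):
        nonlocal best
        if len(current) > len(best):
            best = set(current)
        for j in range(i, len(words)):
            cand = current | {words[j]}
            if is_non_overlapping(cand):  # the property is closed under subsets
                grow(j + 1, cand)
    grow(0, set())
    return best

print(max_non_overlapping(3, 2))  # a largest binary non-overlapping code of length 3
```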
Let $H(\hbar)=-\hbar^2d^2/dx^2+V(x)$ be a Schr\"odinger operator on the real line, $W(x)$ be a bounded observable depending only on the coordinate and $k$ be a fixed integer. Suppose that an energy level $E$ intersects the potential $V(x)$ at exactly two turning points and lies below $V_\infty=\liminf_{|x|\to\infty} V(x)$. We consider the semiclassical limit $n\to\infty$, $\hbar=\hbar_n\to0$ and $E_n=E$, where $E_n$ is the $n$th eigen-energy of $H(\hbar)$. An asymptotic formula for $\langle n|W(x)|n+k\rangle$, the non-diagonal matrix elements of $W(x)$ in the eigenbasis of $H(\hbar)$, has been known in theoretical physics for a long time. Here it is proved in a mathematically rigorous manner.
Two-dimensional lattices of coupled micropillars etched in a planar semiconductor microcavity offer a workbench to engineer the band structure of polaritons. We report experimental studies of honeycomb lattices where the polariton low-energy dispersion is analogous to that of electrons in graphene. Using energy-resolved photoluminescence we directly observe Dirac cones, around which the dynamics of polaritons is described by the Dirac equation for massless particles. At higher energies, we observe $p$-orbital bands, one of them with the nondispersive character of a flat band. The realization of this structure, which hosts massless, massive and infinitely massive particles, opens the route towards studies of the interplay of dispersion, interactions, and frustration in a novel and controlled environment.
A variant of the U-Net convolutional neural network architecture is proposed to estimate linear elastic compatibility stresses in alpha-Zr (hcp) polycrystalline grain structures. Training data was generated using VGrain software, with a regularity alpha of 0.73 and uniform random orientation for the grain structures, and ABAQUS to evaluate the stress fields using the finite element method. The initial dataset contains 200 samples, with 20 held out of training for validation. The network gives speedups of around 200x to 6000x using a CPU or GPU, with significant memory savings, compared to finite element analysis, with a modest reduction in accuracy of up to 10%. Network performance is not correlated with grain structure regularity or texture, showing generalisation of the network beyond the training set to arbitrary Zr crystal structures. Performance when trained with 200 and 400 samples was measured, finding an improvement in accuracy of approximately 10% when the size of the dataset was doubled.
J/Psi and eta_c above the QCD critical temperature T_c are studied in anisotropic quenched lattice QCD, considering whether the c\bar c systems above T_c are spatially compact (quasi-)bound states or scattering states. We adopt the standard Wilson gauge action and O(a)-improved Wilson quark action with renormalized anisotropy a_s/a_t =4.0 at \beta=6.10 on 16^3\times (14-26) lattices, which correspond to the spatial lattice volume V\equiv L^3\simeq(1.55{\rm fm})^3 and temperatures T\simeq(1.11-2.07)T_c. We investigate the c\bar c system above T_c from the temporal correlators with spatially-extended operators, where the overlap with the ground state is enhanced. To clarify whether compact charmonia survive in the deconfinement phase, we investigate the spatial boundary-condition dependence of the energy of c\bar c systems above T_c. In fact, for low-lying S-wave c\bar c scattering states, a significant energy difference \Delta E \equiv E{\rm (APBC)}-E{\rm (PBC)}\simeq2\sqrt{m_c^2+3\pi^2/L^2}-2m_c (m_c: charm quark mass) is expected between periodic and anti-periodic boundary conditions on the finite-volume lattice. In contrast, for compact charmonia, there is no significant energy difference between periodic and anti-periodic boundary conditions. As a lattice QCD result, almost no spatial boundary-condition dependence is observed for the energy of the c\bar c system in the J/\Psi and \eta_c channels for T\simeq(1.11-2.07)T_c. This fact indicates that J/\Psi and \eta_c would survive as spatially compact c\bar c (quasi-)bound states below 2T_c. We also investigate a $P$-wave channel at high temperature with the maximum entropy method (MEM) and find no low-lying peak structure corresponding to \chi_{c1} at 1.62T_c.
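For a sense of scale, the expected shift \Delta E can be evaluated numerically for the quoted box size L ~ 1.55 fm, restoring \hbar c to convert to GeV and assuming an illustrative charm quark mass m_c ~ 1.3 GeV (our choice, not a value fitted in the paper); the resulting shift of a few hundred MeV is what makes the scattering-state scenario clearly distinguishable.

```python
import math

hbar_c = 0.1973  # GeV fm
L = 1.55         # spatial box size in fm
m_c = 1.3        # illustrative charm quark mass in GeV (assumed)

# Delta E = 2*sqrt(m_c^2 + 3*pi^2/L^2) - 2*m_c, with 3*pi^2/L^2 converted
# to GeV^2 via (pi * hbar_c / L)^2 per spatial direction.
p2 = 3.0 * (math.pi * hbar_c / L) ** 2
delta_E = 2.0 * math.sqrt(m_c**2 + p2) - 2.0 * m_c
print(f"Delta E ~ {1000 * delta_E:.0f} MeV")  # ~350 MeV for these inputs
```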
Optical bottle beams can be used to trap atoms and small low-index particles. We introduce a figure of merit for optical bottle beams, specifically in the context of optical traps, and use it to compare optical bottle-beam traps obtained by three different methods. Using this figure of merit and an optimization algorithm, we identified optical bottle-beam traps, based on a Gaussian beam illuminating a metasurface, that are superior in terms of power efficiency to existing approaches. We numerically demonstrate a silicon metasurface for creating an optical bottle-beam trap.
In Part I, we studied the communication for omniscience (CO) problem and proposed a parametric (PAR) algorithm to determine the minimum sum-rate at which a set of users indexed by a finite set $V$ attain omniscience. The omniscience in CO refers to the status that each user in $V$ recovers the observations of a multiple random source. It is called the global omniscience in this paper, in contrast to the study of successive omniscience (SO), where local omniscience is attained subsequently in user subsets. By inputting a lower bound on the minimum sum-rate for CO, we apply the PAR algorithm to search for a complementary subset $X_* \subsetneq V$ such that, if the local omniscience in $X_*$ is reached first, the global omniscience thereafter can still be attained with the minimum sum-rate. We further utilize the outputs of the PAR algorithm to outline a multi-stage SO approach that is characterized by $K \leq |V| - 1$ complementary subsets $X_*^{(k)}, \forall k \in \{1,\dotsc,K\}$ forming a nesting sequence $X_*^{(1)} \subsetneq \dotsc \subsetneq X_*^{(K)} = V$. Starting from stage $k = 1$, the local omniscience in $X_*^{(k)}$ is attained at each stage $k$ until the final global omniscience in $X_*^{(K)} = V$. A $|X_*^{(k)}|$-dimensional local omniscience achievable rate vector is also derived for each stage $k$, specifying the individual users' transmission rates. The sum-rate of this rate vector in the last stage $K$ coincides with the minimized sum-rate for the global omniscience.
Counter reachability games are played by two players on a graph with labelled edges. Each move consists of picking an edge from the current location and adding its label to a counter vector. The objective is to reach a given counter value in a given location. We distinguish three semantics for counter reachability games, according to what happens when a counter value would become negative: the edge is either disabled, or enabled but the counter value becomes zero, or enabled. We consider the problem of deciding the winner in counter reachability games and show that, in most cases, it has the same complexity under all semantics. Surprisingly, under one semantics, the complexity in dimension one depends on whether the objective value is zero or any other integer.
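The three semantics can be pinned down in a few lines of code (a sketch; the function and semantics names are ours, purely illustrative):

```python
def apply_edge(counters, label, semantics):
    """Apply an edge label (a vector of integers) to a counter vector under
    one of the three semantics for would-be negative counter values."""
    new = [c + d for c, d in zip(counters, label)]
    if semantics == "disabled":
        # the edge cannot be taken if some counter would become negative
        return new if all(c >= 0 for c in new) else None
    if semantics == "truncate":
        # the edge is enabled, but negative values are replaced by zero
        return [max(c, 0) for c in new]
    return new  # "enabled": counter values may go negative

print(apply_edge([1, 0], [-2, 1], "disabled"))  # None: move not allowed
print(apply_edge([1, 0], [-2, 1], "truncate"))  # [0, 1]
print(apply_edge([1, 0], [-2, 1], "enabled"))   # [-1, 1]
```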
In this paper we adhere to the definition of infra-topological space as it was introduced by Al-Odhari. Namely, we speak about families of subsets which contain the empty set and the whole universe $X$, being at the same time closed under finite intersections (but not necessarily under arbitrary or even finite unions). This slight modification allows us to distinguish between new classes of subsets (infra-open, ps-infra-open and i-genuine). Analogous notions are discussed in the language of closures. The class of minimal infra-open sets is studied too, as well as the idea of generalized infra-spaces. Finally, we obtain a characterization of infra-spaces in terms of modal logic, using some of the notions introduced above.
We propose a method to extract predictions from quantum cosmology for inflation that can be confronted with observations. Employing the tunneling boundary condition in quantum geometrodynamics, we derive a probability distribution for the inflaton field. A sharp peak in this distribution can be interpreted as setting the initial conditions for the subsequent phase of inflation. In this way, the peak sets the energy scale at which the inflationary phase has started. This energy scale must be consistent with the energy scale found from the inflationary potential and with the scale found from a potential observation of primordial gravitational waves. Demanding a consistent history of the universe from its quantum origin to its present state, which includes decoherence, we derive a condition that allows one to constrain the parameter space of the underlying model of inflation. We demonstrate our method by applying it to two models: Higgs inflation and natural inflation.
Small 3He-rich solar energetic particle (SEP) events, with their anomalous abundances markedly different from those of the solar system, provide evidence for a unique acceleration mechanism that operates routinely near solar active regions. Although the events are sometimes accompanied by coronal mass ejections (CMEs), it is believed that mass and isotopic fractionation is produced directly in the flare sites on the Sun. We report on large-scale extreme ultraviolet (EUV) coronal waves observed in association with 3He-rich SEP events. In the two examples discussed, the observed waves were triggered by minor flares and appeared concurrently with EUV jets and type III radio bursts, but without CMEs. The energy spectra from one event are consistent with so-called class-1 3He-rich SEP events (characterized by power laws), while those from the other are consistent with class-2 events (characterized by rounded 3He and Fe spectra), suggesting different acceleration mechanisms in the two. The observation of EUV waves suggests that large-scale disturbances, in addition to the more commonly associated jets, may be responsible for the production of 3He-rich SEP events.
Historically, multiple populations in Globular Clusters (GCs) have been mostly studied from ultraviolet and optical filters down to stars that are more massive than ~0.6 solar masses. Here we exploit deep near-infrared (NIR) photometry from the Hubble Space Telescope to investigate multiple populations among M-dwarfs in the GC NGC6752. We discovered that the three main populations (A, B and C), previously observed in the brightest part of the color-magnitude diagram, define three distinct sequences that run from the main-sequence (MS) knee towards the bottom of the MS (~0.15 solar masses). These results, together with similar findings on NGC2808, M4, and omega Centauri, demonstrate that multiple sequences of M-dwarfs are common features of the color-magnitude diagrams of GCs. The three sequences of low-mass stars in NGC6752 are consistent with stellar populations with different oxygen abundances. The range of [O/Fe] needed to reproduce the NIR CMD of NGC6752 is similar to the oxygen spread inferred from high-resolution spectroscopy of red-giant branch (RGB) stars. The relative numbers of stars in the three populations of M-dwarfs are similar to those derived among RGB and MS stars more massive than ~0.6 solar masses. As a consequence, the evidence that the properties of multiple populations do not depend on stellar mass is a constraint for the formation scenarios.
Multimodal machine translation (MMT) systems have been shown to outperform their text-only neural machine translation (NMT) counterparts when visual context is available. However, recent studies have also shown that the performance of MMT models is only marginally impacted when the associated image is replaced with an unrelated image or noise, which suggests that the visual context might not be exploited by the model at all. We hypothesize that this might be caused by the nature of the commonly used evaluation benchmark, also known as Multi30K, where the translations of image captions were prepared without actually showing the images to human translators. In this paper, we present a qualitative study that examines the role of datasets in stimulating the leverage of the visual modality, and we propose methods to highlight the importance of visual signals in the datasets, which demonstrably improve the models' reliance on the source images. Our findings suggest that research on effective MMT architectures is currently impaired by the lack of suitable datasets, and that careful consideration must be taken in the creation of future MMT datasets, for which we also provide useful insights.
The gap equation with dressed propagators is solved in symmetric nuclear matter. Nucleon self-energies are obtained within the self-consistent in-medium T-matrix approximation. The off-shell gap equation is compared to an effective quasiparticle gap equation with a reduced interaction. At normal density, we find a reduction of the superfluid gap from 6.5 MeV to 0.45 MeV when self-energy effects are included.
We construct an unbounded, strictly pseudoconvex, Kobayashi hyperbolic and complete domain in $\mathbb{C}^2$, which also possesses a complete Bergman metric, but has no nonconstant bounded holomorphic functions.
Two different physical phenomena, described by the bias flow aperture theory and the Coriolis flowmeter "bubble theory", are compared. The bubble theory is simplified and analogies with the bias flow aperture theory are appraised.
We have theoretically investigated Su-Schrieffer-Heeger chains modelled as optical lattices (OL) loaded with exciton-polaritons. The chains have been subject to resonant pumping of the edge site and shaken in either the adiabatic or the high-frequency regime. The topological state has been controlled by the relative phases of the lasers constructing the OL. The dynamical problem of the occupation of the lattice sites and eigenstates has been solved semi-classically. Finally, the analysis of the evolution of the occupation numbers has revealed that the gapless, topologically trivial and non-trivial chain configurations demonstrate clearly distinguishable behaviour, from both the qualitative (occupation pattern) and the quantitative (total occupation) points of view.
We use the Hubble Space Telescope ACS camera to obtain the first spatially resolved, nebular imaging in the light of C IV 1548,1551 by using the F150LP and F165LP filters. These observations of the local starburst Mrk 71 in NGC 2366 show emission apparently originating within the interior cavity around the dominant super star cluster (SSC), Knot A. Together with imaging in He II 4686 and supporting STIS FUV spectroscopy, the morphology and intensity of the C IV nebular surface brightness and the C IV / He II ratio map provide direct evidence that the mechanical feedback is likely dominated by catastrophic radiative cooling, which strongly disrupts adiabatic superbubble evolution. The implied extreme mass loading and low kinetic efficiency of the cluster wind are reasonably consistent with the wind energy budget, which is probably enhanced by radiation pressure. In contrast, the Knot B SSC lies within a well-defined superbubble with associated soft X-rays and He II 1640 emission, which are signatures of adiabatic, energy-driven feedback from a supernova-driven outflow. This system lacks clear evidence of C IV from the limb-brightened shell, as expected for this model, but the observations may not be deep enough to confirm its presence. We also detect a small C IV-emitting object that is likely an embedded compact H II region. Its C IV emission may indicate the presence of very massive stars (> 100 M_sun) or strongly pressure-confined stellar feedback.
Affine iterations of the form x(n+1) = Ax(n) + b converge, using real arithmetic, if the spectral radius of the matrix A is less than 1. However, substituting interval arithmetic for real arithmetic may lead to divergence of these iterations, in particular if the spectral radius of the absolute value of A is greater than 1. We review different approaches to limit the overestimation of the iterates when the components of the initial vector x(0) and of b are intervals. We compare, both theoretically and experimentally, the widths of the iterates computed by these different methods: the naive iteration, methods based on the QR- and SVD-factorizations of A, and Lohner's QR-factorization method. The method based on the SVD-factorization is computationally less demanding and gives good results when the matrix is poorly scaled; otherwise it is superseded either by the naive iteration or by Lohner's method.
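A toy experiment makes the divergence phenomenon visible (our sketch, not the paper's code): for a scaled rotation A with spectral radius 0.9 but rho(|A|) ~ 1.24, the naive interval iteration produces widths that blow up even though the underlying real iteration contracts.

```python
import math

# Toy interval arithmetic: an interval is a pair (lo, hi).
def iv_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def iv_scale(c, a):  # multiply the interval a by the real scalar c
    lo, hi = c * a[0], c * a[1]
    return (min(lo, hi), max(lo, hi))

def naive_step(A, b, x):
    """One naive componentwise interval evaluation of A x + b."""
    out = []
    for i in range(len(x)):
        acc = b[i]
        for j in range(len(x)):
            acc = iv_add(acc, iv_scale(A[i][j], x[j]))
        out.append(acc)
    return out

# Rotation scaled by 0.9: rho(A) = 0.9 < 1, but rho(|A|) ~ 1.24 > 1.
c, s = 0.9 * math.cos(1.0), 0.9 * math.sin(1.0)
A = [[c, -s], [s, c]]
b = [(-0.01, 0.01), (-0.01, 0.01)]
x = [(-0.1, 0.1), (-0.1, 0.1)]
for step in range(1, 51):
    x = naive_step(A, b, x)
    if step in (5, 20, 50):
        print(step, [round(hi - lo, 3) for lo, hi in x])  # widths grow ~1.24^n
```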
It is widely accepted that blockchain systems cannot execute calls to external systems or services, because each node must reach a deterministic state. However, in this paper we show that this belief is a misconception, by demonstrating a method that enables blockchain and distributed ledger technologies (DLTs) to perform calls to external systems initiated from the blockchain/DLT itself.
Let $V$ be a set of cardinality $v$ (possibly infinite). Two graphs $G$ and $G'$ with vertex set $V$ are {\it isomorphic up to complementation} if $G'$ is isomorphic to $G$ or to the complement $\bar G$ of $G$. Let $k$ be a non-negative integer; $G$ and $G'$ are {\it $k$-hypomorphic up to complementation} if for every $k$-element subset $K$ of $V$, the induced subgraphs $G_{\restriction K}$ and $G'_{\restriction K}$ are isomorphic up to complementation. A graph $G$ is {\it $k$-reconstructible up to complementation} if every graph $G'$ which is $k$-hypomorphic to $G$ up to complementation is in fact isomorphic to $G$ up to complementation. We give a partial characterisation of the set $\mathcal S$ of pairs $(n,k)$ such that two graphs $G$ and $G'$ on the same set of $n$ vertices are equal up to complementation whenever they are $k$-hypomorphic up to complementation. We prove in particular that $\mathcal S$ contains all pairs $(n,k)$ such that $4\leq k\leq n-4$. We also prove that 4 is the least integer $k$ such that every graph $G$ having a large number $n$ of vertices is $k$-reconstructible up to complementation; this answers a question raised by P. Ille.
One problem to solve in the context of information fusion, decision-making, and other artificial intelligence challenges is to compute justified beliefs based on evidence. In real-life examples, this evidence may be inconsistent, incomplete, or uncertain, making the problem of evidence fusion highly non-trivial. In this paper, we propose a new model for measuring degrees of beliefs based on possibly inconsistent, incomplete, and uncertain evidence, by combining tools from Dempster-Shafer Theory and Topological Models of Evidence. Our belief model is more general than the aforementioned approaches in two important ways: (1) it can reproduce them when appropriate constraints are imposed, and, more notably, (2) it is flexible enough to compute beliefs according to various standards that represent agents' evidential demands. The latter novelty allows the users of our model to employ it to compute an agent's (possibly) distinct degrees of belief, based on the same evidence, in situations when, e.g., the agent prioritizes avoiding false negatives and when it prioritizes avoiding false positives. Finally, we show that computing degrees of belief with this model is #P-complete in general.
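For context, the classical Dempster-Shafer combination rule that the proposed model generalizes can be sketched in a few lines (our illustration; the paper's belief model is strictly more general and also draws on topological evidence models).

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal elements
    to masses summing to 1) by Dempster's rule, renormalizing conflict."""
    conflict = sum(a1 * a2 for (B, a1), (C, a2) in
                   product(m1.items(), m2.items()) if not (B & C))
    if conflict == 1.0:
        raise ValueError("totally conflicting evidence")
    combined = {}
    for (B, a1), (C, a2) in product(m1.items(), m2.items()):
        A = B & C
        if A:
            combined[A] = combined.get(A, 0.0) + a1 * a2 / (1.0 - conflict)
    return combined

# Two pieces of evidence over the frame {rain, sun}.
m1 = {frozenset({"rain"}): 0.7, frozenset({"rain", "sun"}): 0.3}
m2 = {frozenset({"sun"}): 0.4, frozenset({"rain", "sun"}): 0.6}
print(dempster_combine(m1, m2))
```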
A number of interesting properties of graphene and graphite are postulated to derive from the peculiar bandstructure of graphene. This bandstructure consists of conical electron and hole pockets that meet at a single point in momentum (k) space--the Dirac crossing, at energy $E_{D} = \hbar \omega_{D}$. The accuracy of this bandstructure, the validity of the quasiparticle picture, and the influence of many-body interactions on the electronic structure have not been directly investigated by experiment for pure graphene to date. Using angle-resolved photoelectron spectroscopy (ARPES), we find that the expected conical bands are distorted by strong electron-electron, electron-phonon, and electron-plasmon coupling effects. The band velocity at $E_{F}$ and the Dirac crossing energy $E_{D}$ are both renormalized by these many-body interactions, in analogy with mass renormalization by electron-boson coupling in ordinary metals. These results are of importance not only for graphene but also for graphite and carbon nanotubes, which have similar bandstructures.
The increase in parameter size of multimodal large language models (MLLMs) introduces significant capabilities, particularly in-context learning, where MLLMs enhance task performance without updating pre-trained parameters. This effectiveness, however, hinges on the appropriate selection of in-context examples, a process that is currently biased towards visual data and overlooks textual information. Furthermore, the area of supervised retrievers for MLLMs, crucial for optimal in-context example selection, remains uninvestigated. Our study offers an in-depth evaluation of the impact of textual information on the unsupervised selection of in-context examples in multimodal contexts, uncovering a notable sensitivity of retriever performance to the employed modalities. Responding to this, we introduce a novel supervised MLLM retriever, MSIER, that employs a neural network to select examples that enhance multimodal in-context learning efficiency. This approach is validated through extensive testing across three distinct tasks, demonstrating the method's effectiveness. Additionally, we investigate the influence of modalities on our supervised retrieval method's training and pinpoint factors contributing to our model's success. This exploration paves the way for future advancements, highlighting the potential for refined in-context learning in MLLMs through the strategic use of multimodal data.
Visible Light Communication (VLC) technology using light emitting diodes (LEDs) has been gaining increasing attention in recent years, as it is appealing for a wide range of applications such as indoor positioning. Orthogonal frequency division multiplexing (OFDM) has been applied to indoor wireless optical communications in order to mitigate the effect of multipath distortion of the optical channel as well as to increase the data rate. In this paper, we investigate the indoor positioning accuracy of optical OFDM techniques used in VLC systems. A positioning algorithm based on power attenuation is used to estimate the receiver coordinates. We further calculate the positioning errors at all locations in a room and compare them with those of a single carrier modulation scheme, i.e., on-off keying (OOK) modulation. We demonstrate that the OFDM positioning system outperforms its conventional counterpart.
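To illustrate positioning from power attenuation, the sketch below uses a generic Lambertian line-of-sight channel model with assumed parameters (the Lambertian order, photodiode area, ceiling height and LED layout are all our illustrative choices, not the paper's setup): each LED's received power is inverted to a distance estimate, and the receiver coordinates follow by linear least-squares trilateration.

```python
import numpy as np

m, A_pd = 1.0, 1e-4          # Lambertian order and photodiode area (assumed)
h = 2.0                      # LED-to-receiver plane height in metres (assumed)

def received_power(P_t, d):
    # Line-of-sight gain with cos(phi) = cos(psi) = h/d for downward LEDs
    return P_t * (m + 1) * A_pd / (2 * np.pi * d**2) * (h / d) ** (m + 1)

def distance_from_power(P_t, P_r):
    # Invert the model above: P_r is proportional to d^{-(m+3)} here
    return (P_t * (m + 1) * A_pd * h ** (m + 1) / (2 * np.pi * P_r)) ** (1 / (m + 3))

leds = np.array([[0., 0.], [4., 0.], [2., 4.]])   # LED (x, y) positions
true_rx = np.array([1.2, 1.5])
d_true = np.sqrt(np.sum((leds - true_rx) ** 2, axis=1) + h**2)
P_r = received_power(1.0, d_true)                 # noiseless measurements
d_est = distance_from_power(1.0, P_r)

# Linear least squares for (x, y) from squared horizontal distances
r2 = d_est**2 - h**2
A_mat = 2 * (leds[1:] - leds[0])
b = r2[0] - r2[1:] + np.sum(leds[1:]**2, axis=1) - np.sum(leds[0]**2)
print(np.linalg.lstsq(A_mat, b, rcond=None)[0])   # ~ [1.2, 1.5]
```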
Using first-principles calculations, we explore the electronic and magnetic properties of graphene nanomesh (GNM), a regular network of large vacancies, produced either by lithography or nanoimprint. When an equal number of A and B sites of the graphene bipartite lattice is removed, nanomeshes made mostly of zigzag (armchair) type edges exhibit antiferromagnetic (spin unpolarized) states. In contrast, when the sublattice symmetry is broken, stable ferri(o)magnetic states are obtained. For hydrogen-passivated nanomeshes, the formation energy is dramatically decreased, and the ground state is found to depend strongly on the shape and size of the vacancies. For triangular shaped holes, the obtained net magnetic moments increase with the difference in the number of removed A and B sites, in agreement with Lieb's theorem for even A+B. For odd A+B triangular meshes and for all non-triangular nanomeshes, including those with even A+B, Lieb's theorem no longer holds, which can be partially attributed to the introduction of armchair edges. In addition, large triangular shaped GNMs could be as robust as non-triangular GNMs, providing a possible solution to overcome one of the crucial challenges for sp-magnetism. Finally, significant exchange splitting values as large as $\sim 0.5$ eV can be obtained for highly asymmetric structures, evidencing the potential of GNMs for room temperature carbon-based spintronics. These results demonstrate that a turn from 0-dimensional graphene nanoflakes through 1-dimensional graphene nanoribbons with zigzag edges to GNMs breaks the localization of unpaired electrons and leads to deviations from the rules based on Lieb's theorem. Such delocalization of the electrons switches the ground state of the system from the antiferromagnetic narrow gap insulator discussed for graphene nanoribbons to a ferromagnetic or nonmagnetic metal.
Networked systems display complex patterns of interactions between a large number of components. In physical networks, these interactions often occur along structural connections that link components in a hard-wired connection topology, supporting a variety of system-wide dynamical behaviors such as synchronization. While descriptions of these behaviors are important, they are only a first step towards understanding the relationship between network topology and system behavior, and harnessing that relationship to optimally control the system's function. Here, we use linear network control theory to analytically relate the topology of a subset of structural connections (those linking driver nodes to non-driver nodes) to the minimum energy required to control networked systems. As opposed to the numerical computations of control energy, our accurate closed-form expressions yield general structural features in networks that require significantly more or less energy to control, providing topological principles for the design and modification of network behavior. To illustrate the utility of the mathematics, we apply this approach to high-resolution connectomes recently reconstructed from drosophila, mouse, and human brains. We use these principles to show that connectomes of increasingly complex species are wired to reduce control energy. We then use the analytical expressions we derive to perform targeted manipulation of the brain's control profile by removing single edges in the network, a manipulation that is accessible to current clinical techniques in patients with neurological disorders. Cross-species comparisons suggest an advantage of the human brain in supporting diverse network dynamics with small energetic costs, while remaining unexpectedly robust to perturbations. Our results ground the expectation of a system's dynamical behavior in its network architecture.
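For orientation, the standard numerical route that the closed-form expressions replace is worth seeing once (a generic toy network below, not one of the connectomes studied): for a stable linear system dx/dt = Ax + Bu, the minimum energy needed to drive the state to a target x is x^T W^{-1} x, where W is the controllability Gramian.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n, drivers = 8, [0, 1]                 # 8-node toy network; nodes 0 and 1 driven
A = rng.standard_normal((n, n)) * 0.3
A -= np.eye(n) * (np.max(np.linalg.eigvals(A).real) + 0.5)  # shift to stability
B = np.zeros((n, len(drivers)))
for k, d in enumerate(drivers):
    B[d, k] = 1.0

# The infinite-horizon Gramian W solves A W + W A^T + B B^T = 0 for stable A.
W = solve_continuous_lyapunov(A, -B @ B.T)
x_target = np.ones(n) / np.sqrt(n)
energy = x_target @ np.linalg.solve(W, x_target)  # E = x^T W^{-1} x
print(f"minimum energy to reach the target state: {energy:.3f}")
```

Removing a single edge changes one entry of A, and the resulting change in control energy is what the closed-form expressions predict analytically, without recomputing the Gramian.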
In the light of the recent LHC data, we study precision tests sensitive to the violation of lepton universality, in particular the violation of unitarity in neutrino mixing. Keeping all data, we find no satisfactory fit, even allowing for violations of unitarity in neutrino mixing. Leaving out sin$^2 \theta_{\scriptsize \mbox{eff}}$ from the hadronic forward-backward asymmetry at LEP, we find a good fit to the data, with some evidence of lepton universality violation at the $\mathcal{O}(10^{-3})$ level.
Large Language Models (LLMs) (e.g., ChatGPT) have shown impressive performance in code generation. LLMs take prompts as inputs, and Chain-of-Thought (CoT) prompting is the state-of-the-art prompting technique. CoT prompting asks LLMs first to generate CoTs (i.e., intermediate natural language reasoning steps) and then output the code. However, CoT prompting is designed for natural language generation and has low accuracy in code generation. In this paper, we propose Structured CoTs (SCoTs) and present a novel prompting technique for code generation, named SCoT prompting. Our motivation is that source code contains rich structural information and any code can be composed of three program structures (i.e., sequence, branch, and loop structures). Intuitively, structured intermediate reasoning steps make for structured source code. Thus, we ask LLMs to use program structures to build CoTs, obtaining SCoTs. Then, LLMs generate the final code based on the SCoTs. Compared to CoT prompting, SCoT prompting explicitly constrains LLMs to think about how to solve requirements from the view of source code and further improves the performance of LLMs in code generation. We apply SCoT prompting to two LLMs (i.e., ChatGPT and Codex) and evaluate it on three benchmarks (i.e., HumanEval, MBPP, and MBCPP). (1) SCoT prompting outperforms the state-of-the-art baseline (CoT prompting) by up to 13.79% in Pass@1. (2) A human evaluation shows that human developers prefer programs generated by SCoT prompting. (3) SCoT prompting is robust to the choice of examples and achieves substantial improvements.
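To make the idea tangible, here is a mock-up of what an SCoT-style prompt could look like (a hedged paraphrase built from the three program structures named above; the paper's exact templates may differ from this illustration).

```python
requirement = "Return the indices of all even numbers in a list."

# An illustrative SCoT prompt: first a structured chain-of-thought written
# with sequence/branch/loop structures, then a request for the final code.
scot_prompt = f"""Requirement: {requirement}

Write a structured chain-of-thought using only sequence, branch and loop
structures, then generate the final code.

SCoT:
    Input: nums            # sequence
    result = empty list    # sequence
    for i, x in nums:      # loop
        if x is even:      # branch
            add i to result
    return result

Code:
"""
# The prompt is then sent to an LLM (e.g., ChatGPT or Codex), whose
# completion is taken as the generated program.
print(scot_prompt)
```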
The Linked Data Benchmark Council's Financial Benchmark (LDBC FinBench) is a new effort that defines a graph database benchmark targeting financial scenarios such as anti-fraud and risk control. The benchmark currently has one workload, the Transaction Workload. It captures an OLTP scenario with complex and simple read queries, and with write queries that continuously insert or delete data in the graph. Compared to the LDBC SNB, the LDBC FinBench differs in application scenarios, data patterns, and query patterns. This document contains a detailed explanation of the data used in the LDBC FinBench, the definition of the Transaction Workload, a detailed description of all queries, and instructions on how to use the benchmark suite.
We study the effects of disorder on long-range antiferromagnetic correlations in the half-filled, two dimensional, repulsive Hubbard model at T=0. A mean field approach is first employed to gain a qualitative picture of the physics and to guide our choice for a trial wave function in a constrained path quantum Monte Carlo (CPQMC) method that allows for a more accurate treatment of correlations. Within the mean field calculation, we observe both Anderson and Mott insulating antiferromagnetic phases. There are transitions to a paramagnet only for relatively weak coupling, U < 2t in the case of bond disorder, and U < 4t in the case of on-site disorder. Using ground-state CPQMC we demonstrate that this mean field approach significantly overestimates magnetic order. For U=4t, we find a critical bond disorder of Vc = (1.6 +- 0.4)t even though within mean field theory no paramagnetic phase is found for this value of the interaction. In the site disordered case, we find a critical disorder of Vc = (5.0 +- 0.5)t at U=4t.
More than 50 years ago, Laszlo Fuchs asked which abelian groups can be the group of units of a ring. Though progress has been made, the question remains open. One could equally well pose the question for various classes of nonabelian groups. In this paper, we prove that D_2, D_4, D_6, D_8, and D_12 are the only dihedral groups that appear as the group of units of a ring of positive characteristic (or, equivalently, of a finite ring), and D_2 and D_4k, where k is odd, are the only dihedral groups that appear as the group of units of a ring of characteristic 0.
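As a quick sanity check of one entry in the list (our illustration, not part of the paper's argument): the finite ring M_2(F_2) of 2x2 matrices over the field with two elements has unit group GL(2, F_2), which the script below confirms has order 6 and is nonabelian, hence is isomorphic to the dihedral group D_6.

```python
import itertools

# Units of the finite ring M_2(F_2): matrices (a b; c d) with odd determinant.
mats = list(itertools.product([0, 1], repeat=4))
units = [(a, b, c, d) for (a, b, c, d) in mats if (a * d - b * c) % 2 == 1]
print(len(units))  # 6 = |GL(2, F_2)|

def mul(X, Y):  # matrix multiplication mod 2 on flattened (a, b, c, d) tuples
    a, b, c, d = X
    e, f, g, h = Y
    return ((a * e + b * g) % 2, (a * f + b * h) % 2,
            (c * e + d * g) % 2, (c * f + d * h) % 2)

# A nonabelian group of order 6 is S_3, i.e., the dihedral group D_6.
print(all(mul(X, Y) == mul(Y, X) for X in units for Y in units))  # False
```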