We study estimation of (semi-)inner products between two nonparametric probability distributions, given IID samples from each distribution. These products include relatively well-studied classical $\mathcal{L}^2$ and Sobolev inner products, as well as those induced by translation-invariant reproducing kernels, for which we believe our results are the first. We first propose estimators for these quantities, and the induced (semi)norms and (pseudo)metrics. We then prove non-asymptotic upper bounds on their mean squared error, in terms of weights both of the inner product and of the two distributions, in the Fourier basis. Finally, we prove minimax lower bounds that imply rate-optimality of the proposed estimators over Fourier ellipsoids.
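To make the construction concrete, here is a minimal numpy sketch (not the authors' code) of a truncated plug-in estimator of a weighted Fourier inner product between two distributions on [0, 1]; the truncation level K, the weight sequence, and the Beta distributions in the toy check are illustrative assumptions.

```python
import numpy as np

def fourier_coeffs(samples, freqs):
    """Empirical Fourier coefficients E[exp(-2*pi*i*k*X)] for X supported on [0, 1]."""
    return np.exp(-2j * np.pi * np.outer(freqs, samples)).mean(axis=1)

def inner_product_estimate(x, y, K=20, weights=None):
    """Plug-in estimate of a weighted (semi-)inner product between the densities
    of the samples x and y, truncated at Fourier frequencies |k| <= K."""
    freqs = np.arange(-K, K + 1)
    if weights is None:                       # plain L2 inner product
        weights = np.ones_like(freqs, dtype=float)
    px = fourier_coeffs(x, freqs)             # samples from the first distribution
    qy = fourier_coeffs(y, freqs)             # independent samples from the second
    # the product of independent empirical coefficients is unbiased for p_k * conj(q_k)
    return float(np.real(np.sum(weights * px * np.conj(qy))))

# toy check with two Beta distributions on [0, 1]
rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=5000)
y = rng.beta(2, 2, size=5000)
print(inner_product_estimate(x, y, K=30))
```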
Let $\ell \geq 5$ be a prime and let $N$ be a square-free integer prime to $\ell$. For each prime $p$ dividing $N$, let $a_p$ be either $1$ or $-1$. We give sufficient criteria for the existence of a newform $f$ of weight 2 for $\Gamma_0(N)$ such that the mod $\ell$ Galois representation attached to $f$ is reducible and $U_p f = a_p f$ for primes $p$ dividing $N$. The main techniques used are level raising methods based on an exact sequence due to Ribet.
In this paper, we investigate Nash-regret minimization in congestion games, a class of games with benign theoretical structure and broad real-world applications. We first propose a centralized algorithm based on the optimism in the face of uncertainty principle for congestion games with (semi-)bandit feedback, and obtain finite-sample guarantees. Then we propose a decentralized algorithm via a novel combination of the Frank-Wolfe method and G-optimal design. By exploiting the structure of the congestion game, we show the sample complexity of both algorithms depends only polynomially on the number of players and the number of facilities, but not the size of the action set, which can be exponentially large in terms of the number of facilities. We further define a new problem class, Markov congestion games, which allows us to model the non-stationarity in congestion games. We propose a centralized algorithm for Markov congestion games, whose sample complexity again has only polynomial dependence on all relevant problem parameters, but not the size of the action set.
The modified quasichemical model in the quadruplet approximation (MQMQA) considers the first- and second-nearest-neighbor coordination and interactions, making it particularly useful for describing short-range ordering in complex liquids such as molten salts, slag in metal processing, and electrolytic solutions. The present work implements the MQMQA into the Python-based open-source software PyCalphad for thermodynamic calculations. This endeavor facilitates the development of MQMQA-based thermodynamic databases with uncertainty quantification (UQ) using the open-source software ESPEI. A new database structure based on Extensible Markup Language (XML) is proposed for ESPEI evaluation of MQMQA model parameters. Using the KF-NiF2 system as an example, we demonstrate the successful implementation of the MQMQA in PyCalphad through thermodynamic calculations of the Gibbs energy, equilibrium quadruplet fractions, and the phase diagram, as well as database development with UQ using ESPEI. The present implementation offers an open-source capability for performing CALPHAD modeling of complex liquids with short-range ordering using the MQMQA.
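For readers unfamiliar with PyCalphad, the sketch below shows the generic equilibrium-calculation workflow this implementation plugs into; the XML database filename, component list, and conditions are hypothetical placeholders, and reading an MQMQA description relies on the XML database support described in the paper.

```python
# Minimal PyCalphad equilibrium workflow sketch (placeholder database and conditions).
from pycalphad import Database, equilibrium, variables as v

dbf = Database("KF-NiF2-mqmqa.xml")          # hypothetical MQMQA XML database
comps = ["K", "NI", "F", "VA"]               # hypothetical component set
phases = list(dbf.phases.keys())

eq = equilibrium(
    dbf, comps, phases,
    {v.X("NI"): (0.01, 0.5, 0.01),           # composition sweep
     v.T: 1100, v.P: 101325, v.N: 1},
)
print(eq.GM)                                  # molar Gibbs energy along the sweep
```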
We investigate the energetics of droplets sourced by the thermal fluctuations in a system undergoing a first-order transition. In particular, we confine our studies to two dimensions with explicit calculations in the plane and on the sphere. Using an isoperimetric inequality from the differential geometry literature and a theorem on the inequality's saturation, we show how geometry informs the critical droplet size and shape. This inequality establishes a "mean field" result for nucleated droplets. We then study the effects of fluctuations on the interfaces of droplets in two dimensions, treating the droplet interface as a fluctuating line. We emphasize that care is needed in deriving the line curvature energy from the Landau-Ginzburg energy functional and in interpreting the scalings of the nucleation rate with the size of the droplet. We end with a comparison of nucleation in the plane and on a sphere.
This paper focuses on the probability that a portion of DNA closes on itself through thermal fluctuations. We investigate the dependence of this probability on the size r of a protein bridge and/or the presence of a kink at half DNA length. The DNA is modeled by the Worm-Like Chain model, and the probability of loop formation is calculated in two ways: exact numerical evaluation of the constrained path integral, and an extension of the Shimada and Yamakawa saddle-point approximation. For example, we find that the looping free energy of a 100 base-pair DNA decreases from 24 kT to 13 kT when the loop is closed by a protein bridge of size r = 10 nm. It further decreases to 5 kT when the loop has a kink of 120 degrees at half-length.
We investigate the zero-temperature phase diagram of the fully frustrated transverse-field Ising model on the square lattice both in the classical limit and in the presence of quantum fluctuations. At the classical level (the limit of infinite spin $S$), we find that upon decreasing the transverse field $\Gamma$ this model exhibits a phase transition from the fully polarized state into an eight-fold degenerate state that breaks translational symmetry. This phase can be identified with plaquette order in the dimer language and remains the lowest-energy state in the entire range of fields below the critical one, $\Gamma_c$. The eight-fold degenerate solution which corresponds to columnar order in the dimer language is a saddle point of the classical energy. It is degenerate with the plaquette solution at $\Gamma=0$ and is only slightly higher in energy in the whole interval $0<\Gamma<\Gamma_c$. The effect of quantum fluctuations is investigated in the context of a large-$S$ expansion for both the plaquette and columnar structures. For this purpose we employ an approximate method that allows us to estimate from above the fluctuation-induced correction to the energy of a configuration which at the classical level is a saddle point of the energy, \textit{not} a local minimum. Although the convergence of the $1/S$ expansion in the $\Gamma/J\rightarrow 0$ limit remains an open question, harmonic quantum fluctuations show a clear tendency to overcome the energy difference between the two states and to change the classical picture, favoring the columnar order over the plaquette one in a wide parameter range.
For large matrix factorisation (MF) problems, we develop a distributed Markov Chain Monte Carlo (MCMC) method based on stochastic gradient Langevin dynamics (SGLD) that we call Parallel SGLD (PSGLD). PSGLD has very favourable scaling properties with increasing data size and is comparable in terms of computational requirements to optimisation methods based on stochastic gradient descent. PSGLD achieves high performance by exploiting the conditional independence structure of MF models to sub-sample data in a systematic manner so as to allow parallelisation and distributed computation. We provide a convergence proof of the algorithm and verify its superior performance on various architectures such as Graphics Processing Units, shared memory multi-core systems and multi-computer clusters.
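As context for the sampler, here is a plain single-worker SGLD sketch for a Gaussian matrix-factorisation model with missing entries. It is an assumption-laden illustration of the Langevin update only, not the distributed PSGLD scheme, whose key ingredient is partitioning the observed entries into conditionally independent blocks that can be updated in parallel; the priors, step size, and batch size are illustrative.

```python
import numpy as np

def sgld_mf(X, rank=5, n_iters=2000, eps=1e-4, batch=256, sigma2=1.0, lam=0.1, seed=0):
    """SGLD for X ~ W @ H with Gaussian observation noise and Gaussian priors.
    Missing entries of X are marked with NaN."""
    rng = rng_ = np.random.default_rng(seed)
    I, J = X.shape
    W = 0.1 * rng.standard_normal((I, rank))
    H = 0.1 * rng.standard_normal((rank, J))
    rows, cols = np.nonzero(~np.isnan(X))                 # observed entries
    N = rows.size
    for _ in range(n_iters):
        idx = rng.choice(N, size=min(batch, N), replace=False)
        r, c = rows[idx], cols[idx]
        err = X[r, c] - np.einsum("ik,ki->i", W[r], H[:, c])
        # stochastic gradients of the log posterior, rescaled to the full data
        gW = np.zeros_like(W); gH = np.zeros_like(H)
        np.add.at(gW, r, (N / idx.size) * (err / sigma2)[:, None] * H[:, c].T)
        np.add.at(gH.T, c, (N / idx.size) * (err / sigma2)[:, None] * W[r])
        gW -= lam * W; gH -= lam * H                       # Gaussian prior terms
        # Langevin update: half gradient step plus injected Gaussian noise
        W += 0.5 * eps * gW + np.sqrt(eps) * rng.standard_normal(W.shape)
        H += 0.5 * eps * gH + np.sqrt(eps) * rng.standard_normal(H.shape)
    return W, H

# toy usage: a low-rank matrix with 30% of entries missing
rng = np.random.default_rng(1)
X_true = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))
X = X_true.copy(); X[rng.random(X.shape) < 0.3] = np.nan
W, H = sgld_mf(X, rank=5)
```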
New spectroscopic observations of 36 HII regions in NGC 4258 obtained with the Gemini telescope are combined with existing data from the literature to measure the radial oxygen abundance gradient in this galaxy. The [OIII]4363 auroral line was detected in four of the outermost targets (17 to 22 kpc from the galaxy center), allowing a determination of the electron temperature Te of the ionized gas. From the use of different calibrations of the R23 abundance indicator an oxygen abundance gradient of approximately -0.012 +/- 0.002 dex/kpc is derived. Such a shallow gradient, combined with the difference in the distance moduli measured from the Cepheid Period-Luminosity relation by Macri et al. between two distinct fields in NGC 4258, would yield an unrealistically strong effect of metallicity on the Cepheid distances. This strengthens the suggestion that systematic biases might affect the Cepheid distance of the outer field. Evidence for a similar effect in the differential study of M33 by Scowcroft et al. is presented. A revision of the transformation between strong-line and Te-based abundances in Cepheid-host galaxies is discussed. In the Te abundance scale, the oxygen abundance of the inner field of NGC 4258 is found to be comparable with the LMC value.
We propose a technique for sensitive magnetic point-force detection using a suspended carbon nanotube (CNT) mechanical resonator combined with a magnetic field gradient generated by a ferromagnetic gate electrode. Numerical calculations of the mechanical resonance frequency show that single-Bohr-magneton changes in the magnetic state of an individual magnetic molecule grafted to the CNT can translate into detectable frequency shifts, on the order of a few kHz. The dependence of the resonator response on device parameters such as length, tension, CNT diameter, and gate voltage is explored, and optimal operating conditions are identified. A signal-to-noise analysis shows that, in principle, magnetic switching at the level of a single Bohr magneton can be read out in a single shot on timescales as short as 10 microseconds. This force sensor should enable new studies of spin dynamics in isolated single-molecule magnets, free from the crystalline or ensemble settings typically studied.
The process of training an artificial neural network involves iteratively adapting its parameters so as to minimize the error of the network's prediction when confronted with a learning task. This iterative change can be naturally interpreted as a trajectory in network space -- a time series of networks -- and thus the training algorithm (e.g. gradient descent optimization of a suitable loss function) can be interpreted as a dynamical system in graph space. To illustrate this interpretation, we study the dynamical properties of this process by analyzing, through this lens, the network trajectories of a shallow neural network and their evolution as it learns a simple classification task. We systematically consider different ranges of the learning rate and explore both the dynamical and orbital stability of the resulting network trajectories, finding hints of regular and chaotic behavior depending on the learning rate regime. Our findings are contrasted with common wisdom on the convergence properties of neural networks and with dynamical systems theory. This work also contributes to the cross-fertilization of ideas between dynamical systems theory, network theory and machine learning.
A novel digital reconfigurable 2-bit metamaterial, equipped with a substrate-integrated feeding system, is designed for industrial quality-control applications within the terahertz frequency range. The proposed feeding mechanism facilitates azimuthal beam steering from -90 degrees to +90 degrees, thereby enabling the reconfiguration of beam patterns within the digital metamaterial. Utilizing the phase-distribution concept and a comprehensive analysis of coupling and the E-field effect on individual unit cells, the metamaterial array spacing is carefully designed. Operating at 0.7 THz, the system offers versatile reconfigurability, supporting single-, dual-, and multibeam modes. Through careful optimization, the system demonstrates a beam-steering capability from -138 degrees to +138 degrees. This dynamic beamforming ability, transitioning seamlessly from a single beam to multibeam configurations, requires minimal software-hardware integration for scanning and inter-satellite links, thus presenting significant potential for enhancing product quality within industrial environments and in satellite communication.
Interferometric principles are widely used in precision physics experiments and in advanced laboratory-based phase measurement systems. The phase resolution of such systems is a few orders of magnitude higher than that of standard mixer-based quadrature demodulators or lock-in techniques. The first attempt at applying interferometric signal processing to transmitter-target-receiver based electromagnetic (EM) surveying in geophysical prospecting is described. It is shown that it is possible to build a single-carrier EM surveying system that is, firstly, immune to amplitude variations of both the primary and the secondary EM fields, and, secondly, can directly measure phase variations between the primary and secondary EM fields. Its inherent phase noise floor, if limited by the interferometer itself, can be as low as tens of nanoradians/√Hz, or below the -140 dBc/√Hz level. A practical example of an EM gradiometric surveying system based on an interferometric principle and operating in the Extremely Low Frequency (ELF) range is presented. The system has been tested in regional outback Australia, in the presence of a highly conducting overburden, in the search for a nickel sulphide deposit. Key words: EM gradiometer, interferometric methods, geophysical prospecting
Tritium beta-decay is the most promising approach to measure the absolute masses of active light neutrinos in the laboratory and in a model-independent fashion. The development of Cyclotron Radiation Emission Spectroscopy techniques and the use of atomic tritium has the potential to improve the current limits by an order of magnitude in future experiments. In this paper, we analyse the potential sensitivity of such future searches to keV-mass sterile neutrinos and exotic interactions of either the active or sterile neutrinos. We calculate the relevant decay distributions in both energy and angle of the emitted electron with respect to a potential polarisation of the tritium, including the interference with the Standard Model case as well as incorporating relevant final state corrections for atomic tritium. We present projected sensitivities on the active-sterile neutrino mixing and effective coupling constants of exotic currents, demonstrating the potential to probe New Physics in tritium experiments.
In this paper, we first obtain the energy density from the new agegraphic dark energy model, and then study the $f(T,B)$ gravity model as an alternative to dark energy in a viscous fluid on a flat FRW background, where $T$ and $B$ are the torsion scalar and the boundary term. The Friedmann equations are obtained in the framework of modified teleparallel gravity using the tetrad components. We consider a universe dominated by interacting matter and dark energy components. The Hubble parameter is parameterized by a power law for the scale factor, and is then fitted to observational data constraints. The variation of the equation of state (EoS) of dark energy is plotted as a function of the redshift, and the accelerated expansion of the universe is explored. The stability of the model is then studied on the basis of the sound speed parameter. Finally, the generalized second law of thermodynamics is investigated using the entropies inside and on the boundary of the apparent horizon in thermodynamic equilibrium.
Health disparity research often evaluates health outcomes across demographic subgroups. Multilevel regression and poststratification (MRP) is a popular approach for small subgroup estimation due to its ability to stabilize estimates by fitting multilevel models and to adjust for selection bias by poststratifying on auxiliary variables, which are population characteristics predictive of the analytic outcome. However, the granularity and quality of the estimates produced by MRP are limited by the availability of the auxiliary variables' joint distribution; data analysts often only have access to the marginal distributions. To overcome this limitation, we embed the estimation of population cell counts needed for poststratification into the MRP workflow: embedded MRP (EMRP). Under EMRP, we generate synthetic populations of the auxiliary variables before implementing MRP. All sources of estimation uncertainty are propagated within a fully Bayesian framework. Through simulation studies, we compare different methods and demonstrate EMRP's improvements over alternatives on the bias-variance tradeoff, yielding valid subpopulation inferences of interest. As an illustration, we apply EMRP to the Longitudinal Survey of Wellbeing and estimate food insecurity prevalence among vulnerable groups in New York City. We find that all EMRP estimators can correct for the bias in classical MRP while maintaining lower standard errors and narrower confidence intervals than direct imputation with the WFPBB and design-based estimates. The performances of the EMRP estimators do not differ substantially from each other, though we would generally recommend the WFPBB-MRP for its consistently high coverage rates.
We derive here Lagrangian fluctuation-dissipation relations for advected scalars in wall-bounded flows. The relations equate the dissipation rate for either passive or active scalars to the variance of scalar inputs from the initial values, boundary values, and internal sources, as those are sampled backward in time by stochastic Lagrangian trajectories. New probabilistic concepts are required to represent scalar boundary conditions at the walls: the boundary local-time density at points on the wall where scalar fluxes are imposed and the boundary first hitting-time at points where scalar values are imposed. These concepts are illustrated both by analytical results for the problem of pure heat conduction and by numerical results from a database of channel-flow turbulence, which also demonstrate the scalar mixing properties of near-wall turbulence. As an application of the fluctuation-dissipation relation, we examine for wall-bounded flows the relation between anomalous scalar dissipation and Lagrangian spontaneous stochasticity, i.e. the persistent non-determinism of Lagrangian particle trajectories in the limit of vanishing viscosity and diffusivity. In the first paper of this series, we showed that spontaneous stochasticity is the only possible mechanism for anomalous dissipation of passive or active scalars, away from walls. Here it is shown that this remains true when there are no scalar fluxes through walls. Simple examples show, on the other hand, that a distinct mechanism of non-vanishing scalar dissipation can be thin scalar boundary layers near the walls. Nevertheless, we prove for general wall-bounded flows that spontaneous stochasticity is another possible mechanism of anomalous scalar dissipation.
Biological agents, such as humans and animals, are capable of making decisions among a very large number of choices in a limited time. They can do so because they use their prior knowledge to find a solution that is not necessarily optimal but good enough for the given task. In this work, we study the motion coordination of multiple drones under this paradigm, Bounded Rationality (BR), to achieve cooperative motion planning tasks. Specifically, we design a prior policy that provides useful goal-directed navigation heuristics in familiar environments and is adaptive in unfamiliar ones via Reinforcement Learning augmented with an environment-dependent exploration noise. Integrating this prior policy into the game-theoretic bounded rationality framework allows agents to quickly make decisions in a group while accounting for other agents' computational constraints. Our investigation confirms that agents with a well-informed prior policy increase the efficiency of the group's collective decision-making. We have conducted rigorous experiments in simulation and in the real world to demonstrate that the ability of informed agents to navigate to the goal safely can guide the group to coordinate efficiently under the BR framework.
While Transformers have had significant success in paragraph generation, they treat sentences as linear sequences of tokens and often neglect their hierarchical information. Prior work has shown that decomposing the levels of granularity~(e.g., word, phrase, or sentence) for input tokens produces substantial improvements, suggesting the possibility of enhancing Transformers via more fine-grained modeling of granularity. In this work, we propose a continuous decomposition of granularity for neural paraphrase generation (C-DNPG). In order to efficiently incorporate granularity into sentence encoding, C-DNPG introduces a granularity-aware attention (GA-Attention) mechanism which extends the multi-head self-attention with: 1) a granularity head that automatically infers the hierarchical structure of a sentence by neurally estimating the granularity level of each input token; and 2) two novel attention masks, namely, granularity resonance and granularity scope, to efficiently encode granularity into attention. Experiments on two benchmarks, including Quora question pairs and Twitter URLs, have shown that C-DNPG outperforms baseline models by a remarkable margin and achieves state-of-the-art results on many metrics. Qualitative analysis reveals that C-DNPG indeed captures fine-grained levels of granularity effectively.
We propose instance segmentation as a useful tool for image analysis in materials science. Instance segmentation is an advanced technique in computer vision which generates individual segmentation masks for every object of interest that is recognized in an image. Using an out-of-the-box implementation of Mask R-CNN, instance segmentation is applied to images of metal powder particles produced through gas atomization. Leveraging transfer learning allows for the analysis to be conducted with a very small training set of labeled images. As well as providing another method for measuring the particle size distribution, we demonstrate the first direct measurements of the satellite content in powder samples. After analyzing the results for the labeled dataset, the trained model was used to generate measurements for a much larger set of unlabeled images. The resulting particle size measurements showed reasonable agreement with laser scattering measurements. The satellite measurements were self-consistent and showed good agreement with the expected trends for different samples. Finally, we provide a small case study showing how instance segmentation can be used to measure spheroidite content in the UltraHigh Carbon Steel Database, demonstrating the flexibility of the technique.
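A hedged sketch of the kind of out-of-the-box inference pipeline described here, using torchvision's pretrained Mask R-CNN; the image path, score threshold, and pixel calibration are placeholders, and the paper's model was fine-tuned on labeled powder micrographs via transfer learning rather than used with generic COCO weights as done below.

```python
import numpy as np
import torch, torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = to_tensor(Image.open("powder_micrograph.png").convert("RGB"))  # placeholder path
with torch.no_grad():
    out = model([img])[0]

keep = out["scores"] > 0.5                              # illustrative threshold
masks = (out["masks"][keep, 0] > 0.5).numpy()           # one boolean mask per detected particle
areas_px = masks.sum(axis=(1, 2))
um_per_px = 0.5                                         # placeholder pixel calibration
diameters_um = 2 * np.sqrt(areas_px / np.pi) * um_per_px  # equivalent circular diameter
print(f"{len(diameters_um)} particles, median diameter {np.median(diameters_um):.1f} um")
```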
The lack of large video databases obtained from real patients with respiratory disorders makes the design and optimization of video-based monitoring systems quite critical. The purpose of this study is the development of suitable models and simulators of breathing behaviors and disorders, such as respiratory pauses and apneas, in order to allow efficient design and test of video-based monitoring systems. More precisely, a novel Continuous-Time Markov Chain (CTMC) statistical model of breathing patterns is presented. The Respiratory Rate (RR) pattern, estimated by measured vital signs of hospital-monitored patients, is approximated as a CTMC, whose states and parameters are selected through an appropriate statistical analysis. Then, two simulators, software- and hardware-based, are proposed. After validation of the CTMC model, the proposed simulators are tested with previously developed video-based algorithms for the estimation of the RR and the detection of apnea events. Examples of application to assess the performance of systems for video-based RR estimation and apnea detection are presented. The results, in terms of Kullback-Leibler divergence, show that realistic breathing patterns, including specific respiratory disorders, can be accurately described by the proposed model; moreover, the simulators are able to reproduce practical breathing patterns for video analysis. The presented CTMC statistical model can be strategic to describe realistic breathing patterns and devise simulators useful to develop and test novel and effective video processing-based monitoring systems.
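For illustration, below is a generic continuous-time Markov chain simulator of the kind such a software simulator builds on; the three breathing states and the generator-matrix rates are made-up placeholders, not the values estimated from the hospital-monitored patients.

```python
import numpy as np

def simulate_ctmc(Q, states, t_max, rng=None):
    """Simulate a CTMC with generator matrix Q: exponential holding times with
    rate -Q[i, i], then jumps according to the embedded chain Q[i, j] / -Q[i, i]."""
    rng = rng or np.random.default_rng()
    i, t = 0, 0.0
    path = [(t, states[i])]
    while t < t_max:
        rate = -Q[i, i]
        t += rng.exponential(1.0 / rate)
        probs = Q[i].copy(); probs[i] = 0.0; probs /= rate
        i = rng.choice(len(states), p=probs)
        path.append((t, states[i]))
    return path

# illustrative 3-state breathing model (rates per minute are placeholders)
states = ["RR~18/min", "RR~10/min", "apnea"]
Q = np.array([[-0.50,  0.40,  0.10],
              [ 0.60, -0.70,  0.10],
              [ 0.80,  0.20, -1.00]])
for t, s in simulate_ctmc(Q, states, t_max=10.0)[:8]:
    print(f"t = {t:5.2f} min -> {s}")
```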
The puncture method for dealing with black holes in the numerical simulation of vacuum spacetimes is remarkably successful when combined with the BSSN formulation of the Einstein equations. We examine a generalized class of formulations modeled along the lines of the Laguna-Shoemaker system, including BSSN as a special case. The formulation is a two parameter generalization of the choice of variables used in standard BSSN evolutions. Numerical stability of the standard finite difference methods is proven for the formulation in the linear regime around flat space, a special case of which is the numerical stability of BSSN. Numerical evolutions are presented and compared with a standard BSSN implementation. We find that a significant portion of the parameter space leads to stable evolutions and that standard BSSN is located near the edge of the stability region. Non-standard parameter choices typically result in smoother behaviour of the evolution variables close to the puncture and thus hold promise for improved accuracy in, e.g., long-term BH binary inspirals, and for overcoming (numerical) stability problems still encountered in some types of black-hole simulations, e.g., in $D \ge 6$ dimensions.
A subset S of a group G invariably generates G if G = <s^(g(s)) | s in S> for each choice of g(s) in G, s in S. In this paper we study invariable generation of infinite groups, with emphasis on linear groups. Our main result shows that a finitely generated linear group is invariably generated by some finite set of elements if and only if it is virtually solvable. We also show that the profinite completion of an arithmetic group having the congruence subgroup property is invariably generated by a finite set of elements.
We introduce a machine learning model to predict atomization energies of a diverse set of organic molecules, based on nuclear charges and atomic positions only. The problem of solving the molecular Schr\"odinger equation is mapped onto a non-linear statistical regression problem of reduced complexity. Regression models are trained on and compared to atomization energies computed with hybrid density-functional theory. Cross-validation over more than seven thousand small organic molecules yields a mean absolute error of ~10 kcal/mol. Applicability is demonstrated for the prediction of molecular atomization potential energy curves.
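A minimal sketch of this kind of regression pipeline: a Coulomb-matrix-style descriptor built only from nuclear charges and positions, fed into kernel ridge regression. The random "molecules" and placeholder energies stand in for a real dataset, and the kernel choice and hyperparameters are illustrative rather than the paper's settings.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def coulomb_matrix(Z, R, size):
    """Sorted-eigenvalue Coulomb-matrix descriptor, padded to a fixed size."""
    n = len(Z)
    M = np.zeros((size, size))
    for i in range(n):
        for j in range(n):
            M[i, j] = 0.5 * Z[i] ** 2.4 if i == j else Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
    return np.sort(np.linalg.eigvalsh(M))[::-1]

rng = np.random.default_rng(0)
mols = [(rng.integers(1, 9, size=5), rng.normal(scale=2.0, size=(5, 3))) for _ in range(200)]
X = np.array([coulomb_matrix(Z, R, size=5) for Z, R in mols])
y = rng.normal(size=len(mols))                        # placeholder atomization energies

model = KernelRidge(kernel="laplacian", alpha=1e-8, gamma=1e-3)
model.fit(X[:150], y[:150])
print(model.predict(X[150:]).shape)
```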
We investigate the problem of counting 1/16 BPS operators in N=4 Super-Yang-Mills theory at weak coupling. We present the complete set of 1/16 BPS operators in the infinite N limit, which agrees with the counting of free BPS multi-graviton states in the gravity dual AdS5xS5. Further, we conjecture that all 1/16 BPS operators in N=4 SYM are of the multi-graviton form, and give numerical evidence for this conjecture. We discuss the implications of our conjecture and the seeming failure to reproduce the entropy of large 1/16 BPS black holes in AdS5.
We show how in a class of models Peccei--Quinn symmetry can be realized as an automatic consequence of a gauged $U(1)$ family symmetry. These models provide a solution to the strong CP problem either via a massless $u$--quark or via the DFSZ invisible axion. The local family symmetry protects against potentially large corrections to $\overline{\theta} $ induced by quantum gravitational effects. In a supersymmetric extension, the `$\mu$--problem' is shown to have a natural solution in the context of gravitationally induced operators. We also present a plausible mechanism which can explain the inter--generational mass hierarchy in such a context.
Using the simple (symmetric) Hubbard dimer, we analyze some important features of the $GW$ approximation. We show that the problem of the existence of multiple quasiparticle solutions in the (perturbative) one-shot $GW$ method and its partially self-consistent version is solved by full self-consistency. We also analyze the neutral excitation spectrum using the Bethe-Salpeter equation (BSE) formalism within the standard $GW$ approximation and find, in particular, that i) some neutral excitation energies become complex when the electron-electron interaction $U$ increases, which can be traced back to the approximate nature of the $GW$ quasiparticle energies; ii) the BSE formalism yields accurate correlation energies over a wide range of $U$ when the trace (or plasmon) formula is employed; iii) the trace formula is sensitive to the occurrence of complex excitation energies (especially singlet), while the expression obtained from the adiabatic-connection fluctuation-dissipation theorem (ACFDT) is more stable (yet less accurate); iv) the trace formula has the correct behavior for weak (i.e., small $U$) interaction, unlike the ACFDT expression.
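For reference, the symmetric Hubbard dimer referred to here is conventionally described by the two-site Hamiltonian with hopping $t$ and on-site repulsion $U$:

$$\hat{H} = -t\sum_{\sigma=\uparrow,\downarrow}\left(\hat{c}^{\dagger}_{1\sigma}\hat{c}_{2\sigma}+\hat{c}^{\dagger}_{2\sigma}\hat{c}_{1\sigma}\right)+U\sum_{i=1,2}\hat{n}_{i\uparrow}\hat{n}_{i\downarrow}.$$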
Magnetic fields, which are undoubtedly present in extragalactic jets and responsible for the observed synchrotron radiation, can affect the morphology and dynamics of the jets and their interaction with the ambient cluster medium. We examine the jet propagation, morphology and magnetic field structure for a wide range of density contrasts, using a globally consistent setup for both the jet interaction and the magnetic field. The MHD code NIRVANA is used to evolve the simulation with the constrained-transport method. The density contrasts are varied between $\eta = 10^{-1}$ and $10^{-4}$ at a constant sonic Mach number of 6. The jets are supermagnetosonic and, owing to the low jet densities and their strong backflows, are simulated bipolarly. The helical magnetic field is largely confined to the jet, leaving the ambient medium nonmagnetic. We find that magnetic fields with plasma $\beta \sim 10$ already stabilize and widen the jet head. Furthermore, they are efficiently amplified by a shearing mechanism in the jet head and are strong enough to damp Kelvin-Helmholtz instabilities of the contact discontinuity. The cocoon magnetic fields are found to be stronger than expected from simple flux conservation and capable of producing smoother lobes, as found observationally. The bow shocks and jet lengths evolve self-similarly. The radio cocoon aspect ratios are generally higher for heavier jets and grow only slowly (roughly self-similarly) while overpressured, but much faster as they approach pressure balance with the ambient medium. In this regime, self-similar models can no longer be applied. Bow shocks are found to be of low eccentricity for very light jets and have low Mach numbers. Cocoon turbulence and a dissolving bow shock create and excite waves and ripples in the ambient gas. Thermalization is found to be very efficient for low jet densities.
We study the relaxation to equilibrium of two-dimensional islands containing up to 20000 atoms by Kinetic Monte Carlo simulations. We find that the commonly assumed relaxation mechanism - curvature-driven relaxation via atom diffusion - cannot explain the results obtained at low temperatures, where the island edges consist of large facets. Specifically, our simulations show that the exponent characterizing the dependence of the equilibration time on the island size is different at high and low temperatures, in contradiction with the above-cited assumption. Instead, we propose that at low temperatures the relaxation is limited by the nucleation of new atomic rows on the large facets: this allows us to explain both the activation energy and the island-size dependence of the equilibration time.
We use the Renormalization Group to prove local well-posedness for a generalized KPZ equation introduced by H. Spohn in the context of stochastic hydrodynamics. The equation requires the addition of counterterms diverging with a cutoff $\epsilon$ as $\epsilon^{-1}$ and $\log\epsilon^{-1}$.
Cooperative multi-agent policy gradient (MAPG) algorithms have recently attracted wide attention and are regarded as a general scheme for the multi-agent system. Credit assignment plays an important role in MAPG and can induce cooperation among multiple agents. However, most MAPG algorithms cannot achieve good credit assignment because of the game-theoretic pathology known as \textit{centralized-decentralized mismatch}. To address this issue, this paper presents a novel method, \textit{\underline{M}ulti-\underline{A}gent \underline{P}olarization \underline{P}olicy \underline{G}radient} (MAPPG). MAPPG takes a simple but efficient polarization function to transform the optimal consistency of joint and individual actions into easily realized constraints, thus enabling efficient credit assignment in MAPG. Theoretically, we prove that individual policies of MAPPG can converge to the global optimum. Empirically, we evaluate MAPPG on the well-known matrix game and differential game, and verify that MAPPG can converge to the global optimum for both discrete and continuous action spaces. We also evaluate MAPPG on a set of StarCraft II micromanagement tasks and demonstrate that MAPPG outperforms the state-of-the-art MAPG algorithms.
We consider self-loops and multiple edges in the configuration model as the size of the graph tends to infinity. The interest in these random variables is due to the fact that the configuration model, conditioned on being simple, is a uniform random graph with prescribed degrees. Simplicity corresponds to the absence of self-loops and multiple edges. We show that the number of self-loops and multiple edges converges in distribution to two independent Poisson random variables when the second moment of the empirical degree distribution converges. We also provide an estimate on the total variation distance between the number of self-loops and multiple edges and the Poisson limit of their sum. This revisits previous works of Bollob\'as, of Janson, of Wormald and others. The error estimates also imply sharp asymptotics for the number of simple graphs with prescribed degrees. The error estimates follow from an application of Stein's method for Poisson convergence, which is a novel method for this problem. The asymptotic independence of self-loops and multiple edges follows from a Poisson version of the Cram\'er-Wold device using thinning, which is of independent interest. When the degree distribution has infinite second moment, our general results break down. We can, however, prove a central limit theorem for the number of self-loops, and for the multiple edges between vertices of degrees much smaller than the square root of the size of the graph, or when we truncate the degrees similarly. Our results and proofs easily extend to directed and bipartite configuration models.
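A small simulation sketch (not from the paper) of the quantities in question, using networkx's configuration model; the degree sequence is an arbitrary example with finite empirical second moment, and "multiple edges" is counted here as the number of vertex pairs joined by more than one edge, which is one of several conventions.

```python
from collections import Counter
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
degrees = rng.integers(1, 6, size=10_000)
if degrees.sum() % 2:                                  # total degree must be even
    degrees[0] += 1

G = nx.configuration_model(degrees, seed=2)            # stubs paired uniformly at random
self_loops = nx.number_of_selfloops(G)
multiplicity = Counter((min(u, v), max(u, v)) for u, v in G.edges() if u != v)
multi_pairs = sum(1 for m in multiplicity.values() if m > 1)
print("self-loops:", self_loops, " vertex pairs with multiple edges:", multi_pairs)
```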
Multimodal image alignment involves finding spatial correspondences between volumes varying in appearance and structure. Automated alignment methods are often based on local optimization, which can be highly sensitive to its initialization. We propose a global optimization method for rigid multimodal 3D image alignment, based on a novel efficient algorithm for computing the similarity of normalized gradient fields (NGF) in the frequency domain. We validate the method experimentally on a dataset comprising 20 brain volumes acquired in four modalities (T1w, Flair, CT, [18F] FDG PET), synthetically displaced with known transformations. The proposed method exhibits excellent performance on all six possible modality combinations, and outperforms all four reference methods by a large margin. The method is fast; a 3.4-Mvoxel global rigid alignment requires approximately 40 seconds of computation, and the proposed algorithm outperforms a direct algorithm for the same task by more than three orders of magnitude. An open-source implementation is provided.
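A simplified 2D sketch of why the frequency domain helps: cross-correlating the normalized-gradient-field (NGF) components of two images with FFTs scores every cyclic translation at once. It uses a plain inner product of NGF components, ignores rotations, and is not the paper's exact similarity measure; the test images and the regularization constant are illustrative.

```python
import numpy as np

def ngf(img, eps=1e-2):
    """Normalized gradient field of a 2D image (eps regularizes flat regions)."""
    gy, gx = np.gradient(img.astype(float))
    norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
    return gx / norm, gy / norm

def ngf_translation_scores(a, b):
    """Cross-correlate the NGF components of a and b over all cyclic shifts;
    the argmax gives the shift that maps a onto b."""
    score = np.zeros(a.shape, dtype=float)
    for ca, cb in zip(ngf(a), ngf(b)):
        score += np.real(np.fft.ifft2(np.fft.fft2(cb) * np.conj(np.fft.fft2(ca))))
    return score

rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = np.roll(a, (7, -11), axis=(0, 1))                  # known displacement
ij = np.unravel_index(np.argmax(ngf_translation_scores(a, b)), a.shape)
print("recovered shift:", ij)                          # expect (7, 117), i.e. (7, -11) mod 128
```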
We consider a linear stochastic bandit problem where the dimension $K$ of the unknown parameter $\theta$ is larger than the sampling budget $n$. In such cases, it is in general impossible to derive sub-linear regret bounds, since usual linear bandit algorithms have a regret of order $O(K\sqrt{n})$. In this paper we assume that $\theta$ is $S$-sparse, i.e. has at most $S$ non-zero components, and that the space of arms is the unit ball for the $\|\cdot\|_2$ norm. We combine ideas from Compressed Sensing and Bandit Theory and derive algorithms with regret bounds of order $O(S\sqrt{n})$.
The Golomb ruler problem is defined as follows: given a positive integer n, locate n marks on a ruler such that the distances between all distinct pairs of marks are different from each other and the total length of the ruler is minimized. The Golomb ruler problem has applications in information theory, astronomy and communications, and it can be seen as a challenge for combinatorial optimization algorithms. Although constructing high-quality rulers is well-studied, proving optimality is a far more challenging task. In this paper, we provide a computational comparison of different optimization paradigms, each using a different model (linear integer, constraint programming and quadratic integer), to certify that a given Golomb ruler is optimal. We propose several enhancements to improve the computational performance of each method by exploring bound tightening, valid inequalities, cutting planes and branching strategies. We conclude that a certain quadratic integer programming model solved through a Benders decomposition and strengthened by two types of valid inequalities performs the best in terms of solution time for small-sized Golomb ruler problem instances. On the other hand, a constraint programming model improved by range reduction and a particular branching strategy could have more potential to solve larger instances due to its promising parallelization features.
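For concreteness, a tiny brute-force sketch (not one of the optimization models compared in the paper) that checks the Golomb property and finds an optimal ruler for very small n; it also illustrates why certifying optimality quickly becomes hard as n grows.

```python
from itertools import combinations

def is_golomb(marks):
    """A ruler is Golomb if all pairwise mark distances are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

def optimal_ruler(n):
    """Smallest-length Golomb ruler with n marks, found by exhaustive search."""
    length = n - 1
    while True:
        for inner in combinations(range(1, length), n - 2):
            marks = (0,) + inner + (length,)
            if is_golomb(marks):
                return marks
        length += 1

print(optimal_ruler(5))   # returns (0, 1, 4, 9, 11), an optimal ruler of length 11
```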
We show that a new Regge trajectory with $\alpha_{f_1}(0) \approx 1$ and slope $\alpha_{f_1}'(0) \approx 0$ explains the features of hadron-hadron scattering and photoproduction of the rho and phi mesons at large energy and momentum transfer. This trajectory with quantum numbers $P = C = +1$ and odd signature can be considered as a natural partner of the Pomeron which has even signature. The odd signature of the new exchange leads to contributions to the spin-dependent cross sections, which do not vanish at large energy. The links between the anomalous properties of this trajectory, the axial anomaly and the flavor singlet axial vector $f_1(1285)$ meson are discussed.
For disordered elastic manifolds in the ground state (equilibrium) we obtain the critical exponents for the roughness and the correction-to-scaling up to 3-loop order, i.e. third order in $\epsilon=4-d$, where $d$ is the internal dimension. We also give the full 2-point function up to order $\epsilon^{2}$, i.e. at 2-loop order.
We study universal aspects of fluctuations in an ensemble of noninteracting continuous quantum thermal machines in the steady state limit. Considering an individual machine, such as a refrigerator, in which relative fluctuations (and high order cumulants) of the cooling heat current to the absorbed heat current, $\eta^{(n)}$, are upper-bounded, $\eta^{(n)}\leq \eta_C^n$ with $n\geq 2$ and $\eta_C$ the Carnot efficiency, we prove that an {\it ensemble} of $N$ distinct machines similarly satisfies this upper bound on the relative fluctuations of the ensemble, $\eta_N^{(n)}\leq \eta_C^n$. For an ensemble of distinct quantum {\it refrigerators} with components operating in the tight coupling limit we further prove the existence of a {\it lower bound} on $\eta_N^{(n)}$ in specific cases, exemplified on three-level quantum absorption refrigerators and resonant-energy thermoelectric junctions. Beyond special cases, the existence of a lower bound on $\eta_N^{(2)}$ for an ensemble of quantum refrigerators is demonstrated by numerical simulations.
Recently, it has been shown that entropy can be used to sort Brownian particles according to their size. In particular, a combination of a static and a time-dependent force applied to differently sized particles confined in an asymmetric periodic structure can be used to separate them efficiently, by forcing them to move in opposite directions. In this paper, we investigate the optimization of the performance of the 'entropic splitter'. Specifically, we analyze the splitting mechanism and how it depends on the geometry of the channel and on the frequency and strength of the periodic forcing. Using numerical simulations, we demonstrate that a very efficient and fast separation with practically 100% purity can be achieved by proper optimization of the control variables. The results of this work could be useful for a more efficient separation of dispersed phases such as DNA fragments or colloids according to their size.
We prove that the canonical twist $\zeta \colon K(\mathbb{Z},3) \rightarrow BGL_1(MSpin^c)$ does not extend to a twist for unitary bordism by showing that every continuous map $f \colon K(\mathbb{Z},3) \rightarrow BGL_1(MU)$ loops to a null homotopic map.
Both neural networks and decision trees are popular machine learning methods and are widely used to solve problems from diverse domains. These two classifiers are commonly used base classifiers in an ensemble framework. In this paper, we first present a new variant of oblique decision tree based on a linear classifier, and then construct an ensemble classifier based on the fusion of a fast neural network, the random vector functional link (RVFL) network, and oblique decision trees. The RVFL network has an elegant closed-form solution with an extremely short training time. The neural network partitions each training bag (obtained using bagging) at the root level into C subsets, where C is the number of classes in the dataset, and subsequently C oblique decision trees are trained on these partitions. The proposed method provides rich insight into the data by grouping the confusing or hard-to-classify samples for each class, and thus provides an opportunity to employ fine-grained classification rules over the data. The performance of the ensemble classifier is evaluated on several multi-class datasets, where it demonstrates superior performance compared to other state-of-the-art classifiers.
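A minimal random vector functional link (RVFL) sketch illustrating the closed-form training claim; it is not the proposed ensemble, which additionally bags the data, partitions each bag by class at the root level, and grows oblique decision trees on those partitions. The hidden-layer size and ridge parameter below are illustrative.

```python
import numpy as np

class RVFL:
    def __init__(self, n_hidden=100, reg=1e-2, seed=0):
        self.n_hidden, self.reg, self.seed = n_hidden, reg, seed

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.W = rng.uniform(-1, 1, size=(X.shape[1], self.n_hidden))
        self.b = rng.uniform(-1, 1, size=self.n_hidden)
        H = np.hstack([X, np.tanh(X @ self.W + self.b)])    # direct links + random features
        Y = np.eye(y.max() + 1)[y]                           # one-hot targets
        # ridge-regularized least squares: closed form, no iterative training
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(H.shape[1]), H.T @ Y)
        return self

    def predict(self, X):
        H = np.hstack([X, np.tanh(X @ self.W + self.b)])
        return (H @ self.beta).argmax(axis=1)

X = np.random.default_rng(1).normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)
print((RVFL().fit(X[:200], y[:200]).predict(X[200:]) == y[200:]).mean())
```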
The theory interest group in the International Virtual Observatory Alliance (IVOA) has the goal of ensuring that theoretical data and services are taken into account in the IVOA standards process. In this poster we present some of the efforts carried out by this group to include evolutionary synthesis models in the VO framework. In particular we present the VO tool PGos3, developed by the INAOE (Mexico) and the Spanish Virtual Observatory, which includes most of the public SSP models in the VO framework (e.g. VOSpec). We also describe the problems related to the inclusion of synthesis models in the VO framework, and we try to encourage people to define the way in which synthesis models should be described. This issue has implications not only for the inclusion of synthesis models in the VO framework but also for a proper usage of synthesis models.
Knowing which factors are significant in credit rating assignment leads to better decision-making. However, the focus of the literature thus far has been mostly on structured data, and fewer studies have addressed unstructured or multi-modal datasets. In this paper, we present an analysis of the most effective architectures for the fusion of deep learning models for the prediction of company credit rating classes, by using structured and unstructured datasets of different types. In these models, we tested different combinations of fusion strategies with different deep learning models, including CNN, LSTM, GRU, and BERT. We studied data fusion strategies in terms of level (including early and intermediate fusion) and techniques (including concatenation and cross-attention). Our results show that a CNN-based multi-modal model with two fusion strategies outperformed other multi-modal techniques. In addition, by comparing simple architectures with more complex ones, we found that more sophisticated deep learning models do not necessarily produce the highest performance; however, if attention-based models are producing the best results, cross-attention is necessary as a fusion strategy. Finally, our comparison of rating agencies on short-, medium-, and long-term performance shows that Moody's credit ratings outperform those of other agencies like Standard & Poor's and Fitch Ratings.
In these lecture notes, we present a connection between the complex dynamics of a family of rational functions $f_t: \mathbb{P}^1\to \mathbb{P}^1$, parameterized by $t$ in a Riemann surface $X$, and the arithmetic dynamics of $f_t$ on rational points $\mathbb{P}^1(k)$ where $k = \mathbb{C}(X)$ or $\bar{\mathbb{Q}}(X)$. An explicit relation between stability and canonical height is explained, with a proof that contains a piece of the Mordell-Weil theorem for elliptic curves over function fields. Our main goal is to pose some questions and conjectures about these families, guided by the principle of "unlikely intersections" from arithmetic geometry, as in [Zannier 2012]. We also include a proof that the hyperbolic postcritically-finite maps are Zariski dense in the moduli space of rational maps of any given degree $d>1$. These notes are based on four lectures at KAWA 2015, in Pisa, Italy, designed for an audience specializing in complex analysis, expanding upon the main results of [Baker-DeMarco 2013, DeMarco 2016, DeMarco-Wang-Ye 2016].
For analyzing anisotropic correlation functions at low relative velocity and the associated emission sources, we propose an expansion in terms of Cartesian spherical harmonics. The expansion coefficients represent angular moments of the investigated functions. The respective coefficients for the correlation and the source are directly related to each other via one-dimensional integral transforms. The shape features of the source may be partly read off from the respective features of the correlation function and can otherwise be imaged.
We study a one-dimensional quasilinear system proposed by J. Tello and M. Winkler [19] which models the population dynamics of two competing species attracted by the same chemical. The kinetic terms of the interacting species are chosen to be of Lotka-Volterra type. We prove the existence of global, bounded and classical solutions for all chemoattraction rates. Under homogeneous Neumann boundary conditions, we establish the existence of nonconstant steady states by local bifurcation theory. The stability of the bifurcating solutions is also obtained when the diffusivity of both species is large. Finally, we perform extensive numerical studies to demonstrate the formation of stable positive steady states with various interesting spatial structures.
Thanks to the excellent performances of ATLAS and CMS in triggering on muon signals and reconstructing these particles down to low transverse momentum, large samples of heavy-flavored hadrons have been collected in the 2011 LHC run at sqrt(s) = 7 TeV. The analysis of these samples has enabled both experiments to perform competitive measurements of heavy-flavor properties, such as quarkonium polarization, lifetime and CP-violation measurements, hadron spectroscopy and branching ratios of rare B decays.
StableDiffusion is a revolutionary text-to-image generator that is causing a stir in the world of image generation and editing. Unlike traditional methods that learn a diffusion model in pixel space, StableDiffusion learns a diffusion model in the latent space via a VQGAN, ensuring both efficiency and quality. It not only supports image generation tasks, but also enables image editing for real images, such as image inpainting and local editing. However, we have observed that the vanilla VQGAN used in StableDiffusion leads to significant information loss, causing distortion artifacts even in non-edited image regions. To this end, we propose a new asymmetric VQGAN with two simple designs. Firstly, in addition to the input from the encoder, the decoder contains a conditional branch that incorporates information from task-specific priors, such as the unmasked image region in inpainting. Secondly, the decoder is much heavier than the encoder, allowing for more detailed recovery while only slightly increasing the total inference cost. The training cost of our asymmetric VQGAN is cheap, and we only need to retrain a new asymmetric decoder while keeping the vanilla VQGAN encoder and StableDiffusion unchanged. Our asymmetric VQGAN can be widely used in StableDiffusion-based inpainting and local editing methods. Extensive experiments demonstrate that it can significantly improve the inpainting and editing performance, while maintaining the original text-to-image capability. The code is available at \url{https://github.com/buxiangzhiren/Asymmetric_VQGAN}.
We use a modified SampleRNN architecture to generate music in modern genres such as black metal and math rock. Unlike MIDI and symbolic models, SampleRNN generates raw audio in the time domain. This requirement becomes increasingly important in modern music styles where timbre and space are used compositionally. Long developmental compositions with rapid transitions between sections are possible by increasing the depth of the network beyond the number used for speech datasets. We are delighted by the unique characteristic artifacts of neural synthesis.
The type IV pilus retraction motor is found in many important bacterial pathogens. It is the strongest known linear motor protein and is required for bacterial infectivity. We characterize the dynamics of type IV pilus retraction in terms of a stochastic chemical reaction model. We find that a two state model can describe the experimental force velocity relation and qualitative dependence of ATP concentration. The results indicate that the dynamics is limited by an ATP-dependent step at low load and a force-dependent step at high load, and that at least one step is effectively irreversible in the measured range of forces. The irreversible nature of the sub-step(s) lead to interesting predictions for future experiments: We find different parameterizations with mathematically identical force velocity relations but different fluctuations (diffusion constant). We also find a longer elementary step compared to an earlier analysis, which agrees better with known facts about the structure of the pilus filament and energetic considerations. We conclude that more experimental data is needed, and that further retraction experiments are likely to resolve interesting details and give valuable insights into the PilT machinery. In light of our findings, the fluctuations of the retraction dynamics emerge as a key property to be studied in future experiments.
We present the results of a multiplicity survey of 212 T Tauri stars in the Chamaeleon I and Taurus-Auriga star-forming regions, based on high-resolution spectra from the Magellan Clay 6.5 m telescope. From these data, we achieved a typical radial velocity precision of ~80 m/s with slower rotators yielding better precision, in general. For 174 of these stars, we obtained multi-epoch data with sufficient time baselines to identify binaries based on radial velocity variations. We identified eight close binaries and four close triples, of which three and two, respectively, are new discoveries. The spectroscopic multiplicity fractions we find for Cha I (7%) and Tau-Aur (6%) are similar to each other, and to the results of field star surveys in the same mass and period regime. However, unlike the results from imaging surveys, the frequency of systems with close companions in our sample is not seen to depend on primary mass. Additionally, we do not find a strong correlation between accretion and close multiplicity. This implies that close companions are not likely the main source of the accretion shut down observed in weak-lined T Tauri stars. Our results also suggest that sufficient radial velocity precision can be achieved for at least a subset of slowly rotating young stars to search for hot Jupiter planets.
The certification of quantum resources is a critical tool in the development of quantum information processing. In particular, quantum state verification is a fundamental building block for communication and computation applications, determining whether the involved parties can trust the resources at hand or whether the application should be aborted. Self-testing methods have been used to tackle such verification tasks in a device-independent (DI) setting. However, these approaches commonly consider the limit of large (asymptotic), identically and independently distributed (IID) samples, which weakens the DI claim and poses serious challenges to their experimental implementation. Here we overcome these challenges by adopting a theoretical protocol enabling the certification of quantum states in the few-copies and non-IID regime and by leveraging a high-fidelity multipartite entangled photon source. This allows us to show the efficient and device-independent certification of a single copy of a four-qubit GHZ state that can readily be used for the robust and reliable implementation of quantum information tasks.
We present the new code ALCAR developed to model multidimensional, multi-energy-group neutrino transport in the context of supernovae and neutron-star mergers. The algorithm solves the evolution equations of the 0th- and 1st-order angular moments of the specific intensity, supplemented by an algebraic relation for the 2nd-moment tensor to close the system. The scheme takes into account frame-dependent effects of order O(v/c) as well as the most important types of neutrino interactions. The transport scheme is significantly more efficient than a multidimensional solver of the Boltzmann equation, while it is more accurate and consistent than the flux-limited diffusion method. The finite-volume discretization of the essentially hyperbolic system of moment equations employs methods well-known from hydrodynamics. For the time integration of the potentially stiff moment equations we employ a scheme in which only the local source terms are treated implicitly, while the advection terms are kept explicit, thereby allowing for an efficient computational parallelization of the algorithm. We investigate various problem setups in one and two dimensions to verify the implementation and to test the quality of the algebraic closure scheme. In our most detailed test, we compare a fully dynamic, one-dimensional core-collapse simulation with two published calculations performed with well-known Boltzmann-type neutrino-hydrodynamics codes and we find very satisfactory agreement.
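For context, a widely used algebraic two-moment closure of the kind referred to here expresses the 2nd-moment (pressure) tensor in terms of the energy density $E$ and flux $F^i$; the Levermore-type Eddington factor below is shown as one common example rather than the specific closure adopted in ALCAR:

$$P^{ij}=\left[\frac{1-\chi}{2}\,\delta^{ij}+\frac{3\chi-1}{2}\,n^{i}n^{j}\right]E,\qquad n^{i}=\frac{F^{i}}{|F|},\qquad \chi(f)=\frac{3+4f^{2}}{5+2\sqrt{4-3f^{2}}},\qquad f=\frac{|F|}{cE}.$$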
Axially symmetric stationary metrics governed by the Einstein-Euler equations for slowly rotating perfect fluids have been constructed in an arbitrarily large bounded domain containing the support of the mass density. However, the problem of global prolongation of the metric is still open. On the other hand, the so-called matter-vacuum matching problem, particularly as the source problem for the Kerr metric, has been discussed by several authors. This can be regarded as an approach to the same open problem from the opposite direction. We give a remark on this open problem.
We consider whether the asymptotic distributions for the log-likelihood ratio test statistic are expected to be Gaussian or chi-squared. Two straightforward examples provide insight on the difference.
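For reference, the classical (Wilks) result that motivates the chi-squared expectation: under standard regularity conditions, for a null hypothesis imposing $k$ constraints at an interior point of the parameter space,

$$-2\ln\lambda \;=\; -2\left[\ell(\hat{\theta}_{0})-\ell(\hat{\theta})\right]\ \xrightarrow{d}\ \chi^{2}_{k},$$

whereas departures from these conditions (for instance, a parameter value on the boundary of its allowed range) change the limiting distribution, which is where the Gaussian-versus-chi-squared question arises.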
We present a study of the radio continuum properties of two luminous/ultraluminous infrared galaxy samples: the OH megamaser (OHM) sample (74 objects) and the control sample (128 objects) without detected maser emission. We carried out pilot observations for 140 objects with the radio telescope RATAN-600 at 1.2, 2.3, 4.7, 8.2, 11.2, and 22.3 GHz in 2019-2021. The OHM sample has two times more flat-spectrum sources (32 per cent) than the control sample. Steep radio spectra prevail in both samples. The median spectral index at 4.7 GHz $\alpha_{4.7}=-0.59$ for the OHM sample, and $\alpha_{4.7}=-0.71$ for the non-OHM galaxies. We confirm a tight correlation of the far-infrared (FIR) and radio luminosities for the OHM sample. We found correlations between isotropic OH line luminosity $L_{OH}$ and the spectral index $\alpha_{4.7}$ ($\rho$=0.26, p-val.=0.04) and between $L_{OH}$ and radio luminosity $P_{1.4}$ ($\rho$=0.35, p-val.=0.005). Reviewing subsamples of masers powered by active galactic nuclei (AGNs) and star formation revealed insignificant differences for their FIR and radio properties. Nonetheless, AGN-powered galaxies exhibit larger scatter in a range of parameters and their standard deviations. The similarities in the radio and FIR properties in the two samples are presumably caused by the presence of a significant amount of AGN sources in both samples (47 and 30 per cent in the OHM and control samples) and/or possibly by the presence of undetected OH emission sources in the control sample.
Traditional ELM and its improved versions suffer from problems with outliers or noise due to overfitting, and with imbalance due to the data distribution. We propose a novel hybrid adaptive fuzzy ELM (HA-FELM), which introduces a fuzzy membership function into the traditional ELM method to deal with the above problems. We define the fuzzy membership function based not only on the distance between each sample and the class center but also on the density among samples, which is based on the quantum harmonic oscillator model. The proposed fuzzy membership function overcomes the shortcomings of the traditional fuzzy membership function and adjusts itself adaptively according to the specific distribution of different samples. Experiments show the proposed HA-FELM can produce better performance than SVM, ELM, and RELM in text classification.
We present an analysis of the effects of environment on the photometric properties of galaxies in the core of the Shapley Supercluster at z=0.05, one of the most massive structures in the local universe. The Shapley Optical Survey (SOS) comprises archive WFI optical imaging of a 2.0 deg^2 region containing the rich clusters A3556, A3558 and A3562 which demonstrate a highly complex dynamical situation including ongoing cluster mergers. The B-R/R colour-magnitude relation has an intrinsic dispersion of 0.045 mag and is 0.015\pm0.005 mag redder in the highest-density regions, indicative of the red sequence galaxy population being 500 Myr older in the cluster cores than towards the virial radius. The B-R colours of galaxies are dependent on their environment, whereas their luminosities are independent of the local density, except for the very brightest galaxies (M_R<-22). The global colours of faint (>M*+2) galaxies change from the cluster cores where ~90% of galaxies lie along the cluster red sequence to the virial radius, where the fraction has dropped to just ~20%. This suggests that processes related to the supercluster environment are responsible for transforming faint galaxies, rather than galaxy merging, which should be infrequent in any of the regions studied here. The largest concentrations of faint blue galaxies are found between the clusters, coincident with regions containing high fractions of ~L* galaxies with radio emission indicating starbursts. Their location suggests star-formation triggered by cluster mergers, in particular the merger of A3562 and the poor cluster SC1329-313, although they may also represent recent arrivals in the supercluster core complex. (abstract truncated)
Let $G$ be the semidirect product $\Gamma \rtimes F_2$ where $\Gamma$ is either the free group $F_n$, $n > 1$ or the fundamental group $S_g$ of a closed surface of genus $g > 1$. We prove that $G$ is incoherent, solving two problems posed by D. Wise. This implies an affirmative answer to a question of J. Hillman on the fundamental group of a surface bundle over a surface. Although many groups have been shown to be incoherent using virtual algebraic fibering, we also show that not every free-by-free group virtually algebraically fibers.
We construct a spherically symmetric noncommutative space in three dimensions by foliating the space with concentric fuzzy spheres. We show how to construct a gauge theory in this space and in particular we derive the noncommutative version of a Yang-Mills-Higgs theory. We find numerical monopole solutions of the equations of motion.
Owing to the increasing popularity of lead-based hybrid perovskites for photovoltaic (PV) applications, it is crucial to understand their defect physics and its influence on their optoelectronic properties. In this work, we simulate various point defects in pseudo-cubic structures of mixed iodide-bromide and bromide-chloride methylammonium lead perovskites with the general formula MAPbI_{3-y}Br_{y} or MAPbBr_{3-y}Cl_{y} (where y is between 0 and 3), and use first-principles density functional theory computations to study their relative formation energies and charge transition levels. We identify vacancy defects and the Pb-on-MA anti-site defect as the lowest energy native defects in each perovskite. We observe that while the low energy defects in all MAPbI_{3-y}Br_{y} systems only create shallow transition levels, the Br or Cl vacancy defects in the Cl-containing perovskites have low energy and form deep levels which become deeper for higher Cl content. Further, we study extrinsic substitution by different elements at the Pb site in MAPbBr_{3}, MAPbCl_{3} and the 50-50 mixed halide perovskite, MAPbBr_{1.5}Cl_{1.5}, and identify some transition metals that create lower energy defects than the dominant intrinsic defects and also create mid-gap charge transition levels.
We compute multiprecision solutions of the Lane-Emden equation. This differential equation arises when introducing the well-known polytropic model into the equation of hydrostatic equilibrium for a nondistorted star. Since such multiprecision computations are time-consuming, we apply parallel programming techniques to this problem and thus drastically reduce the execution time of the computations.
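As a rough illustration of the type of computation involved (not the authors' code, and with an arbitrary polytropic index, precision, and step size), the Lane-Emden equation $\theta'' + (2/\xi)\,\theta' + \theta^n = 0$ with $\theta(0)=1$, $\theta'(0)=0$ can be integrated in multiple precision with Python's mpmath, for example with a fixed-step RK4 scheme:

```python
# Sketch only: multiprecision RK4 integration of the Lane-Emden equation
# theta'' + (2/xi) theta' + theta^n = 0,  theta(0)=1, theta'(0)=0.
# Not the paper's code; the index, precision, and step size are arbitrary choices.
from mpmath import mp, mpf

mp.dps = 50          # working precision: 50 decimal digits
n = 3                # polytropic index (integer keeps theta**n real near the first zero)
h = mpf('1e-3')      # fixed step size

def rhs(xi, theta, dtheta):
    """First-order system (theta', theta'')."""
    if xi == 0:                       # regular singular point at the centre
        return dtheta, -theta**n / 3  # series limit: theta''(0) = -theta(0)^n / 3
    return dtheta, -2*dtheta/xi - theta**n

def lane_emden(n_steps=10000):
    xi, theta, dtheta = mpf(0), mpf(1), mpf(0)
    for _ in range(n_steps):
        k1 = rhs(xi,       theta,             dtheta)
        k2 = rhs(xi + h/2, theta + h*k1[0]/2, dtheta + h*k1[1]/2)
        k3 = rhs(xi + h/2, theta + h*k2[0]/2, dtheta + h*k2[1]/2)
        k4 = rhs(xi + h,   theta + h*k3[0],   dtheta + h*k3[1])
        theta  += h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
        dtheta += h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
        xi += h
        if theta <= 0:                # first zero xi_1 marks the stellar surface
            return xi, theta, dtheta
    return xi, theta, dtheta

print(lane_emden())
```

Note that in such a sketch the fixed step size, not the working precision, limits the final accuracy, which is precisely why genuine multiprecision work of this kind is expensive and benefits from parallelization.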
We construct an open enumerative theory for the Landau-Ginzburg (LG) model $(\mathbb{C}^2, \mu_r\times \mu_s, x^r+y^s)$. The invariants are defined as integrals of multisections of a Witten bundle with descendents over a moduli space that is a real orbifold with corners. In turn, a generating function for these open invariants yields the mirror LG model and a versal deformation of it with flat coordinates. After establishing an open topological recursion result, we prove an LG/LG open mirror symmetry theorem in dimension two with all descendents. The open invariants we define are not unique but depend on boundary conditions that, when altered, exhibit wall-crossing phenomena for the invariants. We describe an LG wall-crossing group classifying the wall-crossing transformations that can occur.
We report the detection of a type-B quasi-periodic oscillation (QPO) in the black hole X-ray binary Swift J1728.9-3613 observed by NICER during the 2019 outburst. A type-B QPO was observed for the first two days and disappeared as the flux increased, but reappeared at $\sim$ 7.70 Hz when the flux dramatically decreased. The source was found in the soft-intermediate state during these observations. We further studied the energy dependence of the QPO. We found that the QPO was observed only at higher energies, implying that it possibly originates in the corona emitting higher energy photons through the inverse Compton process. The variation of the spectral parameters can be explained with the disk truncation model. The fractional rms was found to increase monotonically with energy. The phase lag spectrum followed a U-shaped curve. The rms and phase lag spectra are modelled and explained with the single-component Comptonization model vkompthdk.
We present a family-non-universal extension of the Standard Model where the first two families feature both quark-lepton and electroweak-flavour unification, via the $SU(4) \times Sp(4)_L \times Sp(4)_R$ gauge group, whereas quark-lepton unification for the third family is realised \`a la Pati-Salam. Via staggered symmetry breaking steps, this construction offers a natural explanation for the observed hierarchical pattern of fermion masses and mixings, while providing a natural suppression of flavour-changing processes involving the first two generations. The last-but-one step in the symmetry-breaking chain is a non-universal 4321 model, characterised by a vector leptoquark naturally coupled mainly to the third generation. The stability of the Higgs sector points to a 4321$\to$SM symmetry-breaking scale around the TeV, with interesting phenomenological consequences in $B$ physics and collider processes that differ from those of other known 4321 completions.
We report on a search for the flavor-changing neutral-current decay $D^0 \to \mu^+\mu^-$ in $p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV using 360 pb$^{-1}$ of integrated luminosity collected by the CDF II detector at the Fermilab Tevatron collider. A displaced vertex trigger selects long-lived $D^0$ candidates in the $\mu^+\mu^-$, $\pi^+\pi^-$, and $K^-\pi^+$ decay modes. We use the Cabibbo-favored $D^0 \to K^-\pi^+$ channel to optimize the selection criteria in an unbiased manner, and the kinematically similar $D^0 \to \pi^+\pi^-$ channel for normalization. We set an upper limit on the branching fraction $\mathcal{B}(D^0 \to \mu^+\mu^-) < 2.1 \times 10^{-7}$ ($3.0 \times 10^{-7}$) at the 90% (95%) confidence level.
Vibrational dynamics in conventional molecules usually takes place on a timescale of picoseconds or shorter. A striking exception are ultralong-range Rydberg molecules, for which dynamics is dramatically slowed down as a consequence of the huge bond length of up to several micrometers. Here, we report on the direct observation of vibrational dynamics of a recently observed Rydberg-atom-ion molecule. By applying a weak external electric field of a few mV/cm, we are able to control the orientation of the photoassociated ultralong-range Rydberg molecules and induce vibrational dynamics by quenching the electric field. A high resolution ion microscope allows us to detect the molecule's orientation and its temporal vibrational dynamics in real space. Our study opens the door to the control of molecular dynamics in Rydberg molecules.
The rise in urbanization throughout the United States (US) in recent years has required urban planners and transportation engineers to have greater consideration for the transportation services available to residents of a metropolitan region. This compels transportation authorities to provide better and more reliable modes of public transit through improved technologies and increased service quality. These improvements can be achieved by identifying and understanding the factors that influence urban public transit demand. Common factors that can influence urban public transit demand can be internal and/or external factors. Internal factors include policy measures such as transit fares, service headways, and travel times. External factors can include geographic, socioeconomic, and highway facility characteristics. There is inherent simultaneity between transit supply and demand, thus a two-stage least squares (2SLS) regression modeling procedure should be conducted to forecast urban transit supply and demand. As such, two multiple linear regression models should be developed: one to predict transit supply and a second to predict transit demand. It was found that service area density, total average cost per trip, and the average number of vehicles operated in maximum service can be used to forecast transit supply, expressed as vehicle revenue hours. Furthermore, estimated vehicle revenue hours and total average fares per trip can be used to forecast transit demand, expressed as unlinked passenger trips. Additional data such as socioeconomic information of the surrounding areas for each transit agency and travel time information of the various transit systems would be useful to improve upon the models developed.
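A minimal sketch of the two-stage least squares procedure described above is given below; the synthetic arrays and variable names (instruments, fares, vehicle revenue hours, unlinked passenger trips) are hypothetical placeholders rather than the study's actual data or specification.

```python
# Minimal two-stage least squares (2SLS) sketch for the supply/demand setup described
# above. The data are synthetic and the variable choices are illustrative assumptions,
# not the authors' estimated model.
import numpy as np

def ols(X, y):
    """Ordinary least squares with an intercept; returns coefficients and fitted values."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta, X1 @ beta

# Hypothetical arrays, one row per transit agency:
#   supply = vehicle revenue hours, demand = unlinked passenger trips,
#   Z      = exogenous instruments (service area density, cost per trip,
#            vehicles operated in maximum service), fare = average fare per trip.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 3))
supply = Z @ np.array([0.8, 0.5, 1.2]) + rng.normal(scale=0.3, size=200)
fare = rng.normal(size=200)
demand = 0.9 * supply - 0.4 * fare + rng.normal(scale=0.3, size=200)

# Stage 1: regress the endogenous supply variable on the exogenous instruments.
_, supply_hat = ols(Z, supply)

# Stage 2: regress demand on the *fitted* supply and the exogenous fare variable,
# which removes the simultaneity bias of a naive OLS regression of demand on supply.
beta_demand, _ = ols(np.column_stack([supply_hat, fare]), demand)
print("2SLS demand-equation coefficients [const, supply_hat, fare]:", beta_demand)
```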
We investigate low-temperature electronic properties of the nondimeric organic superconductor $\beta^{\prime\prime}$-(BEDT-TTF)$_4$[(H$_3$O)Ga(C$_2$O$_4$)$_3$]PhNO$_2$. By examining ultrasonic properties, charge disproportionation (CD) without magnetic field dependence is detected below $T_{\rm CD}$$\sim$8~K just above the superconducting critical temperature $T_{\rm c}$$\sim$6~K. From quantum oscillations in high fields, we find variation in the Fermi surface and mass enhancement induced by the CD. Heat capacity studies elucidate that the superconducting gap function is fully gapped in the Fermi surface, but anisotropic with fourfold symmetry. We point out that the pairing mechanism of the superconductivity is possibly dominated by charge fluctuations.
In this paper, we study the physical significance of the thermodynamic volumes of AdS black holes using the Noether charge formalism of Iyer and Wald. After applying this formalism to study the extended thermodynamics of a few examples, we discuss how the extended thermodynamics interacts with the recent complexity = action proposal of Brown et al. (CA-duality). We, in particular, discover that their proposal for the late time rate of change of complexity has a nice decomposition in terms of thermodynamic quantities reminiscent of the Smarr relation. This decomposition strongly suggests a geometric, and via CA-duality holographic, interpretation for the thermodynamic volume of an AdS black hole. We go on to discuss the role of thermodynamics in complexity = action for a number of black hole solutions, and then point out the possibility of an alternate proposal, which we dub "complexity = volume 2.0". In this alternate proposal, the complexity would be thought of as the spacetime volume of the Wheeler-DeWitt patch. Finally, we provide evidence that, in certain cases, our proposal for complexity is consistent with the Lloyd bound whereas CA-duality is not.
The exploration of neural network quantum states has become widespread in the search for ground states of complex quantum many-body systems. However, achieving high precision remains challenging due to intricate sign structures and the exponential growth of the Hilbert space. In this work, we propose a neural network state method confined to a significantly smaller symmetric subspace, evaluated with both full-space summation and Markov chain Metropolis sampling. Using symmetries and group theory, the proposed method significantly reduces the number of parameters in the neural network states and achieves better accuracy and convergence properties. We validate our method using the frustrated spin-$1/2$ $J_1$-$J_2$ antiferromagnetic Heisenberg chain and compare its performance against NetKet, the standard library of neural network states. The results indicate that our symmetrized neural network states achieve a substantial improvement over the conventional neural network states method, reducing energy errors by two orders of magnitude. We also compare degenerate eigenstates with different quantum numbers, highlighting the advantage of operating within a smaller variational space.
Computer science would not be the same without personal computers. In the West the so called PC revolution started in the late '70s and has its roots in hobbyists and do-it-yourself clubs. In the following years the diffusion of home and personal computers has made the discipline closer to many people. A bit later, to a lesser extent, yet in a similar way, the revolution took place also in East European countries. Today, the scenario of personal computing has completely changed, however the computers of the '80s are still objects of fascination for a number of retrocomputing fans who enjoy using, programming and hacking the old 8-bits. The paper highlights the continuity between yesterday's hobbyists and today's retrocomputing enthusiasts, particularly focusing on East European PCs. Besides the preservation of old hardware and software, the community is engaged in the development of emulators and cross compilers. Such tools can be used for historical investigation, for example to trace the origins of the BASIC interpreters loaded in the ROMs of East European PCs.
In this paper the theory of high-frequency acoustic signal detection by Schottky diodes is presented. Physically, the detection is found to be due to the quasi-static screening, by charge carriers, of the potential perturbation caused by the acoustic strain. The total charge required for screening changes with the value of the strain at the edge of the semiconductor depletion region and at the metal-semiconductor interface, giving rise to a displacement current. The magnitude and frequency dependence of the electrical signals are analyzed for both piezoelectric and deformation potential coupling mechanisms. The obtained results are in good agreement with recent experimental observations and suggest the feasibility of high-frequency (up to terahertz band) acoustic wave detection provided that a proper electrical measuring scheme is available.
For a double solid $V\to P_3(C)$ branched over a surface $B\subset P_3(C)$ with only ordinary nodes as singularities, we give a set of generators of the divisor class group $Pic(\tilde{V})$ in terms of contact surfaces of $B$ with only superisolated singularities in the nodes of $B$. As an application we give a condition when the integral cohomology of $\tilde{V}$ has no 2-torsion. All possible cases are listed if $B$ is a quartic surface. Furthermore we give a new lower bound for the dimension of the code of $B$.
Modeling the formation of the ice giants Uranus and Neptune is a long-standing problem in planetary science. Due to gas drag, collisional damping, and resonant shepherding, the planetary embryos push the planetesimals out of their reach and thus stop growing (Levison et al. 2010). This problem persists independently of whether the accretion took place at the current locations of the ice giants or closer to the Sun. Instead of trying to push the runaway/oligarchic growth of planetary embryos up to 10-15 Earth masses, we envision the possibility that the planetesimal disk could generate a system of planetary embryos of only 1-3 Earth masses. Then we investigate whether these embryos could have collided with each other and grown enough to reach the masses of current Uranus and Neptune. Our results point to two major problems. First, there is typically a large difference in mass between the first and the second most massive core formed and retained beyond Saturn. Second, in many simulations the final planetary system has more than two objects beyond Saturn. The growth of a major planet from a system of embryos requires strong damping of eccentricities and inclinations from the disk of gas. But strong damping also favors embryos and cores finding a stable resonant configuration, so that systems with more than two surviving objects are found. In addition to these problems, in order to have substantial mutual accretion among embryos, it is necessary to assume that the surface density of the gas was several times higher than that of the minimum-mass solar nebula. However, this contrasts with the common idea that Uranus and Neptune formed in a gas-starved disk, which is suggested by the relatively small amount of hydrogen and helium contained in the atmospheres of these planets. Only one of our simulations "by chance" successfully reproduced the structure of the outer Solar System.
Understanding and controlling the flow of heat is a major challenge in nanoelectronics. When a junction is driven out of equilibrium by light or the flow of electric charge, the vibrational and electronic degrees of freedom are, in general, no longer described by a single temperature [1-6]. Moreover, characterizing the steady-state vibrational and electronic distributions in situ is extremely challenging. Here we show that surface-enhanced Raman emission may be used to determine the effective temperatures for both the vibrational modes and the flowing electrons in a biased metallic nanoscale junction decorated with molecules [7]. Molecular vibrations show mode-specific pumping by both optical excitation [8] and dc current [9], with effective temperatures exceeding several hundred Kelvin. Anti-Stokes electronic Raman emission [10,11] indicates that the electronic effective temperature also increases, to as much as three times its no-current value at bias voltages of a few hundred mV. While the precise effective temperatures are model-dependent, the trends as a function of bias conditions are robust, and allow direct comparisons with theories of nanoscale heating.
We have used the IRAM 30-m telescope to map selected targets in the HCO$^+$ (1-0) and H$^{13}$CO$^+$ (1-0) lines in order to search for evidence of gas infall in the clumps. In this paper, we report the mapping results for 13 targets. All of these targets show HCO$^+$ emission, while H$^{13}$CO$^+$ emission is observed in ten of them. The HCO$^+$ integrated intensity maps of ten targets show clear clumpy structures, and nine targets show clumpy structures in the H$^{13}$CO$^+$ maps. Using the RADEX radiative transfer code, we estimate the column density of H$^{13}$CO$^+$, and determine the abundance ratio [H$^{13}$CO$^+$]/[H$_2$] to be approximately 10$^{-12}$ to 10$^{-10}$. Based on the asymmetry of the HCO$^+$ line profiles, we identify 11 targets that show blue profiles, while six clumps show evidence of global infall. We use the RATRAN and two-layer models to fit the HCO$^+$ line profiles of these infall sources, and analyze the spatial distribution of the infall velocity. The average infall velocities estimated by these two models are 0.24 -- 1.85 km s$^{-1}$ and 0.28 -- 1.45 km s$^{-1}$, respectively. The mass infall rate ranges from approximately 10$^{-5}$ to 10$^{-2}$ M$_{\odot}$ yr$^{-1}$, which suggests that intermediate- or high-mass stars may be forming in the target regions.
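For orientation, two diagnostics commonly used in this kind of infall analysis are recalled below; the exact criteria and estimators adopted in the work above may differ.

```latex
% Commonly used diagnostics (not necessarily the exact criteria used here): the
% line-asymmetry parameter of the optically thick tracer relative to the thin one,
\[
  \delta V = \frac{V_{\mathrm{HCO^+}} - V_{\mathrm{H^{13}CO^+}}}{\Delta V_{\mathrm{H^{13}CO^+}}},
  \qquad \delta V \lesssim -0.25 \;\Rightarrow\; \text{blue (infall) profile},
\]
% and a simple spherically symmetric estimate of the mass infall rate,
\[
  \dot{M}_{\mathrm{inf}} \simeq 4\pi R^{2}\,\mu\, m_{\mathrm{H}}\, n_{\mathrm{H_2}}\, V_{\mathrm{inf}},
\]
% where R is the clump radius, n_{H_2} the gas density, and V_inf the infall velocity.
```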
We present the results of a 9.3 square degree infrared (ZYJHK) survey in the Upper Scorpius association extracted from the UKIRT Infrared Deep Sky Survey (UKIDSS) Galactic Cluster Survey Early Data Release. We have selected a total of 112 candidates from the ($Z-J$,$Z$) colour-magnitude diagram over the Z=12.5-20.5 magnitude range, corresponding to M = 0.25-0.01 Msun at an age of 5 Myr and a distance of 145 pc. Additional photometry in J and K filters revealed most of them as reddened stars, leaving 32 possible members. Among them, 15 have proper motion consistent with higher mass members from Hipparcos and optical spectra with strong Halpha in emission and weak gravity features. We have also extracted two lower mass candidate members for which no optical spectra are in hand. Three members exhibit strong Halpha equivalent widths (>20 Angstroms), suggesting that they could still undergo accretion whereas two other dwarfs show signs of chromospheric activity. The likelihood of the binarity of a couple of new stellar and substellar members is discussed as well.
We consider an inertial active Ornstein-Uhlenbeck particle self-propelling in a saw-tooth ratchet potential. Using Langevin simulations and the matrix continued fraction method, the particle transport, steady-state diffusion, and coherence in transport are investigated throughout the ratchet. Spatial asymmetry is found to be the key criterion for the possibility of directed transport in the ratchet. Interestingly, the simulated particle trajectories and the corresponding position and velocity distribution functions reveal that the system passes through an activity-induced transition in the transport from the running phase to the locked phase as the self-propulsion (activity) time of the dynamics increases. This is further corroborated by the mean square displacement (MSD) calculation. The MSD is suppressed with increasing persistence of activity in the medium and finally approaches zero for very large values of the self-propulsion time, reflecting a kind of trapping of the particle by the ratchet for long-lasting persistence of activity in the medium. The non-monotonic behaviour of the particle current and Peclet number with self-propulsion time confirms that the particle transport and its coherence can be enhanced or reduced by fine-tuning the persistence time of activity. Moreover, for an intermediate range of self-propulsion times as well as for an intermediate range of particle masses, even though the particle current shows a pronounced unusual maximum with mass, there is no enhancement in the Peclet number; instead, the Peclet number decreases with mass, confirming the degradation of coherence in transport. Finally, from the analytical calculations, it is observed that for a highly viscous medium, where the inertial influence is negligibly small, the particle current approaches the current in the overdamped regime.
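A minimal Euler-Maruyama sketch of such an inertial active Ornstein-Uhlenbeck particle in an asymmetric periodic potential is given below; the potential form, parameter values, and observables are illustrative assumptions, not the model settings of the study.

```python
# Sketch of an Euler-Maruyama integration of an inertial active Ornstein-Uhlenbeck
# particle in a saw-tooth-like ratchet potential. Parameter values and the exact
# form of the potential are illustrative, not those used in the study above.
import numpy as np

def ratchet_force(x, V0=1.0, L=1.0, a=0.8):
    """Force -dV/dx for a simple asymmetric periodic potential
    V(x) = V0 [ sin(2 pi x / L) + (a/2) sin(4 pi x / L) ]."""
    k = 2 * np.pi / L
    return -V0 * (k * np.cos(k * x) + a * k * np.cos(2 * k * x))

def simulate(m=1.0, gamma=1.0, D=0.5, tau=1.0, dt=1e-3, n_steps=100_000, seed=1):
    rng = np.random.default_rng(seed)
    x, v, eta = 0.0, 0.0, 0.0          # position, velocity, OU active force
    traj = np.empty(n_steps)
    for i in range(n_steps):
        # Ornstein-Uhlenbeck active force: tau * d(eta)/dt = -eta + sqrt(2 D) xi(t)
        eta += (-eta * dt + np.sqrt(2 * D * dt) * rng.standard_normal()) / tau
        # Underdamped dynamics: m dv/dt = -gamma v + F_ratchet(x) + eta
        v += (-gamma * v + ratchet_force(x) + eta) * dt / m
        x += v * dt
        traj[i] = x
    return traj, dt

traj, dt = simulate()
print("mean particle current ~", (traj[-1] - traj[0]) / (len(traj) * dt))
print("squared displacement from start (crude MSD proxy) ~", np.mean((traj - traj[0])**2))
```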
Through stacking engineering of two-dimensional (2D) materials, a switchable interface polarization can be generated through interlayer sliding, so-called sliding ferroelectricity, which is advantageous over traditional ferroelectricity due to its ultra-thin thickness, high switching speed, and low fatigue. However, 2D materials with intrinsic sliding ferroelectricity are still rare, with the exception of rhombohedral-stacked MoS2, which limits sliding ferroelectricity for practical applications such as high-speed storage, photovoltaics, and neuromorphic computing. Here, we report the observation of sliding ferroelectricity with multiple states in undoped rhombohedral-stacked InSe ($\gamma$-InSe) via dual-frequency resonance tracking piezoresponse force microscopy, scanning Kelvin probe microscopy, and conductive atomic force microscopy. A tunable bulk photovoltaic effect via the electric field is achieved in the graphene/$\gamma$-InSe/graphene tunneling device with a photovoltaic current density of ~15 mA/cm$^2$, which is attributed to the multiple sliding steps in $\gamma$-InSe according to our theoretical calculations. The vdW tunneling device also features a high photoresponsivity of ~255 A/W and a fast response time for real-time imaging. Our work not only enriches rhombohedral-stacked 2D materials for sliding ferroelectricity, but also sheds light on their potential for tunable photovoltaics and imaging applications.
As part of a long term monitoring campaign of Mrk 335, deep XMM-Newton observations catch the narrow-line Seyfert 1 galaxy (NLS1) in a complex, intermediate flux interval as the active galaxy is transiting from low- to high-flux. Other works on these same data examined the general behaviour of the NLS1 (Grupe et al.) and the conditions of its warm absorber (Longinotti et al.). The analysis presented here demonstrates the X-ray continuum and timing properties can be described in a self-consistent manner adopting a blurred reflection model with no need to invoke partial covering. The rapid spectral variability appears to be driven by changes in the shape of the primary emitter that is illuminating the inner accretion disc around a rapidly spinning black hole (a > 0.7). While light bending is certainly prominent, the rather constant emissivity profile and break radius obtained in our spectral fitting suggest that the blurring parameters are not changing as would be expected if the primary source is varying its distance from the disc. Instead changes could be intrinsic to the power law component. One possibility is that material in an unresolved jet above the disc falls to combine with material at the base of the jet producing the changes in the primary emitter (spectral slope and flux) without changing its distance from the disc.
R-parity violating supersymmetric models (RPV SUSY) are becoming increasingly more appealing than their R-parity conserving counterparts in view of the hitherto non-observation of SUSY signals at the LHC. In this talk, RPV scenarios where neutrino masses are naturally generated are discussed, namely RPV through bilinear terms (bRPV) and the "mu from nu" supersymmetric standard model. The latter is characterised by a rich Higgs sector that easily accommodates a 125-GeV Higgs boson. The phenomenology of such models at the LHC is reviewed, with emphasis on final states with displaced objects, and relevant results obtained by LHC experiments are presented. The implications of these theoretical proposals for dark matter are also addressed.
We consider generalized inversions and descents in finite Weyl groups. We establish Coxeter-theoretic properties of indicator random variables of positive roots such as the covariance of two such indicator random variables. We then compute the variances of generalized inversions and descents in classical types. We finally use the dependency graph method to prove central limit theorems for general antichains in root posets and in particular for generalized descents, and then for generalized inversions.
Jets around low- and intermediate-mass young stellar objects (YSOs) contain a fossil record of the recent accretion and outflow activity of their parent star-forming systems. We aim to understand whether the accretion/ejection process is similar across the entire stellar mass range of the parent YSOs. To this end we have obtained VLT/X-shooter spectra of HH 1042 and HH 1043, two newly discovered jets in the massive star-forming region RCW 36. HH 1042 is associated with the intermediate-mass YSO 08576nr292. Over 90 emission lines are detected in the spectra. High-velocity (up to 220 km/s) blue- and redshifted emission from a bipolar flow is observed in typical shock tracers. Low-velocity emission from the background cloud is detected in nebular tracers, including lines from high ionization species. We applied combined optical and infrared spectral diagnostic tools in order to derive the physical conditions (density, temperature, and ionization) in the jets. The measured mass outflow rates are Mjet ~ 10^-7 Msun/yr. We measure a high accretion rate for HH 1042 (Macc ~ 10^-6 Msun/yr) and Mjet/Macc ~ 0.1, comparable to low-mass sources and consistent with models for magneto-centrifugal jet launching. The knotted structure and velocity spread in both jets are interpreted as fossil signatures of a variable outflow rate. The mean velocities in both lobes of the jets are comparable, but the variations in Mjet and velocity in the two lobes are not symmetric, suggesting that the launching mechanism on either side of the accretion disk is not synchronized. For HH 1042, we have constructed an interpretative physical model with a stochastic or periodic outflow rate and a description of a ballistic flow as its constituents. The knotted structure and velocity spread can be reproduced qualitatively with the model, indicating that the outflow velocity varies on timescales on the order of 100 yr.
The discipline of process mining has a solid track record of successful applications to the healthcare domain. Within such research space, we conducted a case study related to the Intensive Care Unit (ICU) ward of the Uniklinik Aachen hospital in Germany. The aim of this work is twofold: developing a normative model representing the clinical guidelines for the treatment of COVID-19 patients, and analyzing the adherence of the observed behavior (recorded in the information system of the hospital) to such guidelines. We show that, through conformance checking techniques, it is possible to analyze the care process for COVID-19 patients, highlighting the main deviations from the clinical guidelines. The results provide physicians with useful indications for improving the process and ensuring service quality and patient satisfaction. We share the resulting model as an open-source BPMN file.
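A conformance-checking sketch in the spirit of this case study is given below. It relies on the pm4py library's simplified interface as recalled from recent 2.x releases (function names and return structures may differ between versions, so treat the calls as assumptions), and the file paths are placeholders, not the hospital's actual artifacts.

```python
# Illustrative conformance-checking sketch, not the case study's actual pipeline.
# pm4py calls are per the simplified interface of pm4py 2.x as recalled; verify
# against the installed version. File names are hypothetical placeholders.
import pm4py

# Event log extracted from the hospital information system (placeholder path).
log = pm4py.read_xes("icu_covid19_event_log.xes")

# Normative BPMN model encoding the clinical guidelines (placeholder path).
bpmn_model = pm4py.read_bpmn("covid19_guidelines.bpmn")

# Convert the BPMN model to a Petri net, the representation used by the
# conformance-checking algorithms.
net, initial_marking, final_marking = pm4py.convert_to_petri_net(bpmn_model)

# Token-based replay: per-trace fitness and diagnostics of deviations.
diagnostics = pm4py.conformance_diagnostics_token_based_replay(
    log, net, initial_marking, final_marking
)
n_fit = sum(1 for d in diagnostics if d["trace_is_fit"])
print(f"{n_fit}/{len(diagnostics)} patient traces fully conform to the guideline model")
```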
We present a catalog of 182 galaxy clusters detected through the Sunyaev-Zel'dovich effect by the Atacama Cosmology Telescope in a contiguous 987.5 deg$^{2}$ field. The clusters were detected as SZ decrements by applying a matched filter to 148 GHz maps that combine the original ACT equatorial survey with data from the first two observing seasons using the ACTPol receiver. Optical/IR confirmation and redshift measurements come from a combination of large public surveys and our own follow-up observations. Where necessary, we measured photometric redshifts for clusters using a pipeline that achieves accuracy $\Delta z/(1 + z)=0.015$ when tested on SDSS data. Under the assumption that clusters can be described by the so-called Universal Pressure Profile and its associated mass-scaling law, the full signal-to-noise > 4 sample spans the mass range $1.6 < M^{\rm UPP}_{\rm 500c}/10^{14}{\rm M}_{\odot}<9.1$, with median $M^{\rm UPP}_{\rm 500c}=3.1 \times 10^{14}$ M$_{\odot}$. The sample covers the redshift range $0.1 < z < 1.4$ (median $z = 0.49$) and 28 clusters are new discoveries (median $z = 0.80$). We compare our catalog with other overlapping cluster samples selected at SZ, optical, and X-ray wavelengths. We find the ratio of the UPP-based SZ mass to richness-based weak-lensing mass is $\langle M^{\rm UPP}_{\rm 500c} \rangle / \langle M^{\rm \lambda WL}_{\rm 500c} \rangle = 0.68 \pm 0.11$. After applying this calibration, the mass distribution for clusters with $M_{\rm 500c} > 4 \times 10^{14}$ M$_{\odot}$ is consistent with the number of such clusters found in the South Pole Telescope SZ survey.
We prove that a smooth proper universally CH_0-trivial variety X over a field k has universally trivial Brauer group. This fills a gap in the literature concerning the p-torsion of the Brauer group when k has characteristic p.
The discovery of the highly relativistic neutron star (NS) binary (in which both NS's are pulsars) not only increases the estimated merging rate for NS binaries by a large factor, but also adds the missing link in the double helium star model of binary NS evolution. This model gives $\sim 20$ times more gravitational mergers of low-mass black-hole (LMBH)-NS binaries than of NS binaries, whatever the rate for the latter is.
Magnetotransport in chaotic quantum dots at low magnetic fields is investigated by means of a tight binding Hamiltonian on L x L clusters of the square lattice. Chaoticity is induced by introducing L bulk vacancies. The dependence of weak localization on the Fermi energy, dot size and lead width is investigated in detail and the results compared with those of previous analyses, in particular with random matrix theory predictions. Our results indicate that the dependence of the critical flux Phi_c on the square root of the number of open modes, as predicted by random matrix theory, is obscured by the strong energy dependence of the proportionality constant. Instead, the size dependence of the critical flux predicted by Efetov and random matrix theory, namely, Phi_c ~ sqrt{1/L}, is clearly illustrated by the present results. Our numerical results also show that the weak localization term significantly decreases as the lead width W approaches L. However, calculations for W=L indicate that the weak localization effect does not disappear as L increases.
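An illustrative construction of such a Hamiltonian (not the authors' code; lattice size, flux, and vacancy placement are arbitrary) is sketched below, with the magnetic field introduced through Peierls phases in a Landau-type gauge.

```python
# Illustrative construction (not the authors' code) of a tight-binding Hamiltonian
# on an L x L square cluster with L randomly placed bulk vacancies and a uniform
# perpendicular magnetic field introduced through Peierls phases.
import numpy as np

def dot_hamiltonian(L=12, flux_per_plaquette=0.01, t=1.0, seed=0):
    """Return the Hamiltonian of the chaotic dot (vacancy sites removed).

    flux_per_plaquette is in units of the flux quantum."""
    rng = np.random.default_rng(seed)
    idx = lambda x, y: x * L + y
    H = np.zeros((L * L, L * L), dtype=complex)

    # Landau-type gauge: hopping along x picks up a phase that grows with y.
    for x in range(L):
        for y in range(L):
            if x + 1 < L:   # hopping along x with Peierls phase
                phase = np.exp(2j * np.pi * flux_per_plaquette * y)
                H[idx(x, y), idx(x + 1, y)] = -t * phase
                H[idx(x + 1, y), idx(x, y)] = -t * np.conj(phase)
            if y + 1 < L:   # hopping along y
                H[idx(x, y), idx(x, y + 1)] = -t
                H[idx(x, y + 1), idx(x, y)] = -t

    # Introduce L bulk (non-edge) vacancies by deleting the corresponding sites.
    bulk = [idx(x, y) for x in range(1, L - 1) for y in range(1, L - 1)]
    vacancies = rng.choice(bulk, size=L, replace=False)
    keep = np.setdiff1d(np.arange(L * L), vacancies)
    return H[np.ix_(keep, keep)]

H = dot_hamiltonian()
print("lowest eigenvalues:", np.linalg.eigvalsh(H)[:5])
```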
This paper describes a bounded generation result concerning the minimal natural number $K$ such that for $Q(C_2,2R):=\{A\varepsilon_{\phi}(2x)A^{-1}|x\in R,A\in{\rm Sp}_4(R),\phi\in C_2\}$, one has $N_{C_2,2R}=\{X_1\cdots X_K|\forall 1\leq i\leq K:X_i\in Q(C_2,2R)\}$ for rings of algebraic integers $R$ and the principal congruence subgroup $N_{C_2,2R}$ in ${\rm Sp}_4(R).$ This gives an explicit version of an abstract bounded generation result of a similar type as presented by Morris. Furthermore, the result presented does not depend on several number-theoretic quantities unlike Morris' result. Using this bounded generation result, we further give explicit bounds for the strong boundedness of ${\rm Sp}_4(R)$ for certain examples of rings $R,$ thereby giving explicit versions of results in an earlier paper. We further give a classification of normally generating subsets of ${\rm Sp}_4(R)$ for $R$ a ring of algebraic integers.
We present the new parallel version (pCRASH2) of the cosmological radiative transfer code CRASH2 for distributed memory supercomputing facilities. The code is based on a static domain decomposition strategy inspired by the geometric dilution of photons in the optically thin case that ensures a favourable performance speed-up with an increasing number of computational cores. Linear speed-up is ensured as long as the number of radiation sources is equal to or larger than the number of computational cores. The propagation of rays is segmented, and rays are only propagated through one sub-domain per time step to guarantee an optimal balance between communication and computation. We have extensively checked pCRASH2 with a standardised set of test cases to validate the parallelisation scheme. The parallel version of CRASH2 can easily handle the propagation of radiation from a large number of sources and is ready for the extension of the ionisation network to species other than hydrogen and helium.
I review recently completed (since Lattice 2013) and ongoing lattice calculations in charm and bottom flavor physics. A comparison of the precision of lattice and experiment is made using both current experimental results and projected experimental precision in 2020. The combination of experiment and theory reveals several tensions between nature and the Standard Model. These tensions are reviewed in light of recent lattice results.
We present a new, inexpensive, bench-top method for measuring groove period over large areas with high mapping resolution and high measurement accuracy, dubbed the grating mapper for accurate period (GMAP). The GMAP has the ability to measure large groove period changes and non-parallel grooves, both of which cannot be measured via optical interferometry. In this paper, we detail the calibration and setup of the GMAP, and employ the instrument to measure three distinct gratings. Two of these measured gratings have customized groove patterns that prevent them from being measured via other traditional methods, such as optical interferometry. Our implementation of this tool achieves a spatial resolution of 0.1 mm$\times$0.1 mm and a period error of 1.7 nm for a 3 $\mu$m groove period.
Hyperparameter tuning is one of the most time-consuming workloads in deep learning. State-of-the-art optimizers, such as AdaGrad, RMSProp and Adam, reduce this labor by adaptively tuning an individual learning rate for each variable. Recently researchers have shown renewed interest in simpler methods like momentum SGD as they may yield better test metrics. Motivated by this trend, we ask: can simple adaptive methods based on SGD perform as well or better? We revisit the momentum SGD algorithm and show that hand-tuning a single learning rate and momentum makes it competitive with Adam. We then analyze its robustness to learning rate misspecification and objective curvature variation. Based on these insights, we design YellowFin, an automatic tuner for momentum and learning rate in SGD. YellowFin optionally uses a negative-feedback loop to compensate for the momentum dynamics in asynchronous settings on the fly. We empirically show that YellowFin can converge in fewer iterations than Adam on ResNets and LSTMs for image recognition, language modeling and constituency parsing, with a speedup of up to 3.28x in synchronous and up to 2.69x in asynchronous settings.
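For reference, the momentum SGD (heavy-ball) update that such a tuner controls is simple enough to state in a few lines; the toy quadratic objective and the hand-picked learning rate and momentum below are arbitrary illustrations, not YellowFin's tuning rule.

```python
# Minimal sketch of the momentum SGD update that a tuner like the one described
# would control; the quadratic objective, learning rate, and momentum value are
# arbitrary and purely illustrative.
import numpy as np

def momentum_sgd(grad_fn, w0, lr=0.05, momentum=0.9, n_steps=200):
    """Heavy-ball update: v <- momentum*v - lr*grad(w), then w <- w + v."""
    w = np.array(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(n_steps):
        v = momentum * v - lr * grad_fn(w)
        w = w + v
    return w

# Toy ill-conditioned quadratic 0.5 * w^T A w; the curvature spread loosely mimics
# the "objective curvature variation" robustness question raised above.
A = np.diag([1.0, 25.0])
grad = lambda w: A @ w
print("final iterate:", momentum_sgd(grad, w0=[5.0, 5.0]))
```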
Two new forms of strongly coupled plasmas will be discussed. They have become possible to create and observe in the laboratory only recently and exhibit a wealth of intriguing complex behavior which can be studied, in many cases for the first time, experimentally. Plasmas, gases of charged particles, are universal in the sense that certain properties of complex behavior depend only on ratios of characteristic parameters of the plasma, not on the parameters themselves. Therefore, it is of fundamental and far-reaching consequence to be able to create and observe a strongly coupled plasma, since its behavior is paradigmatic for an entire class of plasmas.
The gluon mass generation is a purely non-perturbative effect, and the natural framework to study it in the continuum is that of the Schwinger-Dyson equations (SDEs) of the theory. At the level of the SDEs the generation of such a mass is associated with the existence of infrared finite solutions for the gluon propagator. From the theoretical point of view, dynamical gluon mass generation has been traditionally plagued with seagull divergences. In this work, we will review how such divergences can be eliminated completely by virtue of a characteristic identity, valid in dimensional regularization. As a pedagogical example, we will first discuss in the context of scalar QED how it is possible to eliminate all seagull divergences by triggering the aforementioned special identity, which enforces the masslessness of the photon. Then, we will discuss what happens in QCD and present an Ansatz for the three-gluon vertex which completely eliminates all seagull divergences and at the same time allows for the possibility of dynamical gluon mass generation.
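One common way of writing the dimensional-regularization identity alluded to above (conventions vary, and it is recalled here only for orientation) is as the vanishing of the integral of a total derivative:

```latex
% Seagull identity in dimensional regularization (one common form; conventions vary):
\[
  \int\!\frac{d^{d}k}{(2\pi)^{d}}\,
  \frac{\partial}{\partial k_{\mu}}\!\left[k_{\mu}\, f(k^{2})\right] = 0
  \;\;\Longrightarrow\;\;
  \int\!\frac{d^{d}k}{(2\pi)^{d}}
  \left[\,k^{2}\,\frac{\partial f(k^{2})}{\partial k^{2}}
        + \frac{d}{2}\, f(k^{2})\right] = 0 .
\]
% In the scalar-QED example this relation disposes of the quadratically divergent
% seagull contributions without a photon mass counterterm, enforcing the
% masslessness of the photon as stated above.
```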
Symmetry in biological and physical systems is a product of self-organization driven by evolutionary processes, or of mechanical systems under constraints. Symmetry-based feature extraction or representation by neural networks may unravel the most informative contents in large image databases. Despite significant achievements of artificial intelligence in the recognition and classification of regular patterns, the problem of uncertainty remains a major challenge in ambiguous data. In this study, we present an artificial neural network that detects symmetry uncertainty states in human observers. To this end, we exploit a neural network metric in the output of a biologically inspired Self-Organizing Map, the Quantization Error (SOM QE). Shape pairs with perfect geometric mirror symmetry but a non-homogenous appearance, caused by local variations in hue, saturation, or lightness within or across the shapes in a given pair, produce, as shown here, longer choice response times (RT) for "yes" responses relative to symmetry. These data are consistently mirrored by the variations in the SOM QE from unsupervised neural network analysis of the same stimulus images. The neural network metric is thus capable of detecting and scaling human symmetry uncertainty in response to patterns. Such capacity is tightly linked to the metric's proven selectivity to local contrast and color variations in large and highly complex image data.
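A from-scratch sketch of the SOM quantization error used as the metric above is given below; the toy training loop, grid size, and random "image feature" data are placeholders rather than the authors' actual pipeline.

```python
# From-scratch sketch of the SOM quantization error (QE) used as the metric above:
# the mean Euclidean distance between each input vector and its best-matching unit.
# The random "image feature" data and the toy SOM training loop are placeholders,
# not the authors' actual pipeline.
import numpy as np

def train_som(data, grid=(8, 8), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    weights = rng.random((n_units, data.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))      # best-matching unit
        lr = lr0 * np.exp(-t / n_iter)                            # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iter)                      # shrinking neighbourhood
        h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)                # neighbourhood update
    return weights

def quantization_error(data, weights):
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return d.min(axis=1).mean()

features = np.random.default_rng(1).random((500, 16))   # placeholder image features
w = train_som(features)
print("SOM QE:", quantization_error(features, w))
```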
The symplectic group Sp(2g,Z) is a subgroup of the linear group SL(2g,Z) and admits a faithful action on the sphere S^(2g-1), induced from its linear action on Euclidean space R^(2g). Generalizing corresponding results for linear groups, we show that, if m < 2g-1 and g > 2, any continuous action of Sp(2g,Z) on a homology m-sphere, and in particular on S^m, is trivial.
It is shown that the estimates obtained by Manfredo P. do Carmo and Detang Zhou, in their paper "Eigenvalue estimate on complete noncompact Riemannian manifolds and applications", for the first eigenvalue of the Laplace-Beltrami operator on open manifolds, via an oscillation theorem, can be naturally extended to the semi-elliptic singular p-Laplace operator on manifolds.
The peristaltic motion of the stomach walls combines with the secretion of enzymes to initiate the process that breaks down food. Computational modelling of this phenomenon can help reveal details that would be hard to capture via in vivo or in vitro means. In this study, the digestion of a liquid meal containing protein is simulated in a human-stomach model based on imaging data. Pepsin, the gastric enzyme for protein hydrolysis, is secreted from the proximal region of the stomach walls and allowed to react with the contents of the stomach. The jet velocities, the emptying rate, and the extent of hydrolysis are quantified for a control case, and also for three other cases of reduced motility with varying peristaltic amplitudes. The findings quantify the effect of motility on the rate of food breakdown and emptying, and correlate the observations with the mixing in the stomach induced by the antral contraction waves.