Conceptual tests are widely used by physics instructors to assess students' conceptual understanding and compare teaching methods. It is common to look at how students' answers change between a pre-test and a post-test to quantify a transition in students' conceptions. This is often done by looking at the proportion of incorrect answers in the pre-test that become correct in the post-test -- the gain -- and the proportion of correct answers that become incorrect -- the loss. By comparing theoretical predictions to experimental data on the Force Concept Inventory, we show that Item Response Theory (IRT) predicts the observed gains and losses fairly well. We then use IRT to quantify students' answer changes in a test-retest situation where no learning occurs and show that $i)$ up to 25\% of all answers can change owing to the non-deterministic nature of students' answers and that $ii)$ gains and losses can range from 0\% to 100\%. Still using IRT, we highlight the conditions that a test must satisfy in order to minimize gains and losses when no learning occurs. Finally, recommendations on the interpretation of such pre-/post-test progressions with respect to the students' initial level are proposed.
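As a hedged illustration of how an IRT model turns into expected gains and losses, the sketch below uses a standard three-parameter logistic (3PL) item response function with hypothetical item parameters and assumes the two administrations are conditionally independent given ability; it is not the authors' code.

```python
import numpy as np

def p_correct(theta, a=1.2, b=0.0, c=0.25):
    """3PL item response function: probability of a correct answer for
    ability theta, discrimination a, difficulty b, guessing parameter c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def expected_gain_loss(theta_pre, theta_post, **item):
    """Expected gain/loss for one item, assuming the pre- and post-test
    answers are independent Bernoulli trials given the abilities."""
    p_pre = p_correct(theta_pre, **item)
    p_post = p_correct(theta_post, **item)
    gain = p_post                # P(correct at post | incorrect at pre) under independence
    loss = 1.0 - p_post          # P(incorrect at post | correct at pre)
    changed = p_pre * (1 - p_post) + (1 - p_pre) * p_post  # fraction of answers that flip
    return gain, loss, changed

# Test-retest with no learning: theta_post = theta_pre
print(expected_gain_loss(theta_pre=0.0, theta_post=0.0))
```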
It has been pointed out by several groups that ekpyrotic and cyclic models generate significant non-gaussianity. In this paper, we present a physically intuitive, semi-analytic estimate of the bispectrum. We show that, in all such models, there is an intrinsic contribution to the non-gaussianity parameter f_{NL} that is determined by the geometric mean of the equation of state w_{ek} during the ekpyrotic phase and w_{c} during the phase in which curvature perturbations are generated, and whose value is O(100) or more times the intrinsic value predicted by simple slow-roll inflationary models, f_{NL}^{intrinsic} = O(0.1). Other contributions to f_{NL}, which we also estimate, can increase |f_{NL}| but are unlikely to decrease it significantly, making non-gaussianity a useful test of these models. Furthermore, we discuss a predicted correlation between the non-gaussianity and scalar spectral index that sharpens the test.
For $O$ an imaginary quadratic ring, we compute a fundamental polyhedron of $\text{PE}_2(O)$, the projective elementary subgroup of $\text{PSL}_2(O)$. This allows for new, simplified proofs of theorems of Cohn, Nica, Fine, and Frohman. Namely, we obtain a presentation for $\text{PE}_2(O)$, show that it has infinite index and is its own normalizer in $\text{PSL}_2(O)$, and split $\text{PSL}_2(O)$ into a free product with amalgamation that has $\text{PE}_2(O)$ as one of its factors.
Turbulence in superfluids depends crucially on the dissipative damping in vortex motion. This is observed in the B phase of superfluid 3He where the dynamics of quantized vortices changes radically in character as a function of temperature. An abrupt transition to turbulence is the most peculiar consequence. As distinct from viscous hydrodynamics, this transition to turbulence is not governed by the velocity-dependent Reynolds number, but by a velocity-independent dimensionless parameter 1/q which depends only on the temperature-dependent mutual friction -- the dissipation which sets in when vortices move with respect to the normal excitations of the liquid. At large friction and small values of 1/q < 1 the dynamics is vortex number conserving, while at low friction and large 1/q > 1 vortices are easily destabilized and proliferate in number. A new measuring technique was employed to identify this hydrodynamic transition: the injection of a tight bundle of many small vortex loops in applied vortex-free flow at relatively high velocities. These vortices are ejected from a vortex sheet covering the AB interface when a two-phase sample of 3He-A and 3He-B is set in rotation and the interface becomes unstable at a critical rotation velocity, triggered by the superfluid Kelvin-Helmholtz instability.
In this paper we analyse 70 years of archival Harvard and Schmidt plate data of the 16.6 d Be X-ray binary A0538-66 in order to search for the presence of the long-term period of 420.82 +/- 0.79 d found in MACHO photometry (Alcock et al. 2001). We find evidence for a long-term period of 421.29 +/- 0.95 d in the archival data, and examine its stability. We also combine the archival and MACHO datasets in order to improve the accuracy of the orbital period determination, using a cycle-counting analysis to refine its value to 16.6460 +/- 0.0004 d. We also test the model proposed in our previous paper (Alcock et al. 2001) with observations documented in the literature for A0538-66 from 1980-1995, constraining the system inclination to be i > 74.9 +/- 6.5 deg.
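For readers who want to experiment with this kind of long-term period search, the sketch below runs a Lomb-Scargle periodogram on hypothetical, irregularly sampled photometry using astropy; it is illustrative only and is not the cycle-counting analysis used here.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Hypothetical irregularly sampled archival photometry: times t (days),
# magnitudes mag, and magnitude uncertainties dy.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 70 * 365.25, 2000))          # ~70 years of plates
mag = 15.0 + 0.1 * np.sin(2 * np.pi * t / 421.0) + rng.normal(0, 0.15, t.size)
dy = np.full_like(t, 0.15)

# Search a frequency range bracketing the ~421 d candidate period.
frequency, power = LombScargle(t, mag, dy).autopower(
    minimum_frequency=1 / 1000.0, maximum_frequency=1 / 100.0)
best_period = 1 / frequency[np.argmax(power)]
print(f"best period ~ {best_period:.1f} d")
```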
Using a new impurity density matrix renormalization group scheme, we establish a reliable picture of how the low-lying energy levels of a $S=1$ Heisenberg antiferromagnetic chain change {\it quantitatively} upon bond doping. A new impurity state gradually emerges in the Haldane gap for $J' < J$, while it appears only if $J'/J>\gamma_c$ with $1/\gamma_c=0.708$ for $J'>J$. The system is non-perturbative for $1\leq J'/J\leq\gamma_c$. This explains the appearance of a new state in the Haldane gap in a recent experiment on Y$_{2-x}$Ca$_x$BaNiO$_5$ [J.F. DiTusa et al., Phys. Rev. Lett. 73, 1857 (1994)].
We use broadband ultra-fast pump-probe spectroscopy in the visible range to study the lowest excitations across the Mott-Hubbard gap in the orbitally ordered insulator YVO3. Separating thermal and non-thermal contributions to the optical transients, we show that the total spectral weight of the two lowest peaks is conserved, demonstrating that both excitations correspond to the same multiplet. The pump-induced transfer of spectral weight between the two peaks reveals that the low-energy one is a Hubbard exciton, i.e. a resonance or bound state between a doublon and a holon. Finally, we speculate that the pump-driven spin-disorder can be used to quantify the kinetic energy gain of the excitons in the ferromagnetic phase.
Many imaging problems can be formulated as mapping problems. A general mapping problem aims to obtain an optimal mapping that minimizes an energy functional subject to given constraints. Existing methods for solving mapping problems are often inefficient and can sometimes get trapped in local minima. An extra challenge arises when the optimal mapping is required to be diffeomorphic. In this work, we address the problem by proposing a deep-learning framework based on Quasiconformal (QC) Teichmuller theories. The main strategy is to learn the Beltrami coefficient (BC) that represents a mapping as the latent feature vector of the deep neural network. The BC measures the local geometric distortion under the mapping, which enhances the interpretability of the deep neural network. Under this framework, the diffeomorphic property of the mapping can be controlled via a simple activation function within the network. The optimal mapping can also be easily regularized by incorporating the BC into the loss function. A crucial advantage of the proposed framework is that once the network is successfully trained, the optimized mapping corresponding to each input can be obtained in real time. To examine the efficacy of the proposed framework, we apply the method to the diffeomorphic image registration problem. Experimental results show that our method outperforms other state-of-the-art registration algorithms in both efficiency and accuracy, demonstrating the effectiveness of the proposed framework for solving mapping problems.
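As a hedged illustration of the central quantity, the snippet below computes a discrete Beltrami coefficient of a planar map by finite differences and checks the diffeomorphism criterion |mu| < 1; it is a numerical sketch, not the authors' network code.

```python
import numpy as np

def beltrami_coefficient(fx, fy, h=1.0):
    """Discrete Beltrami coefficient of the map f = (fx, fy) sampled on a
    regular grid with spacing h:  mu = f_zbar / f_z, with Wirtinger
    derivatives f_z = (f_x - i f_y)/2 and f_zbar = (f_x + i f_y)/2,
    where f is viewed as the complex-valued function fx + i*fy.
    (The grid spacing cancels in the ratio.)"""
    f = fx + 1j * fy
    df_dy, df_dx = np.gradient(f, h)        # derivatives along rows (y) and columns (x)
    f_z = 0.5 * (df_dx - 1j * df_dy)
    f_zbar = 0.5 * (df_dx + 1j * df_dy)
    return f_zbar / f_z

# Example: a mild shear of the unit square is orientation preserving,
# so |mu| < 1 everywhere and the map is (locally) diffeomorphic.
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
mu = beltrami_coefficient(x + 0.3 * y, y)
print("max |mu| =", np.abs(mu).max())       # ~0.15 < 1
```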
Theoretical expectations concerning small $x$ behaviour of the spin dependent structure function $g_1$ are summarised. This includes discussion of the Regge pole model predictions and of the double $ln^2(1/x)$ effects implied by perturbative QCD. The quantitative implementation of the latter is described within the unified scheme incorporating both Altarelli-Parisi evolution and the double $ln^2(1/x)$ resummation. The double $ln^2(1/x)$ effects are found to be important in the region of $x$ which can possibly be probed at HERA. Predictions for the polarized gluon distribution $\Delta G(x,Q^2)$ at low $x$ are also given.
We provide a framework to compute the dynamics of massive Dirac fermions using holography. To this end we consider two bulk Dirac fermions that are coupled via a Yukawa interaction and propagate on a gravitational background in which a mass deformation is introduced. Moreover, we discuss the incorporation of this approach in semiholography. The resulting undoped fermionic spectral functions indeed show that the Yukawa coupling induces a gap in the holographic spectrum, whereas the semiholographic extension is in general gapped but additionally contains a quantum critical point at which the effective fermion mass vanishes and a topological phase transition occurs. Furthermore, when introducing doping, the fermionic spectral functions show a quantum phase transition between a gapped material and a Fermi liquid.
A theoretical analysis using an impact-parameter description of the collisions of deuterons with nuclei is carried out in the high-energy diffraction approximation. It is used to obtain the intensities and integrated cross sections for elastic scattering, for the emergence of the two incident nucleons from the collision, whether they appear as an elastically scattered deuteron or as two unbound nucleons, and for the diffraction-induced dissociation of the deuteron into a free neutron and a free proton, as well as the total cross section. The cross section for collisions in which one or both of the nucleons is absorbed is derived in terms of the sum of the neutron-nucleus and proton-nucleus effective phase shifts. Expressions for the cross section for processes in which the proton (or neutron) is absorbed, whether the neutron (or proton) is absorbed or not, and for the cross section for processes in which the neutron (or proton) is absorbed and the proton (or neutron) remains free are derived. A reduced form of a two-particle density matrix is introduced to directly derive expressions for the cross section for two-particle absorption in which both the proton and neutron are absorbed and for the cross section for stripping processes in which the proton (or neutron) is absorbed and the neutron (or proton) emerges as a free particle. The expression for the cross section for the breakup of the deuteron and the resulting emergence of one or two free nucleons is also derived. The mechanism by which the diffraction dissociation of the deuteron is induced is understood on an approximate, semi-quantitative basis in classical terms (primarily the radial derivative of the radial impulse), allowing an estimate of where in the nuclear potential (beyond the "radius", near the "surface") the dissociation process tends to predominantly occur.
In this Thesis we study quantum corrections to the classical dynamics of mean values in field theory. To that end we make use of the formalism of the closed time path effective action to obtain real and causal equations of motion. We introduce a coarse grained effective action, which is useful in the study of phase transitions in field theory. We derive an exact renormalization group equation that describes how this action varies with the coarse graining scale. We develop different approximation methods to solve that equation, and we obtain non-perturbative improvements to the effective potential for a self-interacting scalar field theory. We also discuss the stochastic aspects contained in this action. On the other hand, using the effective action, we find low-energy and large-distance quantum corrections to the gravitational potential, treating relativity as an effective low-energy theory. We include the effects of scalar fields, fermions and gravitons. The inclusion of metric fluctuations causes the semiclassical Einstein equations to depend on the gauge-fixing parameters, and they are therefore not physical. We solve this problem by identifying the trajectory of a test particle as a physical observable. We explicitly show that the geodesic equation for such a particle is independent of the arbitrary parameters of the gauge fixing.
Pushdown systems (PDSs) and recursive state machines (RSMs), which are linearly equivalent, are standard models for interprocedural analysis. Yet RSMs are more convenient as they (a) explicitly model function calls and returns, and (b) specify many natural parameters for algorithmic analysis, e.g., the number of entries and exits. We consider a general framework where RSM transitions are labeled from a semiring and path properties are algebraic with semiring operations, which can model, e.g., interprocedural reachability and dataflow analysis problems. Our main contributions are new algorithms for several fundamental problems. Compared to a direct translation of RSMs to PDSs and the best-known existing bounds for PDSs, our analysis algorithm improves the complexity for finite-height semirings (which subsume reachability and standard dataflow properties). We further consider the problem of extracting distance values from the representation structures computed by our algorithm, and give efficient algorithms that distinguish the complexity of a one-time preprocessing from the complexity of each individual query. Another advantage of our algorithm is that our improvements carry over to the concurrent setting, where we improve the best-known complexity for the context-bounded analysis of concurrent RSMs. Finally, we provide a prototype implementation that gives a significant speed-up on several benchmarks from the SLAM/SDV project.
We show that the upper bound of the classical QCD axion window can be significantly relaxed for low-scale inflation. If the Gibbons-Hawking temperature during inflation is lower than the QCD scale, the initial QCD axion misalignment angle follows the Bunch-Davies distribution. The distribution is peaked at the strong CP conserving minimum if there is no other light degree of freedom contributing to the strong CP phase. As a result, the axion overproduction problem is significantly relaxed even for an axion decay constant larger than $10^{12}$GeV. We also provide concrete hilltop inflation models where the Hubble parameter during inflation is comparable to or much smaller than the QCD scale, with successful reheating taking place via perturbative decays or dissipation processes.
We consider the zero mass limit of a relativistic Thomas-Fermi-Weizsaecker model of atoms and molecules. We find bounds for the critical nuclear charges that ensure stability.
Results of processing experimental data on charm production in hadron-hadron interactions are presented. The analysis is carried out within the framework of a phenomenological model of diffraction production and quark statistics based on the additive quark model (AQM). In the low-energy region $\sqrt{s} = 20$-$40$ GeV, the cross sections $\sigma_{pN \to c\bar{c}X}(s)$ and $\sigma_{\pi N \to c\bar{c}X}(s)$ are fitted by a logarithmic function with parameters connected by an AQM relationship. At collider energies of 200, 540, 900, and 1800 GeV, the values of $\sigma_{\bar{p}p \to c\bar{c}X}(s)$ were obtained by the quark statistics method from data on diffraction dissociation. It is established that a logarithmic function with universal numerical parameters describes the whole set of low-energy and high-energy data with high accuracy. The expected values of the cross sections are $\sigma_{pp \to c\bar{c}X} = 250 \pm 40~\mu$b and $355 \pm 57~\mu$b at the TEVATRON energy $\sqrt{s} = 1.96$ TeV and the LHC energy $\sqrt{s} = 14$ TeV, respectively. Opportunities to use the obtained results for calibrating the flux of "prompt" muons in the high-energy component of cosmic rays are discussed.
In multielectron bubbles, the electrons form an effectively two-dimensional layer at the inner surface of the bubble in helium. The modes of oscillation of the bubble surface (the ripplons) are influenced by the charge redistribution of the electrons along the surface. The dispersion relation for these charge redistribution modes (`longitudinal plasmons') is derived and the coupling of these modes to the ripplons is analysed. We find that the ripplon-plasmon coupling in a multielectron bubble differs markedly from that of electrons on a flat helium surface. An equation is presented relating the spherical harmonic components of the charge redistribution to those of the shape deformation of the bubble.
We perform a calculation of the linewidth of a micromaser, using the master equation and the quantum regression approach. A `dephasing' contribution is identified from pumping processes that conserve the photon number and do not appear in the photon statistics. We work out examples for a single-atom maser with a precisely controlled coupling and for a laser where the interaction time is broadly distributed. In the latter case, we also assess the convergence of a recently developed uniform Lindblad approximation to the master equation; it is relatively slow.
Partial cubes are graphs isometrically embeddable into hypercubes. In this paper it is proved that every cubic, vertex-transitive partial cube is isomorphic to one of the following graphs: $K_2 \, \square \, C_{2n}$, for some $n\geq 2$, the generalized Petersen graph $G(10,3)$, the cubic permutahedron, the truncated cuboctahedron, or the truncated icosidodecahedron. This classification generalizes results of Bre\v{s}ar et al.~from 2004 on cubic mirror graphs; it includes all cubic, distance-regular partial cubes (Weichsel, 1992), and presents a contribution to the classification of all cubic partial cubes.
In this paper, the explicit form of the operator transforming the vacuum states under the general two-mode Bogolubov transformation is found.
We present an extensive analysis of relative deviation bounds, including detailed proofs of two-sided inequalities and their implications. We also give detailed proofs of two-sided generalization bounds that hold in the general case of unbounded loss functions, under the assumption that a moment of the loss is bounded. These bounds are useful in the analysis of importance weighting and other learning tasks such as unbounded regression.
Recently, video conferencing apps have gained functionality through computer-vision-based features such as real-time background removal and face beautification. The limited variability in existing portrait segmentation and face parsing datasets, including head poses, ethnicity, scenes, and occlusions specific to video conferencing, motivated us to create a new dataset, EasyPortrait, for these tasks simultaneously. It contains 40,000 primarily indoor photos reproducing video-meeting scenarios with 13,705 unique users and fine-grained segmentation masks separated into 9 classes. Inadequate annotation masks in other datasets prompted a revision of the annotator guidelines, enabling EasyPortrait to handle cases such as teeth whitening and skin smoothing. The pipeline for data mining and high-quality mask annotation via crowdsourcing is also proposed in this paper. In ablation experiments, we demonstrated the importance of data quantity and of diversity in head poses in our dataset for effective learning of the model. Cross-dataset evaluation experiments confirmed the best domain generalization ability among portrait segmentation datasets. Moreover, we demonstrate the simplicity of training segmentation models on EasyPortrait without extra training tricks. The proposed dataset and trained models are publicly available.
This paper derives master equations for an atomic two-level system for a large set of unitarily equivalent Hamiltonians without employing the rotating wave and certain Markovian approximations. Each Hamiltonian refers to physically different components as representing the "atom" and as representing the "field" and hence results in a different master equation, when assuming a photon-absorbing environment. It is shown that the master equations associated with the minimal coupling and the multipolar Hamiltonians predict enormous stationary state narrowband photon emission rates, even in the absence of external driving, for current experiments with single quantum dots and colour centers in diamond. These seem to confirm that the rotating wave Hamiltonian identifies the components of the atom-field system most accurately.
In this work, to help escape stationary and saddle points efficiently, we propose, analyze, and generalize a stochastic strategy, applied as an operator to a first-order gradient descent algorithm, in order to increase the target accuracy and reduce time consumption. Unlike existing algorithms, the proposed stochastic strategy does not require any batches or sampling techniques, enabling efficient implementation and maintaining the initial first-order optimizer's convergence rate, while providing a considerable improvement in target accuracy when optimizing the target functions. In short, the proposed strategy is generalized, applied to Adam, and validated via the decomposition of biomedical signals using Deep Matrix Fitting and four other peer optimizers. The validation results show that the proposed random strategy can be easily generalized to first-order optimizers and efficiently improves the target accuracy.
We propose a set of constraints on the ground-state wavefunctions of fracton phases, which provide a possible generalization of the string-net equations used to characterize topological orders in two spatial dimensions. Our constraint equations arise by exploiting a duality between certain fracton orders and quantum phases with "subsystem" symmetries, which are defined as global symmetries on lower-dimensional manifolds, and then studying the distinct ways in which the defects of a subsystem symmetry group can be consistently condensed to produce a gapped, symmetric state. We numerically solve these constraint equations in certain tractable cases to obtain the following results: in $d=3$ spatial dimensions, the solutions to these equations yield gapped fracton phases that are distinct as conventional quantum phases, along with their dual subsystem symmetry-protected topological (SSPT) states. For an appropriate choice of subsystem symmetry group, we recover known fracton phases such as Haah's code, along with new, symmetry-enriched versions of these phases, such as non-stabilizer fracton models which are distinct from both the X-cube model and the checkerboard model in the presence of global time-reversal symmetry, as well as a variety of fracton phases enriched by spatial symmetries. In $d=2$ dimensions, we find solutions that describe new weak and strong SSPT states, such as ones with both line-like subsystem symmetries and global time-reversal symmetry. In $d=1$ dimension, we show that any group cohomology solution for a symmetry-protected topological state protected by a global symmetry, along with lattice translational symmetry necessarily satisfies our consistency conditions.
The well-known 1-2-3 Conjecture asserts that the edges of every graph without isolated edges can be weighted with $1$, $2$ and $3$ so that adjacent vertices receive distinct weighted degrees. This is open in general. We prove that every $d$-regular graph, $d\geq 2$, can be decomposed into at most $2$ subgraphs (without isolated edges) fulfilling the 1-2-3 Conjecture if $d\notin\{10,11,12,13,15,17\}$, and into at most $3$ such subgraphs in the remaining cases. Additionally, we prove that in general every graph without isolated edges can be decomposed into at most $24$ subgraphs fulfilling the 1-2-3 Conjecture, improving the previously best upper bound of $40$. Both results are partly based on applications of the Lov\'asz Local Lemma.
Flow-fields are ubiquitous systems that are able to transport vital signalling molecules necessary for system function. While information regarding the location and transport of such particles is often crucial, it is not well understood how to quantify the information in such stochastic systems. Using the framework of nonequilibrium statistical physics, we develop theoretical tools to address this question. We observe that rotation in a flow-field does not explicitly appear in the generalized potential that governs the rate of system entropy production. Specifically, in the neighborhood of a flow-field, rotation contributes to the information content only in the presence of strain -- and then with a comparatively weaker contribution than strain and at higher orders in time. Indeed, strain, and especially the flow divergence, contribute most strongly to transport properties such as particle residence time and the rate of information change. These results shed light on how information can be analyzed and controlled in complex artificial and living flow-based systems.
Muscovite mica sheets with a thickness of 25 {\mu}m were irradiated by various kinds of swift heavy ions (Sn, Xe and Bi) at HIRFL. The fluences ranged from 1$\times$10^{10} ions/cm^2 to 8$\times$10^{11} ions/cm^2, and the electronic energy loss (dE/dx)_e was increased from 14.7 keV/nm to 31.2 keV/nm. The band gap and Urbach energy of pristine and irradiated mica were analyzed by ultraviolet-visible spectroscopy. Periodic fringes at long wavelengths of the absorption spectra, caused by interference, were disturbed as (dE/dx)_e increased. This suggests that the chemical bonds between the Tetrahedral-Octahedral-Tetrahedral (TOT) layers of mica were destroyed, so that the smooth surface was cleaved after irradiation. The band gap narrowed with increasing (dE/dx)_e and fluence, and the Urbach energy increased as (dE/dx)_e and fluence gradually increased. This indicates that the number of defects and the proportion of amorphous structure increased in mica irradiated at higher (dE/dx)_e and fluences. Fluence played a distinctly important role in the optical properties of mica.
We investigate the formation dynamics of sonic horizons in a Bose gas confined in a (quasi) one-dimensional trap. This system is one of the most promising realizations of the analogue gravity paradigm and has already been successfully studied experimentally. Taking advantage of the exact solution of the one-dimensional, hard-core, Bose model (Tonks-Girardeau gas) we show that, by switching on a step potential, either a sonic (black-hole-like) horizon or a black/white hole pair may form, according to the initial velocity of the fluid. Our simulations never suggest the formation of an isolated white-hole horizon, although a stable stationary solution of the dynamical equations with those properties is analytically found. Moreover, we show that the semiclassical dynamics, based on the Gross-Pitaevskii equation, conforms to the exact solution only in the case of fully subsonic flows while a stationary solution exhibiting a supersonic transition is never reached dynamically.
In this paper, we present a method for factor analysis of discrete data. This is accomplished by fitting a dependent Poisson model with a factor structure. To be able to analyze ordinal data, we also consider a truncated Poisson distribution. We try to find the model with the lowest AIC by employing a forward selection procedure. The probability of finding the correct model is investigated in a simulation study. Moreover, we heuristically derive the corresponding asymptotic probabilities. An empirical study is also included.
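As a hedged, generic illustration of the selection procedure (not the authors' code and not a Poisson factor model), the sketch below performs forward selection by AIC, with a toy least-squares stand-in for the model-fitting step.

```python
import numpy as np

def forward_select_by_aic(candidate_terms, fit_aic):
    """Greedy forward selection: add the candidate term that most reduces AIC,
    stopping when no addition improves it. `fit_aic(terms)` is assumed to
    return the AIC of the model built from `terms`."""
    selected, best_aic = [], fit_aic([])
    remaining = list(candidate_terms)
    while remaining:
        aic, term = min((fit_aic(selected + [t]), t) for t in remaining)
        if aic >= best_aic:
            break                          # no candidate improves the fit
        best_aic, selected = aic, selected + [term]
        remaining.remove(term)
    return selected, best_aic

# Toy demonstration with ordinary least squares and a Gaussian AIC;
# the paper's Poisson factor model would replace `fit_aic`.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(size=200)

def fit_aic(terms):
    A = np.column_stack([np.ones(len(y))] + [X[:, j] for j in terms])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    rss = float(resid @ resid)
    k = A.shape[1] + 1                     # coefficients + error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

print(forward_select_by_aic(range(5), fit_aic))   # expected to pick columns 0 and 2
```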
Some time ago it was conjectured that the coefficients of an expansion of the Jones polynomial in terms of the cosmological constant could provide an infinite string of knot invariants that are solutions of the vacuum Hamiltonian constraint of quantum gravity in the loop representation. Here we discuss the status of this conjecture at third order in the cosmological constant. The calculation is performed in the extended loop representation, a generalization of the loop representation. It is shown that the Hamiltonian does not annihilate the third coefficient of the Jones polynomial ($J_3$) for general extended loops. For ordinary loops the result acquires an interesting geometrical meaning and new possibilities appear for $J_3$ to represent a quantum state of gravity.
To understand the collinear-magnetism-driven ferroelectricity in the multiferroic compound Ca3CoMnO6, we have established an elastic diatomic Ising spin-chain model with axial next-nearest-neighbor interaction to describe its magnetoelectric properties. By employing magneto-phonon decoupling and the transfer-matrix method, the possible ground-state configurations and thermodynamic behaviors of the system have been exactly determined. From the perspective of the ground-state configuration, we analyze the computational results and make a detailed comparison with experimental data. The parameter relation for the appearance of electric polarization has been discussed. Our data indicate that the magnetic coupling between nearest-neighbor spin pairs is antiferromagnetic rather than ferromagnetic. Under the drive of an external magnetic field, the system undergoes a series of transitions from the up-up-down-down spin configuration to the up-down-up state with a peculiar 1/3 magnetization plateau, then to the up-up-up-down state, and finally saturates at the up-up-up-up state.
Context. Centaurs are icy objects in transition between the transneptunian region and the inner solar system, orbiting the Sun in the giant planet region. Some Centaurs display cometary activity, which cannot be sustained by the sublimation of water ice in this part of the solar system, and has been hypothesized to be due to the crystallization of amorphous water ice. Aims. In this work, we look at Centaurs discovered by the Outer Solar System Origins Survey (OSSOS) and search for cometary activity. Tentative detections would improve understanding of the origins of activity among these objects. Methods. We search for comae and structures by fitting and subtracting both Point Spread Functions (PSF) and Trailed point-Spread Functions (TSF) from the OSSOS images of each Centaur. When available, Col-OSSOS images were used to search for comae too. Results. No cometary activity is detected in the OSSOS sample. We track the recent orbital evolution of each new Centaur to confirm that none would actually be predicted to be active, and we provide size estimates for the objects. Conclusions. The addition of 20 OSSOS objects to the population of 250 known Centaurs is consistent with the currently understood scenario, in which drastic drops in perihelion distance induce changes in the thermal balance prone to trigger cometary activity in the giant planet region.
We consider a complex of tori of length 2 defined over a number field k. We establish here some local and global duality theorems for the (\'etale or Galois) hypercohomology of such a complex. We prove the existence of a Poitou-Tate exact sequence for such a complex, which generalizes the Poitou-Tate exact sequences for finite Galois modules and tori. In particular, we obtain a Poitou-Tate exact sequence for k-groups of multiplicative type. The general results proven here lie at the root of recent results about the defect of strong approximation in connected linear algebraic groups and about some arithmetic duality theorems for the (non-abelian) Galois cohomology of such groups.
Stable luminescent pi-radicals with doublet emission have aroused growing interest for functional molecular materials. We have demonstrated a neutral pi-radical dye, (4-N-carbazolyl-2,6-dichlorophenyl)bis(2,4,6-trichlorophenyl)-methyl (TTM-1Cz), with remarkable doublet emission, which could be used as a triplet sensitizer to initiate the photophysical process of triplet-triplet annihilation photon upconversion (TTA-UC). Dexter-like excited doublet-triplet energy transfer (DTET) was confirmed by theoretical calculation. A mixed solution of TTM-1Cz and aromatic emitters could upconvert either red light to cyan light or green light (λ = 532 nm) to blue light. This finding of DTET phenomena provides a new perspective for designing new triplet sensitizers for TTA-UC.
Lomonaco and Kauffman introduced knot mosaics in 2008 to model physical quantum states. These mosaics use a set of tiles to represent knots on $n \times n$ grids. In 2023 Heap introduced a new set of tiles that can represent small knots on a smaller board. Completing an exhaustive search of all knots or links, $K$, on different board sizes and types is the most common way to determine invariants for knots, such as the smallest board size needed to represent a knot, $m(K)$, and the least number of tiles needed to represent a knot, $t(K)$. In this paper, we propose a solution to an open question by providing a proof that all knots or links can be represented on corner connection mosaics using fewer tiles than traditional mosaics, $t_c(K) < t(K)$, where $t_c(K)$ is the smallest number of corner connection tiles needed to represent knot \textit{K}. We also define bounds for the corner connection mosaic size, $m_c(K)$, in terms of the crossing number, $c(K)$, and simultaneously create a tool called the \textit{Corner Mosaic Complement} that we use to discover a relationship between traditional tiles and corner connection tiles. Finally, we construct an infinite family of links $L_n$ for which the corner connection mosaic number $m_c(K)$ is known and provide a tool to analyze the efficiency of corner connection mosaic tiles.
In the Edge-Disjoint Paths with Congestion problem (EDPwC), we are given an undirected n-vertex graph G, a collection M={(s_1,t_1),...,(s_k,t_k)} of demand pairs and an integer c. The goal is to connect the maximum possible number of the demand pairs by paths, so that the maximum edge congestion - the number of paths sharing any edge - is bounded by c. When the maximum allowed congestion is c=1, this is the classical Edge-Disjoint Paths problem (EDP). The best current approximation algorithm for EDP achieves an $O(\sqrt n)$-approximation, by rounding the standard multi-commodity flow relaxation of the problem. This matches the $\Omega(\sqrt n)$ lower bound on the integrality gap of this relaxation. We show an $O(poly log k)$-approximation algorithm for EDPwC with congestion c=2, by rounding the same multi-commodity flow relaxation. This gives the best possible congestion for a sub-polynomial approximation of EDPwC via this relaxation. Our results are also close to optimal in terms of the number of pairs routed, since EDPwC is known to be hard to approximate to within a factor of $\tilde{\Omega}((\log n)^{1/(c+1)})$ for any constant congestion c. Prior to our work, the best approximation factor for EDPwC with congestion 2 was $\tilde O(n^{3/7})$, and the best algorithm achieving a polylogarithmic approximation required congestion 14.
A general relativistic version of Robertson's discussion of the Poynting-Robertson effect, which he based on special relativity and Newtonian gravity for point radiation sources such as stars, is developed. The general relativistic model uses a test radiation field of photons in outward radial motion with zero angular momentum in the equatorial plane of the exterior Schwarzschild or Kerr spacetime.
This paper presents a density-based topology optimization approach to design structures under self-weight load. Such loads change their magnitude and/or location as the topology optimization advances and pose several unique challenges, e.g., non-monotonous behavior of the compliance objective, parasitic effects of the low-stiffness elements, and the unconstrained nature of the problems. The modified SIMP material scheme is employed with the three-field density representation technique (original, filtered, and projected design fields) to achieve optimized solutions close to 0-1. A novel mass density interpolation strategy is proposed using a smooth Heaviside function, which provides a continuous transition between solid and void states of elements and facilitates tuning of the non-monotonous behavior of the objective. A constraint that implicitly imposes a lower bound on the permitted volume is conceptualized using the maximum permitted mass and the current mass of the evolving design. Sensitivities of the objective and self-weight are evaluated using the adjoint-variable method. Compliance of the domain is minimized to achieve the optimized designs using the Method of Moving Asymptotes. The efficacy and robustness of the presented approach are demonstrated by designing various 2D and 3D structures involving self-weight. The proposed approach maintains the constrained nature of the optimization problems and provides smooth and rapid objective convergence.
For a given set of points $U$ on a sphere $S$, the order $k$ spherical Voronoi diagram $SV_k(U)$ decomposes the surface of $S$ into regions whose points have the same $k$ nearest points of $U$. Hyeon-Suk Na, Chung-Nim Lee, and Otfried Cheong (Comput. Geom., 2002) applied inversions to construct $SV_1(U)$. We generalize their construction for spherical Voronoi diagrams from order $1$ to any order $k$. We use that construction to prove formulas for the numbers of vertices, edges, and faces in $SV_k(U)$. These formulas were not known before. We obtain several more properties for $SV_k(U)$, and we also show that $SV_k(U)$ has a small orientable cycle double cover.
How much energy does it take to stamp a thin elastic shell flat? Motivated by recent experiments on the wrinkling patterns of floating shells, we develop a rigorous method via $\Gamma$-convergence for answering this question to leading order in the shell's thickness and other small parameters. The observed patterns involve "ordered" regions of well-defined wrinkles alongside "disordered" regions whose local features are less robust; as little to no tension is applied, the preference for order is not a priori clear. Rescaling by the energy of a typical pattern, we derive a limiting variational problem for the effective displacement of the shell. It asks, in a linearized way, to cover up a maximum area with a length-shortening map to the plane. Convex analysis yields a boundary value problem characterizing the accompanying patterns via their defect measures. Partial uniqueness and regularity theorems follow from the method of characteristics on the ordered part of the shell. In this way, we can deduce from the principle of minimum energy the leading order features of stamped elastic shells.
Bloch's theorem is the centerpiece of topological band theory, which itself has defined an era of quantum materials research. However, Bloch's theorem is broken by a perpendicular magnetic field, making it difficult to study topological systems in strong flux. For the first time, moir\'e materials have made this problem experimentally relevant, and its solution is the focus of this work. We construct gauge-invariant irreps of the magnetic translation group at $2\pi$ flux on infinite boundary conditions, allowing us to give analytical expressions in terms of the Siegel theta function for the magnetic Bloch Hamiltonian, non-Abelian Wilson loop, and many-body form factors. We illustrate our formalism using a simple square lattice model and the Bistritzer-MacDonald Hamiltonian of twisted bilayer graphene, obtaining reentrant ground states at $2\pi$ flux under the Coulomb interaction.
In this paper, we showed that gravity might have been repulsive in the first moments of the Universe! To find this, we used quantization of the anti-commuting space and derived a gravitational equation in the limit $T \gg T_p$, which shows interesting behaviors. We saw that gravity is repulsive in the distances less than $\mathcal{R} = 4.37 \times 10^{-32} \times \sqrt{M}$, where $M$ is the mass of the object which gives rise to the gravitational field. Also, we calculated an acceleration of the order $\simeq - 3.494 \times 10^{52}$ for the first moment of the Universe ($r = 0$), where the temperature is $T \gg T_p$. Our results can explain the inflation in the first stages of the Universe and do not have any singularity at $r = 0$.
We compute the spectral form factor of two integrable quantum-critical many body systems in one spatial dimension. The spectral form factor of the quantum Ising chain is periodic in time in the scaling limit described by a conformal field theory; we also compute corrections from lattice effects and deviation from criticality. Criticality in the random Ising chain is described by rare regions associated with a strong randomness fixed point, and these control the long time limit of the spectral form factor.
We determine the D/H ratio in the interstellar medium toward the DO white dwarf PG0038+199 using spectra from the Far Ultraviolet Spectroscopic Explorer (FUSE), with ground-based support from Keck HIRES. We employ curve of growth, apparent optical depth and profile fitting techniques to measure column densities and limits of many other species (H2, NaI, CI, CII, CIII, NI, NII, OI, SiII, PII, SIII, ArI and FeII) which allow us to determine related ratios such as D/O, D/N and the H2 fraction. Our efforts are concentrated on measuring gas-phase D/H, which is key to understanding Galactic chemical evolution and comparing it to predictions from Big Bang nucleosynthesis. We find column densities log N(HI) = 20.41+-0.08, log N(DI)=15.75+-0.08 and log N(H2) = 19.33+-0.04, yielding a molecular hydrogen fraction of 0.14+-0.02 (2 sigma errors), with an excitation temperature of 143+-5K. The high HI column density implies that PG0038+199 lies outside of the Local Bubble; we estimate its distance to be 297 (+164,-104)pc (1 sigma). D/[HI+2H2] toward PG0038+199 is 1.91(+0.52,-0.42) e-5 (2 sigma). There is no evidence of component structure on the scale of Delta v > 8 km/s based on NaI, but there is marginal evidence for structure on smaller scales. The D/H value is high compared to the majority of recent D/H measurements, but consistent with the values for two other measurements at similar distances. D/O is in agreement with other distant measurements. The scatter in D/H values beyond ~100pc remains a challenge for Galactic chemical evolution.
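The quoted ratios follow directly from the column densities; a quick consistency check on the central values (no error propagation) is:

```python
N_HI = 10 ** 20.41   # HI column density, atoms cm^-2
N_DI = 10 ** 15.75   # DI column density
N_H2 = 10 ** 19.33   # H2 column density

f_H2 = 2 * N_H2 / (N_HI + 2 * N_H2)        # molecular hydrogen fraction
D_over_H = N_DI / (N_HI + 2 * N_H2)        # D/[HI + 2 H2]

print(f"f(H2) ~ {f_H2:.2f}")               # ~0.14
print(f"D/H   ~ {D_over_H:.2e}")           # ~1.9e-05
```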
We demonstrate the effectiveness of an adaptive explicit Euler method for the approximate solution of the Cox-Ingersoll-Ross model. This relies on a class of path-bounded timestepping strategies which work by reducing the stepsize as solutions approach a neighbourhood of zero. The method is hybrid in the sense that a convergent backstop method is invoked if the timestep becomes too small, or to prevent solutions from overshooting zero and becoming negative. Under parameter constraints that imply Feller's condition, we prove that such a scheme is strongly convergent, of order at least 1/2. Control of the strong error is important for multi-level Monte Carlo techniques. Under Feller's condition we also prove that the probability of ever needing the backstop method to prevent a negative value can be made arbitrarily small. Numerically, we compare this adaptive method to fixed step implicit and explicit schemes, and a novel semi-implicit adaptive variant. We observe that the adaptive approach leads to methods that are competitive in a domain that extends beyond Feller's condition, indicating suitability for the modelling of stochastic volatility in Heston-type asset models.
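A minimal sketch of an adaptive explicit Euler-Maruyama scheme of this flavour for the CIR equation dX = kappa(theta - X) dt + sigma sqrt(X) dW is given below; the particular stepsize rule and the crude positivity "backstop" are simplified stand-ins chosen only to illustrate the idea, not the scheme analysed in the paper.

```python
import numpy as np

def cir_adaptive_euler(x0, T, kappa, theta, sigma,
                       h_max=1e-2, h_min=1e-6, seed=0):
    """Explicit Euler-Maruyama for the CIR model with an adaptive step that
    shrinks as the solution approaches zero.  If the step hits the minimum,
    or the Euler step would go negative, a crude backstop (clamping at a
    small positive value) is applied instead."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    ts, xs = [t], [x]
    while t < T:
        h = min(h_max, max(h_min, (x / sigma) ** 2), T - t)   # path-bounded step
        dw = rng.normal(0.0, np.sqrt(h))
        x_new = x + kappa * (theta - x) * h + sigma * np.sqrt(max(x, 0.0)) * dw
        if x_new <= 0.0 or h <= h_min:       # backstop: keep the solution positive
            x_new = max(x_new, 1e-12)
        t += h
        x = x_new
        ts.append(t)
        xs.append(x)
    return np.array(ts), np.array(xs)

ts, xs = cir_adaptive_euler(x0=0.05, T=1.0, kappa=1.5, theta=0.04, sigma=0.3)
print(xs.min(), xs[-1])
```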
Due to some results by John P. D'Angelo and Dusty Grundmeier about CR-mappings, the main results of my 2001 paper about recurrences for some sequences of binomial sums can be simplified.
The kernel relation $K$ on the lattice $\mathcal{L}(\mathcal{CR})$ of varieties of completely regular semigroups has been a central component in many investigations into the structure of $\mathcal{L}(\mathcal{CR})$. However, apart from the $K$-class of the trivial variety, which is just the lattice of varieties of bands, the detailed structure of kernel classes has remained a mystery until recently. Kad'ourek [RK2] has shown that for two large classes of subvarieties of $\mathcal{CR}$ their kernel classes are singletons. Elsewhere (see [RK1], [RK2], [RK3]) we have provided a detailed analysis of the kernel classes of varieties of abelian groups. Here we study more general kernel classes. We begin with a careful development of the concept of duality in the lattice of varieties of completely regular semigroups and then show that the kernel classes of many varieties, including many self-dual varieties, of completely regular semigroups contain multiple copies of the lattice of varieties of bands as sublattices.
The number of atoms trapped within the mode of an optical cavity is determined in real time by monitoring the transmission of a weak probe beam. Continuous observation of atom number is accomplished in the strong coupling regime of cavity quantum electrodynamics and functions in concert with a cooling scheme for radial atomic motion. The probe transmission exhibits sudden steps from one plateau to the next in response to the time evolution of the intracavity atom number, from N >= 3 to N = 2 to 1 to 0, with some trapping events lasting over 1 second.
We investigate collective effects in the strong pinning model of disordered charge and spin density waves (CDWs and SDWs) in connection with heat relaxation experiments. We discuss the classical and quantum limits, which give rise to two distinct contributions to the specific heat (a $C_v \sim T^{-2}$ contribution and a $C_v \sim T^{\alpha}$ contribution, respectively), with two different types of disorder (strong pinning versus substitutional impurities). From the calculation of the two-level-system energy splitting distribution in the classical limit we find no slow relaxation in the commensurate case and a broad spectrum of relaxation times in the incommensurate case. In the commensurate case quantum effects restore a non-vanishing energy relaxation, and they generate stronger disorder effects in incommensurate systems. For substitutional disorder we obtain Friedel oscillations of bound states close to the Fermi energy. With negligible interchain couplings this explains the power-law specific heat $C_v \sim T^{\alpha}$ observed in experiments on CDWs and SDWs, combined with the power-law susceptibility $\chi(T)\sim T^{-1+\alpha}$ observed in the CDW o-TaS$_3$.
Dijet production in deep-inelastic scattering (DIS) in the range 150 < Q^2 < 35000 GeV^2 has been measured by the H1 collaboration using the Durham jet algorithm in the laboratory frame. QCD calculations in next-to-leading order (NLO) are found to give a good description of the data when requiring a small minimum jet separation, which selects a dijet sample containing 1/3 of DIS events in contrast to approximately 1/10 with more typical jet analyses.
Evaluation of service-oriented systems has been a challenge: although a large number of evaluation metrics exist, none of them evaluates these systems effectively. This paper discusses the different testing tools and evaluation methods available for SOA and summarizes their limitations and support in the context of service-oriented architectures.
Recently, there has been increasing attention in robotics research towards whole-body collision avoidance. In this paper, we propose a safety-critical controller that utilizes time-varying control barrier functions (time-varying CBFs) constructed from a Robo-centric Euclidean Signed Distance Field (RC-ESDF) to achieve dynamic collision avoidance. The RC-ESDF is constructed in the robot body frame and relies solely on the robot's shape, eliminating the need for real-time updates and saving computational resources. Additionally, we design two control Lyapunov functions (CLFs) to ensure that the robot can reach its destination. To enable real-time application, our safety-critical controller, which incorporates the CLFs and CBFs as constraints, is formulated as a quadratic program (QP) optimization problem. We conducted numerical simulations on two different dynamics of an L-shaped robot to verify the effectiveness of our proposed approach.
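To illustrate the general controller structure (not the paper's exact formulation), the sketch below solves a CLF-CBF quadratic program for a single-integrator robot avoiding one circular obstacle; the dynamics, barrier function, gains, and the cvxpy dependency are all illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

def clf_cbf_qp(x, x_goal, obs_center, obs_radius, alpha=1.0, gamma=1.0):
    """One step of a CLF-CBF-QP for single-integrator dynamics x_dot = u.
    CLF  V = 0.5*||x - x_goal||^2 drives the robot to the goal (softened by a slack).
    CBF  h = ||x - obs_center||^2 - obs_radius^2 keeps it outside the obstacle."""
    u = cp.Variable(2)
    delta = cp.Variable(nonneg=True)                  # slack on the CLF constraint
    V = 0.5 * float(np.dot(x - x_goal, x - x_goal))
    h = float(np.dot(x - obs_center, x - obs_center)) - obs_radius ** 2
    clf = (x - x_goal) @ u + gamma * V <= delta       # V_dot + gamma*V <= delta
    cbf = 2 * (x - obs_center) @ u + alpha * h >= 0   # h_dot + alpha*h >= 0
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u) + 10.0 * delta), [clf, cbf])
    prob.solve()
    return u.value

print(clf_cbf_qp(x=np.array([0.0, 0.0]), x_goal=np.array([4.0, 0.0]),
                 obs_center=np.array([2.0, 0.2]), obs_radius=0.5))
```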
The rapid increase in the number of cyber-attacks in recent years raises the need for principled methods for defending networks against malicious actors. Deep reinforcement learning (DRL) has emerged as a promising approach for mitigating these attacks. However, while DRL has shown much potential for cyber-defence, numerous challenges must be overcome before DRL can be applied to autonomous cyber-operations (ACO) at scale. Principled methods are required for environments that confront learners with very high-dimensional state spaces, large multi-discrete action spaces, and adversarial learning. Recent works have reported success in solving these problems individually. There have also been impressive engineering efforts towards solving all three for real-time strategy games. However, applying DRL to the full ACO problem remains an open challenge. Here, we survey the relevant DRL literature and conceptualize an idealised ACO-DRL agent. We provide: i.) A summary of the domain properties that define the ACO problem; ii.) A comprehensive evaluation of the extent to which domains used for benchmarking DRL approaches are comparable to ACO; iii.) An overview of state-of-the-art approaches for scaling DRL to domains that confront learners with the curse of dimensionality, and; iv.) A survey and critique of current methods for limiting the exploitability of agents within adversarial settings from the perspective of ACO. We conclude with open research questions that we hope will motivate future directions for researchers and practitioners working on ACO.
An overview of the current status of modeling galaxies by means of numerical simulations is given. After a short description of how galaxies form in hierarchically clustering scenarios, successes and failures of current simulations are demonstrated using three different applications: the morphology of present day galaxies; the appearance of high redshift galaxies; and the nature of the Ly-alpha forest and metal absorption lines. It is shown that current simulations can qualitatively account for many observed features of galaxies. However, the objects which form in these simulations suffer from a strong overcooling problem. Star formation and feedback processes are likely to be indispensable ingredients for a realistic description even of the most basic parameters of a galaxy. The progenitors of today's galaxies are expected to be highly irregular and concentrated, as supported by recent observations. Though they exhibit a velocity dispersion similar to present day L > L^* galaxies, they may be much less massive. The filamentary distribution of the gas provides a natural explanation for Ly-alpha and metal absorption systems. Furthermore, numerical simulations can be used to avoid misinterpretations of observed data and are able to alleviate some apparent contradictions in the size estimates of Ly-alpha absorption systems.
Mobile robot platforms will increasingly be tasked with activities that involve grasping and manipulating objects in open world environments. Affordance understanding provides a robot with means to realise its goals and execute its tasks, e.g. to achieve autonomous navigation in unknown buildings where it has to find doors and ways to open these. In order to get actionable suggestions, robots need to be able to distinguish subtle differences between objects, as they may result in different action sequences: doorknobs require grasp and twist, while handlebars require grasp and push. In this paper, we improve affordance perception for a robot in an open-world setting. Our contribution is threefold: (1) We provide an affordance representation with precise, actionable affordances; (2) We connect this knowledge base to a foundational vision-language model (VLM) and prompt the VLM for a wider variety of new and unseen objects; (3) We apply a human-in-the-loop for corrections on the output of the VLM. The mix of affordance representation, image detection and a human-in-the-loop is effective for a robot to search for objects to achieve its goals. We have demonstrated this in a scenario of finding various doors and the many different ways to open them.
Comparison-Based Optimization (CBO) is an optimization paradigm that assumes only very limited access to the objective function f(x). Despite the growing relevance of CBO to real-world applications, this field has received little attention as compared to the adjacent field of Zeroth-Order Optimization (ZOO). In this work we propose a relatively simple method for converting ZOO algorithms to CBO algorithms, thus greatly enlarging the pool of known algorithms for CBO. Via PyCUTEst, we benchmarked these algorithms against a suite of unconstrained problems. We then used hyperparameter tuning to determine optimal values of the parameters of certain algorithms, and utilized visualization tools such as heat maps and line graphs for purposes of interpretation. All our code is available at https://github.com/ishaslavin/Comparison_Based_Optimization.
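As a hedged illustration of the comparison-based access model (not the specific ZOO-to-CBO conversion proposed here), the sketch below wraps an objective in a comparison oracle and runs a toy comparison-only coordinate search.

```python
import numpy as np

def make_comparison_oracle(f):
    """CBO access model: we may only ask which of two points has the smaller f."""
    return lambda x, y: np.sign(f(x) - f(y))

def comparison_coordinate_search(comp, x0, step=1.0, shrink=0.5, iters=200):
    """Toy CBO method: probe +/- step along each coordinate and keep any move
    the oracle certifies as an improvement; shrink the step when none is."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        improved = False
        for i in range(x.size):
            for s in (+step, -step):
                y = x.copy()
                y[i] += s
                if comp(y, x) < 0:        # f(y) < f(x) according to the oracle
                    x, improved = y, True
                    break
        if not improved:
            step *= shrink
    return x

comp = make_comparison_oracle(lambda z: np.sum((z - np.array([1.0, -2.0])) ** 2))
print(comparison_coordinate_search(comp, x0=np.zeros(2)))   # ~[1, -2]
```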
The invariant cross sections for direct photon production in hadron-hadron collisions are calculated for several initial energies (SPS, ISR, S$p \bar p$S, RHIC, Tevatron, LHC) including initial parton transverse momenta within the formalism of unintegrated parton distributions (UPDF). Kwieci\'nski UPDFs provide very good description of all world data, especially at SPS and ISR energies. Inclusion of the QCD evolution effects and especially their effect on initial parton transverse momenta allowed to solve the long-standing problem of understanding the low energy and low transverse momentum data.
We report the detection and characterization of the transiting sub-Neptune TOI-1759 b, using photometric time-series from TESS and near-infrared spectropolarimetric data from SPIRou on the CFHT. TOI-1759 b orbits a moderately active M0V star with an orbital period of $18.849975\pm0.000006$ d, and we measure a planetary radius and mass of $3.06\pm0.22$ R$_\oplus$ and $6.8\pm2.0$ M$_\oplus$. Radial velocities were extracted from the SPIRou spectra using both the CCF and the LBL methods, optimizing the velocity measurements in the near-infrared domain. We analyzed the broadband SED of the star and the high-resolution SPIRou spectra to constrain the stellar parameters and thus improve the accuracy of the derived planet parameters. An LSD analysis of the SPIRou Stokes $V$ polarized spectra detects Zeeman signatures in TOI-1759. We model the rotational modulation of the magnetic stellar activity using a GP regression with a quasi-periodic covariance function, and find a rotation period of $35.65^{+0.17}_{-0.15}$ d. We reconstruct the large-scale surface magnetic field of the star using ZDI, which gives a predominantly poloidal field with a mean strength of $18\pm4$ G. Finally, we perform a joint Bayesian MCMC analysis of the TESS photometry and SPIRou RVs to optimally constrain the system parameters. At $0.1176\pm0.0013$ au from the star, the planet receives $6.4$ times the bolometric flux incident on Earth, and its equilibrium temperature is estimated at $433\pm14$ K. TOI-1759 b is a likely gas-dominated sub-Neptune with an expected high rate of photoevaporation. Therefore, it is an interesting target to search for neutral hydrogen escape, which may provide important constraints on the planetary formation mechanisms responsible for the observed sub-Neptune radius desert.
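For reference, a quasi-periodic GP covariance of the kind commonly used for rotational activity modelling is sketched below; the parametrization and hyperparameter names are generic and not necessarily those used in the paper.

```python
import numpy as np

def quasi_periodic_kernel(t1, t2, amp, P_rot, lam_evol, gamma):
    """Quasi-periodic GP covariance often used for stellar activity:
    k(tau) = amp^2 * exp( -tau^2 / (2*lam_evol^2)
                          - gamma * sin^2(pi * tau / P_rot) ),
    where tau = |t1 - t2|, P_rot is the rotation period, lam_evol the
    spot-evolution timescale and gamma the periodic-scale parameter."""
    tau = np.abs(np.subtract.outer(t1, t2))
    return amp ** 2 * np.exp(-tau ** 2 / (2 * lam_evol ** 2)
                             - gamma * np.sin(np.pi * tau / P_rot) ** 2)

t = np.linspace(0, 100, 5)
K = quasi_periodic_kernel(t, t, amp=3.0, P_rot=35.65, lam_evol=100.0, gamma=2.0)
print(K.shape, np.allclose(K, K.T))   # (5, 5) True
```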
Observation of topological phases beyond two dimensions (2D) has been an open challenge for ultracold atoms. Here, we realize for the first time a 3D spin-orbit coupled nodal-line semimetal in an optical lattice and observe the bulk line nodes with ultracold fermions. The realized topological semimetal exhibits an emergent magnetic group symmetry. This allows us to detect the nodal lines by effectively reconstructing the 3D topological band from a series of measurements of integrated spin textures, which precisely render the spin textures on the parameter-tuned magnetic-group-symmetric planes. The detection technique can be generally applied to explore 3D topological states of similar symmetries. Furthermore, we observe the band inversion lines from topological quench dynamics, which are bulk counterparts of the Fermi arc states and connect the Dirac points, reconfirming the realized topological band. Our results demonstrate the first approach to effectively observing 3D band topology, and open the way to probing exotic topological physics for ultracold atoms in high dimensions.
In the paper we extend the spectral invariance of pseudodifferential operators acting on (non-weighted) classical modulation spaces to allow the Lebesgue exponents to be smaller than one. These spaces occur naturally in approximation theory and data compression problems.
This is a draft of a monograph to appear in the Springer series "Encyclopaedia of Mathematical Sciences", subseries "Invariant Theory and Algebraic Transformation Groups". The subject is homogeneous spaces of algebraic groups and their equivariant embeddings. The style of exposition is intermediate between survey and detailed monograph: some results are supplied with detailed proofs, while the others are cited without proofs but with references to the original papers. The content is briefly as follows. Starting with basic properties of algebraic homogeneous spaces and related objects, such as induced representations, we focus attention on homogeneous spaces of reductive groups and introduce two important invariants, called complexity and rank. For the embedding theory it is important that homogeneous spaces of small complexity admit a transparent combinatorial description of their equivariant embeddings. We consider the Luna-Vust theory of equivariant embeddings, paying special attention to the case of complexity not greater than one. A special chapter is devoted to spherical varieties (= embeddings of homogeneous spaces of complexity zero), due to their particular importance and ubiquity. A relation between equivariant embedding theory and equivariant symplectic geometry is also discussed. The book contains several classification results (homogeneous spaces of small complexity, etc). The text presented here is not in a final form, and the author will be very grateful to any interested reader for comments and/or remarks, which may be sent to the author by email.
As renewable energy sources replace traditional power sources (such as thermal generators), uncertainty grows while there are fewer controllable units. To reduce operational risks and avoid frequent real-time emergency controls, a preparatory schedule of renewable generation curtailment is required. This paper proposes a novel two-stage robust generation dispatch (RGD) model, where the preparatory curtailment schedule is optimized in the pre-dispatch stage. The curtailment schedule then influences the variation range of real-time renewable power output, resulting in a decision-dependent uncertainty (DDU) set. In the re-dispatch stage, the controllable units adjust their outputs within the reserve capacities to maintain power balance. To overcome the difficulty of solving the RGD with DDU, an adaptive column-and-constraint generation (AC\&CG) algorithm is developed. We prove that the proposed algorithm generates the optimal solution in a finite number of iterations. Numerical examples show the advantages of the proposed model and algorithm, and validate their practicability and scalability.
Speech enhancement tasks have seen significant improvements with the advance of deep learning technology, but at the cost of increased computational complexity. In this study, we propose an adaptive boosting approach to learning locality sensitive hash codes, which represent audio spectra efficiently. We use the learned hash codes for single-channel speech denoising tasks as an alternative to a complex machine learning model, particularly to address resource-constrained environments. Our adaptive boosting algorithm learns simple logistic regressors as the weak learners. Once trained, their binary classification results transform each spectrum of test noisy speech into a bit string. Simple bitwise operations calculate the Hamming distance to find the K-nearest matching frames in the dictionary of training noisy speech spectra, whose associated ideal binary masks are averaged to estimate the denoising mask for that test mixture. Our proposed learning algorithm differs from AdaBoost in the sense that the projections are trained to minimize the distances between the self-similarity matrix of the hash codes and that of the original spectra, rather than the misclassification rate. We evaluate our discriminative hash codes on the TIMIT corpus with various noise types, and show performance comparable to deep learning methods in terms of denoising quality and complexity.
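The bitwise matching step described in this abstract can be illustrated with a minimal sketch (shapes, names, and the toy data are assumptions, not the authors' implementation): compute Hamming distances between a test frame's hash code and a dictionary of training hash codes, then average the ideal binary masks of the K nearest frames.

```python
import numpy as np

def denoising_mask(test_code, train_codes, train_ibms, K=5):
    """Estimate a denoising mask for one test frame.

    test_code   : (B,) binary hash code of the test noisy spectrum
    train_codes : (N, B) binary hash codes of the training noisy spectra
    train_ibms  : (N, F) ideal binary masks associated with the training frames
    """
    # Hamming distance = number of differing bits
    dists = np.count_nonzero(train_codes != test_code, axis=1)
    nearest = np.argsort(dists)[:K]          # indices of the K closest frames
    return train_ibms[nearest].mean(axis=0)  # average their ideal binary masks

# toy usage with random bits/masks (purely illustrative)
rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(100, 32))
ibms = rng.integers(0, 2, size=(100, 257)).astype(float)
mask = denoising_mask(codes[0], codes, ibms, K=5)
```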
In this paper we prove that the nonzero elements of a finite field with odd characteristic can be partitioned into pairs with a prescribed difference (possibly with some alternatives) in each pair. Both algebraic and topological approaches to such problems are considered. We also give some generalizations of these results to packing translates in a finite or infinite field, and give a short proof of a particular case of the Eliahou--Kervaire--Plaigne theorem about sum-sets.
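For a concrete feel of the statement, a small brute-force check in a prime field (my own toy sketch, not the paper's argument; it pairs elements with the prescribed differences taken exactly, without the "alternatives") might look like this:

```python
def pair_with_differences(p, diffs):
    """Try to partition {1,...,p-1} into pairs whose differences (mod p)
    are exactly the prescribed values in `diffs` (one pair per entry).
    Brute-force backtracking; only sensible for very small odd primes p.
    """
    assert len(diffs) == (p - 1) // 2
    elements = set(range(1, p))

    def backtrack(i, remaining):
        if i == len(diffs):
            return []
        d = diffs[i]
        for a in sorted(remaining):
            b = (a - d) % p                 # candidate partner with a - b = d (mod p)
            if b != a and b in remaining:
                rest = backtrack(i + 1, remaining - {a, b})
                if rest is not None:
                    return [(a, b)] + rest
        return None

    return backtrack(0, elements)

# e.g. for p = 7, prescribe differences 1, 2, 3 for the three pairs
print(pair_with_differences(7, [1, 2, 3]))   # -> [(3, 2), (6, 4), (1, 5)]
```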
Underwater acoustic cameras are high-potential devices for many applications in ecology, notably for fisheries management and monitoring. However, how to turn such data into high-value information without an operator having to read through the entire dataset is still a challenge. Moreover, the analysis of acoustic imaging, due to its low signal-to-noise ratio, is a perfect training ground for experimenting with new approaches, especially concerning deep learning techniques. We present here a novel approach that takes advantage of both CNN (Convolutional Neural Network) and classical CV (Computer Vision) techniques, able to detect a generic class "fish" in acoustic video streams. The pipeline pre-treats the acoustic images to extract two features, in order to localise the signals and improve the detection performance. To assess the performance from an ecological point of view, we also propose a two-step validation: one to validate the results of the training, and one to test the method in a real-world scenario. The YOLOv3-based model was trained with data of fish from multiple species recorded by two common acoustic cameras, DIDSON and ARIS, including species of high ecological interest such as Atlantic salmon and European eels. The model we developed provides satisfying results, detecting almost 80% of fish and minimizing the false positive rate; however, the model is much less efficient for eel detection on ARIS videos. This first CNN pipeline for fish monitoring exploiting video data from two models of acoustic cameras satisfies most of the required features. Many challenges remain, such as the automation of fish species identification through a multiclass model. Nevertheless, the results point to a new solution for dealing with complex data, such as sonar data, which can also be reapplied in other cases where the signal-to-noise ratio is a challenge.
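As a rough illustration of the kind of pre-treatment such a pipeline might apply before detection (a generic assumption on my part; the paper's exact two features are not specified here), one could subtract the slowly varying acoustic background and stack the result with the raw frame as a second channel:

```python
import cv2
import numpy as np

# Generic pre-treatment sketch: suppress the static acoustic background and
# keep a denoised moving-echo map as an extra channel for the detector.
subtractor = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=16)

def pretreat(frame_gray):
    """frame_gray: 2D uint8 acoustic image (one video frame)."""
    foreground = subtractor.apply(frame_gray)   # mask of moving echoes
    denoised = cv2.medianBlur(foreground, 5)    # remove speckle noise
    return np.dstack([frame_gray, denoised])    # two feature channels

# toy usage on random frames
for _ in range(10):
    frame = (np.random.rand(128, 256) * 255).astype(np.uint8)
    features = pretreat(frame)                  # shape (128, 256, 2)
```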
Surely we want solid foundations. What kind of castle can we build on sand? What is the point of devoting effort to balconies and minarets, if the foundation may be so weak as to allow the structure to collapse under its own weight? We want our foundations set on bedrock, designed to last for generations. Who would want an architect who cannot certify the soundness of the foundations of his buildings?
Exclusive production of the axial-vector $f_{1}(1285)$ meson in proton-proton collisions via pomeron-pomeron fusion within the tensor-pomeron approach is discussed. Two ways to construct the pomeron-pomeron-$f_{1}$ coupling are presented. We adjust the parameters of our model to the WA102 experimental data and compare with predictions of the Sakai-Sugimoto model. Predictions for LHC experiments are given.
We consider the problem of designing models to leverage a recently introduced approximate model averaging technique called dropout. We define a simple new model called maxout (so named because its output is the max of a set of inputs, and because it is a natural companion to dropout) designed both to facilitate optimization by dropout and to improve the accuracy of dropout's fast approximate model averaging technique. We empirically verify that the model successfully accomplishes both of these tasks. We use maxout and dropout to demonstrate state-of-the-art classification performance on four benchmark datasets: MNIST, CIFAR-10, CIFAR-100, and SVHN.
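The "max of a set of inputs" description translates directly into a max over several affine pre-activations per output unit. A minimal NumPy sketch of a maxout layer (shapes and variable names are illustrative assumptions, not the authors' code):

```python
import numpy as np

def maxout(x, W, b):
    """Maxout unit: h_j = max_k (x . W[:, j, k] + b[j, k]).

    x : (d,) input vector
    W : (d, m, k) weights for m output units, each with k linear pieces
    b : (m, k) biases
    """
    z = np.einsum('d,dmk->mk', x, W) + b   # affine pre-activations, shape (m, k)
    return z.max(axis=1)                   # take the max over the k pieces

rng = np.random.default_rng(0)
x = rng.normal(size=8)
W = rng.normal(size=(8, 4, 3))   # 4 maxout units, 3 linear pieces each
b = rng.normal(size=(4, 3))
h = maxout(x, W, b)              # shape (4,)
```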
For $0\le \alpha <1$ and $\beta>2$, we consider a linear mod 1 transformation on the unit interval, $x\mapsto\beta x+\alpha$ (${\rm mod}\ 1$), and prove that it satisfies the level-2 large deviation principle with the unique measure of maximal entropy. For the proof, we use the density of periodic measures and Hofbauer's Markov diagram.
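The map itself is simple to experiment with numerically; the following sketch (purely illustrative, unrelated to the proof) iterates it for given parameters:

```python
import numpy as np

def linear_mod1_orbit(x0, alpha, beta, n):
    """Iterate x -> beta*x + alpha (mod 1) and return the first n points."""
    orbit = np.empty(n)
    x = x0
    for i in range(n):
        orbit[i] = x
        x = (beta * x + alpha) % 1.0
    return orbit

# example with alpha = 0.3 and beta = 2.5 (satisfying 0 <= alpha < 1, beta > 2)
orbit = linear_mod1_orbit(0.1, 0.3, 2.5, 10)
```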
Deep surveys in many wavebands have shown that the rate at which stars were forming was at least a factor of 10 higher at z > 1 than today. Heavy elements (metals) are produced by stars, and the star formation history deduced from these surveys implies that a significant fraction of the metals in the universe today should already exist at z~2-3. However, only 10% of the total metals expected to exist at this epoch have so far been accounted for (in DLAs and the Lyman forest). In this paper, we use the results of submillimetre surveys of the local and high-redshift universe to show that there was much more dust in galaxies in the past. We find that a large proportion of the missing metals are traced by this dust, bringing the metal budget implied by the star formation history into agreement with observations. We also show that the observed distribution of dust masses at high redshift can be reproduced remarkably well by a simple model for the evolution of dust in spheroids, suggesting that the descendants of the dusty galaxies found in the deep submm surveys are the relatively dust-free spiral bulges and ellipticals in the universe today.
Using RR Lyrae stars in the Gaia Data Release 2 and Pan-STARRS1 we study the properties of the Pisces Overdensity, a diffuse sub-structure in the outer halo of the Milky Way. We show that along the line of sight, Pisces appears as a broad and long plume of stars stretching from 40 to 110 kpc with a steep distance gradient. On the sky Pisces's elongated shape is aligned with the Magellanic Stream. Using follow-up VLT FORS2 spectroscopy, we have measured the velocity distribution of the Pisces candidate member stars and have shown it to be as broad as that of the Galactic halo but offset to negative velocities. Using a suite of numerical simulations, we demonstrate that the structure has many properties in common with the predicted behaviour of the Magellanic wake, i.e. the Galactic halo overdensity induced by the in-fall of the Magellanic Clouds.
In this work we present a model of dark matter based on the scalar-tensor theory of gravity. With this scalar field dark matter model we study the non-linear evolution of the large-scale structures in the universe. The equations that govern the evolution of the scale factor of the universe are derived together with the appropriate Newtonian equations to follow the non-linear evolution of the structures. Results are given in terms of the power spectrum, which gives quantitative information on large-scale structure formation. The initial conditions we have used are consistent with the so-called concordance $\Lambda$CDM model.
The issue of so-called maximal regularity is discussed within a Hilbert space framework for a class of evolutionary equations. Viewing evolutionary equations as sums of two unbounded operators, showing maximal regularity amounts to establishing that the operator sum considered with its natural domain is already closed. For this we use structural constraints on the coefficients rather than semi-group strategies or sesquilinear form methods, which would be difficult to come by for our general problem class. Our approach, although limited to the Hilbert space case, complements known strategies for approaching maximal regularity and extends them in a different direction. The abstract findings are illustrated by reconsidering some known maximal regularity results within the framework presented.
Searching for words in Sanskrit E-text is a problem that is accompanied by complexities introduced by features of Sanskrit such as euphonic conjunctions or sandhis. A word could occur in an E-text in a transformed form owing to the operation of rules of sandhi. Simple word search would not yield these transformed forms of the word. Further, there is no search engine in the literature that can comprehensively search for words in Sanskrit E-texts taking euphonic conjunctions into account. This work presents an optimal binary representational schema for letters of the Sanskrit alphabet along with algorithms to efficiently process the sandhi rules of Sanskrit grammar. The work further presents an algorithm that uses the sandhi processing algorithm to perform a comprehensive word search on E-text.
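The idea of a sandhi-aware search can be sketched in a few lines (a heavily simplified toy of my own in IAST transliteration; the actual system processes the full rule set of Sanskrit grammar over an optimized binary encoding of the alphabet): generate the possible fused forms of the target word's boundary and search for any of them.

```python
import re

# Hypothetical, highly simplified sandhi-style rules: each maps
# (final letter of the preceding word, initial letter of the target word)
# to the single fused letter that replaces both at the junction.
JUNCTION_RULES = {('a', 'a'): 'ā', ('a', 'i'): 'e'}

def candidate_patterns(word):
    """Regex alternatives matching the word as-is or with its initial letter
    fused with a preceding final letter according to the toy rules."""
    patterns = [re.escape(word)]
    for (final, initial), fused in JUNCTION_RULES.items():
        if word.startswith(initial):
            patterns.append(re.escape(fused + word[1:]))
    return '|'.join(patterns)

def sandhi_search(word, text):
    return [m.start() for m in re.finditer(candidate_patterns(word), text)]

# searching for "iti" also matches inside the fused form "rāmeti" (rāma + iti)
print(sandhi_search('iti', 'rāmeti vanam gacchati'))   # -> [3]
```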
We prove that rational and 1-rational singularities of complex spaces are stable under taking quotients by holomorphic actions of reductive and compact Lie groups. This extends a result of Boutot to the analytic category and yields a refinement of his result in the algebraic category. As one of the main technical tools, vanishing theorems for cohomology groups with support on fibres of resolutions are proved.
We consider eternal inflation in hilltop-type inflation models, favored by current data, in which the scalar field in inflation rolls off a local maximum of the potential. Unlike chaotic or plateau-type inflation models, in hilltop inflation the region of field space which supports eternal inflation is finite, and the expansion rate $H_{EI}$ during eternal inflation is almost exactly the same as the expansion rate $H_*$ during slow-roll inflation. Therefore, in any given Hubble volume, there is a finite and calculable expectation value for the lifetime of the "eternal" inflation phase, during which quantum fluctuations dominate over classical field evolution. We show that despite this, inflation in hilltop models is nonetheless eternal in the sense that the volume of the spacetime at any finite time is exponentially dominated by regions which continue to inflate. This is true regardless of the energy scale of inflation, and eternal inflation is supported for inflation at arbitrarily low energy scales.
We summarize recent theoretical results for the signatures of strongly correlated ultra-cold fermions in optical lattices. In particular, we focus on: collective mode calculations, where a sharp decrease in collective mode frequency is predicted at the onset of the Mott metal-insulator transition; and correlation functions at finite temperature, where we employ a new exact method that applies the stochastic gauge technique with a Gaussian operator basis.
In a previous study it was proposed that the Galactic dark matter being detected by gravitational microlensing experiments such as MACHO may reside in a population of dim halo globular clusters comprising mostly or entirely low-mass stars just above the hydrogen-burning limit. It was shown that, for the case of a standard isothermal halo, the scenario is consistent not only with MACHO observations but also with cluster dynamical constraints and number-count limits imposed by 20 Hubble Space Telescope (HST) fields. The present work extends the original study by considering the dependence of the results on the halo model, and by increasing the sample of HST fields to 51 (including the Hubble Deep Field and Groth Strip fields). The model dependence of the results is tested using the same reference power-law halo models employed by the MACHO team. For the unclustered scenario, HST counts imply a model-dependent halo fraction of at most 0.5-1.1% (95% confidence), well below the inferred MACHO fraction. For the cluster scenario, all the halo models permit a range of cluster masses and radii to satisfy HST, MACHO and dynamical constraints. Whilst the strong HST limits on the unclustered scenario imply that at least 95% of halo stars must reside in clusters at present, this limit is weakened if the stars which have escaped from clusters retain a degree of clumpiness in their distribution.
We introduce a new representation of an integer spin $S$ via bosonic operators which is useful in describing the paramagnetic phase and transitions to magnetically ordered phases in magnetic systems with large single-ion easy-plane anisotropy $D$. Considering the exchange interaction between spins as a perturbation and using the diagram technique, we derive the elementary excitation spectrum and the ground state energy in the third order of perturbation theory. In the special case of S=1 we obtain these expressions also using simpler spin representations, some of which were introduced before. Comparison with results of previous numerical studies of 2D systems with S=1 demonstrates that our approach works better than other analytical methods previously applied to such systems. We apply our results to the analysis of the elementary excitation spectrum obtained experimentally in $\rm NiCl_2$-$\rm 4SC(NH_2)_2$ (DTN). It is demonstrated that the set of model parameters (exchange constants and $D$) which has been used for DTN so far describes the experimentally obtained spectrum poorly. A new set of parameters is proposed, with which we fit the spectrum and the values of the two critical fields of DTN.
We present COMO, a real-time monocular mapping and odometry system that encodes dense geometry via a compact set of 3D anchor points. Decoding anchor point projections into dense geometry via per-keyframe depth covariance functions guarantees that depth maps are joined together at visible anchor points. The representation enables joint optimization of camera poses and dense geometry, intrinsic 3D consistency, and efficient second-order inference. To maintain a compact yet expressive map, we introduce a frontend that leverages the covariance function for tracking and initializing potentially visually indistinct 3D points across frames. Altogether, we introduce a real-time system capable of estimating accurate poses and consistent geometry.
A recent submillimeter line survey of Orion KL claimed detection of SiH. This paper reports on GBT observations of the 5.7 GHz Lambda-doubling transitions of SiH in Orion. Many recombination lines, including C164-delta, are seen, but SiH is not detected. The nondetection corresponds to an upper limit of 1.5 x 10^15 cm^-2 (4 sigma) for the beam-averaged column density of SiH. This suggests that the fractional abundance of SiH in the extended ridge is no more than twice that in the hot core.
Motivated by recent progress in the holographic descriptions of the Kerr and Reissner-Nordstr{\o}m (RN) black holes, we explore the hidden conformal symmetry of the nonextremal uplifted 5D RN black hole by studying the near-horizon wave equation of a massless scalar field propagating in this background. As in the Kerr black hole case, this hidden symmetry is broken by the periodicity of the associated angle coordinate in the background geometry, but the results nevertheless support the dual CFT description of nonextremal RN black holes. The duality is further supported by the matching of the entropies and absorption cross sections calculated from both the CFT and gravity sides.
Oscillations of the Dirac neutrinos of three generations in vacuum are considered with allowance made for the effect of the CP-violating leptonic phase (analogue of the quark CP phase) in the lepton mixing matrix. The general formulas for the probabilities of neutrino transition from one sort to another in oscillations are obtained as functions of three mixing angles and the CP phase. It is found that the leptonic CP phase can, in principle, be reconstructed by measuring the oscillation-averaged probabilities of neutrino transition from one sort to another. The manifestation of the CP phase as a deviation of the probabilities of direct processes from those of inverse processes is an effect that is practically unobservable as yet.
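For orientation, the standard three-flavour vacuum oscillation probability (written here in one common convention in terms of the mixing matrix $U$, which carries the three mixing angles and the CP phase; this is the general textbook form, not the paper's explicit parametrization) reads
$$
P(\nu_\alpha\to\nu_\beta)=\delta_{\alpha\beta}
-4\sum_{i>j}\mathrm{Re}\!\left(U^*_{\alpha i}U_{\beta i}U_{\alpha j}U^*_{\beta j}\right)\sin^2\!\frac{\Delta m^2_{ij}L}{4E}
+2\sum_{i>j}\mathrm{Im}\!\left(U^*_{\alpha i}U_{\beta i}U_{\alpha j}U^*_{\beta j}\right)\sin\frac{\Delta m^2_{ij}L}{2E},
$$
with $\Delta m^2_{ij}=m_i^2-m_j^2$. The CP phase enters the imaginary parts, which change sign between a process and its inverse, consistent with the effect discussed in the abstract.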
Local formulations of quantum field theory provide a powerful framework in which non-perturbative aspects of QCD can be analysed. Here we report on how this approach can be used to elucidate the general analytic features of QCD propagators, and why this is relevant for understanding confinement.
The electroencephalogram (EEG) provides a non-invasive, minimally restrictive, and relatively low-cost measure of mesoscale brain dynamics with high temporal resolution. Although signals recorded in parallel by multiple, near-adjacent EEG scalp electrode channels are highly correlated and combine signals from many different sources, biological and non-biological, independent component analysis (ICA) has been shown to isolate the various source generator processes underlying those recordings. Independent components (ICs) found by ICA decomposition can be manually inspected, selected, and interpreted, but doing so requires both time and practice, as ICs have no particular order or intrinsic interpretation and therefore require further study of their properties. Alternatively, sufficiently accurate automated IC classifiers can be used to classify ICs into broad source categories, speeding the analysis of EEG studies with many subjects and enabling the use of ICA decomposition in near-real-time applications. While many such classifiers have been proposed recently, this work presents the ICLabel project, comprising (1) an IC dataset containing spatiotemporal measures for over 200,000 ICs from more than 6,000 EEG recordings, (2) a website for collecting crowdsourced IC labels and educating EEG researchers and practitioners about IC interpretation, and (3) the automated ICLabel classifier. The classifier improves upon existing methods in two ways: by improving the accuracy of the computed label estimates and by enhancing its computational efficiency. The ICLabel classifier outperforms or performs comparably to the previous best publicly available method for all measured IC categories while computing those labels ten times faster than that classifier, as shown in a rigorous comparison against all other publicly available EEG IC classifiers.
We study three estimators for the interval censoring case 2 problem, a histogram-type estimator, proposed in Birg\'e (1999), the maximum likelihood estimator (MLE) and the smoothed MLE, using a smoothing kernel. Our focus is on the asymptotic distribution of the estimators at a fixed point. The estimators are compared in a simulation study.
We study the antipodal subsets of the full flag manifolds $\mathcal{F}(\mathbb{R}^d)$. As a consequence, for natural numbers $d \ge 2$ such that $d\ne 5$ and $d \not\equiv 0,\pm1 \mod 8$, we show that Borel Anosov subgroups of ${\rm SL}(d,\mathbb{R})$ are virtually isomorphic to either a free group or the fundamental group of a closed hyperbolic surface. This gives a partial answer to a question asked by Andr\'es Sambarino. Furthermore, we show restrictions on the hyperbolic spaces admitting uniformly regular quasi-isometric embeddings into the symmetric space $X_d$ of ${\rm SL}(d,\mathbb{R})$.
The measurement data from the boundary layer measurement masts in Hamburg-Billwerder (Germany) and Hyytiala (Finland) make thermals visible. For this, the temperature and humidity data must first be transformed to be independent of altitude. The subsequent visualisation of this data shows an impressive cross-section of the lower atmosphere with individual updrafts, but also with air bubbles detaching from the ground or falling from higher altitudes.
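One standard way to make temperature profiles comparable across altitude (an assumption of mine; the abstract does not specify the exact transformation used) is to convert measured temperature to potential temperature:

```python
def potential_temperature(T_kelvin, p_hpa, p0_hpa=1000.0):
    """Potential temperature theta = T * (p0 / p)**(R_d / c_p):
    the temperature an air parcel would have if brought adiabatically
    to the reference pressure p0."""
    R_d_over_cp = 287.05 / 1004.0   # dry-air gas constant / specific heat
    return T_kelvin * (p0_hpa / p_hpa) ** R_d_over_cp

# a parcel at 290 K and 950 hPa
theta = potential_temperature(290.0, 950.0)   # ~294.3 K
```

An analogous mixing-ratio-type quantity would play the same role for the humidity data.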
Gate-based quantum programming languages are ubiquitous but measurement-based languages currently exist only on paper. This work introduces MCBeth, a quantum programming language which allows programmers to directly represent, program, and simulate measurement-based and cluster state computation by building upon the measurement calculus. While MCBeth programs are meant to be executed directly on hardware, to take advantage of current machines we also provide a compiler to gate-based instructions. We argue that there are clear advantages to measurement-based quantum computation compared to gate-based when it comes to implementing common quantum algorithms and distributed quantum computation.
Complex soft matter systems can be efficiently studied with the help of adaptive resolution simulation methods, concurrently employing two levels of resolution in different regions of the simulation domain. The non-matching properties of high- and low-resolution models, however, lead to thermodynamic imbalances between the system's subdomains. Such inhomogeneities can be healed by appropriate compensation forces, whose calculation requires nontrivial iterative procedures. In this work we employ the recently developed Hamiltonian Adaptive Resolution Simulation method to perform Monte Carlo simulations of a binary mixture, and propose an efficient scheme, based on Kirkwood Thermodynamic Integration, to regulate the thermodynamic balance of multi-component systems.
In this paper, a new type of colouring called Johan colouring is introduced. This colouring concept is motivated by the recently introduced invariant called the rainbow neighbourhood number of a graph. The study considers maximal colouring, as opposed to minimum colouring. An upper bound for connected graphs is established, and explicit results are presented for cycles, complete graphs, wheel graphs and complete $l$-partite graphs.
In 1738, the King of Naples and future King of Spain, Carlos III, commissioned the Spanish military engineer Roque Joaqu\'in de Alcubierre to begin the excavations of the ruins of the ancient Roman city of Pompeii and its surroundings, buried by the terrible eruption of Vesuvius in AD 79. Since that time, archaeologists have brought to light wonderful treasures found among the ruins. Among them, the Sator Square is one of the most peculiar, apparently simple but mysterious. Supernatural and medicinal powers have been attributed to this object, and its use was widespread during the Middle Ages. Studies attempting to explain its origin and meaning have been varied. There are theories that relate it to religion, the occult, medicine and music. However, no explanation has been convincing beyond pseudo-scientific sensationalism. In this study, the author intends to strip the Sator Square of its mystical character and suggests considering it as a simple palindrome or word game with certain symmetrical properties. However, these properties are not exclusive to the Sator Square but are present in various mathematical and geometric objects.
Muon spin rotation ({\mu}+SR) measurements of square-root second moments of local magnetic fields {\sigma} in superconducting mixed states, as published for oriented crystals and powder samples of YBa2Cu3O7-{\delta} ({\delta} {\approx} 0.05), YBa2Cu4O8 and La2-xSrxCuO4 (x ~ 0.15-0.17), are subjected to comparative analysis for superconducting gap symmetry. For oriented crystals it is shown that anomalous dependences of {\sigma} on temperature T and applied field H, as-measured and extracted a- and b-axial components, are attributable to fluxon depinning and disorder that obscure the intrinsic character of the superconducting penetration depth. Random averages derived from oriented-crystal data differ markedly from corresponding non-oriented powders, owing to weaker influence of pinning in high-quality crystals. Related indicators for pinning perturbations such as non-monotonic H dependence of {\sigma}, irreproducible data and strong H dependence of apparent transition temperatures are also evident. Strong intrinsic pinning suppresses thermal anomalies in c-axis components of {\sigma}, which reflect nodeless gap symmetries in YBa2Cu3O7-{\delta} and YBa2Cu4O8. For YBa2Cu3O7-{\delta}, the crystal (a-b components, corrected for depinning) and powder data all reflect a nodeless gap (however, a-b symmetries remain unresolved for crystalline YBa2Cu4O8 and La1.83Sr0.17CuO4). Inconsistencies contained in multiple and noded gap interpretations of crystal data, and observed differences between bulk {\mu}+SR and surface-sensitive measurements are discussed.
We report on an XMM-Newton observation of the accreting millisecond pulsar IGR J17511-3057. Pulsations at 244.8339512(1) Hz are observed with an RMS pulsed fraction of 14.4(3)%. A precise solution for the P_orb=12487.51(2) s binary system is derived. The measured mass function indicates a main sequence companion with a mass between 0.15 and 0.44 Msun. The XMM-Newton spectrum of the source can be modelled by at least three components: multicoloured disc emission, thermal emission from the NS surface and thermal Comptonization emission. A spectral fit of the XMM-Newton data and of the RXTE data, taken in a simultaneous temporal window, constrains the Comptonization parameters: the electron temperature, kT_e=51(+6,-4) keV, is rather high, while the optical depth (tau=1.34(+0.03,-0.06)) is moderate. The energy dependence of the pulsed fraction supports the interpretation of the cooler thermal component as coming from the accretion disc, and indicates that the Comptonizing plasma surrounds the hot spots on the NS surface, which provide the seed photons. Signatures of reflection, such as a broadened iron K-alpha emission line and a Compton hump at ca. 30 keV, are also detected. From the smearing of the reflection component we derive an inner disc radius of >~ 40 km for a 1.4 Msun neutron star, and an inclination between 38{\deg} and 68{\deg}. XMM-Newton also observed two type-I X-ray bursts, probably ignited in a nearly pure helium environment. No photospheric radius expansion is observed, leading to an upper limit of 10 kpc on the distance to the source. A lower limit of 6.5 kpc can also be set if it is assumed that emission during the decaying part of the burst involves the whole neutron star surface. Pulsations observed during the burst decay are compatible with being phase locked with, and have an amplitude similar to, the pre-burst pulsations.
When the behaviour of the singularities, which are used to represent masses, charges or currents in exact solutions to the field equations of the Hermitian theory of relativity, is restricted by a no-jump rule, conditions are obtained that determine the relative positions of masses, charges and currents. Due to these conditions, the Hermitian theory of relativity appears to provide a unified description of gravitational, colour and electromagnetic forces.
We prove several results regarding the simplicity of germs and multigerms obtained via the operations of augmentation, simultaneous augmentation and concatenation, and generalised concatenation. We also give some results in the case where one of the branches is a non-stable primitive germ. Using our results we obtain a list which includes all simple multigerms from $\mathbb C^3$ to $\mathbb C^3$.
Very soon a new generation of reactor and accelerator neutrino oscillation experiments - Double Chooz, Daya Bay, RENO and T2K - will search for oscillation signals generated by the mixing parameter theta_13. Knowledge of this angle is a fundamental milestone for optimizing further experiments aimed at detecting CP violation in the neutrino sector. Leptonic CP violation is a key phenomenon that has profound implications in particle physics and cosmology, but it is clearly out of reach for the aforementioned experiments. Since the late 1990s, a world-wide activity has been in progress to design facilities that can access CP violation in neutrino oscillations and perform high-precision measurements of the lepton counterpart of the Cabibbo-Kobayashi-Maskawa matrix. In this paper the status of these studies is summarized, focusing on the options that are best suited to exploit existing European facilities (notably CERN and the INFN Gran Sasso Laboratories) or technologies in which Europe has world leadership. Similar considerations are developed for more exotic scenarios - beyond the standard framework of flavor oscillation among three active neutrinos - that might become plausible in the event of anomalous results from post-MiniBooNE experiments or the CNGS.
The concept of the sphere of influence of a planet is useful both in the context of impact monitoring of asteroids with the Earth and in the design of interplanetary trajectories for spacecraft. After reviewing the classical results, we propose a new definition for this sphere that depends on the position and velocity of the small body for given values of the Jacobi constant $C$. Here we compare the orbit of the small body obtained in the framework of the circular restricted three-body problem with orbits obtained by patching two-body solutions. Our definition is based on an optimisation process, minimizing a suitable target function with respect to the assumed radius of the sphere of influence. For different values of $C$ we present the results in the planar case: we show the values of the selected radius as a function of two angles characterising the orbit. In this case, we also produce a database of radii of the sphere of influence for several initial conditions, allowing for interpolation.
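The selection step itself is a one-dimensional minimisation over the candidate radius. The sketch below only demonstrates that step: the discrepancy function is a synthetic stand-in (a real implementation would propagate both the patched two-body orbit and the restricted three-body orbit and measure how far they depart), and all names and bounds are my own assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def discrepancy(radius):
    """Stand-in for the target function: in practice this would quantify the
    mismatch between the patched two-body trajectory (switching dynamical
    models at `radius`) and the full restricted three-body trajectory for
    the same initial conditions.  Here it is a smooth synthetic function."""
    return (np.log(radius) - np.log(0.01)) ** 2 + 0.1 * radius

# choose the radius (in units of the primaries' separation) that minimises it
result = minimize_scalar(discrepancy, bounds=(1e-4, 0.5), method='bounded')
r_soi = result.x
```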
We introduce a low dimensional function of the site frequency spectrum that is tailor-made for distinguishing coalescent models with multiple mergers from Kingman coalescent models with population growth, and use this function to construct a hypothesis test between these model classes. The null and alternative sampling distributions of the statistic are intractable, but its low dimensionality renders them amenable to Monte Carlo estimation. We construct kernel density estimates of the sampling distributions based on simulated data, and show that the resulting hypothesis test dramatically improves on the statistical power of a current state-of-the-art method. A key reason for this improvement is the use of multi-locus data, in particular averaging observed site frequency spectra across unlinked loci to reduce sampling variance. We also demonstrate the robustness of our method to nuisance and tuning parameters. Finally we show that the same kernel density estimates can be used to conduct parameter estimation, and argue that our method is readily generalisable for applications in model selection, parameter inference and experimental design.
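The Monte Carlo / kernel-density ingredient of such a test can be sketched as follows (a toy of my own with synthetic Gaussian "statistics"; the authors' actual statistic, simulators and decision rule are not reproduced here): estimate the sampling density of the statistic under each model class from simulated data, then compare the two densities at the observed value.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Placeholder draws standing in for the low-dimensional SFS statistic computed
# from data simulated under each model class (Kingman-with-growth vs.
# multiple-merger); shapes and distributions are purely illustrative.
rng = np.random.default_rng(1)
stat_null = rng.normal(0.0, 1.0, size=(2, 5000))   # 2-dimensional statistic
stat_alt = rng.normal(0.8, 1.2, size=(2, 5000))

kde_null = gaussian_kde(stat_null)   # kernel estimate of the null density
kde_alt = gaussian_kde(stat_alt)     # kernel estimate of the alternative density

def test_statistic(observed):
    """Log-ratio of the two estimated densities at the observed statistic."""
    return np.log(kde_alt(observed)) - np.log(kde_null(observed))

obs = np.array([[0.5], [0.4]])       # observed statistic (one column per datum)
print(test_statistic(obs))
```

A rejection threshold would then be calibrated from the null simulations, and the same density estimates can be reused for likelihood-based parameter inference, as the abstract notes.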