A new approach to p-adic and adelic strings, which takes into account that not only the world sheet but also Minkowski space-time and string momenta can be p-adic and adelic, is formulated. p-Adic and adelic string amplitudes are considered within Feynman's path-integral formalism. The adelic Veneziano amplitude is calculated. A discreteness of string momenta is obtained, and the adelic coupling constant is found to equal unity.
In previous works, we proposed a method to jointly characterize the self-similarity and anisotropy properties of a large class of self-similar Gaussian random fields. We provide here a mathematical analysis of our approach, proving that the sharpest way of measuring smoothness is related to these anisotropies and thus to the geometry of these fields.
Treating Coulomb scattering of two free electrons in a stationary approach, we explore the momentum and spin entanglement created by the interaction. We show that a particular discretisation provides an estimate of the von Neumann entropy of the one-electron reduced density matrix from the experimentally accessible Shannon entropy. For spinless distinguishable electrons the entropy is sizeable at low energies, indicating strong momentum entanglement, and drops to almost zero at energies of the order of 10 keV when the azimuthal degree of freedom is integrated out, i.e. practically no entanglement and almost pure one-electron states. If spin is taken into account, the entropy for electrons with antiparallel spins should be larger than in the parallel-spin case, since it embodies both momentum and spin entanglement. Surprisingly, this difference, as well as the deviation from the spinless case, is extremely small for the complete scattering state. Strong spin entanglement can however be obtained by post-selecting states at scattering angle pi/2.
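For illustration only (this is a generic textbook computation, not the paper's discretisation scheme), the von Neumann entropy of a one-particle reduced density matrix of a bipartite pure state can be obtained from the Schmidt coefficients; a minimal Python sketch:

```python
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Von Neumann entropy S = -sum_i p_i log2 p_i of the reduced density
    matrix of subsystem A for a bipartite pure state psi in C^(dim_a*dim_b).
    The p_i are the squared Schmidt coefficients, i.e. the squared singular
    values of the reshaped state vector."""
    s = np.linalg.svd(np.reshape(psi, (dim_a, dim_b)), compute_uv=False)
    p = s**2 / np.sum(s**2)
    p = p[p > 1e-15]                      # drop numerically zero eigenvalues
    return float(-np.sum(p * np.log2(p)))

# A maximally entangled two-qubit state has S = 1 bit; a product state has S = 0.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
product = np.array([1.0, 0.0, 0.0, 0.0])
print(entanglement_entropy(singlet, 2, 2), entanglement_entropy(product, 2, 2))
```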
It was recently discovered that fractional quantum Hall (FQH) states can be classified by the way ground state wave functions go to zero when electrons are brought close together. Quasiparticles in the FQH states can be classified in a similar way, by the pattern of zeros that results when electrons are brought close to the quasiparticles. In this paper we combine the pattern-of-zeros approach and the conformal-field-theory (CFT) approach to calculate the topological properties of quasiparticles. We discuss how the quasiparticles in FQH states naturally form representations of a magnetic translation algebra, with members of a representation differing from each other by Abelian quasiparticles. We find that this structure dramatically simplifies topological properties of the quasiparticles, such as their fusion rules, charges, and scaling dimensions, and has consequences for the ground state degeneracy of FQH states on higher genus surfaces. We find constraints on the pattern of zeros of quasiparticles that can fuse together, which allow us to obtain the fusion rules of quasiparticles from their pattern of zeros, at least in the case of the (generalized and composite) parafermion states. We also calculate from CFT the number of quasiparticle types in the generalized and composite parafermion states, which confirms the result obtained previously through a completely different approach.
Heckenberger introduced the Weyl groupoid of a finite-dimensional Nichols algebra of diagonal type. We replace the matrix of its braiding by a higher tensor and present a construction which yields further Weyl groupoids. Abelian cohomology theory gives evidence for the existence of a higher braiding associated to such a tensor.
High-speed visible imaging of sub-microsecond electric explosion of wires at the low specific-energy-deposition threshold reveals three distinct modes of wire failure as capacitor charge voltage and energy deposition are increased. For 100-micron-diameter gold-plated tungsten wires of 2 cm length, a deposited energy of 1.9 eV/atom produces a liquid column that undergoes hydrodynamic breakup into droplets with radii on the order of the wire diameter on timescales of 200 microseconds. Instability growth, column breakup, and droplet coalescence follow classic Rayleigh-Plateau predictions for the instability of a fluid column. Above 3.2 eV/atom of deposited energy, wires are seen to abruptly transition to an expanding mixture of micron-scale liquid droplets and vapor within one frame (less than 3.33 microseconds), a behavior that has been termed phase explosion in the literature. Between these two limits, at 2.5 eV/atom of deposited energy, the wire radius is unchanged for the first 10 microseconds before the onset of a rapid expansion and disintegration that resembles homogeneous nucleation of mechanically unstable bubbles. Thermodynamic calculations are presented that separate the cases by the temperature reached during heating: below the boiling point, near the boiling point, and exceeding the boiling point.
Magnetic and orbital structures in KCuF$_{3}$ are revisited by the recently developed cluster self-consistent field approach. We clearly show that, due to the inherent frustration, the ground state of the system with superexchange and Jahn-Teller phonon-mediated orbital couplings is highly degenerate without broken symmetry; the orthorhombic crystalline field splitting arising from the static Jahn-Teller distortion stabilizes the orbital ordering, with about 42% in the $x^{2}-y^{2}$ orbital and 58% in the $3z^{2}-r^{2}$ orbital in the sublattices. The magnetic moment of Cu is considerably reduced to 0.49$\mu_{B}$, and the magnetic coupling strengths are highly anisotropic, J$_{c}$/J$_{ab}$ $\approx$ 26. These results are in agreement with experiments, implying that, as an orbital selector, the crystalline field plays an essential role in stabilizing the ground state of KCuF$_{3}$. The 1s-3d resonant X-ray scattering amplitudes in KCuF$_{3}$ with the {\it type-a} and {\it type-d} structures are also presented.
When planning for autonomous driving, it is crucial to consider essential traffic elements such as lanes, intersections, traffic regulations, and dynamic agents. However, these elements are often overlooked by traditional end-to-end planning methods, likely leading to inefficiencies and non-compliance with traffic regulations. In this work, we endeavor to integrate the perception of these elements into the planning task. To this end, we propose Perception Helps Planning (PHP), a novel framework that reconciles lane-level planning with perception. This integration ensures that planning is inherently aligned with traffic constraints, thus facilitating safe and efficient driving. Specifically, PHP focuses on both edges of a lane for planning and perception purposes, taking into consideration the 3D positions of both lane edges and attributes for lane intersections, lane directions, lane occupancy, and planning. In the algorithmic design, the process begins with a transformer encoding multi-camera images to extract the above features and predict lane-level perception results. Next, a hierarchical feature early-fusion module refines the features for predicting planning attributes. Finally, a double-edge interpreter uses a late-fusion process specifically designed to integrate lane-level perception and planning information, culminating in the generation of vehicle control signals. Experiments on three Carla benchmarks show significant improvements in driving score of 27.20%, 33.47%, and 15.54% over existing algorithms, respectively, achieving state-of-the-art performance, with the system operating at up to 22.57 FPS.
In a recent paper, McMullen showed an inequality between the Thurston norm and the Alexander norm of a 3-manifold. This generalizes the well-known fact that twice the genus of a knot is bounded from below by the degree of the Alexander polynomial. We extend the Bennequin inequality for links to an inequality for all points of the Thurston norm, if the manifold is a link complement. We compare these two inequalities on two classes of closed braids. In an additional section we discuss a conjectured inequality due to Morton for certain points of the Thurston norm. We prove Morton's conjecture for closed 3-braids.
The 21-card trick is a way of dealing cards in order to predict the card selected by a volunteer. We give a mathematical explanation of why the well-known 21-card trick works, using a simple linear discrete function. The function has a stable fixed point, which corresponds to the position that the selected card reaches at the end of the trick. We then generalize the 21 (7 x 3)-card trick to a p x q-card trick, where p and q are odd integers greater than or equal to three, determine the fixed point, and prove that it is also stable.
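As an illustration (our sketch, assuming the usual convention that the indicated column is restacked in the middle of the packet), the dealing-and-collecting map and its stable fixed point can be simulated in a few lines of Python:

```python
import math

def deal_and_collect(pos, p=7, q=3):
    """One round of the p x q card trick: deal p*q cards row by row into
    q columns of p cards, then restack with the column holding the chosen
    card placed in the middle. Returns the card's new 1-indexed position."""
    row = math.ceil(pos / q)          # row of the chosen card after dealing
    return (q // 2) * p + row         # (q-1)/2 full columns end up above it

# Iterating the map drives any starting position to the stable fixed point
# n = (q-1)/2 * p + ceil(n/q); for p = 7, q = 3 this gives n = 11, the
# middle position of the 21-card packet.
for start in (1, 9, 21):
    pos = start
    for _ in range(3):
        pos = deal_and_collect(pos)
    print(start, "->", pos)           # every start ends at position 11
```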
We study orbital and physical properties of Trojan asteroids of Jupiter. We try to discern all families previously discussed in the literature, but we conclude there is only one significant family among the Trojans, namely the cluster around the asteroid (3548) Eurybates. It is the only cluster that has all of the following characteristics: (i) it is clearly concentrated in the proper-element space; (ii) its size-frequency distribution is different from that of the background asteroids; (iii) we have a reasonable collisional/dynamical model of the family. Hence, we can consider it a real collisional family. We also report the discovery of a possible family around the asteroid (4709) Ennomos, composed mostly of small asteroids. The asteroid (4709) Ennomos is known to have a very high albedo $p_V \simeq 0.15$, which may be related to a hypothetical cratering event that exposed ice (Fern\'andez et al. 2003). The relation between the collisional family and the exposed surface of the parent body is a unique opportunity to study the physics of cratering events. However, more data are needed to confirm the existence of this family and its relationship with Ennomos.
Operators with zero-dimensional spectral measures appear naturally in the theory of ergodic Schr\"odinger operators. We develop the concept of a complete family of Hausdorff measure functions in order to analyze and distinguish between these measures with any desired precision. We prove that the dimension of spectral measures of half-line operators with positive upper Lyapunov exponent is at most logarithmic for every possible boundary phase. We show that this is sharp by constructing an explicit operator whose spectral measure attains this dimension. We also extend and improve some basic results from the theory of rank-one perturbations and quantum dynamics to encompass generalized Hausdorff dimensions.
We present a method of directly obtaining the parity of a Gaussian state of light without recourse to photon-number counting. The scheme uses only a simple balanced homodyne technique and intensity correlation. Thus interferometric schemes utilizing coherent or squeezed light and parity detection may be practically implemented for an arbitrary photon flux. Specifically, we investigate a two-mode, squeezed-light, Mach-Zehnder interferometer and show how the parity of the output state may be obtained. We also show that the detection may be described independently of the parity operator, and that this "parity-by-proxy" measurement has the same signal as traditional parity.
We analyze the Farey spin chain, a one-dimensional spin system with an effective interaction decaying like the inverse square of the distance. Using a polymer model technique, we show that when the temperature is decreased below the (single) critical temperature T_c=1/2, the magnetization jumps from zero to one.
We give a necessary and sufficient condition for the orientability of a locally standard 2-torus manifold with a fixed point, which generalizes previous results of Nakayama-Nishimura in 2005 and Soprunova-Sottile in 2013. We construct manifolds with boundary where the boundary is a disjoint union of locally standard 2-torus manifolds. We discuss the equivariant oriented cobordism classes of locally standard 2-torus manifolds.
We study the possibility of testing some generic properties of Brane-World scenarios at the LHC. In particular, we pay attention to KK-graviton and branon production. Both signals can be dominant depending on the value of the brane tension. We analyze the differences between these two signatures. Finally, we use recent data in the single photon channel from the ATLAS collaboration to constrain the parameter space of both phenomenologies.
We demonstrate how an evolutionary algorithm can be extended with a curriculum learning process that automatically selects the environmental conditions in which the evolving agents are evaluated. The environmental conditions are selected so as to adjust the level of difficulty to the ability level of the current evolving agents and to challenge their weaknesses. The method does not require domain knowledge and does not introduce additional hyperparameters. The results collected on two benchmark problems, which require solving a task in significantly varying environmental conditions, demonstrate that the proposed method outperforms conventional algorithms and generates solutions that are robust to variations.
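The abstract does not spell out the selection rule, so the following Python sketch is purely hypothetical: it illustrates one way a curriculum could favor environmental conditions whose difficulty matches the current population's ability (the weighting scheme and all parameters are our assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def select_conditions(success_rates, n_select=10, target=0.5):
    """Hypothetical curriculum step: prefer environmental conditions whose
    success rate for the current population is near `target`, i.e. neither
    trivial nor impossible, so evaluation keeps challenging weaknesses."""
    gap = np.abs(np.asarray(success_rates) - target)
    weights = np.exp(-10.0 * gap)        # sharper weighting near the target
    weights /= weights.sum()
    return rng.choice(len(success_rates), size=n_select, replace=False, p=weights)

# e.g. 100 candidate conditions with measured population success rates
rates = rng.uniform(0.0, 1.0, 100)
print(select_conditions(rates))
```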
We consider a general two-Higgs-doublet model which can simultaneously resolve the discrepancies in neutral B meson decay (the $b\to s\ell \overline \ell$ distribution) and charged B meson decay ($b\to c\tau\overline\nu$) with a charged Higgs. The model contains two additional neutral scalars at the same mass scale and predicts distinctive signals at the LHC. Based on the recent same-sign top search by the ATLAS collaboration, we derive constraints on the scalar mass spectrum. To probe the remaining mass window, we propose a novel $cg\to t\tau\overline\tau$ process at the LHC.
We conduct binary population synthesis to investigate the formation of wind-fed high-mass X-ray binaries containing black holes (BH-HMXBs). We evolve multiple populations of high-mass binary stars and consider BH-HMXB formation rates, masses, spins and separations. We find that systems similar to Cygnus X-1 likely form after stable Case A mass transfer (MT) from the main sequence progenitors of black holes, provided such MT is characterised by low accretion efficiency, $\beta \lesssim 0.1$, with modest orbital angular momentum losses from the non-accreted material. Additionally, efficient BH-HMXB formation relies on a new simple treatment for Case A MT that allows donors to retain larger core masses compared to traditional rapid population-synthesis assumptions. At solar metallicity, our Preferred model yields $\mathcal{O}(1)$ observable BH-HMXBs in the Galaxy today, consistent with observations. In this simulation, $8\%$ of BH-HMXBs go on to merge as binary black holes or neutron star-black hole binaries within a Hubble time; however, none of the merging binaries have BH-HMXB progenitors with properties similar to Cygnus X-1. With our preferred settings for core mass growth, mass transfer efficiency and angular momentum loss, accounting for an evolving metallicity, and integrating over the metallicity-specific star formation history of the Universe, we find that BH-HMXBs may have contributed $\approx2$--$5$ BBH merger signals to detections reported in the third gravitational-wave transient catalogue of the LIGO-Virgo-KAGRA Collaboration. We also suggest MT efficiency should be higher during stable Case B MT than during Case A MT.
Fully localised patterns involving cellular hexagons or squares have been found experimentally and numerically in various continuum models. However, there is currently no mathematical theory for the emergence of these localised cellular patterns from a quiescent state. A key issue is that standard techniques for one-dimensional patterns have proven insufficient for understanding localisation in higher dimensions. In this work, we present a comprehensive approach to this problem by using techniques developed in the study of axisymmetric patterns. Our analysis covers localised patterns equipped with a wide range of dihedral symmetries, avoiding a restriction to solutions on a predetermined lattice. The context in this paper is a theory for the emergence of such patterns near a Turing instability for a general class of planar reaction-diffusion equations. Posing the reaction-diffusion system in polar coordinates, we carry out a finite-mode Fourier decomposition in the angular variable to yield a large system of coupled radial ordinary differential equations. We then utilise various radial spatial dynamics methods, such as invariant manifolds, rescaling charts, and normal form analysis, leading to an algebraic matching condition for localised patterns to exist in the finite-mode reduction. This algebraic matching condition is nontrivial, which we solve via a combination of by-hand calculations and Gr\"obner bases from polynomial algebra to reveal the existence of a plethora of localised dihedral patterns. These results capture the essence of the emergent localised hexagonal patterns witnessed in experiments. Moreover, we combine computer-assisted analysis and a Newton-Kantorovich procedure to prove the existence of localised patches with 6m-fold symmetry for arbitrarily large Fourier decompositions. This includes the localised hexagon patches that have been elusive to analytical treatment.
Several adaptations of the Transformer model have been developed in various domains since its breakthrough in Natural Language Processing (NLP). This trend has spread into the field of Music Information Retrieval (MIR), including studies processing music data. However, the practice of leveraging NLP tools for symbolic music data is not novel in MIR. Music has been frequently compared to language, as they share several similarities, including sequential representations of text and music. These analogies are also reflected through similar tasks in MIR and NLP. This survey reviews NLP methods applied to symbolic music generation and information retrieval studies along two axes. We first propose an overview of representations of symbolic music adapted from natural language sequential representations. Such representations are designed by considering the specificities of symbolic music. These representations are then processed by models. Such models, possibly originally developed for text and adapted for symbolic music, are trained on various tasks. We describe these models, in particular deep learning models, through different prisms, highlighting music-specialized mechanisms. We finally present a discussion surrounding the effective use of NLP tools for symbolic music data. This includes technical issues regarding NLP methods and fundamental differences between text and music, which may open several doors for further research into more effectively adapting NLP tools to symbolic MIR.
We present a combined neutron diffraction and high field muon spin rotation ($\mu$SR) study of the magnetically ordered and superconducting phases of the high-temperature superconductor La$_{1.94}$Sr$_{0.06}$CuO$_{4+y}$ ($T_{\rm c} = 37.5(2)$~K) in a magnetic field applied perpendicular to the CuO$_2$ planes. We observe a linear field-dependence of the intensity of the neutron diffraction peak that reflects the modulated antiferromagnetic stripe order. The magnetic volume fraction extracted from $\mu$SR data likewise increases linearly with applied magnetic field. The combination of these two observations allows us to unambiguously conclude that stripe-ordered regions grow in an applied field, whereas the stripe-ordered magnetic moment itself is field-independent. This contrasts with earlier suggestions that the field-induced neutron diffraction intensity in La-based cuprates is due to an increase in the ordered moment. We discuss a microscopic picture that is capable of reconciling these conflicting viewpoints.
We compute the Hausdorff dimension of the set of $\psi$-exactly approximable vectors, in the simultaneous case, in dimension strictly larger than $2$ and for approximating functions $\psi$ with order at infinity less than or equal to $-2$. Our method relies on the analogous result in dimension $1$, proved by Yann Bugeaud and Carlos Moreira, and a version of Jarn\'ik's Theorem on fibres.
In this paper, we describe sufficient conditions under which block-diagonal solutions to Lyapunov and $\mathcal{H}_{\infty}$ Riccati inequalities exist. In order to derive our results, we define a new type of comparison system, which is positive and is computed using the state-space matrices of the original (possibly nonpositive) system. Computing the comparison system involves only the calculation of the $\mathcal{H}_{\infty}$ norms of its subsystems. We show that the stability of this comparison system implies the existence of block-diagonal solutions to the Lyapunov and Riccati inequalities. Furthermore, our proof is constructive, and the overall framework allows the computation of block-diagonal solutions to these matrix inequalities with linear algebra and linear programming. Numerical examples illustrate our theoretical results.
Heavy ion collisions pose interesting challenges to quantum chromodynamics, because they probe the parton structure of the incoming nuclei at very small longitudinal momentum fractions. Combined with the large size of nuclei, this may lead to the phenomenon of gluon saturation. The Color Glass Condensate is an effective QCD description that aims to cope with such a situation. In this talk, I show how one may study heavy ion collisions in this framework.
An unprecedented number of new cancer targets are in development, and most are being developed in combination therapies. Early oncology development is strategically challenged in choosing the best combinations to move forward to late stage development. The most common early endpoints to be assessed in such decision-making include objective response rate, duration of response and tumor size change. In this paper, using independent-drug-action and Bliss-drug-independence concepts as a foundation, we introduce simple models to predict combination therapy efficacy for duration of response and tumor size change. These models complement previous publications using the independent action models (Palmer 2017, Schmidt 2020) to predict progression-free survival and objective response rate and serve as new predictive models to understand drug combinations for early endpoints. The models can be applied to predict the combination treatment effect for early endpoints given monotherapy data, or to estimate the possible effect of one monotherapy in the combination if data are available from the combination therapy and the other monotherapy. Such quantitative work facilitates efficient oncology drug development.
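For illustration, the two named concepts admit simple closed forms; the Python sketch below shows Bliss drug independence for response probabilities and a per-patient "best of two drugs" rule for independent drug action (the response rates and duration distributions are hypothetical, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def bliss_combination(e_a, e_b):
    """Bliss drug independence: probability of response to the combination
    when the two drugs act independently (e_a, e_b are response rates in [0, 1])."""
    return e_a + e_b - e_a * e_b

def independent_action(effect_a, effect_b):
    """Independent drug action: each simulated patient benefits from the
    better of the two monotherapies (elementwise best-of-two)."""
    return np.maximum(effect_a, effect_b)

# Hypothetical monotherapy data: per-patient durations of response (months).
dur_a = rng.lognormal(mean=1.5, sigma=0.6, size=10_000)
dur_b = rng.lognormal(mean=1.2, sigma=0.8, size=10_000)

print("combo response rate:", bliss_combination(0.30, 0.25))         # 0.475
print("combo median DoR:", np.median(independent_action(dur_a, dur_b)))
```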
Certain physical aspects of quantum error correction are discussed for a quantum computer (n-qubit register) in contact with a decohering environment. Under rather plausible assumptions on the form of the computer-environment interaction, the efficiency of a general correcting procedure is evaluated as a function of the spontaneous-decay duration and the rank of errors covered by the procedure. It is proved that the probability of errors can be made arbitrarily small by enhancing the correction method, provided the decohering interaction is represented by a bounded operator.
We show that the quantum disk, i.e. the quantum space corresponding to the Toeplitz C*-algebra, does not admit any compact quantum group structure. We prove that if such a structure existed, the resulting compact quantum group would simultaneously be of Kac type and not of Kac type. The main tools used in the solution come from the theory of type I locally compact quantum groups, but also from the theory of operators on Hilbert spaces.
In this work we argue that the power and effectiveness of the Bohrian approach to quantum mechanics is essentially grounded on an inconsistent form of anti-realist realism, which supports not only the uncritical tolerance -- in physics -- towards the "standard" account of the theory of quanta but also -- in philosophy -- the mad reproduction of mythical (inconsistent and vague) narratives known as "interpretations". We will discuss the existence of what John Archibald Wheeler named "smoky dragons" not only within the standard formulation of the theory but also within the many interpretations that were later introduced by philosophers and philosophically inclined physicists. After analyzing the role of smoky dragons within both contemporary physics and philosophy of physics, we will propose a general procedure grounded on a series of necessary theoretical conditions for producing meaningful physical concepts that -- hopefully -- could be used as tools and weapons to capture and defeat these beautiful and powerful mythical creatures.
Many optimization problems can be naturally represented as (hyper) graphs, where vertices correspond to variables and edges to tasks, whose cost depends on the values of the adjacent variables. Capitalizing on the structure of the graph, suitable dynamic programming strategies can select certain orders of evaluation of the variables which guarantee to reach both an optimal solution and a minimal size of the tables computed in the optimization process. In this paper we introduce a simple algebraic specification with parallel composition and restriction whose terms up to structural axioms are the graphs mentioned above. In addition, free (unrestricted) vertices are labelled with variables, and the specification includes operations of name permutation with finite support. We show a correspondence between the well-known tree decompositions of graphs and our terms. If an axiom of scope extension is dropped, several (hierarchical) terms actually correspond to the same graph. A suitable graphical structure can be found, corresponding to every hierarchical term. Evaluating such a graphical structure in some target algebra yields a dynamic programming strategy. If the target algebra satisfies the scope extension axiom, then the result does not depend on the particular structure, but only on the original graph. We apply our approach to the parking optimization problem developed in the ASCENS e-mobility case study, in collaboration with Volkswagen. Dynamic programming evaluations are particularly interesting for autonomic systems, where actual behavior often consists of propagating local knowledge to obtain global knowledge and getting it back for local decisions.
Many recent works have explored using WiFi-based sensing to improve SLAM, robot manipulation, or exploration. Moreover, widespread availability makes WiFi the most advantageous RF signal to leverage. But WiFi sensors lack an accurate, tractable, and versatile toolbox, which hinders their widespread adoption in robots' sensor stacks. We develop WiROS to address this immediate need, furnishing many WiFi-related measurements as easy-to-consume ROS topics. Specifically, WiROS is a plug-and-play WiFi sensing toolbox providing access to coarse-grained WiFi signal strength (RSSI), fine-grained WiFi channel state information (CSI), and other MAC-layer information (device address, packet IDs, or frequency-channel information). Additionally, WiROS open-sources state-of-the-art algorithms to calibrate and process WiFi measurements to furnish accurate bearing information for received WiFi signals. The open-sourced repository is: https://github.com/ucsdwcsng/WiROS
We present a detailed analysis of the intrinsic scatter in the integrated SZ effect - cluster mass (Y-M) relation, using semi-analytic and simulated cluster samples. Specifically, we investigate the impact on the Y-M relation of energy feedback, variations in the host halo concentration and substructure populations, and projection effects due to unresolved clusters along the line of sight (the SZ background). Furthermore, we investigate at what radius (or overdensity) one should measure the integrated SZE and define the cluster mass so as to achieve the tightest possible scaling. We find that the measure of Y with the least scatter is always obtained within a smaller radius than that at which the mass is defined; e.g. for M_{200} (M_{500}) the scatter is least for Y_{500} (Y_{1100}). The inclusion of energy feedback in the gas model significantly increases the intrinsic scatter in the Y-M relation due to larger variations in the gas mass fraction compared to models without feedback. We also find that variations in halo concentration for clusters of a given mass may partly explain why the integrated SZE provides a better mass proxy than the central decrement. Substructure is found to account for approximately 20% of the observed scatter in the Y-M relation. Above M_{200} = 2x10^{14} h^{-1} msun, the SZ background does not significantly affect cluster mass measurements; below this mass, variations in the background signal reduce the optimal angular radius within which one should measure Y to achieve the tightest scaling with M_{200}.
An association scheme is called amorphic if every possible fusion of relations gives rise to a fusion scheme. We call a pair of relations fusing if fusing that pair gives rise to a fusion scheme. We define the fusing-relations graph on the set of relations, where a pair forms an edge if it fuses. We show that if the fusing-relations graph is connected but not a path, then the association scheme is amorphic. As a side result, we show that an association scheme in which at most one relation is not strongly regular of (negative) Latin square type, is amorphic.
Monochromatic coherent light traversing a disordered photonic medium evolves into a random field whose statistics are dictated by the disorder level. Here we demonstrate experimentally that light statistics can be deterministically tuned in certain disordered lattices, even when the disorder level is held fixed, by controllably breaking the excitation symmetry of the lattice modes. We exploit a lattice endowed with disorder-immune chiral symmetry in which the eigenmodes come in skew-symmetric pairs. If a single lattice site is excited, a "photonic thermalization gap" emerges: the realm of sub-thermal light statistics is inaccessible regardless of the disorder level. However, by exciting two sites with a variable relative phase, as in a traditional two-path interferometer, the chiral symmetry is judiciously broken and interferometric control over the light statistics is exercised, spanning sub-thermal and super-thermal regimes. These results may help develop novel incoherent lighting sources from coherent lasers.
We present a new family of exchangeable stochastic processes, the Functional Neural Processes (FNPs). FNPs model distributions over functions by learning a graph of dependencies on top of latent representations of the points in the given dataset. In doing so, they define a Bayesian model without explicitly positing a prior distribution over latent global parameters; they instead adopt priors over the relational structure of the given dataset, a task that is much simpler. We show how we can learn such models from data, demonstrate that they are scalable to large datasets through mini-batch optimization and describe how we can make predictions for new points via their posterior predictive distribution. We experimentally evaluate FNPs on the tasks of toy regression and image classification and show that, when compared to baselines that employ global latent parameters, they offer both competitive predictions as well as more robust uncertainty estimates.
A pilot survey was sent to chairs of 14 doctoral math departments asking for three types of data: (1) categories of job placements for research post-docs leaving their department in three recent years; (2) categories of jobs from which their new faculty hires came in two recent years and in two years a decade earlier; and (3) preparation for future careers offered by their department to their research post-docs. Eleven departments submitted data on post-docs. Of the 162 departing post-docs for whom data was supplied, 25% obtained tenure-track jobs in doctoral departments; 22% took another post-doc; and 18% were reported as "unknown/other". The remaining 35% were evenly divided among tenure-track in non-doctoral departments, full-time non-tenure-track, academic outside the US, and business-industry-government. Eight departments gave complete responses to (2): from the early 2000s to the early 2010s, tenure-track hiring increased by about 35% (from 18 to 25). The changes across this period are strikingly larger for other ranks: 233% (from 21 to 70) for research post-docs and 211% (from 18 to 56) for full-time non-tenure-track doctoral teaching faculty. In section (3), all departments reported observing post-docs' teaching and providing feedback. However, relatively few provided explicit preparation for teaching or guidance for communicating with audiences other than research specialists.
The concept of complementarity, originally defined for non-commuting observables of quantum systems with states of non-vanishing dispersion, is extended to classical dynamical systems with a partitioned phase space. Interpreting partitions in terms of ensembles of epistemic states (symbols) with corresponding classical observables, it is shown that such observables are complementary to each other with respect to particular partitions unless those partitions are generating. This explains why symbolic descriptions based on an \emph{ad hoc} partition of an underlying phase space description should generally be expected to be incompatible. Related approaches with different background and different objectives are discussed.
A recently suggested modified BCS (MBCS) model has been studied at finite temperature. We show that this approach does not allow the existence of the normal (non-superfluid) phase at any finite temperature. Other MBCS predictions, such as a negative pairing gap, pairing induced by heating in closed-shell nuclei, and a ``superfluid -- super-superfluid'' phase transition, are also discussed. The MBCS model is tested by comparison with exact solutions for the picket fence model. Here, severe violation of the internal symmetry of the problem is detected. The MBCS equations are found to be inconsistent. The limit of MBCS applicability has been determined to be far below the ``superfluid -- normal'' phase transition of the conventional FT-BCS, where the model performs worse than the FT-BCS.
We derive masses of the central supermassive black hole (SMBH) and accretion rates for 154 type 1 AGN belonging to a well-defined X-ray-selected sample, the XMM-Newton Serendipitous Sample (XBS). To this end, we use the most recent "single-epoch" relations, based on the Hbeta and MgII 2798A emission lines, to derive the SMBH masses. We then use the bolometric luminosities, computed on the basis of an SED-fitting procedure, to calculate the accretion rates, both absolute and normalized to the Eddington luminosity (the Eddington ratio). The selected AGNs cover a range of masses from 10^7 to 10^10 Msun with a peak around 8x10^8 Msun and a range of accretion rates from 0.01 to ~50 Msun/year (assuming an efficiency of 0.1), with a peak at ~1 Msun/year. The values of the Eddington ratio range from 0.001 to ~0.5 and peak at 0.1.
Change captioning aims to succinctly describe the semantic change between a pair of similar images, while being immune to distractors (illumination and viewpoint changes). Under these distractors, unchanged objects often appear to undergo pseudo changes in location and scale, and certain objects might overlap others, resulting in perturbed and discrimination-degraded features between the two images. However, most existing methods directly capture the difference between them, which risks obtaining error-prone difference features. In this paper, we propose a distractors-immune representation learning network that correlates the corresponding channels of two image representations and decorrelates different ones in a self-supervised manner, thus attaining a pair of stable image representations under distractors. The model can then better interrelate the two representations to capture reliable difference features for caption generation. To yield words based on the most related difference features, we further design a cross-modal contrastive regularization, which regularizes the cross-modal alignment by maximizing the contrastive alignment between the attended difference features and generated words. Extensive experiments show that our method outperforms the state-of-the-art methods on four public datasets. The code is available at https://github.com/tuyunbin/DIRL.
For a torus knot $K$, we bound the crosscap number $c(K)$ in terms of the genus $g(K)$ and crossing number $n(K)$: $c(K) \leq [(g(K)+9)/6]$ and $c(K) \leq [(n(K)+16)/12]$. The $(6n-2,3)$ torus knots show that these bounds are sharp.
The largest underground neutrino observatory, Super-Kamiokande, located near Kamioka, Japan, has been collecting data since April 1996. It is located at a depth of roughly 2.7 km water equivalent in a zinc mine under a mountain, and has an effective area for detecting entering-stopping and through-going muons of about $1238 m^2$ for muons of $>1.7 GeV$. These events are collected at a rate of 1.5 per day from the lower hemisphere of arrival directions, with 2.5 muons per second in the downgoing direction. We report preliminary results from the 229 live days analyzed so far with respect to the zenith-angle variation of the upcoming muons. These results do not yet have enough statistical weight to discriminate between the favored hypothesis for muon neutrino oscillations and no oscillations. We report on the search for astrophysical sources of neutrinos and for high-energy neutrino fluxes from the sun and the earth's center, as might arise from WIMP annihilations. None are found. We also present a topographical map of the overburden made from the downgoing muons. The detector is performing well, and with several years of data we should be able to make significant progress in this area.
The need for wireless communication has driven communication systems to high performance. However, the main bottleneck that affects communication capability is the Fast Fourier Transform (FFT), which is the core of most modulators. This study presents an on-chip implementation of a pipelined digit-slicing multiplier-less butterfly for an FFT structure. To reduce the computational complexity of the butterfly, a digit-slicing multiplier-less single-constant technique was utilized in the critical path of a Radix-2 Decimation-In-Time (DIT) FFT structure. The proposed design focused on the trade-off between speed and active silicon area for the chip implementation. The new architecture was investigated and simulated with MATLAB software. Verilog HDL code in the Xilinx ISE environment was derived to describe the FFT butterfly functionality and was downloaded to a Virtex II FPGA board; the Virtex-II FG456 Proto board was then used to implement and test the design on real hardware. The synthesis report indicates a maximum clock frequency of 549.75 MHz with a total equivalent gate count of 31,159, a marked improvement over the conventional Radix-2 FFT butterfly: the conventional butterfly architecture can only run at a maximum clock frequency of 198.987 MHz, and the conventional multiplier at 220.160 MHz. The resulting maximum clock frequency thus increases by about 276.28% for the FFT butterfly and about 277.06% for the multiplier. It can be concluded that an on-chip implementation of a pipelined digit-slicing multiplier-less butterfly for the FFT structure helps overcome the bottleneck that the FFT imposes on communication capability and holds considerable potential for future related work.
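As a functional reference for the hardware design described above (the paper's implementation is in Verilog HDL, with a digit-slicing multiplier-less constant multiplier replacing the complex multiply; this Python sketch only models the butterfly arithmetic, not the hardware optimization):

```python
import cmath

def dit_butterfly(a, b, twiddle):
    """One radix-2 decimation-in-time butterfly: combine two inputs with a
    twiddle factor W = exp(-2j*pi*k/N). This complex multiply is the
    critical path that the digit-slicing technique targets in hardware."""
    t = twiddle * b
    return a + t, a - t

def fft_radix2_dit(x):
    """Recursive radix-2 DIT FFT built from the butterfly above.
    len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2_dit(x[0::2])
    odd = fft_radix2_dit(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)
        out[k], out[k + n // 2] = dit_butterfly(even[k], odd[k], w)
    return out

print(fft_radix2_dit([1, 1, 1, 1, 0, 0, 0, 0]))  # agrees with numpy.fft.fft
```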
Graph neural networks (GNNs) are the predominant approach for graph-based machine learning. While neural networks have shown great performance at learning useful representations, they are often criticized for their limited high-level reasoning abilities. In this work, we present Graph Reasoning Networks (GRNs), a novel approach to combine the strengths of fixed and learned graph representations with a reasoning module based on a differentiable satisfiability solver. While results on real-world datasets show performance comparable to GNNs, experiments on synthetic datasets demonstrate the potential of the newly proposed method.
We recently demonstrated that the superconductor-to-insulator transition induced by ionic liquid gating of the high-temperature superconductor YBa2Cu3O7 (YBCO) is accompanied by a deoxygenation of the sample [Perez-Munoz et al., PNAS 114, 215 (2017)]. DFT calculations helped establish that the pronounced changes in the spectral features of the Cu K-edge absorption spectra measured in situ during the gating experiment arise from a decrease of the Cu coordination within the CuO chains. In this work, we provide a detailed analysis of the electronic-structure origin of the changes in the spectra resulting from three different types of doping: i) the formation of oxygen vacancies within the CuO chains, ii) the formation of oxygen vacancies within the CuO2 planes, and iii) electrostatic doping. For each case, three stoichiometries are studied and compared to the stoichiometric YBa2Cu3O7, i.e. YBa2Cu3O6.75, YBa2Cu3O6.50 and YBa2Cu3O6.25. Computed vacancy formation energies further support the chain-vacancy mechanism. In the case of doping by vacancies within the chains, we study the effect of oxygen ordering on the spectral features and we clarify the connection between the polarization of the x-rays and this doping mechanism. Finally, the inclusion of the Hubbard U correction on the computed spectra for antiferromagnetic YBa2Cu3O6.25 is discussed.
We present an updated analysis of radial velocity data of the HD 82943 planetary system based on 10 years of measurements obtained with the Keck telescope. Previous studies have shown that the HD 82943 system has two planets that are likely in 2:1 mean-motion resonance (MMR), with orbital periods of about 220 and 440 days (Lee et al. 2006). However, alternative fits that are qualitatively different have also been suggested, with two planets in a 1:1 resonance (Gozdziewski & Konacki 2006) or three planets in a Laplace 4:2:1 resonance (Beauge et al. 2008). Here we use $\chi^2$ minimization combined with a parameter grid search to investigate the orbital parameters and dynamical states of the qualitatively different types of fits, and we compare the results to those obtained with the differential evolution Markov chain Monte Carlo method. Our results support the coplanar 2:1 MMR configuration for the HD 82943 system, and show no evidence for either the 1:1 or the 3-planet Laplace resonance fits. The inclination of the system with respect to the sky plane is well constrained at about $20^{+4.9}_{-5.5}$ degrees, and the system contains two planets with masses of about 4.78 $M_J$ and 4.80 $M_J$ (where $M_J$ is the mass of Jupiter) and orbital periods of about 219 and 442 days for the inner and outer planet, respectively. The best fit is dynamically stable with both eccentricity-type resonant angles $\theta_1$ and $\theta_2$ librating around 0 degrees.
We present our best estimates of the uncertainties due to heavy particle threshold corrections on the unification scale $M_U$, the intermediate scale $M_I$, and the coupling constant $\alpha_U$ in minimal non-supersymmetric SO(10) models. Using these, we update the predictions for the proton lifetime in these models.
This paper concerns estimates of the lifespan of solutions to the semilinear damped wave equation. We give upper estimates of the lifespan for the semilinear damped wave equation with variable coefficients in all space dimensions.
Cosmology based on large-scale peculiar velocity prefers volume-weighted velocity statistics. However, measuring volume-weighted velocity statistics from inhomogeneously distributed galaxies (simulation particles/halos) suffers from an inevitable and significant sampling artifact. We study this sampling artifact in the velocity power spectrum measured by the nearest-particle (NP) velocity assignment method (Zheng et al. 2013, PRD). We derive the analytical expression of the leading and higher order terms. We find that the sampling artifact suppresses the $z=0$ E-mode velocity power spectrum by $\sim 10\%$ at $k=0.1h/$Mpc, for samples with number density $10^{-3}({\rm Mpc}/h)^{-3}$. This suppression becomes larger for larger $k$ and for sparser samples. We argue that this source of systematic errors in peculiar velocity cosmology, albeit severe, can be self-calibrated in the framework of our theoretical modelling. We also work out the sampling artifact in the density-velocity cross power spectrum measurement. More robust evaluation of related statistics through simulations will be presented in a companion paper (Zheng, Zhang & Jing, 2015, PRD). We also argue that similar sampling artifacts exist in other velocity assignment methods and hence must be carefully corrected to avoid systematic bias in peculiar velocity cosmology.
Bacterial microcompartments are large, roughly icosahedral shells that assemble around enzymes and reactants involved in certain metabolic pathways in bacteria. Motivated by microcompartment assembly, we use coarse-grained computational and theoretical modeling to study the factors that control the size and morphology of a protein shell assembling around hundreds to thousands of molecules. We perform dynamical simulations of shell assembly in the presence and absence of cargo over a range of interaction strengths, subunit and cargo stoichiometries, and the shell spontaneous curvature. Depending on these parameters, we find that the presence of a cargo can either increase or decrease the size of a shell relative to its intrinsic spontaneous curvature, as seen in recent experiments. These features are controlled by a balance of kinetic and thermodynamic effects, and the shell size is assembly pathway dependent. We discuss implications of these results for synthetic biology efforts to target new enzymes to microcompartment interiors.
In this paper we characterize primitive branched coverings with minimal defect over the projective plane with respect to the properties of being decomposable or indecomposable. This minimality is achieved when the covering surface is also the projective plane, which corresponds to the last case to be solved. As a consequence, we have extended the family of realisations of branched coverings on the projective plane and established a type of generalisation of results on primitive permutation groups.
User-generated cinematic creations are gaining popularity as our daily entertainment, yet it is a challenge to master cinematography for producing immersive contents. Many existing automatic methods focus on roughly controlling predefined shot types or movement patterns, which struggle to engage viewers with the circumstances of the actor. Real-world cinematographic rules show that directors can create immersion by comprehensively synchronizing the camera with the actor. Inspired by this strategy, we propose a deep camera control framework that enables actor-camera synchronization in three aspects, considering frame aesthetics, spatial action, and emotional status in the 3D virtual stage. Following rule-of-thirds, our framework first modifies the initial camera placement to position the actor aesthetically. This adjustment is facilitated by a self-supervised adjustor that analyzes frame composition via camera projection. We then design a GAN model that can adversarially synthesize fine-grained camera movement based on the physical action and psychological state of the actor, using an encoder-decoder generator to map kinematics and emotional variables into camera trajectories. Moreover, we incorporate a regularizer to align the generated stylistic variances with specific emotional categories and intensities. The experimental results show that our proposed method yields immersive cinematic videos of high quality, both quantitatively and qualitatively. Live examples can be found in the supplementary video.
Previously published observations of 60 externally polluted white dwarfs show that none of the stars have accreted from intact refractory-dominated parent bodies composed mainly of Al, Ca and O, although planetesimals with such a distinctive composition have been predicted to form. We propose that such remarkable objects are not detected, by themselves, because, unless they are scattered outward from their initial orbit, they are engulfed and destroyed during the star's Asymptotic Giant Branch evolution. As yet, there is at most only weak evidence supporting a scenario where the composition of any extrasolar minor planet can be explained by blending of an outwardly scattered refractory-dominated planetesimal with an ambient asteroid.
I discuss the theoretical motivations for R-parity violation, review the experimental bounds and outline the main changes in collider phenomenology compared to conserved R-parity. I briefly comment on the effects of R-parity violation on cosmology.
The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) is a quantum sensor in which a quasi-1D quantum gas images electromagnetic fields emitted from a nearby sample. We report improvements to the microscope. Cryogen usage is reduced by replacing the liquid cryostat with a closed-cycle system and modified cold finger, and cryogenic cooling is enhanced by adding a radiation shield. The minimum accessible sample temperature is reduced from 35 K to 5.8 K while maintaining low sample vibrations. A new sample mount is easier to exchange, and quantum gas preparation is streamlined.
We probe the principle of complementarity by performing a double-slit experiment based on entangled photons created by spontaneous parametric down-conversion from a pump mode in a TEM01-mode. Our setup brings out the need for a careful selection of the signal-idler photon pairs for our study of visibility and distinguishability. Indeed, when the signal photons interfering at the double-slit belong to this double-hump mode we obtain almost perfect visibility of the interference fringes and no "which-slit" information is available. However, when we break the symmetry between the two maxima of the mode by detecting the entangled idler photon, the paths through the slits become distinguishable and the visibility vanishes. It is the mode function of the photons selected by the detection system which decides if interference, or "which-slit" information is accessible in the experiment.
While the majority of approaches to the characterization of complex networks have relied on measurements considering only the immediate neighborhood of each network node, valuable information about the network topological properties can be obtained by considering further neighborhoods. The current work discusses how the concepts of hierarchical node degree and hierarchical clustering coefficient (introduced in cond-mat/0408076), complemented by new hierarchical measurements, can be used in order to obtain a powerful set of topological features of complex networks. The interpretation of such measurements is discussed, including an analytical study of the hierarchical node degree for random networks, and the potential of the suggested measurements for the characterization of complex networks is illustrated with respect to simulations of random, scale-free and regular network models as well as real data (airports, proteins and word associations). The enhanced characterization of the connectivity provided by the set of hierarchical measurements also allows the use of agglomerative clustering methods to obtain taxonomies of relationships between nodes in a network, a possibility which is also illustrated in the current article.
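A minimal Python sketch of the hierarchical node degree, reading the definition of cond-mat/0408076 as the number of edges between the ring of nodes at distance h from a reference node and the ring at distance h+1 (the example graph is our choice; for h = 0 this reduces to the ordinary degree):

```python
import networkx as nx

def hierarchical_degree(G, node, h):
    """Edges between the ring at distance h and the ring at distance h+1
    from `node`; reduces to the ordinary node degree for h = 0."""
    dist = nx.single_source_shortest_path_length(G, node)
    ring_h = {v for v, d in dist.items() if d == h}
    ring_h1 = {v for v, d in dist.items() if d == h + 1}
    # Each counted edge has one endpoint in each (disjoint) ring.
    return sum(1 for u in ring_h for v in G.neighbors(u) if v in ring_h1)

G = nx.barabasi_albert_graph(1000, 3, seed=42)   # scale-free test network
print([hierarchical_degree(G, 0, h) for h in range(4)])
```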
Predicting camera-space hand meshes from single RGB images is crucial for enabling realistic hand interactions in 3D virtual and augmented worlds. Previous work typically divided the task into two stages: given a cropped image of the hand, predict meshes in relative coordinates, followed by lifting these predictions into camera space in a separate and independent stage, often resulting in the loss of valuable contextual and scale information. To prevent the loss of these cues, we propose unifying these two stages into an end-to-end solution that addresses the 2D-3D correspondence problem. This solution enables back-propagation from camera space outputs to the rest of the network through a new differentiable global positioning module. We also introduce an image rectification step that harmonizes both the training dataset and the input image as if they were acquired with the same camera, helping to alleviate the inherent scale-depth ambiguity of the problem. We validate the effectiveness of our framework in evaluations against several baselines and state-of-the-art approaches across three public benchmarks.
A report on the works hep-th/9411050, q-alg/9412017, q-alg/9503013, q-alg/9506011 and a joint work with R. Bezrukavnikov.
A diffusion auction is a market to sell commodities over a social network, where the challenge is to incentivize existing buyers to invite their neighbors in the network to join the market. Existing mechanisms have been designed to solve the challenge in various settings, aiming at desirable properties such as non-deficiency, incentive compatibility and social welfare maximization. Since the mechanisms are employed in dynamic networks with ever-changing structures, buyers could easily generate fake nodes in the network to manipulate the mechanisms for their own benefits, which is commonly known as the Sybil attack. We observe that strategic agents may gain an unfair advantage in existing mechanisms through such attacks. To resist this potential attack, we propose two diffusion auction mechanisms, the Sybil tax mechanism (STM) and the Sybil cluster mechanism (SCM), to achieve both Sybil-proofness and incentive compatibility in the single-item setting. Our proposal provides the first mechanisms to protect the interests of buyers against Sybil attacks with a mild sacrifice of social welfare and revenue.
Heteroskedasticity is a statistical anomaly that describes differing variances of the error terms in a time series dataset. The presence of heteroskedasticity in data imposes serious challenges for forecasting models, and many statistical tests are not valid in its presence. Heteroskedasticity affects the relation between the predictor variable and the outcome, which leads to false positive and false negative decisions in hypothesis testing. Approaches available thus far adopt the strategy of accommodating heteroskedasticity in the time series and consider it an inevitable source of noise. In these existing approaches, two forecasting models are prepared for the normal and heteroskedastic scenarios, and a statistical test is used to determine whether or not the data is heteroskedastic. This work-in-progress research introduces a quantifying measurement for heteroskedasticity. The idea behind the proposed metric is the fact that a heteroskedastic time series features uniformly distributed local variances. The proposed measurement is obtained by calculating the local variances using linear time-invariant filters. A probability density function of the calculated local variances is then derived and compared to the uniform distribution of theoretical ultimate heteroskedasticity using statistical divergence measurements. The results demonstrated on synthetic datasets show a strong correlation between the proposed metric and the number of variances locally estimated in a heteroskedastic time series.
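One possible reading of the proposed measurement in Python (the window length, bin count, and choice of KL divergence are our assumptions; the abstract only specifies LTI filtering of local variances and a divergence from the uniform distribution):

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.stats import entropy

def heteroskedasticity_score(x, window=50, bins=20):
    """Sketch of the described metric: estimate local variances with an
    LTI (moving-average) filter, histogram them, and measure the divergence
    from the uniform distribution of 'ultimate' heteroskedasticity.
    A smaller divergence means the local variances are closer to uniform."""
    local_mean = uniform_filter1d(x, window)
    local_var = uniform_filter1d((x - local_mean) ** 2, window)
    hist, _ = np.histogram(local_var, bins=bins)
    p = hist / hist.sum()
    q = np.full(bins, 1.0 / bins)          # uniform reference distribution
    return entropy(p + 1e-12, q)           # KL divergence (scipy normalizes)

rng = np.random.default_rng(1)
homo = rng.normal(0.0, 1.0, 5000)                          # constant variance
hetero = rng.normal(0.0, np.linspace(0.2, 3.0, 5000))      # drifting variance
print(heteroskedasticity_score(homo), heteroskedasticity_score(hetero))
```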
We present an efficient Monte Carlo framework for perturbative calculations of infinite nuclear matter based on chiral two-, three-, and four-nucleon interactions. The method enables the incorporation of all many-body contributions in a straightforward and transparent way, and makes it possible to extract systematic uncertainty estimates by performing order-by-order calculations in the chiral expansion as well as the many-body expansion. The versatility of this new framework is demonstrated by applying it to chiral low-momentum interactions, exhibiting a very good many-body convergence up to fourth order. Following these benchmarks, we explore new chiral interactions up to next-to-next-to-next-to-leading order (N$^3$LO). Remarkably, simultaneous fits to the triton and to saturation properties can be achieved, while all three-nucleon low-energy couplings remain natural. The theoretical uncertainties of nuclear matter are significantly reduced when going from next-to-next-to-leading order to N$^3$LO.
The primary goal in the study of entanglement as a resource theory is to find conditions that determine when one quantum state can or cannot be transformed into another via local operations and classical communication. This is typically done through entanglement monotones or conversion witnesses. Such quantities cannot be computed for arbitrary quantum states in general, but it is useful to consider classes of symmetric states for which closed-form expressions can be found. In this paper, we show how to compute the convex roof of any entanglement monotone for all Werner states. The convex roofs of the well-known Vidal monotones are computed for all isotropic states, and we show how this method can generalize to other entanglement measures and other types of symmetries as well. We also present necessary and sufficient conditions that determine when a pure bipartite state can be deterministically converted into a Werner state or an isotropic state.
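For reference, the convex roof of a pure-state entanglement monotone $E$ -- the quantity computed here for Werner and isotropic states -- is the standard extension
\[ \hat{E}(\rho) \;=\; \inf_{\{p_i,\,|\psi_i\rangle\}} \sum_i p_i\, E(|\psi_i\rangle) \quad \text{subject to} \quad \rho = \sum_i p_i\, |\psi_i\rangle\langle\psi_i| , \]
where the infimum runs over all pure-state decompositions of $\rho$.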
EIT waves are a wavelike phenomenon propagating outward from the coronal mass ejection (CME) source region, with expanding dimmings following behind. We present a spectroscopic study of an EIT wave/dimming event observed by Hinode/EIS. Although the identification of the wave front is somewhat affected by the pre-existing loop structures, the expanding dimming is well defined. We investigate the line intensity, width, and Doppler velocity for four EUV lines. In addition to the significant blueshift implying plasma outflows in the dimming region, as revealed in previous studies, we find that the widths of all four spectral lines increase at the outer edge of the dimmings. We illustrate that this feature can be well explained by the field line stretching model, which holds that EIT waves are apparently moving brightenings generated by the successive stretching of closed field lines.
We have investigated the electronic and magnetic response of a single Fe atom and a pair of interacting Fe atoms placed in patterned dehydrogenated channels in graphane within the framework of density functional theory. We have considered two channels: "armchair" and "zigzag" channels. Fully relaxed calculations have been carried out for three different channel widths. Our calculations reveal that the response to the magnetic impurities is very different for these two channels. We have also shown that one can stabilize magnetic impurities (Fe in the present case) along the channels of bare carbon atoms, giving rise to a magnetic insulator or a spin gapless semiconductor. Our calculations including spin-orbit coupling show a large in-plane magnetic anisotropy energy for the armchair channel. The magnetic exchange coupling between two Fe atoms placed in the semiconducting channel with an armchair edge is very weakly ferromagnetic, whereas a fairly strong ferromagnetic coupling is observed for reasonable separations between Fe atoms in the zigzag-edged metallic channel, with the coupling mediated by the bare carbon atoms. The possibility of realizing an ultrathin device with interesting magnetic properties is discussed.
We introduce an interatomic potential for hexagonal boron nitride (hBN) based on the Gaussian approximation potential (GAP) machine learning methodology. The potential is based on a training set of configurations collected from density functional theory (DFT) simulations and is capable of treating bulk and multilayer hBN as well as nanotubes of arbitrary chirality. The developed force field faithfully reproduces the potential energy surface predicted by DFT while improving the efficiency by several orders of magnitude. We test our potential by comparing formation energies, geometrical properties, phonon dispersion spectra and mechanical properties with respect to benchmark DFT calculations and experiments. In addition, we use our model and a recently developed graphene-GAP to analyse and compare thermally and mechanically induced rippling in large scale two-dimensional (2D) hBN and graphene. Both materials show almost identical scaling behaviour, with an exponent of $\eta \approx 0.85$ for the height fluctuations, agreeing well with the theory of flexible membranes. Owing to its lower resistance to bending, however, hBN experiences slightly larger out-of-plane deviations, both at zero and at finite applied external strain. Upon compression, a phase transition from incoherent ripple motion to soliton ripples is observed for both materials. Our potential is freely available online at [http://www.libatoms.org].
Diffuse emission is produced in interactions of energetic cosmic rays (CRs), mainly protons and electrons, with the interstellar gas and radiation field, and contains information about particle spectra in distant regions of the Galaxy. It may also contain information about exotic processes such as dark matter annihilation, black hole evaporation, etc. A model of the diffuse emission is important for determination of source positions and spectra. Calculation of the Galactic diffuse continuum gamma-ray emission requires a model for CR propagation as the first step. Such a model is based on the theory of particle transport in the interstellar medium as well as on many kinds of data provided by different experiments in astrophysics and particle and nuclear physics. Such data include: secondary particle and isotopic production cross sections, total interaction nuclear cross sections and lifetimes of radioactive species, gas mass calibrations and gas distributions in the Galaxy (H_2, H I, H II), the interstellar radiation field, the CR source distribution and particle spectra at the sources, the magnetic field, energy losses, gamma-ray and synchrotron production mechanisms, and many other issues. We are continuously improving the GALPROP model and code to keep up with the flow of new data. Improvement in any of these areas may affect the Galactic diffuse continuum gamma-ray emission model used as a background model by the GLAST LAT instrument. Here we report on the latest improvements of GALPROP and the diffuse emission model.
Using the results of arXiv:0804.0009, where all timelike supersymmetric backgrounds of N=2, D=4 matter-coupled supergravity with Fayet-Iliopoulos gauging were classified, we construct genuine nut-charged BPS black holes in AdS_4 with nonconstant moduli. The calculations are exemplified for the SU(1,1)/U(1) model with prepotential F=-iX^0X^1. The resulting supersymmetric black holes have a hyperbolic horizon and carry two electric, two magnetic and one nut charge, which are however not all independent, but are given in terms of three free parameters. We find that turning on a nut charge lifts the flat directions in the effective black hole potential, such that the horizon values of the scalars are completely fixed by the charges. We also oxidize the solutions to eleven dimensions, and find that they generalize the geometry found in hep-th/0105250 corresponding to membranes wrapping holomorphic curves in a Calabi-Yau five-fold. Finally, a class of nut-charged Nernst branes is constructed as well, but these have curvature singularities at the horizon.
Diffuse cluster radio sources, in the form of radio halos and relics, reveal the presence of cosmic rays and magnetic fields in the intracluster medium (ICM). These cosmic rays are thought to be (re-)accelerated through ICM turbulence and shock waves generated by cluster merger events. Here we characterize the presence of diffuse radio emission in known galaxy clusters in the HETDEX Spring Field, covering 424 deg$^2$. For this, we developed a method to extract individual targets from LOFAR observations processed with the LoTSS DDF-pipeline. This procedure enables improved calibration and joint imaging and deconvolution of multiple pointings for selected targets. The calibration strategy can also be used for LOFAR Low-Band Antenna (LBA) and international-baseline observations. The fraction of Planck PSZ2 clusters with any diffuse radio emission apparently associated with the ICM is $73\pm17\%$. We detect a total of 10 radio halos and 12 candidate halos in the HETDEX Spring Field. Five clusters host radio relics. The fraction of radio halos in Planck PSZ2 clusters is $31\pm11\%$, or $62\pm15\%$ when including the candidate radio halos. Based on these numbers, we expect at least $183 \pm 65$ radio halos to be found in the LoTSS survey in PSZ2 clusters, in agreement with predictions. The integrated flux densities of the radio halos were computed by fitting exponential models to the radio images. From these flux densities, we determine the scaling relations between the 150 MHz radio power (P$_{\rm{150 MHz}}$) and both the cluster mass (M$_{500}$) and the Compton Y parameter (Y$_{500}$) for Planck PSZ2-detected radio halos. We find that the slopes of these relations are steeper than those determined from 1.4 GHz radio powers. However, considering the uncertainties, this is not a statistically significant result.
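Assuming the circularly symmetric exponential surface-brightness model commonly used for radio halos (the abstract does not spell out its exact form), the integrated flux density follows in closed form from the two fitted parameters:
\[ I(r) = I_0\, e^{-r/r_e} \;\Longrightarrow\; S = \int_0^{\infty} 2\pi r\, I_0\, e^{-r/r_e}\, dr = 2\pi I_0 r_e^{2} . \]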
We provide the theoretical basis of calorimetry for a class of active particles subject to thermal noise. Simulating AC-calorimetry, we numerically evaluate the heat capacity of run-and-tumble particles in double-well and in periodic potentials, and of systems with a flashing potential. Low-temperature Schottky-like peaks show the role of activity and indicate shape transitions, while regimes of negative heat capacity appear at higher propulsion speeds. From there, a significant increase in heat capacities of active systems may be inferred at low temperatures, as well as the possibility of diagnostic tools for the activity of self-motile artificial or biomimetic systems based on heat capacity measurements.
Answering a question of Benjamini, we present an isometry-invariant random partition of the Euclidean space $\mathbb{R}^d$, $d\geq 3$, into infinite connected indistinguishable pieces, such that the adjacency graph defined on the pieces is the 3-regular infinite tree. Along the way, it is proved that any finitely generated one-ended amenable Cayley graph can be represented in $\mathbb{R}^d$ as an isometry-invariant random partition of $\mathbb{R}^d$ into bounded polyhedra, and also as an isometry-invariant random partition of $\mathbb{R}^d$ into indistinguishable pieces. A new technique is developed to prove indistinguishability for certain constructions, connecting this notion to factors of iid.
Coherently manipulating Rydberg atoms in mesoscopic systems has proven challenging due to the unwanted population of nearby Rydberg levels by black-body radiation. Recently, there have been some efforts towards understanding these effects using states with a low principal quantum number that only have resonant dipole-dipole interactions. We perform experiments that exhibit black-body-induced dipole-dipole interactions for a state that also has a significant van der Waals interaction. Using an enhanced rate-equation model that captures some of the long-range properties of the dipolar interaction, we show that the initial degree of Rydberg excitation is dominated by the van der Waals interaction, while the observed linewidth at later times is dominated by the dipole-dipole interaction. We also point out some prospects for quantum simulation.
The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Indeed, many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact feasible with appropriate computational scale. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learning, whereby adapted, often hierarchical, features capture the appropriate notion of regularity for each task, and second, learning by local gradient-descent type methods, typically implemented as backpropagation. While learning generic functions in high dimensions is a cursed estimation problem, most tasks of interest are not generic, and come with essential pre-defined regularities arising from the underlying low-dimensionality and structure of the physical world. This text is concerned with exposing these regularities through unified geometric principles that can be applied across a wide spectrum of applications. Such a 'geometric unification' endeavour, in the spirit of Felix Klein's Erlangen Program, serves a dual purpose: on one hand, it provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers. On the other hand, it gives a constructive procedure to incorporate prior physical knowledge into neural architectures and provides a principled way to build future architectures yet to be invented.
West African Pidgin English is a language significantly spoken in West Africa, with at least 75 million speakers. Nevertheless, proper machine translation systems and relevant NLP datasets for Pidgin English are virtually absent. In this work, we develop techniques targeted at bridging the gap between Pidgin English and English in the context of natural language generation. By building upon the previously released monolingual Pidgin English text and parallel English data-to-text corpus, we hope to build a system that can automatically generate Pidgin English descriptions from structured data. We first train a data-to-English text generation system, before employing techniques in unsupervised neural machine translation and self-training to establish the Pidgin-to-English cross-lingual alignment. The human evaluation performed on the generated Pidgin texts shows that, though still far from being practically usable, the pivoting + self-training technique improves both Pidgin text fluency and relevance.
The large majority of the research performed on stance detection has been focused on developing more or less sophisticated text classification systems, even when many benchmarks are based on social network data such as Twitter. This paper aims to take on the stance detection task by placing the emphasis not so much on the text itself but on the interaction data available on social networks. More specifically, we propose a new method to leverage social information such as friends and retweets by generating relational embeddings, namely, dense vector representations of interaction pairs. Our method can be applied to any language and target without any manual tuning. Our experiments on seven publicly available datasets and four different languages show that combining our relational embeddings with textual methods helps to substantially improve performance, obtaining the best results for six out of seven evaluation settings and outperforming strong baselines based on large pre-trained language models.
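A minimal sketch of one way to realize such relational embeddings, assuming gensim 4.x: treating each interaction pair as a two-token sentence for a skip-gram model is an illustrative choice, not necessarily the authors' exact procedure, and the user names are hypothetical.

```python
from gensim.models import Word2Vec

# Learn dense user embeddings from interaction pairs by feeding each
# retweet/friend pair to a skip-gram model as a two-token "sentence".
interaction_pairs = [
    ("user_a", "user_b"),   # e.g. user_a retweeted user_b (hypothetical data)
    ("user_a", "user_c"),
    ("user_d", "user_b"),
]
model = Word2Vec(
    sentences=[list(p) for p in interaction_pairs],
    vector_size=64, window=1, min_count=1, sg=1, epochs=50,
)
vec = model.wv["user_a"]                 # relational embedding for user_a
print(model.wv.most_similar("user_b"))   # users with similar interactions
```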
The rate constants required to model the OH$^+$ observations in different regions of the interstellar medium have been determined using state-of-the-art quantum methods. First, state-to-state rate constants for the H$_2(v=0,J=0,1)$+ O$^+$($^4S$) $\rightarrow$ H + OH$^+(X ^3\Sigma^-, v', N)$ reaction have been obtained using a quantum wave packet method. The calculations have been compared with time-independent results to assess the accuracy of reaction probabilities at collision energies of about 1 meV. The good agreement between the simulations and the existing experimental cross sections in the $0.01-$1 eV energy range shows the quality of the results. The calculated state-to-state rate constants have been fitted to an analytical form. Second, the Einstein coefficients of OH$^+$ have been obtained for all astronomically significant ro-vibrational bands involving the $X^3\Sigma^-$ and/or $A^3\Pi$ electronic states. For this purpose the potential energy curves and electric dipole transition moments for seven electronic states of OH$^+$ are calculated with {\it ab initio} methods at the highest level and including spin-orbit terms, and the rovibrational levels have been calculated including the empirical spin-rotation and spin-spin terms. Third, the state-to-state rate constants for inelastic collisions between He and OH$^+(X ^3\Sigma^-)$ have been calculated using a time-independent close coupling method on a new potential energy surface. All these rates have been implemented in detailed chemical and radiative transfer models. Applications of these models to various astronomical sources show that inelastic collisions dominate the excitation of the rotational levels of OH$^+$. In the models considered, the excitation resulting from the chemical formation of OH$^+$ increases the line fluxes by about 10% or less, depending on the density of the gas.
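The analytical form of the fit is not specified in the abstract; a common choice for astrochemical rate-constant fits (e.g., in the KIDA database), given here as an assumption, is the Arrhenius-Kooij expression
\[ k(T) = \alpha \left(\frac{T}{300\,\mathrm{K}}\right)^{\beta} \exp\!\left(-\frac{\gamma}{T}\right), \]
with $\alpha$, $\beta$ and $\gamma$ the fitted parameters.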
The problem of differentiating a function with bounded second derivative in the presence of bounded measurement noise is considered in both continuous-time and sampled-data settings. Fundamental performance limitations of causal differentiators, in terms of the smallest achievable worst-case differentiation error, are shown. A robust exact differentiator is then constructed via the adaptation of a single parameter of a linear differentiator. It is demonstrated that the resulting differentiator exhibits a combination of properties that outperforms existing continuous-time differentiators: it is robust with respect to noise, it instantaneously converges to the exact derivative in the absence of noise, and it attains the smallest possible -- hence optimal -- upper bound on its differentiation error under noisy measurements. For sample-based differentiators, the concept of quasi-exactness is introduced to classify differentiators that achieve the lowest possible worst-case error based on sampled measurements in the absence of noise. A straightforward sample-based implementation of the proposed linear adaptive continuous-time differentiator is shown to achieve quasi-exactness after a single sampling step as well as a theoretically optimal differentiation error bound that, in addition, converges to the continuous-time optimal one as the sampling period becomes arbitrarily small. A numerical simulation illustrates the presented formal results.
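A minimal numerical illustration of the worst-case trade-off underlying these performance limits (not the paper's adaptive differentiator): for $|f''| \le L$ and noise amplitude at most $\varepsilon$, a causal backward difference with step $h$ has worst-case error $Lh/2 + 2\varepsilon/h$, minimized at $h^{*} = 2\sqrt{\varepsilon/L}$ with value $2\sqrt{L\varepsilon}$.

```python
import numpy as np

# Causal differentiation trade-off: curvature error L*h/2 plus noise
# amplification 2*eps/h, balanced at h* = 2*sqrt(eps/L).
L, eps = 1.0, 1e-4
h_star = 2.0 * np.sqrt(eps / L)
bound = L * h_star / 2.0 + 2.0 * eps / h_star    # equals 2*sqrt(L*eps)

rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, h_star)
f = np.sin(t)                                    # |f''| <= 1 = L
y = f + eps * (2.0 * rng.random(t.size) - 1.0)   # bounded measurement noise
df = np.diff(y) / h_star                         # causal backward difference
err = np.max(np.abs(df - np.cos(t[1:])))         # true derivative is cos(t)
print(f"worst-case bound {bound:.4f}, observed error {err:.4f}")
```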
The phase structure of QCD-like gauge theories with fermions in various representations is an interesting but generally analytically intractable problem. One way to ensure weak coupling is to define the theory in a small finite volume, in this case S^3 x S^1. Genuine phase transitions can then occur in the large N theory. Here, we use this technique to investigate SU(N) gauge theory with a number N_f of massive adjoint-valued Majorana fermions having non-thermal boundary conditions around S^1. For N_f = 1 we find a line of transitions that separate the weak-coupling analogues of the confined and de-confined phases, for which the density of eigenvalues of the Wilson line transforms from the uniform distribution to a gapped distribution. However, the situation for N_f > 1 is much richer, and a series of weak-coupling analogues of partially-confined phases appear which leave unbroken a Z_p subgroup of the centre symmetry. In these Z_p phases the eigenvalue density has p gaps, and they are separated from the confining phase and from one another by first-order phase transitions. We show that for small enough mR (the mass of the fermions times the radius of the S^3) only the confined phase exists. The large N phase diagram is consistent with the finite N result and with other approaches based on R^3 x S^1 calculations and lattice simulations.
We investigate, through the density-matrix renormalization group and the Lanczos technique, the possibility that a two-leg Kondo ladder presents an incommensurate orbital order. Our results indicate a staggered short-range orbital order at half-filling. Away from half-filling, our data are consistent with an incommensurate quasi-long-range orbital order. We also observe that an interaction between the localized spins enhances the rung-rung current correlations.
In this paper, a novel conserved Lorentz covariant tensor, termed the helicity tensor, is introduced in Maxwell theory. The conservation of the helicity tensor expresses the conservation laws contained in the helicity array, introduced by Cameron et al., including helicity, spin, and the spin-flux or infra-zilch. The Lorentz covariance of the helicity tensor is in contrast to previous formulations of the helicity hierarchy of conservation laws, which required the non-Lorentz covariant transverse gauge. The helicity tensor is shown to arise as a Noether current for a variational symmetry of a duality-symmetric Lagrangian for Maxwell theory. This symmetry transformation generalizes the duality symmetry and includes the symmetry underlying the conservation of the spin part of the angular momentum.
Atomistic simulations based on the first principles of quantum mechanics are reaching unprecedented length scales. This progress is due to the growth in computational power allied with the development of new methodologies that allow the treatment of electrons and nuclei as quantum particles. In the realm of materials science, where the quest for desirable emergent properties relies increasingly on soft, weakly-bonded materials, such methods have become indispensable. In this perspective, an overview of simulation methods that are applicable to large system sizes and that can capture the quantum nature of electrons and nuclei in the adiabatic approximation is given. In addition, the remaining challenges are discussed, especially regarding the inclusion of nuclear quantum effects (NQE) beyond a harmonic or perturbative treatment, the impact of NQE on the electronic properties of weakly-bonded systems, and how different first-principles potential energy surfaces can change the impact of NQE on the atomic structure and dynamics of weakly bonded systems.
Harmonic numbers have been studied since antiquity, while hyperharmonic numbers were introduced by Conway and Guy in 1996. The degenerate harmonic numbers and degenerate hyperharmonic numbers are their respective degenerate versions. The aim of this paper is to further investigate some properties, recurrence relations and identities involving the degenerate harmonic and degenerate hyperharmonic numbers in connection with the degenerate Stirling numbers of the first kind, degenerate Daehee numbers and degenerate derangements.
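For reference, with $H_n$ the ordinary harmonic numbers, the hyperharmonic numbers of Conway and Guy are defined recursively and admit a well-known closed form:
\[ H_n=\sum_{k=1}^{n}\frac{1}{k}, \qquad h_n^{(1)}=H_n, \qquad h_n^{(r)}=\sum_{k=1}^{n}h_k^{(r-1)} \;\;(r\ge 2), \qquad h_n^{(r)}=\binom{n+r-1}{r-1}\bigl(H_{n+r-1}-H_{r-1}\bigr). \]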
Rhombic cell analysis, as outlined in the first paper of the present series, is applied to samples of varying depths and limiting luminosities drawn from the IRAS/PSCz Catalogue. Numerical indices are introduced to summarize essential information. Because of the discrete nature of the analysis and of the space distribution of galaxies, the indices for a given sample must be regarded as each having an irreducible scatter. Despite the scatter, the mean indices show remarkable variations across the samples. The underlying factor for the variations is shown to be the limiting luminosity rather than the sampling depth. As samples of more and more luminous galaxies are considered over a range of some 2 magnitudes (a factor of some 50 in space density), the morphology of the filled and empty regions the galaxies define degrades steadily towards insignificance, and the degradation is faster for the filled regions than for the empty ones.
In the propositional setting, the marginal problem is to find a (maximum-entropy) distribution that has some given marginals. We study this problem in a relational setting and make the following contributions. First, we compare two different notions of relational marginals. Second, we show a duality between the resulting relational marginal problems and the maximum likelihood estimation of the parameters of relational models, which generalizes a well-known duality from the propositional setting. Third, by exploiting the relational marginal formulation, we present a statistically sound method to learn the parameters of relational models to be applied in settings where the number of constants differs between the training and test data. Furthermore, based on a relational generalization of marginal polytopes, we characterize cases where the standard estimators based on a feature's number of true groundings need to be adjusted, and we quantitatively characterize the consequences of these adjustments. Fourth, we prove bounds on the expected errors of the estimated parameters, which allow us to lower-bound, among other things, the effective sample size of relational training data.
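A minimal sketch of the propositional baseline that the relational setting generalizes: iterative proportional fitting (IPF) converges to the maximum-entropy joint distribution matching given consistent marginals. The three-variable instance and its target tables below are illustrative, not from the paper.

```python
import numpy as np

# Maximum-entropy joint over three binary variables (A, B, C) matching
# given pairwise marginals for (A, B) and (B, C), via IPF.
target_ab = np.array([[0.3, 0.2], [0.1, 0.4]])      # P(A, B), sums to 1
target_bc = np.array([[0.25, 0.15], [0.35, 0.25]])  # P(B, C), sums to 1
# The B-marginals implied by the two targets must agree for a solution:
assert np.allclose(target_ab.sum(axis=0), target_bc.sum(axis=1))

p = np.full((2, 2, 2), 1.0 / 8.0)      # start from uniform (max entropy)
for _ in range(200):
    ab = p.sum(axis=2)                 # current (A, B) marginal
    p *= (target_ab / ab)[:, :, None]  # rescale to match target_ab
    bc = p.sum(axis=0)                 # current (B, C) marginal
    p *= (target_bc / bc)[None, :, :]  # rescale to match target_bc

print(np.round(p.sum(axis=2), 3))      # recovers target_ab
print(np.round(p.sum(axis=0), 3))      # recovers target_bc
```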
We investigate the scattered palindromic subwords in a finite word. We start by characterizing the words with the least number of scattered palindromic subwords. Then, we give an upper bound for the total number of scattered palindromic subwords in a word of length $n$ in terms of the Fibonacci number $F_n$, by proving that at most $F_n$ new scattered palindromic subwords can be created upon concatenating a letter to a word of length $n-1$. We propose a conjecture on the maximum number of scattered palindromic subwords in a word of length $n$ with $q$ distinct letters. We support the conjecture by showing its validity for words with $q\geq \frac{n}{2}$.
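A small brute-force check of the counting statement, with an illustrative word; the enumeration is exponential and only meant for short inputs.

```python
from itertools import combinations

def scattered_palindromic_subwords(w: str) -> set:
    """All distinct nonempty palindromic subsequences of w
    (brute force, exponential -- only for short words)."""
    found = set()
    n = len(w)
    for r in range(1, n + 1):
        for idx in combinations(range(n), r):
            s = "".join(w[i] for i in idx)
            if s == s[::-1]:
                found.add(s)
    return found

# Appending one letter to a word of length n-1 should create at most F_n
# new scattered palindromic subwords (the claimed Fibonacci bound).
w = "abaab"
before = scattered_palindromic_subwords(w)
after = scattered_palindromic_subwords(w + "a")
print(len(after - before))   # number of newly created palindromic subwords
```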
Minimal Supersymmetric Standard Model with gauge mediated supersymmetry breaking has all the necessary ingredients for a successful sub-eV Hubble scale inflation $H_{\rm inf} \sim 10^{-3}-10^{-1}$ eV. The model generates the right amplitude for scalar density perturbations and a spectral tilt within the range $0.90 \leq n_s \leq 1$. The reheat temperature is $T_{\rm R} \lesssim 10$ TeV, which strongly prefers electroweak baryogenesis and creates the right abundance of gravitinos with a mass $m_{3/2} \gtrsim 100$ keV to be the dark matter.
We perform a semiclassical analysis for the planar Schr\"odinger-Poisson system \[ \begin{cases} -\varepsilon^{2} \Delta\psi+V(x)\psi= E(x) \psi & \text{in $\mathbb{R}^2$},\\ -\Delta E= |\psi|^{2} & \text{in $\mathbb{R}^2$}, \end{cases} \tag{$SP_\varepsilon$} \] where $\varepsilon$ is a positive parameter corresponding to the Planck constant and $V$ is a bounded external potential. We detect solution pairs $(u_\varepsilon, E_\varepsilon)$ of the system $(SP_\varepsilon)$ as $\varepsilon \rightarrow 0$.
Alpha and cluster decays are analyzed for heavy nuclei located above $^{208}$Pb on the chart of nuclides: $^{216-220}$Rn and $^{220-224}$Ra, which are also candidates for observing the $2 \alpha$ decay mode. A microscopic theoretical approach based on relativistic Energy Density Functionals (EDF) is used to compute axially-symmetric deformation energy surfaces as functions of quadrupole, octupole and hexadecupole collective coordinates. Dynamical least-action paths for specific decay modes are calculated on the corresponding potential energy surfaces. The effective collective inertia is determined using the perturbative cranking approximation, and zero-point and rotational energy corrections are included in the model. The predicted half-lives for $\alpha$-decay are within one order of magnitude of the experimental values. In the case of single $\alpha$ emission, the nuclei considered in the present study exhibit least-action paths that differ significantly up to the scission point. The differences in alpha-decay lifetimes are driven not only by Q values, but also by variations of the least-action paths prior to scission. In contrast, the $2 \alpha$ decay mode presents very similar paths from equilibrium to scission, and the differences in lifetimes are mainly driven by the corresponding Q values. The predicted $^{14}$C cluster decay half-lives are within three orders of magnitude of the empirical values, and point to a much more complex pattern compared to the alpha-decay mode.
The orbit-sum method is an algebraic version of the reflection principle that was introduced by Bousquet-M\'{e}lou and Mishna to solve functional equations that arise in the enumeration of lattice walks with small steps restricted to $\mathbb{N}^2$. Its extension to walks with large steps was started by Bostan, Bousquet-M\'{e}lou and Melczer. We continue it here, making use of the primitive element theorem, Gr\"{o}bner bases and the shape lemma, and the Newton-Puiseux algorithm.
Borrowing constraints are a key component of modern international macroeconomic models. The analysis of Emerging Market (EM) economies generally assumes collateral borrowing constraints, i.e., firms' access to debt is constrained by the value of their collateralized assets. Using credit registry data from Argentina for the period 1998-2020, we show that less than 15% of firms' debt is based on the value of collateralized assets, with the remaining 85% based on firms' cash flows. Exploiting central bank regulations on banks' capital requirements and credit policies, we argue that the most prevalent borrowing constraint is defined in terms of the ratio of firms' interest payments to a measure of their present and past cash flows, akin to the interest coverage borrowing constraint studied in the corporate finance literature. Lastly, we argue that EMs exhibit a greater share of interest-sensitive borrowing constraints than the US and other Advanced Economies. From a structural point of view, we show that in an otherwise standard small open economy DSGE model, an interest coverage borrowing constraint leads to significantly stronger amplification of foreign interest rate shocks compared to the standard collateral constraint. This greater amplification provides a solution to the Spillover Puzzle of US monetary policy rates, by which EMs experience greater negative effects than Advanced Economies after a US interest rate hike. In terms of policy implications, this greater amplification makes a managed exchange rate policy more costly in the presence of an interest coverage constraint, given the greater interest rate sensitivity, compared to the standard collateral borrowing constraint.
We present four-dimensional ab initio potential energy surfaces for the three spin states of the NH-NH complex. The potentials are partially based on the work of Dhont et al. [J. Chem. Phys. 123, 184302 (2005)]. The surface for the quintet state is obtained at the RCCSD(T)/aug-cc-pVTZ level of theory, and the energy differences with the singlet and triplet states are calculated at the CASPTn/aug-cc-pVTZ (n = 2, 3) level of theory. The ab initio potentials are fitted to coupled spherical harmonics in the angular coordinates, and the long range is further expanded as a power series in 1/R. The RCCSD(T) potential is corrected for a size-consistency error prior to fitting. The long-range coefficients obtained from the fit are found to be in good agreement with perturbation theory calculations.
Modeling and synthesizing low-light raw noise is a fundamental problem for computational photography and image processing applications. Although most recent works have adopted physics-based models to synthesize noise, the signal-independent noise in low-light conditions is far more complicated and varies dramatically across camera sensors, which is beyond the description of these models. To address this issue, we introduce a new perspective to synthesize the signal-independent noise by a generative model. Specifically, we synthesize the signal-dependent and signal-independent noise in a physics- and learning-based manner, respectively. In this way, our method can be considered as a general model, that is, it can simultaneously learn different noise characteristics for different ISO levels and generalize to various sensors. Subsequently, we present an effective multi-scale discriminator termed Fourier transformer discriminator (FTD) to distinguish the noise distribution accurately. Additionally, we collect a new low-light raw denoising (LRD) dataset for training and benchmarking. Qualitative validation shows that the noise generated by our proposed noise model can be highly similar to the real noise in terms of distribution. Furthermore, extensive denoising experiments demonstrate that our method performs favorably against state-of-the-art methods on different sensors.
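A minimal sketch of the hybrid synthesis pipeline described above: the signal-dependent component is physics-based Poisson shot noise, while the signal-independent component, which the paper models with a learned generator, is stubbed here by a Gaussian read-noise placeholder; parameter names and values are illustrative.

```python
import numpy as np

def synthesize_raw_noise(clean, gain=2.0, read_sigma=1.5, rng=None):
    """Sketch of a hybrid noise model for a linear raw image `clean` given
    in digital numbers (DN), with `gain` DN per photo-electron. The
    signal-dependent part is physics-based Poisson shot noise; the
    signal-independent part is a Gaussian placeholder standing in for the
    learned generative model."""
    rng = rng or np.random.default_rng()
    shot = rng.poisson(clean / gain) * gain                 # signal-dependent
    independent = rng.normal(0.0, read_sigma, clean.shape)  # learned-part stub
    return shot + independent

noisy = synthesize_raw_noise(np.full((4, 4), 100.0))
```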
Recent advances in 3D object detection heavily rely on how the 3D data are represented, \emph{i.e.}, voxel-based or point-based representations. Many existing high-performance 3D detectors are point-based because this structure better retains precise point positions. Nevertheless, point-level features lead to high computation overheads due to unordered storage. In contrast, the voxel-based structure is better suited for feature extraction but often yields lower accuracy because the input data are divided into grids. In this paper, we take a slightly different viewpoint -- we find that precise positioning of raw points is not essential for high-performance 3D object detection and that coarse voxel granularity can also offer sufficient detection accuracy. Bearing this view in mind, we devise a simple but effective voxel-based framework, named Voxel R-CNN. By taking full advantage of voxel features in a two-stage approach, our method achieves comparable detection accuracy with state-of-the-art point-based models, but at a fraction of the computation cost. Voxel R-CNN consists of a 3D backbone network, a 2D bird's-eye-view (BEV) region proposal network and a detection head. A voxel RoI pooling is devised to extract RoI features directly from voxel features for further refinement. Extensive experiments are conducted on the widely used KITTI Dataset and the more recent Waymo Open Dataset. Our results show that, compared to existing voxel-based methods, Voxel R-CNN delivers a higher detection accuracy while maintaining a real-time frame processing rate, \emph{i.e.}, at a speed of 25 FPS on an NVIDIA RTX 2080 Ti GPU. The code is available at \url{https://github.com/djiajunustc/Voxel-R-CNN}.
The theory of locally anisotropic superspaces (supersymmetric generalizations of various types of Kaluza--Klein, Lagrange and Finsler spaces) is laid down. In this framework we analyze the construction of supervector bundles provided with nonlinear and distinguished connections and metric structures. Two models of locally anisotropic supergravity are proposed and studied in detail.
We present a multiple time step algorithm for hybrid path integral Monte Carlo simulations involving rigid linear rotors. We show how to calculate the quantum torques needed in the simulation from the rotational density matrix, for which we develop an approximate expression suitable for heteronuclear molecules. We use this method to study the effect of rotational quantization on the quantum sieving properties of carbon nanotubes, with particular emphasis on the para-T2/para-H2 selectivity at 20 K. We show how to treat classically only some of the degrees of freedom of the hydrogen molecule, and we find that in the limit of zero pressure the quantized nature of the rotational degrees of freedom greatly influences the selectivity, especially in the case of the (3,6) nanotube, which is the narrowest tube that we have studied. We also use path integral Monte Carlo simulations to calculate adsorption isotherms of different hydrogen isotopes in the interior of carbon nanotubes and the corresponding selectivity at finite pressures. It is found that the selectivity increases with respect to the zero-pressure value and tends to a constant value at saturation. We use a simplified effective harmonic oscillator model to discuss the origin of this behavior.
We show that in the presence of a U(1) noncommutative gauge interaction the noncommutative tachyonic system exhibits solitonic solutions for finite values of the noncommutativity parameter.
This paper has been withdrawn by the authors.
We examine the stability issue in the inverse problem of determining a scalar potential appearing in the stationary Schr{\"o}dinger equation in a bounded domain, from a partial elliptic Dirichlet-to-Neumann map. Namely, the Dirichlet data is imposed on the shadowed face of the boundary of the domain and the Neumann data is measured on its illuminated face. We establish a log-log stability estimate for the $L^2$-norm (resp. the $H^{-1}$-norm) of bounded (resp. $L^2$) potentials whose difference lies in any Sobolev space of positive order.
We present an abundance analysis of 97 nearby metal-poor (-3.3<[Fe/H]<-0.5) stars having kinematic characteristics of the Milky Way (MW) thick disk and inner and outer stellar halos. The high-resolution, high-signal-to-noise optical spectra of the sample stars were obtained with the High Dispersion Spectrograph mounted on the Subaru Telescope. Abundances of Fe, Mg, Si, Ca and Ti have been derived using a one-dimensional LTE abundance analysis code with Kurucz NEWODF model atmospheres. By assigning membership of the sample stars to the thick disk, inner halo, or outer halo components based on their orbital parameters, we examine abundance ratios as a function of [Fe/H] and kinematics for the three subsamples over wide ranges of metallicity and orbital parameters. We show that, in the metallicity range -1.5<[Fe/H]<= -0.5, the thick disk stars show consistently high mean [Mg/Fe] and [Si/Fe] ratios with small scatter. In contrast, the inner and outer halo stars show lower mean values of these abundance ratios with larger scatter. The [Mg/Fe], [Si/Fe] and [Ca/Fe] ratios for the inner and outer halo stars also show weak decreasing trends with [Fe/H] in the range [Fe/H]$>-2$. These results favor scenarios in which the MW thick disk formed through rapid chemical enrichment, primarily through Type II supernovae of massive stars, while the stellar halo formed at least in part via accretion of progenitor stellar systems chemically enriched on different timescales.
High-performance thermoelectric oxides could offer a great energy solution for integrated and embedded applications in sensing and electronics industries. Oxides, however, often suffer from low Seebeck coefficient when compared with other classes of thermoelectric materials. In search of high-performance thermoelectric oxides, we present a comprehensive density functional investigation, based on GGA$+U$ formalism, surveying the 3d and 4d transition-metal-containing ferrites of the spinel structure. Consequently, we predict MnFe$_2$O$_4$ and RhFe$_2$O$_4$ have Seebeck coefficients of $\sim \pm 600$ $\mu$V K$^{-1}$ at near room temperature, achieved by light hole and electron doping. Furthermore, CrFe$_2$O$_4$ and MoFe$_2$O$_4$ have even higher ambient Seebeck coefficients at $\sim \pm 700$ $\mu$V K$^{-1}$. In the latter compounds, the Seebeck coefficient is approximately a flat function of temperature up to $\sim 700$ K, offering a tremendous operational convenience. Additionally, MoFe$_2$O$_4$ doped with $10^{19}$ holes/cm$^3$ has a calculated thermoelectric power factor of $689.81$ $\mu$W K$^{-2}$ m$^{-1}$ at $300$ K, and $455.67$ $\mu$W K$^{-2}$ m$^{-1}$ at $600$ K. The thermoelectric properties predicted here can bring these thermoelectric oxides to applications at lower temperatures traditionally fulfilled by more toxic and otherwise burdensome materials.
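As a rough consistency check of the quoted numbers (assuming the ambient Seebeck coefficient $S \approx 700$ $\mu$V K$^{-1}$ still applies at this doping level), the power factor $\mathrm{PF}=S^{2}\sigma$ implies an electrical conductivity of order
\[ \sigma \approx \frac{689.81\ \mu\mathrm{W\,K^{-2}\,m^{-1}}}{\left(700\ \mu\mathrm{V\,K^{-1}}\right)^{2}} \approx 1.4\times10^{3}\ \mathrm{S\,m^{-1}} . \]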