The electromagnetic and the muonic longitudinal profiles at production enclose important information about the primary particle and the hadronic interactions that rule the shower development. In fact, these two profiles provide two different insights into the shower: the electromagnetic component gives a measurement of the energy and of the strength of the neutral pion channel, while the muonic profile, being intimately related to the decays of charged mesons, can be used as a direct probe of the high-energy hadronic interactions. In this work we explore the interplay between the electromagnetic and muonic profiles by analysing their phenomenological behaviour for different primary masses, energies, zenith angles, and high-energy hadronic interaction models. We find that the muonic longitudinal profile at production displays universal features similar to those known for the electromagnetic one. Moreover, we show that both profiles provide new primary mass composition variables which are fairly independent of the high-energy hadronic interaction model. Finally, we discuss how the information in the electromagnetic and the muonic longitudinal profiles can be combined to break the degeneracy between the primary mass composition and the high-energy hadronic physics.
We demonstrate that recent measurements of the angular diameter distance of 38 clusters of galaxies, using Chandra X-ray data and radio observations from the OVRO and BIMA interferometric arrays, place new and independent constraints on deviations from the duality relation between angular and luminosity distances. Using only cluster data, we find that the ratio between the two distances, defined as $\eta = D_L/[D_A(1+z)^2]$, is bound to be $\eta=0.97\pm0.03$ at 68% c.l., with no evidence for distance duality violation. Comparing the cluster angular diameter distance data with luminosity distance data from type Ia Supernovae, we obtain the model-independent constraint $\eta=1.01\pm0.07$ at 68% c.l. These results provide a useful check of the cosmological concordance model and of the presence of systematics in SN-Ia and cluster data.
In the past years a wealth of observations has allowed us to unravel the structural properties of the dark and luminous mass distribution in spirals. As a result, it has been found that their rotation curves follow, out to their virial radius, a Universal function (URC) made of two terms: one due to the gravitational potential of a Freeman stellar disk and the other due to that of a dark halo. The importance of the latter is found to decrease with galaxy mass. Individual objects reveal in detail that dark halos have a density core, whose size correlates with its central density value. These properties will guide $\Lambda$CDM Cosmology as it evolves to meet the challenge that observations presently pose.
Ground state properties of multi-orbital Hubbard models are investigated by the auxiliary field quantum Monte Carlo method. A Monte Carlo technique generalized to multi-orbital systems is introduced and examined in detail. The algorithm admits non-trivial cases where the negative sign problem is absent. We investigate one-dimensional systems with doubly degenerate orbitals by this new technique. Properties of the Mott insulating state are quantitatively clarified as a strongly correlated insulator, where the charge gap amplitude is much larger than the spin gap. The insulator-metal transition driven by the chemical potential shows a universality class with the correlation length exponent $\nu=1/2$, which is consistent with scaling arguments. Increasing the level splitting between the two orbitals drives a crossover from the Mott insulator with a high-spin state to the band insulator with a low-spin state, where the spin gap amplitude increases and becomes closer to the charge gap. The experimental relevance of our results, especially to Haldane materials, is discussed.
The dynamics of an anapole, seen as dark matter at low energies, is studied by solving the Schr\"odinger-Pauli equation in a potential involving the Dirac delta and its derivatives in three dimensions. This is an interesting mathematical problem that, as far as we know, has not been discussed previously. We show how bound states emerge in this approach, and the scattering problem is formulated (and solved) directly. The total cross section is in full agreement with independent calculations in the standard model.
In this paper, we present a step-by-step knowledge acquisition process, choosing a structured method that uses a questionnaire as the knowledge acquisition tool. The problem domain is how to evaluate teachers' performance in higher education through the use of expert system technology. The problem is how to acquire the specific knowledge for a selected problem efficiently and effectively from human experts and encode it in a suitable computer format. Acquiring knowledge from human experts in the process of expert system development is one of the most commonly cited problems. The questionnaire was sent to 87 domain experts at public and private universities in Pakistan, 25 of whom sent their valuable opinions. Most of the domain experts were highly qualified, well experienced and held highly responsible positions. The questionnaire was divided into 15 main groups of factors, which were further divided into 99 individual questions. These facts were analyzed further to give the questionnaire its final shape. This knowledge acquisition technique may be used as a learning tool for further research work.
We present methods for the efficient characterization of an optical coherent state $|\alpha\rangle$. We choose measurement settings adaptively and stochastically, based on the data as they are collected. Our algorithm divides the estimation into two distinct steps: (i) before the first detection of a vacuum state, the probability of choosing a measurement setting is proportional to the probability of detecting vacuum with that setting, which makes using two very similar measurement settings unlikely; and (ii) after the first detection of vacuum, we focus measurements in the region where vacuum is most likely to be detected. In step (i) [(ii)] the detection of vacuum (a photon) has a significantly larger effect on the shape of the posterior probability distribution of $\alpha$. Compared to nonadaptive schemes, our method reduces the number of measurement shots required to achieve a given accuracy approximately by a factor proportional to the area describing the initial uncertainty of $\alpha$ in phase space. While this algorithm is not directly robust against readout errors, we make it so by introducing repeated measurements in step (i).
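A minimal numerical sketch of this two-step strategy (the grid discretization, shot count and posterior bookkeeping are our own illustrative assumptions; the only physics used is the textbook fact that displacing $|\alpha\rangle$ by $-\beta$ gives a vacuum-detection probability of $e^{-|\alpha-\beta|^2}$):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_true = 1.2 + 0.7j                     # hidden amplitude to be estimated

x = np.linspace(-3, 3, 41)                  # coarse phase-space grid
grid = (x[:, None] + 1j * x[None, :]).ravel()
post = np.full(grid.size, 1.0 / grid.size)  # flat prior over alpha

# lik[a, b] = P(vacuum | alpha = grid[a], measurement setting beta = grid[b])
lik = np.exp(-np.abs(grid[:, None] - grid[None, :]) ** 2)

seen_vacuum = False
for shot in range(200):
    pred_vac = post @ lik                   # predicted vacuum prob. per setting
    if not seen_vacuum:                     # step (i): randomized settings
        beta = rng.choice(grid.size, p=pred_vac / pred_vac.sum())
    else:                                   # step (ii): focus where vacuum is likely
        beta = int(np.argmax(pred_vac))
    vac = rng.random() < np.exp(-abs(alpha_true - grid[beta]) ** 2)
    seen_vacuum |= vac
    post *= lik[:, beta] if vac else 1.0 - lik[:, beta]
    post /= post.sum()                      # Bayesian update of the posterior

print("estimate:", grid[np.argmax(post)], "truth:", alpha_true)
```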
A comprehensive treatment of the quantification of randomness certified device-independently by using the Hardy and Cabello-Liang-Li (CLL) nonlocality relations is provided in the two-party, two-measurements-per-party, two-outcomes-per-measurement (2-2-2) scenario. For the Hardy nonlocality, it is revealed that for a given amount of nonlocality, signified by a particular non-zero value of the Hardy parameter, the amount of Hardy-certifiable randomness is not unique, unlike the way the amount of certifiable randomness is related to the CHSH nonlocality. This is because any specified non-maximal value of the Hardy nonlocality parameter characterises a set of quantum extremal distributions, which in turn leads to a range of certifiable amounts of randomness for a given Hardy parameter. On the other hand, for a given amount of CLL nonlocality, the certifiable randomness is unique, similar to the CHSH case. Furthermore, the tightness of our analytical treatment evaluating the respective guaranteed bounds for the Hardy and CLL relations is demonstrated by their exact agreement with the bounds computed via Semi-Definite Programming. Interestingly, the analytically evaluated maximum achievable bounds of both Hardy- and CLL-certified randomness are found to be realisable for non-maximal values of the Hardy and CLL nonlocality parameters. In particular, we show that close to the maximum of 2 bits of CLL-certified randomness can be realised from non-maximally entangled pure two-qubit states corresponding to small values of the CLL nonlocal parameter. This clearly illustrates the quantitative incommensurability between randomness, nonlocality and entanglement.
A new challenge to quantitative finance after the recent financial crisis is the study of the credit valuation adjustment (CVA), which requires modeling the future values of a portfolio. In this paper, following recent work in [Weinan E (2017), Han (2017)], we apply deep learning to attack this problem. The future values are parameterized by neural networks, and the parameters are then determined through optimization. Two concrete products are studied: a Bermudan swaption and a Mark-to-Market cross-currency swap. We obtain their expected positive/negative exposures, and further study the resulting functional form of the future values. Such an approach represents a new framework for modeling XVA, and it also sheds new light on other methods like American Monte Carlo.
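For orientation, the exposure profiles mentioned above are time-indexed expectations over simulated future portfolio values, $\mathrm{EPE}(t)=\mathbb{E}[\max(V_t,0)]$ and $\mathrm{ENE}(t)=\mathbb{E}[\min(V_t,0)]$; a minimal Monte Carlo sketch with a toy random-walk stand-in for the portfolio value (the paper instead represents $V_t$ by trained neural networks):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 10_000, 50

# Toy stand-in for future portfolio values V(t) along simulated paths.
V = np.cumsum(rng.normal(0.0, 0.1, size=(n_paths, n_steps)), axis=1)

epe = np.maximum(V, 0.0).mean(axis=0)   # expected positive exposure profile
ene = np.minimum(V, 0.0).mean(axis=0)   # expected negative exposure profile
print(epe[-1], ene[-1])
```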
Here we present a detailed analysis of the properties and evolution of different dwarf galaxies, candidates to host the coalescence of the black hole binary systems (BHBs) generating GW150914-like events. By adopting a novel theoretical framework coupling the binary population synthesis code \texttt{SeBa} with the galaxy formation model \texttt{GAMESH}, we can investigate the detailed evolution of these objects in a well-resolved cosmological volume of 4~cMpc, having a Milky Way-like (MW) galaxy forming at its center. We identify three classes of interesting candidate galaxies: MW progenitors, dwarf satellites and dwarf galaxies evolving in isolation. We find that: (i) despite differences in individual histories and specific environments, the candidates reduce to only nine representative galaxies; (ii) among them, $\sim44\%$ merge into the MW halo progenitors by the redshift of the expected signal, while the remaining dwarfs are found as isolated or as satellites of the MW, and their evolution is strongly shaped by both their peculiar dynamical history and environmental feedback; (iii) a stringent condition for the environments where GW150914-like binaries can form comes from a combination of the accretion history of their DM halos and the radiative feedback in the high-redshift universe; (iv) by comparing with the observed catalogues from the DGS and ALLSMOG surveys, we find two observed dwarfs respecting the properties predicted by our model. We finally note how the present analysis opens the possibility of building future strategies for host galaxy identification.
Most multimodal multi-objective evolutionary algorithms (MMEAs) aim to find all global Pareto optimal sets (PSs) of a multimodal multi-objective optimization problem (MMOP). However, in real-world problems, decision makers (DMs) may also be interested in local PSs. Searching for both global and local PSs is also more general when dealing with MMOPs, which can be seen as a generalized MMOP. In addition, state-of-the-art MMEAs exhibit poor convergence on high-dimensional MMOPs. To address these two issues, this study proposes a novel coevolutionary framework for multimodal multi-objective optimization, termed CoMMEA, to better obtain both global and local PSs and, simultaneously, to improve convergence performance on high-dimensional MMOPs. Specifically, CoMMEA introduces two archives into the search process and coevolves them simultaneously through effective knowledge transfer. The convergence archive assists CoMMEA in quickly approaching the Pareto optimal front (PF). The knowledge of the converged solutions is then transferred to the diversity archive, which uses a local convergence indicator and an $\epsilon$-dominance-based method to obtain global and local PSs effectively; a sketch of the latter is given below. Experimental results show that CoMMEA is competitive with seven state-of-the-art MMEAs on fifty-four complex MMOPs.
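A sketch of the $\epsilon$-dominance test that such a diversity archive relies on (our own minimal formulation for a minimization problem; the paper's exact operator may differ):

```python
import numpy as np

def eps_dominates(f_a, f_b, eps):
    """True if objective vector f_a epsilon-dominates f_b (minimization):
    f_a is at least as good as f_b relaxed by eps in every objective,
    and strictly better in at least one."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b + eps) and np.any(f_a < f_b + eps))

# With eps = 0.1, a slightly worse (possibly local) solution survives.
print(eps_dominates([1.0, 2.0], [1.05, 2.05], eps=0.1))  # True
print(eps_dominates([1.0, 2.0], [0.5, 0.5], eps=0.1))    # False
```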
Multi-spectral CT (MSCT) is increasingly used in industrial non-destructive testing and medical diagnosis because of its outstanding performance, such as material distinguishability. The process of obtaining MSCT data can be modeled as a system of nonlinear equations, and basis material decomposition comes down to the inverse problem of these nonlinear equations. For data from different spectra, geometrically inconsistent parameters cause geometrically inconsistent rays, which lead to mismatched nonlinear equations. How to solve the mismatched nonlinear equations accurately and quickly is an open issue. This paper proposes a general iterative method to invert the mismatched nonlinear equations and employs Schmidt orthogonalization to accelerate convergence. The validity of the proposed method is verified by MSCT basis material decomposition experiments. The results show that the proposed method decomposes the basis material images accurately and greatly improves the convergence speed.
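A toy sketch of the two ingredients named above, on an illustrative stand-in problem rather than the paper's forward model: a Newton-type iterative inversion of a small nonlinear system, plus the classical Gram-Schmidt routine of the kind used to orthogonalize update directions (how the two are combined in the paper is not reproduced here):

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the columns of V (classical Gram-Schmidt)."""
    Q = np.zeros_like(V, dtype=float)
    for j in range(V.shape[1]):
        v = V[:, j] - Q[:, :j] @ (Q[:, :j].T @ V[:, j])
        Q[:, j] = v / np.linalg.norm(v)
    return Q

# Toy nonlinear system F(x) = 0 standing in for the imaging equations.
def F(x):
    return np.array([x[0] ** 2 + x[1] - 2.0,
                     x[0] + x[1] ** 2 - 2.0])

def J(x):  # Jacobian of F
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

x = np.zeros(2)
for _ in range(25):                       # iterative inversion
    x = x - np.linalg.solve(J(x), F(x))
print(x, F(x))                            # x -> (1, 1), residual -> 0
```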
We examine the fourth order H\'enon equation $\Delta^2 u = |x|^\alpha u^p$ in $\mathbb{R}^N$, where $\alpha > 0$. Define the Hardy-Sobolev exponent $p_4(\alpha):= \frac{N+4+2\alpha}{N-4}$. We show that in dimension $N=5$ there are no positive bounded classical solutions of this equation provided $1 < p < p_4(\alpha)$.
The true prosoluble completion $P\mathcal{S}(\Gamma)$ of a group $\Gamma$ is the inverse limit of the projective system of soluble quotients of $\Gamma$. Our purpose is to describe examples and to point out some natural open problems. We discuss a question of Grothendieck for profinite completions and its analogue for true prosoluble and true pronilpotent completions. 1. Introduction. 2. Completion with respect to a directed set of normal subgroups. 3. Universal property. 4. Examples of directed sets of normal subgroups. 5. True prosoluble completions. 6. Examples. 7. On the true prosoluble and the true pronilpotent analogues of Grothendieck's problem.
Light plays an important role in human well-being. However, most computer vision tasks treat pixels without considering their relationship to physical luminance. To address this shortcoming, we introduce the Laval Photometric Indoor HDR Dataset, the first large-scale photometrically calibrated dataset of high dynamic range 360{\deg} panoramas. Our key contribution is the calibration of an existing, uncalibrated HDR dataset. We do so by accurately capturing RAW bracketed exposures simultaneously with a professional photometric measurement device (chroma meter) for multiple scenes across a variety of lighting conditions. Using the resulting measurements, we establish the calibration coefficients to be applied to the HDR images. The resulting dataset is a rich representation of indoor scenes which displays a wide range of illuminance and color and varied types of light sources. We exploit the dataset to introduce three novel tasks, in which per-pixel luminance, per-pixel color and planar illuminance can be predicted from a single input image. Finally, we also capture another, smaller photometric dataset with a commercial 360{\deg} camera, to experiment on generalization across cameras. We are optimistic that the release of our datasets and associated code will spark interest in physically accurate light estimation within the community. Dataset and code are available at https://lvsn.github.io/beyondthepixel/.
We present electrical and thermal transport measurements in single crystals of the metallic oxide RuO$_2$. The resistivity and Seebeck coefficient measured up to 970K confirm the metallic nature of transport. Magnetoresistance and Hall effect measurements as a function of orientation can be most easily described by a multiband transport model. We find that the ordinary Hall effect dominates any anomalous Hall signal in single crystals.
We study numerically the adsorption of a mixture of CO$_2$ and CH$_4$ on a graphite substrate covered by graphene nanoribbons (NRs). The NRs are flat and parallel to the graphite surface, at a variable distance ranging from 6 \r{A} to 14 \r{A}. We show that the NRs-graphite substrate acts as an effective filter for CO$_2$. Our study is based on Molecular Dynamics (MD) simulations. Methane is modeled as a spherical molecule, and carbon dioxide is represented as a linear rigid body. Graphite is modeled as a continuous material, while the NRs are treated atomistically. We observe that when the NRs are placed 6 \r{A} above the graphite surface, methane is blocked out, while CO$_2$ molecules can diffuse and be collected between the NRs and the graphite surface. Consequently, the selectivity for CO$_2$ is extremely high. We also observe that the initial rate of adsorption of CO$_2$ is much higher than that of CH$_4$. Overall, we show that the filter can be optimized by controlling the gap between NRs and the NRs-graphite separation.
In this study, we explored whether and how area-wide air pollution affected individuals' activity participation and travel behaviors, and how these effects differed by neighborhood context. Using multi-day travel survey data provided by 403 adults from 230 households in a small urban area in northern Utah, US, we analyzed a series of 20 activity and travel outcomes. We investigated the associations of three different metrics of (measured and perceived) air quality with these outcomes, separately for residents of urban and suburban/rural neighborhoods, controlling for personal and household characteristics. Our models found some measurable changes in activity and travel patterns on days with poor air quality. In urban areas, people engaged in more mandatory (work/school) activities, whereas there was no discernible change in suburban/rural areas. The total travel time for urban residents increased, driven by increases in trip-making and travel time by public modes (bus) and increases in travel time by private modes (car). On the other hand, suburban/rural residents traveled shorter total distances (mostly through lower vehicle miles traveled), and there was a notable uptick in the probability of being an active mode user (walk/bike). Air quality perceptions also seemed to play a role, at least for urban residents, who walked/biked longer distances, rode the bus for longer distances/times, and drove fewer miles on days with worse perceived air pollution. Overall, the results are somewhat encouraging, finding more evidence of altruistic than risk-averse travel behavioral responses to episodes of area-wide air pollution, although more research is needed.
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.
Networks with a prescribed power-law scaling in the spectrum of the graph Laplacian can be generated by evolutionary optimization. The Laplacian spectrum encodes the dynamical behavior of many important processes. Here, the networks are evolved to exhibit subdiffusive dynamics. Under the additional constraint of degree-regularity, the evolved networks display an abundance of symmetric motifs arranged into loops and long linear segments. Exploiting results from algebraic graph theory on symmetric networks, we find the underlying backbone structures and how they contribute to the spectrum. The resulting coarse-grained networks provide an intuitive view of how the anomalous diffusive properties can be realized in the evolved structures.
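For reference, the spectrum in question is that of the combinatorial graph Laplacian $L = D - A$; a minimal sketch of how one might measure a power-law trend in the low-lying eigenvalues (networkx/numpy assumed; the evolutionary optimization loop and fitness function of the study are not reproduced):

```python
import networkx as nx
import numpy as np

G = nx.random_regular_graph(3, 200, seed=0)          # degree-regular example
L = nx.laplacian_matrix(G).toarray().astype(float)   # L = D - A
lam = np.sort(np.linalg.eigvalsh(L))                 # Laplacian spectrum

# Power-law scaling of the eigenvalue counting function N(lam) ~ lam^alpha
# appears as a straight line in log-log coordinates at the low end.
low = lam[lam > 1e-9][:39]                           # low-lying nonzero eigenvalues
counts = np.arange(1, low.size + 1)
alpha = np.polyfit(np.log(low), np.log(counts), 1)[0]
print("fitted low-spectrum exponent:", alpha)
```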
We study singularities of spacelike, constant (non-zero) mean curvature (CMC) surfaces in the Lorentz-Minkowski 3-space $L^3$. We show how to solve the singular Bj\"orling problem for such surfaces, which is stated as follows: given a real analytic null curve $f_0(x)$, and a real analytic null vector field $v(x)$ parallel to the tangent field of $f_0$, find a conformally parameterized (generalized) CMC $H$ surface in $L^3$ which contains this curve as a singular set and such that the partial derivatives $f_x$ and $f_y$ are given by $\frac{d f_0}{d x}$ and $v$ along the curve. Within the class of generalized surfaces considered, the solution is unique and we give a formula for the generalized Weierstrass data for this surface. This gives a framework for studying the singularities of non-maximal CMC surfaces in $L^3$. We use this to find the Bj\"orling data -- and holomorphic potentials -- which characterize cuspidal edge, swallowtail and cross cap singularities.
Let $\mathfrak{M}(\Sigma)$ be an open and connected subset of the space of hyperbolic metrics on a closed orientable surface, and $\mathfrak{M}(M)$ an open and connected subset of the space of metrics on an orientable manifold of dimension at least $3$. We impose conditions on $M$ and $\mathfrak{M}(M)$, which are often satisfied when the metrics in $\mathfrak{M}(M)$ have non-positive curvature. Under these conditions, the data of a homotopy class of maps from $\Sigma$ to $M$ gives $\mathfrak{M}(\Sigma)\times \mathfrak{M}(M)$ the structure of a space of harmonic maps. Using transversality theory for Banach manifolds, we prove that the set of somewhere injective harmonic maps is open, dense, and connected in the moduli space. We also prove some results concerning the distribution of harmonic immersions and embeddings in the moduli space.
A module endomorphism $f$ on an algebra $A$ is called an averaging operator if it satisfies $f(xf(y)) = f(x)f(y)$ for any $x, y\in A$. An algebra $A$ with an averaging operator $f$ is called an averaging algebra. Averaging operators have been studied for over one hundred years. We study averaging operators from an algebraic point of view. In the first part, we construct free averaging algebras on an algebra $A$ and on a set $X$, and free objects for some subcategories of averaging algebras. Then we study properties of these free objects and, as an application, we discuss some decision problems of averaging algebras. In the second part, we show how averaging operators induce Lie algebra structures. We discuss conditions under which a Lie bracket operation is induced by an averaging operator. Then we discuss properties of these induced Lie algebra structures. Finally we apply the results from this discussion in the study of averaging operators.
Geometric deep learning has sparked rising interest in computer graphics for shape understanding tasks, such as shape classification and semantic segmentation. When the input is a polygonal surface, one must cope with the irregular mesh structure. Motivated by geometric spectral theory, we introduce Laplacian2Mesh, a novel and flexible convolutional neural network (CNN) framework for coping with irregular triangle meshes (vertices may have any valence). By mapping the input mesh surface to the multi-dimensional Laplace-Beltrami space, Laplacian2Mesh enables shape analysis tasks to be performed directly with mature CNNs, without the need to deal with the irregular connectivity of the mesh structure. We further define a mesh pooling operation such that the receptive field of the network can be expanded while retaining the original vertex set as well as the connections between vertices. Besides, we introduce a channel-wise self-attention block to learn the individual importance of feature ingredients. Laplacian2Mesh not only decouples the geometry from the irregular connectivity of the mesh structure but also better captures the global features that are central to shape classification and segmentation. Extensive tests on various datasets demonstrate the effectiveness and efficiency of Laplacian2Mesh, particularly its robustness to noise across various learning tasks.
As an attempt towards assessing the robustness of embodied navigation agents, we propose RobustNav, a framework to quantify the performance of embodied navigation agents when exposed to a wide variety of visual corruptions (affecting RGB inputs) and dynamics corruptions (affecting transition dynamics). Most recent efforts in visual navigation have typically focused on generalizing to novel target environments with similar appearance and dynamics characteristics. With RobustNav, we find that some standard embodied navigation agents significantly underperform (or fail) in the presence of visual or dynamics corruptions. We systematically analyze the kinds of idiosyncrasies that emerge in the behavior of such agents when operating under corruptions. Finally, for visual corruptions in RobustNav, we show that while standard techniques to improve robustness, such as data augmentation and self-supervised adaptation, offer some zero-shot resistance and improvements in navigation performance, there is still a long way to go in terms of recovering lost performance relative to clean "non-corrupt" settings, warranting more research in this direction. Our code is available at https://github.com/allenai/robustnav
Learning embeddings for the entities and relations in a knowledge graph (KG) has benefited many downstream tasks. In recent years, scoring functions, the crux of KG learning, have been human-designed to measure the plausibility of triples and to capture different kinds of relations in KGs. However, as relations exhibit intricate patterns that are hard to infer before training, none of them consistently performs best on benchmark tasks. In this paper, inspired by the recent success of automated machine learning (AutoML), we search bilinear scoring functions for different KG tasks using AutoML techniques. Exploring the domain-specific information in this search is non-trivial. We first set up a search space for AutoBLM by analyzing existing scoring functions. Then, we propose a progressive algorithm (AutoBLM) and an evolutionary algorithm (AutoBLM+), which are further accelerated by a filter and a predictor to deal with the domain-specific properties of KG learning. Finally, we perform extensive experiments on benchmarks in KG completion, multi-hop query, and entity classification tasks. Empirical results show that the searched scoring functions are KG-dependent, new to the literature, and outperform existing scoring functions. AutoBLM+ is better than AutoBLM, as the evolutionary algorithm can more flexibly explore better structures within the same budget.
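For context, the bilinear family being searched assigns a triple $(h, r, t)$ the score $\mathbf{h}^\top W_r \mathbf{t}$; a minimal numpy sketch (the dimension and the unconstrained dense $W_r$ are illustrative assumptions, not the structured matrices AutoBLM discovers):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                           # embedding dimension (toy)
h, t = rng.normal(size=d), rng.normal(size=d)   # head/tail entity embeddings
W_r = rng.normal(size=(d, d))                   # relation-specific matrix

def bilinear_score(h, W_r, t):
    """Plausibility score of the triple (h, r, t); higher = more plausible."""
    return h @ W_r @ t

print(bilinear_score(h, W_r, t))
```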
Data from e+e- annihilation into hadrons, collected with the OPAL detector at LEP at centre-of-mass energies between 91 GeV and 209 GeV, are used to study the four-jet rate as a function of the Durham algorithm resolution parameter ycut. The four-jet rate is compared to next-to-leading order calculations that include the resummation of large logarithms. The strong coupling measured from the four-jet rate is alphas(Mz0) = 0.1182+-0.0003(stat.)+-0.0015(exp.)+-0.0011(had.)+-0.0012(scale)+-0.0013(mass), in agreement with the world average. Next-to-leading order fits to the D-parameter and thrust minor event-shape observables are also performed for the first time. We find consistent results, but with significantly larger theoretical uncertainties.
Contents pages for the contributions on behalf of the ANTARES Collaboration to the 30th ICRC, which took place in July 2007 in Merida, Mexico. The contents are in HTML form with clickable links to the papers available on the astrophysics archive.
This study examines on-shell supersymmetry breaking in the Abelian $\mathcal{N}=1$ Chern-Simons-matter model in three-dimensional spacetime. The classical Lagrangian is scale-invariant, but two-loop radiative corrections to the effective potential break this symmetry, along with gauge symmetry and on-shell supersymmetry. To investigate this issue, the Renormalization Group Equation is used to calculate the two-loop effective potential.
It is well known that the Toda theories can be obtained by reduction from the Wess-Zumino-Novikov-Witten (WZNW) model, but it is less known that this WZNW $\rightarrow$ Toda reduction is `incomplete'. The reason for this incompleteness is that the Gauss decomposition used to define the Toda fields from the WZNW field is valid locally but not globally over the WZNW group manifold, which implies that the reduced system is actually not just the Toda theory but has a much richer structure. In this note we furnish a framework which allows us to study the reduced system globally, and thereby present some preliminary results on the global aspects. For simplicity, we analyze primarily 0 $+$ 1 dimensional toy models for $G = SL(n, {\bf R})$, but we also discuss the 1 $+$ 1 dimensional model for $G = SL(2, {\bf R})$, which corresponds to the WZNW $\rightarrow$ Liouville reduction.
Recent advances in interdisciplinary fields as diverse as astrophysics, cosmogeophysics and nuclear geology have led to interesting developments in the non-organic theory of the genesis of petroleum. This theory, which holds that petroleum is of abiogenic primordial origin, provides an explanation for certain features of petroleum geology that are hard to explain within the standard organic framework. If the non-organic theory is correct, then hydrocarbon reserves would be enormous and almost inexhaustible.
We present images and a multi-wavelength photometric catalog based on all of the JWST NIRCam observations obtained to date in the region of the Abell 2744 galaxy cluster. These data come from three different programs, namely the GLASS-JWST Early Release Science Program, UNCOVER, and Director's Discretionary Time program 2756. The observed area in the NIRCam wide-band filters - covering the central and extended regions of the cluster, as well as new parallel fields - is 46.5 arcmin$^2$ in total. All images in eight bands (F090W, F115W, F150W, F200W, F277W, F356W, F410M, F444W) have been reduced with the latest calibration and reference files available. Data reduction has been performed using an augmented version of the official JWST pipeline, with improvements aimed at removing or mitigating defects in the raw images and improving the background subtraction and photometric accuracy. We obtain a F444W-detected multi-band catalog, including all NIRCam and available HST data, adopting forced aperture photometry on PSF-matched images. The catalog is intended to enable early scientific investigations and is optimized for the study of faint galaxies; it contains 24389 sources, with a 5$\sigma$ limiting magnitude in the F444W band ranging from 28.5 AB to 30.5 AB as a result of the varying exposure times of the surveys that observed the field. We publicly release the reduced NIRCam images, the associated multi-wavelength catalog, and the code adopted for $1/f$ noise removal, with the aim of helping users familiarize themselves with JWST NIRCam data and identify suitable targets for follow-up observations.
Machine learning (ML)-based parameterizations have been developed for Earth System Models (ESMs) with the goal of better representing subgrid-scale processes or accelerating computations. ML-based parameterizations within hybrid ESMs have successfully learned subgrid-scale processes from short high-resolution simulations. However, most studies have used a particular ML method to parameterize the subgrid tendencies or fluxes originating from the compound effect of various small-scale processes (e.g., radiation, convection, gravity waves) in mostly idealized settings or from superparameterizations. Here, we use a filtering technique to explicitly separate convection from these processes in simulations with the Icosahedral Non-hydrostatic modelling framework (ICON) in a realistic setting, and we benchmark various ML algorithms against each other offline. We discover that an unablated U-Net, while showing the best offline performance, learns reverse causal relations between convective precipitation and subgrid fluxes. While we were able to connect the learned relations of the U-Net to physical processes, this was not possible for the non-deep-learning-based Gradient Boosted Trees. The ML algorithms are then coupled online to the host ICON model. Our best online-performing model, an ablated U-Net excluding precipitating tracer species, shows closer agreement with the high-resolution simulation for simulated precipitation extremes and means than the traditional scheme. However, a smoothing bias is introduced both in water vapor path and in mean precipitation. Online, the ablated U-Net significantly improves stability compared to the non-ablated U-Net and runs stably for the full simulation period of 180 days. Our results hint at the potential to significantly reduce systematic errors with hybrid ESMs.
The recently proposed Multilinear Compressive Learning (MCL) framework combines Multilinear Compressive Sensing and Machine Learning into an end-to-end system that takes into account the multidimensional structure of the signals when designing the sensing and feature synthesis components. The key idea behind MCL is the assumption of the existence of a tensor subspace which can capture the essential features of the signal for the downstream learning task. Thus, the ability to find such a discriminative tensor subspace and to optimize the system to project the signals onto that data manifold plays an important role in Multilinear Compressive Learning. In this paper, we propose a novel solution to address both of the aforementioned requirements, i.e., how to find those tensor subspaces in which the signals of interest are highly separable, and how to optimize the sensing and feature synthesis components to transform the original signals to the data manifold found in the first question. In our proposal, the discovery of a high-quality data manifold is conducted by training a nonlinear compressive learning system on the inference task. Its knowledge of the data manifold of interest is then progressively transferred to the MCL components via multi-stage supervised training, with the supervisory information encoding how the compressed measurements, the synthesized features, and the predictions should look. The proposed knowledge transfer algorithm also comes with a semi-supervised adaptation that enables compressive learning models to utilize unlabeled data effectively. Extensive experiments demonstrate that the proposed knowledge transfer method can effectively train MCL models to compressively sense and synthesize better features for the learning tasks with improved performance, especially when the complexity of the learning task increases.
In this work, we provide a detailed analysis of the encoding of quantum information which is invariant with respect to arbitrary Lorentz transformations. We significantly extend already known results and provide complements where necessary. In particular, we introduce novel schemes for invariant encoding which utilize the so-called pair-wise helicity -- a physical parameter characterizing pairs of electric-magnetic charges. We also introduce new schemes for ordinary massive and massless particles based on states with fixed total momentum, in contrast to all previously proposed protocols, which assumed equal momenta of all the particles involved in the encoding scheme. Moreover, we provide a systematic discussion of already existing protocols and show directly that they are invariant with respect to Lorentz transformations drawn according to any distribution, a fact which was not manifestly shown in previous works.
Large language models (LLMs) have played a pivotal role in building communicative AI to imitate human behaviors but face the challenge of efficient customization. To tackle this challenge, recent studies have delved into the realm of model editing, which manipulates specific memories of language models and changes the related language generation. However, the robustness of model editing remains an open question. This work seeks to understand the strengths and limitations of editing methods, thus facilitating robust, realistic applications of communicative AI. Concretely, we conduct extensive analysis to address the three key research questions. Q1: Can edited LLMs behave consistently resembling communicative AI in realistic situations? Q2: To what extent does the rephrasing of prompts lead LLMs to deviate from the edited knowledge memory? Q3: Which knowledge features are correlated with the performance and robustness of editing? Our experimental results uncover a substantial disparity between existing editing methods and the practical application of LLMs. On rephrased prompts that are complex and flexible but common in realistic applications, the performance of editing experiences a significant decline. Further analysis shows that more popular knowledge is memorized better, easier to recall, and more challenging to edit effectively.
The structure of the divergences for transverse theories of gravity is studied to one-loop order. These theories are invariant only under those diffeomorphisms that enjoy unit Jacobian determinant (TDiff), so that the determinant of the metric transforms as a true scalar instead of a density. Generically, the models include an additional scalar degree of freedom contained in the metric besides the usual spin-two component. When the cosmological constant is fine-tuned to zero, there are only two theories which are on-shell finite, namely the one in which the symmetry is enhanced to the full group of diffeomorphisms, i.e. Einstein's gravity, and another one, denoted WTDiff, which enjoys local Weyl invariance. Both of them are free from the additional scalar.
This work is devoted to the numerical simulation of the BGK equation for two species in the fluid limit using a particle method. We are thus interested in a gas mixture consisting of two species without chemical reactions, assuming that the number of particles of each species remains constant. We consider the kinetic two-species model proposed by Klingenberg, Pirner and Puppo in 2017, which separates intra- and interspecies collisions. We want to study numerically the influence of the two relaxation terms, one corresponding to intra-, the other to interspecies collisions. For this, we use the method of micro-macro decomposition. First, we derive an equivalent model based on the micro-macro decomposition (see Bennoune, Lemou and Mieussens, 2007 and Crestetto, Crouseilles and Lemou, 2013). The kinetic micro part is solved by a particle method, whereas the fluid macro part is discretized by a standard finite volume scheme. The main advantages of this approach are: (i) the noise inherent to the particle method is reduced compared to a standard particle method (without micro-macro decomposition), and (ii) the computational cost of the method is reduced in the fluid limit, since a small number of particles is then sufficient.
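For orientation, the micro-macro decomposition referred to above splits each distribution function into its local Maxwellian equilibrium and a kinetic remainder; schematically, for one species (notation ours), $$f(t,x,v) = M[f](t,x,v) + g(t,x,v), \qquad M[f] = \frac{\rho}{(2\pi T)^{d/2}}\exp\Big(-\frac{|v-u|^2}{2T}\Big),$$ where the moments $(\rho, u, T)$ of $M[f]$ match those of $f$; the macro part is advanced by the finite volume scheme and the micro part $g$ is carried by the particles.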
The endogenous adaptation of agents, who may adjust their local contact network in response to the risk of being infected, can have the perverse effect of increasing the overall systemic infectiveness of a disease. We study a dynamical model over two geographically distinct but interacting locations to better understand, theoretically, the mechanism at play. Moreover, we provide empirical motivation from the Italian National Bovine Database for the period 2006-2013.
Superconductor/ferromagnet (S/F) proximity effect theory predicts that the superconducting critical temperature of F1/F2/S or F1/S/F2 trilayers for the parallel orientation of the F1 and F2 magnetizations is smaller than for the antiparallel one. This suggests the possibility of controlled switching between the superconducting and normal states of the S layer. Here, using the spin switch design F1/F2/S theoretically proposed by Oh et al. [Appl. Phys. Lett. 71, 2376 (1997)], which comprises a ferromagnetic bilayer separated by a non-magnetic metallic spacer layer as the ferromagnetic component, and an ordinary superconductor as the second interface component, we have successfully realized a full spin switch effect for the superconducting current.
We determine the [OIII]$\lambda5007$ equivalent width (EW) distribution of $1.700<\rm{z}<2.274$ rest-frame UV-selected (M$_{\rm{UV}}<-19$) star-forming galaxies in the GOODS North and South fields. We make use of deep HDUV broadband photometry catalogues for selection and 3D-HST WFC3/IR grism spectra for the measurement of line properties. The [OIII]$\lambda5007$ EW distribution allows us to measure the abundance of extreme emission line galaxies (EELGs) within this population. We fit a log-normal distribution to the [OIII]$\lambda5007$ rest-frame equivalent widths of galaxies in our sample, with location parameter $\mu=4.24\pm0.07$ and variance parameter $\sigma= 1.33\pm0.06$. This EW distribution has a mean [OIII]$\lambda5007$ EW of 168$\pm1\r{A}$. The fractions of $\rm{z}\sim2$ rest-UV-selected galaxies with [OIII]$\lambda5007$ EWs greater than $500, 750$ and $1000\r{A}$ are measured to be $6.8^{+1.0}_{-0.9}\%$, $3.6^{+0.7}_{-0.6}\%$, and $2.2^{+0.5}_{-0.4}\%$ respectively. The EELG fractions do not vary strongly with UV luminosity in the range ($-21.6<M_{\rm{UV}}<-19.0$) considered in this paper, consistent with findings at higher redshifts. We compare our results to $\rm{z}\sim5$ and $\rm{z}\sim7$ studies where candidate EELGs have been discovered through Spitzer/IRAC colours, and we identify rapid evolution with redshift in the fraction of star-forming galaxies observed in an extreme emission line phase (a rise by a factor $\sim10$ between $\rm{z}\sim2$ and $\rm{z}\sim7$). This evolution is consistent with an increased incidence of strong bursts in the galaxy population of the reionisation era. While this population makes a sub-dominant contribution to the ionising emissivity at $\rm{z}\simeq2$, EELGs are likely to dominate the ionising output in the reionisation era.
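As a quick consistency check (standard lognormal identity; arithmetic ours), the quoted mean follows directly from the fitted parameters: $$\langle \mathrm{EW} \rangle = e^{\mu+\sigma^2/2} = e^{4.24+1.33^2/2} \approx e^{5.12} \approx 168\,\r{A}.$$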
Unsupervised learning of compact and relevant state representations has proved very useful for solving complex reinforcement learning tasks. In this paper, we propose a recurrent capsule network that learns such representations by trying to predict the future observations in an agent's trajectory.
Owing to Multilingual Neural Machine Translation's (MNMT) capability of zero-shot translation, many works have been carried out to fully exploit the potential of MNMT in zero-shot translation. It is often hypothesized that positional information may hinder the MNMT from outputting a robust encoded representation for decoding. However, previous approaches treat all positional information equally and thus are unable to remove certain positional information selectively. In sharp contrast, this paper investigates how to learn to selectively preserve useful positional information. We describe the specific mechanism by which positional information influences MNMT from a linguistic perspective at the token level. Based on this analysis, we design a token-level position disentangle module (TPDM) framework to disentangle positional information at the token level. Our experiments demonstrate that our framework improves zero-shot translation by a large margin while reducing the performance loss in the supervised direction compared to previous works.
Neuronal activity in the brain generates synchronous oscillations of the Local Field Potential (LFP). Traditional analyses of LFPs are based on decomposing the signal into simpler components, such as sinusoidal harmonics. However, a common drawback of such methods is that the decomposition primitives are usually presumed from the onset, which may bias our understanding of the signal's structure. Here, we introduce an alternative approach that allows an impartial, high-resolution, hands-off decomposition of the brain waves into a small number of discrete, frequency-modulated oscillatory processes, which we call oscillons. In particular, we demonstrate that mouse hippocampal LFPs contain a single oscillon that occupies the $\theta$-frequency band and a couple of $\gamma$-oscillons that correspond, respectively, to slow and fast $\gamma$-waves. Since the oscillons are identified empirically, they may represent the actual, physical structure of synchronous oscillations in neuronal ensembles, whereas Fourier-defined "brain waves" are nothing but poorly resolved oscillons.
We introduce a new mixture autoregressive model which combines Gaussian and Student's $t$ mixture components. The model has very attractive properties analogous to those of the Gaussian and Student's $t$ mixture autoregressive models, but it is more flexible, as it can model series which consist of both conditionally homoscedastic Gaussian regimes and conditionally heteroscedastic Student's $t$ regimes. The usefulness of our model is demonstrated in an empirical application to the monthly U.S. interest rate spread between the 3-month Treasury bill rate and the effective federal funds rate.
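Schematically (our notation, following the standard mixture autoregressive form), the conditional density of such a model is a two-family mixture $$f(y_t \mid \mathcal{F}_{t-1}) = \sum_{m=1}^{M_1} \alpha_{m,t}\, n_m\!\left(y_t \mid \mathcal{F}_{t-1}\right) + \sum_{m=M_1+1}^{M} \alpha_{m,t}\, t_m\!\left(y_t \mid \mathcal{F}_{t-1}\right),$$ where the $n_m$ are conditionally homoscedastic Gaussian AR components, the $t_m$ are conditionally heteroscedastic Student's $t$ AR components, and the mixing weights $\alpha_{m,t}$ sum to one.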
The additional resonant contribution to the potential model is examined in $\alpha$+$^{12}$C elastic scattering and the low-energy $^{12}$C($\alpha$,$\gamma$)$^{16}$O reaction. The excitation function of elastic scattering below $E_{c.m.}= 5$ MeV appears to be reproduced satisfactorily by the potential model, and it is not profoundly disturbed by the additional resonances. The weak coupling is good enough to describe the $^{16}$O structure in the vicinity of the $\alpha$-particle threshold, especially below $E_{c.m.}= 8$ MeV, corresponding to the excitation energy $E_x \approx 15$ MeV. The additional resonances complement the astrophysical $S$-factors obtained from the simple potential model. The $S$-factor of $^{12}$C($\alpha$,$\gamma$)$^{16}$O at $E_{c.m.}=300$ keV is dominated by the $E$2 transition, which is enhanced by the subthreshold 2$^+_1$ state at $E_x= 6.92$ MeV. The contribution from the subthreshold 1$^-_1$ state at $E_x= 7.12$ MeV is predicted to be small. The additional resonances do not make a large contribution to the thermonuclear reaction rates of $^{12}$C($\alpha$,$\gamma$)$^{16}$O at helium burning temperatures.
We consider the asymptotic local behavior of the second correlation function of the characteristic polynomials of sparse non-Hermitian random matrices $X_n$ whose entries have the form $x_{jk}=d_{jk}w_{jk}$ with iid complex standard Gaussian $w_{jk}$ and normalised iid Bernoulli$(p)$ $d_{jk}$. It is shown that, as $p\to\infty$, the local asymptotic behavior of the second correlation function of characteristic polynomials near $z_0\in \mathbb{C}$ coincides with that of the Ginibre ensemble: it converges to a determinant with the Ginibre kernel in the bulk $|z_0|<1$, and it factorizes if $|z_0|>1$. For finite $p>0$, the behavior is different and exhibits a transition between three regimes depending on the values of $p$ and $|z_0|^2$.
We present FormTracer, a high-performance, general-purpose, easy-to-use Mathematica tracing package which uses FORM. It supports arbitrary space and spinor dimensions as well as an arbitrary number of simple compact Lie groups. While keeping the usability of the Mathematica interface, it relies on the efficiency of FORM. An additional performance gain is achieved by a decomposition algorithm that avoids redundant traces in product tensor spaces. FormTracer supports a wide range of syntaxes, which makes it highly flexible. Mathematica notebooks that automatically install the package and guide the user through performing standard traces in space-time, spinor and gauge-group spaces are provided.
This article proposes a derivation of the Vlasov-Navier-Stokes system for spray/aerosol flows. The distribution function of the dispersed phase is governed by a Vlasov-equation, while the velocity field of the propellant satisfies the Navier-Stokes equations for incompressible fluids. The dynamics of the dispersed phase and of the propellant are coupled through the drag force exerted by the propellant on the dispersed phase. We present a formal derivation of this model from a multiphase Boltzmann system for a binary gaseous mixture, involving the droplets/dust particles in the dispersed phase as one species, and the gas molecules as the other species. Under suitable assumptions on the collision kernels, we prove that the sequences of solutions to the multiphase Boltzmann system converge to distributional solutions to the Vlasov-Navier-Stokes equation in some appropriate distinguished scaling limit. Specifically, we assume (a) that the mass ratio of the gas molecules to the dust particles/droplets is small, (b) that the thermal speed of the dust particles/droplets is much smaller than that of the gas molecules and (c) that the mass density of the gas and of the dispersed phase are of the same order of magnitude.
Cycling of carbon dioxide between the atmosphere and interior of rocky planets can stabilize global climate and enable planetary surface temperatures above freezing over geologic time. However, variations in global carbon budget and unstable feedback cycles between planetary sub-systems may destabilize the climate of rocky exoplanets toward regimes unknown in the Solar System. Here, we perform clear-sky atmospheric radiative transfer and surface weathering simulations to probe the stability of climate equilibria for rocky, ocean-bearing exoplanets at instellations relevant for planetary systems in the outer regions of the circumstellar habitable zone. Our simulations suggest that planets orbiting G- and F-type stars (but not M-type stars) may display bistability between an Earth-like climate state with efficient carbon sequestration and an alternative stable climate equilibrium where CO$_2$ condenses at the surface and forms a blanket of either clathrate hydrate or liquid CO$_2$. At increasing instellation and with ineffective weathering, the latter state oscillates between cool, surface CO$_2$-condensing and hot, non-condensing climates. CO$_2$ bistable climates may emerge early in planetary history and remain stable for billions of years. The carbon dioxide-condensing climates follow an opposite trend in $p$CO$_2$ versus instellation compared to the weathering-stabilized planet population, suggesting the possibility of observational discrimination between these distinct climate categories.
The spin relaxation induced by the Elliott-Yafet mechanism and the extrinsic spin Hall conductivity due to skew-scattering are investigated in 5d transition-metal ultrathin films with self-adatom impurities as scatterers. The values of the Elliott-Yafet parameter and of the spin-flip relaxation rate reveal a correlation with each other that is in agreement with the Elliott approximation. At 10-layer thickness, the spin-flip relaxation time in 5d transition-metal films is found to be a few hundred nanoseconds at one atomic percent impurity concentration, which is one and two orders of magnitude shorter than in Au and Cu thin films, respectively. The anisotropy of the Elliott-Yafet parameter and of the spin-flip relaxation rate with respect to the direction of the spin-quantization axis in relation to the crystallographic axes is also analyzed. We find that the anisotropy of the spin-flip relaxation rate is enhanced by the Rashba surface states on the Fermi surface, reaching values as high as 97% in a 10-layer Hf(0001) film and 71% in a 10-layer W(110) film. Finally, the spin Hall conductivity as well as the spin Hall angle due to skew-scattering off self-adatom impurities are calculated using the Boltzmann approach. Our calculations employ a relativistic version of the first-principles full-potential Korringa-Kohn-Rostoker Green function method.
Incorporating relativistic physics into quantum tunneling can lead to exotic behavior such as perfect transmission via Klein tunneling. Here, we probe the tunneling properties of spin-momentum-locked relativistic fermions by designing and implementing a tunneling geometry that utilizes nanowires of the topological Kondo insulator candidate SmB6. The nanowires are attached to the end of scanning tunneling microscope tips and used to image the bicollinear stripe spin order in the antiferromagnet Fe1.03Te, with a N\'eel temperature of ~50 K. The antiferromagnetic stripes become invisible above 10 K, concomitant with the suppression of the topological surface states. We further demonstrate that the direction of spin polarization is tied to the tunneling direction. Our technique establishes SmB6 nanowires as ideal conduits for spin-polarized currents.
An arithmetic word problem typically includes a textual description containing several constant quantities. The key to solving the problem is to reveal the underlying mathematical relations (such as addition and subtraction) among the quantities, and then generate equations to find solutions. This work presents a novel approach, Quantity Tagger, that automatically discovers such hidden relations by tagging each quantity with a sign corresponding to one type of mathematical operation. For each quantity, we assume there exists a latent, variable-sized quantity span surrounding the quantity token in the text, which conveys information useful for determining its sign. Empirical results show that our method achieves accuracy gains of 5 and 8 points on two datasets, respectively, compared to prior approaches.
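A toy illustration of the tagging idea (example and code ours, not the paper's model): each quantity in the text receives a sign tag, and the signed quantities assemble into an equation.

```python
# "Tom had 8 apples. He gave 3 apples to Jane. How many are left?"
quantities = [8.0, 3.0]
signs = ["+", "-"]        # per-quantity tags (hand-set here; predicted in the paper)

# Tagged equation: x = +8 - 3
x = sum(q if s == "+" else -q for q, s in zip(quantities, signs))
print("x =", x)           # x = 5.0
```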
Non-contact vital sign monitoring has many advantages over conventional methods: it is comfortable, unobtrusive and carries no risk of spreading infection. The use of millimeter-wave (mmWave) radars is one of the most promising approaches enabling contact-less monitoring of vital signs. Novel low-power implementations of this technology promise to enable vital sign sensing in embedded, battery-operated devices. The nature of these new low-power sensors exacerbates the challenges of accurate and robust vital sign monitoring, especially the problem of heart-rate tracking. This work focuses on the investigation and characterization of three Frequency Modulated Continuous Wave (FMCW) low-power radars with different carrier frequencies of 24 GHz, 60 GHz and 120 GHz. The evaluation platforms were first tested on phantom models that emulated human bodies to accurately evaluate the baseline noise, the error in range estimation, and the error in displacement estimation. Additionally, the systems were used to collect data from three human subjects to gauge the feasibility of identifying heartbeat peaks and breathing peaks with simple and lightweight algorithms that could potentially run on low-power embedded processors. The investigation revealed that the 24 GHz radar has the highest baseline noise level, 0.04 mm at 0{\deg} angle of incidence, and an error in range estimation of 3.45 +- 1.88 cm at a distance of 60 cm. At the same distance, the 60 GHz and 120 GHz radar systems show the lowest noise level, 0.01 mm at 0{\deg} angle of incidence, and errors in range estimation of 0.64 +- 0.01 cm and 0.04 +- 0.0 cm, respectively. Additionally, tests on humans showed that all three radar systems were able to identify heart and breathing activity, but the 120 GHz radar system outperformed the other two.
We define convex projective structures on 2D surfaces with holes and investigate their moduli space. We prove that this moduli space is canonically identified with the higher Teichm\"uller space for the group PSL_3 defined in our paper math/0311149. We define the quantum version of the moduli space of convex projective structures on surfaces with holes. The present paper can serve as an introduction to math/0311149. In the Appendix we show that the space of configurations of 5 flags in the projective plane is of cluster type E_7.
A new method for explicit computation of the CY moduli space metric was proposed by the authors recently. The method makes use of the connection of the moduli space with a certain Frobenius algebra. Here we clarify this approach and demonstrate its efficiency by computing the Special geometry of the 101-dimensional moduli space of the quintic threefold around the orbifold point.
Quantifying spontaneous, fugitive and venting-related methane emissions is often difficult and cumbersome. However, auditing the methane emissions caused by conventional and unconventional hydrocarbon exploitation techniques is becoming necessary. Present-generation compact chemical sensors are slow, degrade quickly, and are sensitive to a broad spectrum of gases. Optical sensors, on the other hand, detect gases quickly and precisely and can be easily deployed in environments such as boreholes and soils. In this study, we report the development of an optical sensor that is methane specific, fast enough for real-time applications, and has tremendous application potential in the exploration of coal bed methane and other hydrocarbon reserves with methane as a major constituent. The detection process is based on the principle of spectroscopic absorption of light. The detector, a NiSi Schottky diode, was fabricated and characterized exclusively for the narrow-bandwidth methane absorption at 1.65 um. The probe is 20 cm long and comprises a laser source and the NiSi detector aligned optically. This probe can be deployed in boreholes, mine vents and soil layers for measuring real-time variations in methane concentration. Laboratory experiments show that the detection limit of the developed device is low (3% by volume) and that its response time is rapid (about 2 seconds). Based on the materials used, the fabrication procedures adopted, the sensitivity of the device and its compactness, the developed sensor can be considered a novel, economical device for the exploration of coal bed methane.
The complex unfolding of the US opioid epidemic in the last 20 years has been the subject of a large body of medical and pharmacological research, and it has sparked a multidisciplinary discussion on how to implement interventions and policies to effectively control its impact on public health. This study leverages Reddit as the primary data source to investigate the opioid crisis. We aimed to find a large cohort of Reddit users interested in discussing the use of opioids, trace the temporal evolution of their interest, and extensively characterize patterns of the nonmedical consumption of opioids, with a focus on routes of administration and drug tampering. We used a semiautomatic information retrieval algorithm to identify subreddits discussing nonmedical opioid consumption, finding over 86,000 Reddit users potentially involved in firsthand opioid usage. We developed a methodology based on word embedding to select alternative colloquial and nonmedical terms referring to opioid substances, routes of administration, and drug-tampering methods. We modeled the preferences of adoption of substances and routes of administration, estimating their prevalence and temporal unfolding, observing relevant trends such as the surge in synthetic opioids like fentanyl and an increasing interest in rectal administration. Ultimately, through the evaluation of odds ratios based on co-mentions, we measured the strength of association between opioid substances, routes of administration, and drug tampering, finding evidence of understudied abusive behaviors like chewing fentanyl patches and dissolving buprenorphine sublingually. We believe that our approach may provide a novel perspective for a more comprehensive understanding of the nonmedical abuse of opioid substances and inform the prevention, treatment, and control of the public health effects.
By a classical result of Gauss and Kuzmin, the continued fraction expansion of a ``random'' real number contains each digit $a\in\mathbb{N}$ with asymptotic frequency $\log_2(1+1/(a(a+2)))$. We generalize this result in two directions: First, for certain sets $A\subset\mathbb{N}$, we establish simple explicit formulas for the frequency with which the continued fraction expansion of a random real number contains a digit from the set $A$. For example, we show that digits of the form $p-1$, where $p$ is prime, appear with frequency $\log_2(\pi^2/6)$. Second, we obtain a simple formula for the frequency with which a string of $k$ consecutive digits $a$ appears in the continued fraction expansion of a random real number. In particular, when $a=1$, this frequency is given by $|\log_2(1+(-1)^k/F_{k+2}^2)|$, where $F_n$ is the $n$th Fibonacci number. Finally, we compare the frequencies predicted by these results with actual frequencies found among the first 300 million continued fraction digits of $\pi$, and we provide strong statistical evidence that the continued fraction expansion of $\pi$ behaves like that of a random real number.
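The formulas are easy to check empirically; the sketch below (our own verification, not the paper's code) expands random rationals with 256-bit denominators, whose digit statistics converge to the Gauss-Kuzmin law, and compares the observed frequencies against the stated formulas for $a=1$ and for runs of $k=3$ ones.

```python
import math, random

def cf_digits(p, q):
    """Continued fraction digits of p/q with 0 < p < q, via Euclid."""
    out = []
    while p:
        a, r = divmod(q, p)
        out.append(a)
        q, p = p, r
    return out

random.seed(0)
digits = []
for _ in range(2000):                       # ~700k digits in total
    q = 2**256
    digits += cf_digits(random.randrange(1, q), q)

# digit a = 1: predicted frequency log2(1 + 1/(1*3)) ~ 0.4150
print(digits.count(1) / len(digits), math.log2(4 / 3))

# run of k = 3 ones: predicted |log2(1 + (-1)^k / F_{k+2}^2)| with F_5 = 5
k, F5 = 3, 5
runs = sum(digits[i:i+k] == [1] * k for i in range(len(digits) - k))
print(runs / len(digits), abs(math.log2(1 + (-1)**k / F5**2)))
```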
As an analogy to the Weyl point in k-space, we search for pairs of energy levels that close at a single point in a three-dimensional parameter space. Such points are topologically protected in the sense that any perturbation which acts on the two-level subsystem can be corrected by tuning the control parameters. We find that parameter-controlled Weyl points are ubiquitous in semiconductor-superconductor quantum dots and that they are deeply related to Majorana zero modes. In this paper, we present several semiconductor-superconductor quantum dot devices which host parameter-controlled Weyl points. Further, we show how these points can be observed experimentally via conductance measurements.
Statistical analysis of high-dimensional functional time series arises in various applications. Under this scenario, in addition to the intrinsic infinite-dimensionality of functional data, the number of functional variables can grow with the number of serially dependent observations. In this paper, we focus on the theoretical analysis of relevant estimated cross-(auto)covariance terms between two multivariate functional time series or a mixture of multivariate functional and scalar time series beyond the Gaussianity assumption. We introduce a new perspective on dependence by proposing a functional cross-spectral stability measure to characterize the effect of dependence on these estimated cross terms, which are essential in the estimates for additive functional linear regressions. With the proposed functional cross-spectral stability measure, we develop useful concentration inequalities for estimated cross-(auto)covariance matrix functions to accommodate more general sub-Gaussian functional linear processes and, furthermore, establish finite sample theory for relevant estimated terms under a commonly adopted functional principal component analysis framework. Using our derived non-asymptotic results, we investigate the convergence properties of the regularized estimates for two additive functional linear regression applications under sparsity assumptions including functional linear lagged regression and partially functional linear regression in the context of high-dimensional functional/scalar time series.
Inelastic neutron scattering reveals a broad continuum of excitations in Pr$_2$Zr$_2$O$_7$, the temperature and magnetic field dependence of which indicate a continuous distribution of quenched transverse fields ($\Delta$) acting on the non-Kramers Pr$^{3+}$ crystal field ground state doublets. Spin-ice correlations are apparent within 0.2 meV of the Zeeman energy. A random phase approximation provides an excellent account of the data with a transverse field distribution $\rho(\Delta)\propto (\Delta^2+\Gamma^2)^{-1}$ where $\Gamma=0.27(1)$ meV. It appears that disorder in Pr$_2$Zr$_2$O$_7$, established during high-temperature synthesis due to an underlying structural instability, actually induces a quantum spin liquid.
We propose a hybrid neural network (NN) and PDE approach for learning generalizable PDE dynamics from motion observations. Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and constitutive models (or material models). Without explicit PDE knowledge, these approaches cannot guarantee physical correctness and have limited generalizability. We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned. Instead, constitutive models are particularly suitable for learning due to their data-fitting nature. To this end, we introduce a new framework termed "Neural Constitutive Laws" (NCLaw), which utilizes a network architecture that strictly guarantees standard constitutive priors, including rotation equivariance and undeformed state equilibrium. We embed this network inside a differentiable simulation and train the model by minimizing a loss function based on the difference between the simulation and the motion observation. We validate NCLaw on various large-deformation dynamical systems, ranging from solids to fluids. After training on a single motion trajectory, our method generalizes to new geometries, initial/boundary conditions, temporal ranges, and even multi-physics systems. On these extremely out-of-distribution generalization tasks, NCLaw is orders-of-magnitude more accurate than previous NN approaches. Real-world experiments demonstrate our method's ability to learn constitutive laws from videos.
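As an illustration of how such priors can be hard-wired, here is a minimal sketch (our own construction, not the paper's NCLaw architecture): the energy depends on F only through invariants of C = F^T F, which enforces rotation equivariance, and both the neo-Hookean base term and the squared-deviation features make the stress vanish exactly in the undeformed state.

```python
import torch

class InvariantEnergy(torch.nn.Module):
    """Hyperelastic energy psi(F) with built-in constitutive priors."""
    def __init__(self, hidden=32):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(2, hidden), torch.nn.Softplus(),
            torch.nn.Linear(hidden, 1))
        self.mu = torch.nn.Parameter(torch.tensor(1.0))
        self.lam = torch.nn.Parameter(torch.tensor(1.0))

    def forward(self, F):
        C = F.transpose(-1, -2) @ F           # rotation-invariant input
        I1 = torch.einsum('...ii->...', C)
        J = torch.det(F)
        logJ = torch.log(J)
        # neo-Hookean base: its stress dpsi/dF vanishes exactly at F = I
        psi = 0.5*self.mu*(I1 - 3) - self.mu*logJ + 0.5*self.lam*logJ**2
        # learned correction built from *squared* invariant deviations,
        # so its gradient also vanishes in the undeformed state
        z = torch.stack([(I1 - 3)**2, (J - 1)**2], dim=-1)
        return psi + self.mlp(z).squeeze(-1)

def first_pk_stress(model, F):
    F = F.detach().requires_grad_(True)
    return torch.autograd.grad(model(F).sum(), F)[0]   # P = dpsi/dF

model = InvariantEnergy()
P0 = first_pk_stress(model, torch.eye(3).unsqueeze(0))
print(P0.abs().max())                                  # 0: equilibrium at rest
```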
Various sources have reported that the WaveNet deep learning architecture is able to generate high-quality speech, but to our knowledge there have been no studies on the interpretation or visualization of trained WaveNets. This study investigates the possibility that WaveNet understands speech by learning, without supervision, an acoustically meaningful latent representation of the speech signals in its receptive field; we also attempt to interpret the mechanism by which the feature extraction is performed. Based on singular value decomposition and linear regression analysis of the activations and known acoustic features (e.g. F0), the key findings are: (1) activations in the higher layers are highly correlated with spectral features; (2) WaveNet explicitly performs pitch extraction despite being trained to directly predict the next audio sample; and (3) for the said feature analysis to take place, the latent signal representation is converted back and forth between baseband and wideband components.
Unsupervised domain adaptation, which requires no annotation of the unlabeled target data, is an appealing approach to semantic segmentation. However, 1) existing methods neglect that not all semantic representations across domains are transferable, which cripples domain-wise transfer with untransferable knowledge; 2) they fail to narrow the category-wise distribution shift due to category-agnostic feature alignment. To address these challenges, we develop a new Critical Semantic-Consistent Learning (CSCL) model, which mitigates the discrepancy of both domain-wise and category-wise distributions. Specifically, a critical-transfer-based adversarial framework is designed to highlight transferable domain-wise knowledge while neglecting untransferable knowledge. A transferability critic guides the transferability quantizer to maximize the positive transfer gain in a reinforcement learning manner, even when negative transfer of untransferable knowledge occurs. Meanwhile, with the help of a confidence-guided pseudo-label generator for target samples, a symmetric soft divergence loss is presented to explore inter-class relationships and facilitate category-wise distribution alignment. Experiments on several datasets demonstrate the superiority of our model.
We consider the off-policy evaluation (OPE) problem in contextual bandits, where the goal is to estimate the value of a target policy using the data collected by a logging policy. Most popular approaches to the OPE are variants of the doubly robust (DR) estimator obtained by combining a direct method (DM) estimator and a correction term involving the inverse propensity score (IPS). Existing algorithms primarily focus on strategies to reduce the variance of the DR estimator arising from large IPS. We propose a new approach called the Doubly Robust with Information borrowing and Context-based switching (DR-IC) estimator that focuses on reducing both bias and variance. The DR-IC estimator replaces the standard DM estimator with a parametric reward model that borrows information from the 'closer' contexts through a correlation structure that depends on the IPS. The DR-IC estimator also adaptively interpolates between this modified DM estimator and a modified DR estimator based on a context-specific switching rule. We give provable guarantees on the performance of the DR-IC estimator. We also demonstrate the superior performance of the DR-IC estimator compared to the state-of-the-art OPE algorithms on a number of benchmark problems.
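For context, the sketch below (our own toy, covering only the standard DR baseline, not the information borrowing or switching rule of DR-IC) shows how the DM term and the IPS-weighted residual combine; on the synthetic problem the estimate recovers the target policy's true value of 0.8.

```python
import numpy as np

def dr_estimate(A, R, p_log, pi_tgt, q_hat):
    """
    A: (n,) logged actions; R: (n,) observed rewards.
    p_log: (n,) logging propensities of the taken actions.
    pi_tgt: (n, K) target-policy probabilities over all K actions.
    q_hat: (n, K) direct-method (DM) reward-model predictions.
    """
    n = len(A)
    dm = (pi_tgt * q_hat).sum(axis=1)                 # DM value estimate
    w = pi_tgt[np.arange(n), A] / p_log               # importance weights
    return np.mean(dm + w * (R - q_hat[np.arange(n), A]))

rng = np.random.default_rng(0)
n, K = 5000, 3
A = rng.integers(0, K, size=n)                        # uniform logging policy
R = (A == 1) + 0.1 * rng.normal(size=n)               # action 1 pays reward 1
pi_tgt = np.tile([0.1, 0.8, 0.1], (n, 1))             # target prefers action 1
q_hat = np.tile([0.0, 1.0, 0.0], (n, 1))              # an (here oracle) DM model
print(dr_estimate(A, R, np.full(n, 1 / K), pi_tgt, q_hat))   # ~0.8
```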
We studied the rate equations of a directly modulated laser and showed that they may be reduced to a special case when the spontaneous carrier decay rate is equal to the photon decay rate. The solution in this case is unique. For the general case, we investigated the vector field of the differential system of the rate equations and pointed out the basic stability problems of this system when the modulation current is required to change.
Based on the Schmidt decomposition, new convenient rules of thumb are obtained to test the entanglement of wavefunctions for bipartite qubit and qutrit systems. For the qubit system the underlying algebra is SU(2), while for a qutrit system it is SU(3).
In this paper, we theoretically develop and numerically validate an asymmetric linear bilateral control model (LBCM). The novelty of the asymmetric LBCM is that using this model all the follower vehicles in a platoon can adjust their acceleration and deceleration to closely follow a constant desired time gap to improve platoon operational efficiency while maintaining local and string stability. We theoretically analyze the local stability of the asymmetric LBCM using the condition for asymptotic stability of a linear time-invariant system and prove the string stability of the asymmetric LBCM using a space gap error attenuation approach. Then, we evaluate the efficacy of the asymmetric LBCM by simulating a closely coupled cooperative adaptive cruise control (CACC) platoon of fully automated trucks in various non-linear acceleration and deceleration states. We choose automated truck platooning as a case study since heavy-duty trucks experience higher delays and lags in the powertrain system and have more limited acceleration and deceleration capabilities than passenger cars. To evaluate the platoon operational efficiency of the asymmetric LBCM, we compare the performance of the asymmetric LBCM to a baseline model, i.e., the symmetric LBCM, for different powertrain delays and lags. Our analyses found that the asymmetric LBCM can handle any combined powertrain delays and lags up to 0.6 sec while maintaining a constant desired time gap during a stable platoon operation, whereas the symmetric LBCM fails to ensure stable platoon operation as well as maintain a constant desired time gap for any combined powertrain delays and lags over 0.2 sec. These findings demonstrate the potential of the asymmetric LBCM in improving platoon operational efficiency and stability of an automated truck platoon.
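To make the control structure concrete, here is a toy sketch (our own illustration with assumed gains kg, kv and a 20 m gap set point, not the paper's asymmetric controller or its truck powertrain model) of a symmetric linear bilateral control update, in which each follower reacts to both its front and rear gaps.

```python
import numpy as np

kg, kv = 0.5, 0.8                 # assumed gap and relative-speed gains

def lbcm_accel(x, v, gap_ref=20.0):
    """x, v: positions/speeds ordered from leader to last follower."""
    a = np.zeros_like(v)
    for i in range(1, len(x) - 1):     # bilateral: front and rear gaps
        a[i] = (kg * ((x[i-1] - x[i]) - (x[i] - x[i+1]))
                + kv * ((v[i-1] - v[i]) - (v[i] - v[i+1])))
    # last vehicle only sees its front neighbour and the gap set point
    a[-1] = kg * (x[-2] - x[-1] - gap_ref) + kv * (v[-2] - v[-1])
    return a

x = np.array([100.0, 75.0, 52.0, 30.0])   # leader first
v = np.array([20.0, 20.0, 19.0, 21.0])
dt = 0.1
for _ in range(600):                       # 60 s of Euler integration
    a = lbcm_accel(x, v)
    v += a * dt
    x += v * dt
print(-np.diff(x))                         # gaps relax toward the set point
```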
Important information concerning a multivariate data set, such as clusters and modal regions, is contained in the derivatives of the probability density function. Despite this importance, nonparametric estimation of higher order derivatives of the density function has received only relatively scant attention. Kernel estimators of density functions are widely used as they exhibit excellent theoretical and practical properties, though their generalization to density derivatives has progressed more slowly due to the mathematical intractabilities encountered in the crucial problem of bandwidth (or smoothing parameter) selection. This paper presents the first fully automatic, data-based bandwidth selectors for multivariate kernel density derivative estimators. This is achieved by synthesizing recent advances in matrix analytic theory which allow mathematically and computationally tractable representations of higher order derivatives of multivariate vector valued functions. The theoretical asymptotic properties as well as the finite sample behaviour of the proposed selectors are studied. In addition, we explore in detail the applications of the new data-driven methods for two other statistical problems: clustering and bump hunting. The introduced techniques are combined with the mean shift algorithm to develop novel automatic, nonparametric clustering procedures which are shown to outperform mixture-model cluster analysis and other recent nonparametric approaches in practice. Furthermore, the advantage of the use of smoothing parameters designed for density derivative estimation for feature significance analysis for bump hunting is illustrated with a real data example.
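To illustrate the density-derivative connection, the sketch below (a generic Gaussian-kernel mean shift with a scalar bandwidth, not the paper's selectors, which produce full data-driven bandwidth matrices) moves each point uphill along the estimated density gradient; points that arrive at the same mode form one cluster.

```python
import numpy as np

def mean_shift(X, h=0.5, steps=100):
    """Iterate the kernel-weighted mean update toward density modes."""
    modes = X.copy()
    for _ in range(steps):
        d2 = ((modes[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-0.5 * d2 / h**2)               # Gaussian kernel weights
        modes = (w[:, :, None] * X[None]).sum(1) / w.sum(1, keepdims=True)
    return modes

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
modes = mean_shift(X)
print(np.unique(modes.round(1), axis=0))           # two modes -> two clusters
```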
We obtain the global well-posedness of the 3D incompressible magnetohydrodynamics (MHD) equations in Besov spaces with negative index of regularity. In particular, we obtain global solutions for a new class of large initial data. As a byproduct, this result improves the corresponding result in \cite{HHW}. In addition, we also obtain the global result for this system in the space $\chi^{-1}(\mathbb{R}^3)$ originally developed in \cite{LL}. More precisely, we only assume that the norm of the initial data is smaller than the sum of the viscosity and diffusivity parameters.
In this paper we give a conjecture for the average number of unramified $G$-extensions of a quadratic field for any finite group $G$. The Cohen-Lenstra heuristics are the specialization of our conjecture to the case that $G$ is abelian of odd order. We prove a theorem towards the function field analog of our conjecture, and give additional motivations for the conjecture including the construction of a lifting invariant for the unramified $G$-extensions that takes the same number of values as the predicted average and an argument using the Malle-Bhargava principle. We note that for even $|G|$, corrections for the roots of unity in $\mathbb{Q}$ are required, which cannot be seen when $G$ is abelian.
Large language models have ushered in a new era of artificial intelligence research. However, their substantial training costs hinder further development and widespread adoption. In this paper, inspired by the redundancy in the parameters of large language models, we propose a novel training paradigm: Evolving Subnetwork Training (EST). EST samples subnetworks from the layers of the large language model and from the commonly used modules within each layer: Multi-Head Attention (MHA) and the Multi-Layer Perceptron (MLP). By gradually increasing the size of the subnetworks during the training process, EST can save the cost of training. We apply EST to train the GPT2 and TinyLlama models, resulting in a 26.7\% FLOPs saving for GPT2 and 25.0\% for TinyLlama, without an increase in loss on the pre-training dataset. Moreover, EST leads to performance improvements in downstream tasks, indicating that it benefits generalization. Additionally, we provide intuitive theoretical studies based on training dynamics and Dropout theory to ensure the feasibility of EST. Our code is available at https://github.com/OpenDFM/EST.
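The sampling idea can be sketched as follows (our own reading of the abstract, not the released code linked above): during the early phase only a random subset of residual layers is active and the rest are skipped as identities, with the keep ratio growing to 1 as training proceeds; head/MLP sampling inside each layer would follow the same pattern.

```python
import random
import torch

def keep_ratio(step, total, start=0.5):
    """Fraction of layers kept; ramps from `start` to 1 by 60% of training."""
    return min(1.0, start + (1.0 - start) * step / (0.6 * total))

def forward_subnetwork(layers, x, ratio):
    k = max(1, round(ratio * len(layers)))
    active = sorted(random.sample(range(len(layers)), k))
    for i in active:            # skipped layers act as identities
        x = layers[i](x)        # (residual blocks make this well-defined)
    return x

layers = torch.nn.ModuleList(
    [torch.nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
     for _ in range(8)])
x = torch.randn(2, 10, 64)
print(forward_subnetwork(layers, x, keep_ratio(step=0, total=10000)).shape)
```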
In 1964, a new particle was proposed by several groups to answer the question of where the masses of elementary particles come from; this particle is usually referred to as the Higgs particle or the Higgs boson. In July 2012, this Higgs particle was finally found experimentally, a feat accomplished by the ATLAS Collaboration and the CMS Collaboration using the Large Hadron Collider at CERN. It is the purpose of this review to give my personal perspective on a brief history of the experimental search for this particle since the '80s and finally its discovery in 2012. Besides the early searches, those at the LEP collider at CERN, the Tevatron Collider at Fermilab, and the Large Hadron Collider at CERN are described in some detail. This experimental discovery of the Higgs boson is often considered to be one of the most important advances in particle physics in the last half a century, and some of the possible implications are briefly discussed. This review is based on a talk presented by the author at the conference "OCPA8 International Conference on Physics Education and Frontier Physics", the 8th Joint Meeting of Chinese Physicists Worldwide, Nanyang Technological University, Singapore, June 23-27, 2014.
Engineering the position of the lowest triplet state (T1) relative to the first excited singlet state (S1) is of great importance in improving the efficiencies of organic light emitting diodes and organic photovoltaic cells. We have carried out model-exact calculations on substituted polyene chains to understand the factors that affect the energy gap between S1 and T1. The factors studied are backbone dimerisation, different donor-acceptor substitutions and twisted geometry. The largest system studied is an eighteen-carbon polyene, which spans a Hilbert space of dimension about 991 million. We show that for the reverse intersystem crossing (RISC) process, the best system involves substituting all carbon sites on one half of the polyene with donors and the other half with acceptors.
Recommender systems are a kind of data filtering that guides the user to interesting and valuable resources within an extensive dataset by providing suggestions of products that are expected to match their preferences. However, due to data overloading, recommender systems struggle to handle large volumes of data reliably and accurately before offering suggestions. The main purpose of this work is to address the recommender system's data sparsity and accuracy problems by using the matrix factorization algorithm of collaborative filtering based on a dimensionality reduction method, more precisely Nonnegative Matrix Factorization (NMF), combined with ontology. We tested the method and compared the results to other classic methods. The findings showed that the implemented approach efficiently reduces the sparsity of CF suggestions, improves their accuracy, and gives more relevant items as recommendations.
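As a minimal illustration of the NMF step alone (the ontology integration the paper adds is beyond this toy), one can factor a sparse user-item rating matrix and read predictions for the unrated cells off the reconstruction:

```python
import numpy as np
from sklearn.decomposition import NMF

R = np.array([[5, 3, 0, 1],     # rows: users, cols: items, 0 = unrated
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

model = NMF(n_components=2, init='nndsvda', max_iter=500, random_state=0)
W = model.fit_transform(R)      # user factors
H = model.components_           # item factors
R_hat = W @ H                   # dense reconstruction
# (a production system would mask the zeros in the loss; treating them
# as observed ratings is a known simplification of this toy example)
print(np.round(R_hat, 1))       # predictions for the zero entries
```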
Using an abstract scheme of monotone semiflows, the existence of bistable traveling wave solutions of a competitive recursion system with Ricker nonlinearity is established. The traveling wave solutions describe the strong interspecific interactions between the two competing species.
Previously, we showed that, owing to effects arising from quantum electrodynamics (QED), magnetohydrodynamic fast modes of sufficient strength will break down to form electron-positron pairs while traversing the magnetospheres of strongly magnetised neutron stars. The bulk of the energy of the fast mode fuels the development of an electron-positron fireball. However, a small, but potentially observable, fraction of the energy ($\sim 10^{33}$ ergs) can generate a non-thermal distribution of electrons and positrons far from the star. In this paper, we examine the cooling and radiative output of these particles. We also investigate the properties of non-thermal emission in the absence of a fireball to understand the breakdown of fast modes that do not yield an optically thick pair plasma. This quiescent, non-thermal radiation associated with fast mode breakdown may account for the recently observed non-thermal emission from several anomalous X-ray pulsars and soft-gamma repeaters.
In this work, we use tools from non-standard analysis to introduce infinite-dimensional quantum systems and quantum fields within the framework of Categorical Quantum Mechanics. We define a dagger compact category *Hilb suitable for the algebraic manipulation of unbounded operators, Dirac deltas and plane-waves. We cover in detail the construction of quantum systems for particles in boxes with periodic boundary conditions, particles on cubic lattices, and particles in real space. Not quite satisfied with this, we show how certain non-separable Hilbert spaces can also be modelled in our non-standard framework, and we explicitly treat the cases of quantum fields on cubic lattices and quantum fields in real space.
We describe a way to introduce high school physics students with no background in programming to computational problem-solving experiences. Our approach builds on the great strides made by the Modeling Instruction reform curriculum. This approach emphasizes the practices of "Developing and using models" and "Computational thinking" highlighted by the NRC K-12 science standards framework. We taught 9th-grade students in a Modeling-Instruction-based physics course to construct computational models using the VPython programming environment. Numerical computation within the Modeling Instruction curriculum provides coherence among the curriculum's different force and motion models, links the various representations which the curriculum employs, and extends the curriculum to include real-world problems that are inaccessible to a purely analytic approach.
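A typical student model in this setting looks like the sketch below (our own minimal example with an assumed scene, using the vpython package): the force model determines the net force, and the motion model updates velocity and then position once per time step.

```python
from vpython import sphere, vector, rate, color

ball = sphere(pos=vector(0, 0, 0), radius=0.2, color=color.red,
              make_trail=True)
m = 0.1                                  # kg
v = vector(4, 6, 0)                      # m/s
g = vector(0, -9.8, 0)                   # m/s^2
dt = 0.01

while ball.pos.y >= 0:
    rate(100)                            # 100 loop passes per second
    F_net = m * g                        # force model: gravity only
    v = v + (F_net / m) * dt             # motion model: update velocity...
    ball.pos = ball.pos + v * dt         # ...then position
```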
The dynamic behaviour of a WMN imposes stringent constraints on the routing policy of the network. In shortest-path-based routing, the shortest paths need to be evaluated within the time frame allowed by the WMN dynamics. Exact reasoning-based shortest-path evaluation methods usually fail to meet this rigid requirement, which calls for soft-computing-based approaches that replace "best for sure" solutions with "good enough" solutions. This paper proposes a framework for optimal routing in WMNs, in which we investigate the suitability of Big Bang-Big Crunch (BB-BC), a soft-computing-based approach, for evaluating shortest/near-shortest paths. In order to make routing optimal, we first propose to replace the distance between adjacent nodes with an integrated cost measure that takes into account the throughput, delay, jitter and residual energy of a node. A fuzzy-logic-based inference mechanism evaluates this cost measure at each node. Using this distance measure, we apply the BB-BC optimization algorithm to evaluate shortest/near-shortest paths and to update the routing tables periodically, as dictated by network requirements. A large number of simulations were conducted, and the results indicate that the BB-BC algorithm is a high-potential candidate for routing in WMNs.
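For reference, here is a generic sketch of the BB-BC iteration (our own toy in the Erol-Eksin style, with the fuzzy integrated cost of a candidate route replaced by a simple test function): candidates are scattered around the current centre with a shrinking radius (Big Bang) and then contracted to an inverse-cost-weighted centre of mass (Big Crunch).

```python
import numpy as np

def bb_bc(cost, dim, lo, hi, pop=50, iters=100,
          rng=np.random.default_rng(0)):
    center = rng.uniform(lo, hi, dim)
    for k in range(1, iters + 1):
        # Big Bang: scatter candidates around the centre, radius ~ 1/k
        X = np.clip(center + rng.normal(0, (hi - lo) / k, (pop, dim)),
                    lo, hi)
        f = np.array([cost(x) for x in X])
        # Big Crunch: contract to the inverse-cost weighted centre of mass
        w = 1.0 / (f + 1e-12)
        center = (w[:, None] * X).sum(0) / w.sum()
    return center, cost(center)

# toy cost standing in for the fuzzy throughput/delay/jitter/energy measure
print(bb_bc(lambda x: ((x - 1.5) ** 2).sum(), dim=4, lo=-5, hi=5))
```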
A measurement of the cosmic ray positron fraction e+/(e+ + e-) in the energy range of 1-30 GeV is presented. The measurement is based on data taken by the AMS-01 experiment during its 10 day Space Shuttle flight in June 1998. A proton background suppression on the order of 10^6 is reached by identifying converted bremsstrahlung photons emitted from positrons.
The recent detection of the "cosmic dawn" redshifted 21 cm signal at 78 MHz by the EDGES experiment differs significantly from theoretical predictions. In particular, the absorption trough is roughly a factor of two stronger than the most optimistic theoretical models. The early interpretations of the origin of this discrepancy fall into two categories. The first is that there is increased cooling of the gas due to interactions with dark matter, while the second is that the background radiation field includes a contribution from a component in addition to the cosmic microwave background. In this paper we examine the feasibility of the second idea using new data from the first station of the Long Wavelength Array. The data span 40 to 80 MHz and provide important constraints on the present-day background in a frequency range where there are few surveys with absolute temperature calibration suitable for measuring the strength of the radio monopole. We find support for a strong, diffuse radio background that was suggested by the ARCADE 2 results in the 3 to 10 GHz range. We find that this background is well modeled by a power law with a spectral index of $-$2.58$\pm$0.05 and a temperature at the rest frame 21 cm frequency of 603$^{+102}_{-92}$ mK.
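The fitting step itself is elementary; the sketch below (with synthetic temperatures generated from the quoted fit, not the LWA data) recovers the spectral index as a straight-line slope in log-log space and evaluates the fit at the rest-frame 21 cm frequency.

```python
import numpy as np

nu = np.array([40.0, 50.0, 60.0, 70.0, 80.0])        # MHz
T = 0.603 * (nu / 1420.0) ** (-2.58)                 # K, synthetic input
T *= 1 + 0.02 * np.random.default_rng(1).normal(size=nu.size)

slope, intercept = np.polyfit(np.log(nu), np.log(T), 1)
T_1420 = np.exp(intercept + slope * np.log(1420.0))
print(round(slope, 2))                    # ~ -2.58 (spectral index)
print(round(1e3 * T_1420))                # ~ 603 mK at the 21 cm frequency
```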
Nonsequential two-photon ionization of the inner-shell $np$ subshell of neutral atoms by circularly polarized light is investigated. Detection of the subsequent fluorescence as a signature of the process is proposed, and the dependence of the fluorescence degree of polarization on the incident photon beam energy is studied. It is generally expected that the degree of polarization remains approximately constant, except when the beam energy is tuned to an intermediate $n's$ resonance. However, a strong, unexpected change in the polarization degree is discovered for nonsequential two-photon ionization at a specific incident beam energy, caused by a vanishing contribution of the otherwise dominant ionization channel. The polarization degree of the fluorescence depends only weakly on the beam parameters, and its measurement at this specific beam energy, whose position is very sensitive to the details of the employed theory, is highly desirable for evaluating theoretical calculations of nonlinear ionization at hitherto unreachable accuracy.
Topological semimetals may have substantial applications in electronics, spintronics and quantum computation. Recently, ZrTe was predicted to be a new type of topological semimetal due to the coexistence of Weyl fermions and massless triply degenerate nodal points. In this work, the elastic and transport properties of ZrTe are investigated by combining first-principles calculations and semiclassical Boltzmann transport theory. The calculated elastic constants prove the mechanical stability of ZrTe, and the bulk modulus, shear modulus, Young's modulus and Poisson's ratio are also calculated. It is found that spin-orbit coupling (SOC) has a slight enhancing effect on the Seebeck coefficient, which along the a(b) and c directions for pristine ZrTe at 300 K is 46.26 $\mu$V/K and 80.20 $\mu$V/K, respectively. By comparing the experimental electrical conductivity of ZrTe (300 K) with the calculated value, the scattering time is determined to be 1.59 $\times$ $10^{-14}$ s. The predicted room-temperature electronic thermal conductivity along the a(b) and c directions is 2.37 $\mathrm{W m^{-1} K^{-1}}$ and 2.90 $\mathrm{W m^{-1} K^{-1}}$, respectively. The room-temperature lattice thermal conductivity is predicted to be 17.56 $\mathrm{W m^{-1} K^{-1}}$ and 43.08 $\mathrm{W m^{-1} K^{-1}}$ along the a(b) and c directions, showing very strong anisotropy. Calculated results show that isotope scattering produces an observable effect on the lattice thermal conductivity. It is noted that the average room-temperature lattice thermal conductivity of ZrTe is slightly higher than that of isostructural MoP, which is due to larger phonon lifetimes and smaller Gr$\mathrm{\ddot{u}}$neisen parameters. Finally, the total thermal conductivity as a function of temperature is predicted for pristine ZrTe.
Seating location in the classroom can affect student engagement, attention and academic performance by providing better visibility, improved movement, and participation in discussions. Existing studies typically explore how traditional seating arrangements (e.g. grouped tables or traditional rows) influence students' perceived engagement, without considering group seating behaviours under more flexible seating arrangements. Furthermore, survey-based measures of student engagement are prone to subjectivity and various response biases. Therefore, in this research, we investigate how individual and group-wise classroom seating experiences affect student engagement using wearable physiological sensors. We conducted a field study at a high school and collected survey and wearable data from 23 students in 10 courses over four weeks. We aim to answer the following research questions: 1. How does the seating proximity between students relate to their perceived learning engagement? 2. How do students' group seating behaviours relate to their physiologically-based measures of engagement (i.e. physiological arousal and physiological synchrony)? Experimental results indicate that the individual and group-wise classroom seating experience is associated with perceived student engagement and physiologically-based engagement measured from electrodermal activity. We also find that students who sit close together are more likely to have similar learning engagement and tend to have high physiological synchrony. This research opens up opportunities to explore the implications of flexible seating arrangements and has great potential to maximize student engagement by suggesting intelligent seating choices in the future.
The recent discovery of B-modes in the polarization pattern of the Cosmic Microwave Background by the BICEP2 experiment has important implications for neutrino physics. We revisit cosmological bounds on light sterile neutrinos and show that they are compatible with all current cosmological data provided that the mass is relatively low. Using CMB data, including BICEP-2, we find an upper bound of $m_s < 0.85$ eV ($2\sigma$ Confidence Level). This bound is strengthened to 0.48 eV when HST measurements of $H_0$ are included. However, the inclusion of SZ cluster data from the Planck mission and weak gravitational measurements from the CFHTLenS project favours a non-zero sterile neutrino mass of $0.44^{+0.11}_{-0.16}$ eV. Short baseline neutrino oscillations, on the other hand, indicate a new mass state around 1.2 eV. This mass is highly incompatible with cosmological data if the sterile neutrino is fully thermalised ($\Delta \chi^2>10$). However, if the sterile neutrino only partly thermalises it can be compatible with all current data, both cosmological and terrestrial.
The question of existence of umbilical points, in the CR sense, on compact, three dimensional, strictly pseudoconvex CR manifolds was raised in the seminal paper by S.-S. Chern and J. K. Moser in 1974. In the present paper, we consider compact, three dimensional, strictly pseudoconvex CR manifolds that possess a free, transverse action by the circle group $U(1)$. We show that every such CR manifold $M$ has at least one orbit of umbilical points, {\it provided} that the Riemann surface $X:=M/U(1)$ is not a torus. In particular, every compact, circular and strictly pseudoconvex hypersurface in $\mathbb C^2$ has at least one circle of umbilical points. The existence of umbilical points in the case where $X$ is a torus is left open in general, but it is shown that if such an $M$ has additional symmetries, in a certain sense, then it must possess umbilical points as well.
This paper is devoted to the study of the following fractional Choquard equation $$ \varepsilon^{2s}(-\Delta)^{s} u + V(x)u = \varepsilon^{\mu-N}\left(\frac{1}{|x|^{\mu}}*F(u)\right)f(u) \mbox{ in } \mathbb{R}^{N}, $$ where $\varepsilon>0$ is a parameter, $s\in (0, 1)$, $N>2s$, $(-\Delta)^{s}$ is the fractional Laplacian, $V$ is a positive continuous potential with local minimum, $0<\mu<2s$, and $f$ is a superlinear continuous function with subcritical growth. By using the penalization method and the Ljusternik-Schnirelmann theory, we investigate the multiplicity and concentration of positive solutions for the above problem.
In this paper, curved fronts are constructed for spatially periodic bistable reaction-diffusion equations under the a priori assumption that there exist pulsating fronts in every direction. Some sufficient conditions and some necessary conditions for the existence of curved fronts are given. Furthermore, the curved front is proved to be unique and stable. Finally, a curved front with varying interfaces is also constructed. Despite the effect of the spatial heterogeneity, the result establishes the existence of curved fronts for spatially periodic bistable reaction-diffusion equations, as is known for the homogeneous case.
Part of what makes the web successful is that anyone can publish content and browsers maintain certain safety guarantees. For example, it's safe to travel between links and make other trust decisions on the web because users can always identify the location they are at. If we want virtual and augmented reality to be successful, we need that same safety. On the traditional, two-dimensional (2D) web, this user interface (UI) is provided by the browser bars and borders (also known as the chrome). However, the immersive, three-dimensional (3D) web has no concept of a browser chrome, preventing routine user inspection of URLs. In this paper, we discuss the unique challenges that fully immersive head-worn computing devices provide to this model, evaluate three different strategies for trusted immersive UI, and make specific recommendations to increase user safety and reduce the risks of spoofing.
Nonparametric identification and maximum likelihood estimation for finite-state hidden Markov models are investigated. We obtain identification of the parameters as well as the order of the Markov chain if the transition probability matrices have full rank and are ergodic, and if the state-dependent distributions are all distinct, but not necessarily linearly independent. Based on this identification result, we develop nonparametric maximum likelihood estimation theory. First, we show that the asymptotic contrast, the Kullback--Leibler divergence of the hidden Markov model, identifies the true parameter vector nonparametrically as well. Second, for classes of state-dependent densities which are arbitrary mixtures of a parametric family, we show consistency of the nonparametric maximum likelihood estimator. Here, identification of the mixing distributions need not be assumed. Numerical properties of the estimates as well as of nonparametric goodness of fit tests are investigated in a simulation study.
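For concreteness, the (scaled) forward recursion that evaluates the hidden Markov likelihood being maximized is sketched below; in the paper's nonparametric setting the state-dependent densities would range over arbitrary mixtures of a parametric family rather than the fixed Gaussians used here.

```python
import numpy as np
from scipy.stats import norm

def log_likelihood(y, pi0, P, dens):
    """y: observations; pi0: initial probs; P: transition matrix;
    dens: list of state-dependent density functions."""
    alpha = pi0 * np.array([d(y[0]) for d in dens])
    ll = np.log(alpha.sum()); alpha /= alpha.sum()       # scaled forward
    for t in range(1, len(y)):
        alpha = (alpha @ P) * np.array([d(y[t]) for d in dens])
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

P = np.array([[0.9, 0.1], [0.2, 0.8]])
dens = [norm(-1, 1).pdf, norm(2, 1).pdf]                 # toy 2-state model
y = np.array([-1.2, -0.8, 2.1, 1.9, -0.5])
print(log_likelihood(y, np.array([0.5, 0.5]), P, dens))
```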
This article concerns the random dynamics and asymptotic analysis of the well-known mathematical model, the Navier-Stokes equations. We consider the two-dimensional stochastic Navier-Stokes equations (SNSE) driven by a \textsl{linear multiplicative white noise of It\^o type} on the whole space $\mathbb{R}^2$. Firstly, we prove that the non-autonomous 2D SNSE generates a bi-spatial $(\mathbb{L}^2(\mathbb{R}^2),\mathbb{H}^1(\mathbb{R}^2))$-continuous random cocycle. Due to the bi-spatial continuity property of the random cocycle associated with SNSE, we show that if the initial data is in $\mathbb{L}^2(\mathbb{R}^2)$, then there exists a unique bi-spatial $(\mathbb{L}^2(\mathbb{R}^2),\mathbb{H}^1(\mathbb{R}^2))$-pullback random attractor for non-autonomous SNSE which is compact and attracting not only in the $\mathbb{L}^2$-norm but also in the $\mathbb{H}^1$-norm. Next, as a consequence of the existence of pullback random attractors, we prove the existence of a family of invariant sample measures for the non-autonomous random dynamical system generated by 2D non-autonomous SNSE. Moreover, we show that the family of invariant sample measures satisfies a stochastic Liouville type theorem. Finally, we discuss the existence of an invariant measure for the random cocycle associated with 2D autonomous SNSE. We prove the uniqueness of invariant measures for $\boldsymbol{f}=\mathbf{0}$ and for any $\nu>0$ by using the linear multiplicative structure of the noise coefficient and exponential stability of solutions. The above results for SNSE defined on $\mathbb{R}^2$ are totally new; in particular, the results on bi-spatial random attractors and the stochastic Liouville type theorem for 2D SNSE with linear multiplicative noise are obtained, in any kind of domain, for the first time.
This note studies Arveson's curvature invariant for d-contractions specialized to the case d=1 of a single contraction operator on a Hilbert space. It establishes a formula which gives an easy-to-understand meaning for the curvature of a single contraction. The formula is applied to give an example of an operator with nonintegral curvature. Under the additional hypothesis that the contraction T be "pure", we show that its curvature K(T) is given by K(T) = - index(T) := -(dim ker T - dim coker T).
We study the largest particle-number-preserving sector of the dilatation operator in maximally supersymmetric gauge theory. After exploring one-loop Bethe Ansätze for the underlying spin chain with psl(2|2) symmetry for simple root systems related to several Kac-Dynkin diagrams, we use the analytic Bethe Ansatz to construct eigenvalues of transfer matrices with finite-dimensional atypical representations in the auxiliary space. We derive closed Baxter equations for eigenvalues of nested Baxter operators. We extend these considerations for a non-distinguished root system with FBBF grading to all orders of perturbation theory in 't Hooft coupling. We construct generating functions for all transfer matrices with auxiliary space determined by Young supertableaux (1^a) and (s) and find determinant formulas for transfer matrices with auxiliary spaces corresponding to skew Young supertableaux. The latter yields fusion relations for transfer matrices with auxiliary space corresponding to representations labelled by square Young supertableaux. We derive asymptotic Baxter equations which determine spectra of anomalous dimensions of composite Wilson operators in the noncompact psl(2|2) subsector of N=4 super-Yang-Mills theory.
We study the thermal and electric transport of a fluid of interacting Dirac fermions as they arise in single-layer graphene. We include Coulomb interactions, a dilute density of charged impurities and the presence of a magnetic field to describe both the static and the low frequency response as a function of temperature T and chemical potential mu. In the critical regime mu << T where both bands above and below the Dirac point contribute to transport we find pronounced deviations from Fermi liquid behavior, universal, collision-dominated values for transport coefficients and a cyclotron resonance of collective nature. In the collision-dominated high temperature regime the linear thermoelectric transport coefficients are shown to obey the constraints of relativistic magnetohydrodynamics which we derive microscopically from Boltzmann theory. The latter also allows us to describe the crossover to disorder-dominated Fermi liquid behavior at large doping and low temperatures, as well as the crossover to the ballistic regime at high fields.
We analyze the transparency of a thin film of low refractive index (an optical glue or a bonding layer) placed between higher-index media and forming an opto-pair. Examples include a semiconductor light-emitting diode with attached lens or a semiconductor scintillator bonded to a photodiode. The transparency of an opto-pair is highly sensitive to the film thickness due to the so-called frustrated total internal reflection. We show that high transparency in a wide range of the incidence angle can be achieved only with very thin layers, more than an order of magnitude thinner than the wavelength. The angular dependence of the transmission coefficient is shown to satisfy a simple and universal sum rule. Special attention is paid to the angular average of the optical power transmission, which can be cast in a universal form for two practically relevant classes of source layers.
Many online communities rely on postpublication moderation where contributors, even those that are perceived as being risky, are allowed to publish material immediately and where moderation takes place after the fact. An alternative arrangement involves moderating content before publication. A range of communities have argued against prepublication moderation by suggesting that it makes contributing less enjoyable for new members and that it will distract established community members with extra moderation work. We present an empirical analysis of the effects of a prepublication moderation system called FlaggedRevs that was deployed by several Wikipedia language editions. We used panel data from 17 large Wikipedia editions to test a series of hypotheses related to the effect of the system on activity levels and contribution quality. We found that the system was very effective at keeping low-quality contributions from ever becoming visible. Although there is some evidence that the system discouraged participation among users without accounts, our analysis suggests that the system's effects on contribution volume and quality were moderate at most. Our findings imply that concerns regarding the major negative effects of prepublication moderation systems on contribution quality and project productivity may be overstated.
In this paper, we investigate the impact of test-time adversarial attacks on linear regression models and determine the optimal level of robustness that any model can reach while maintaining a given level of standard predictive performance (accuracy). Through quantitative estimates, we uncover fundamental tradeoffs between adversarial robustness and accuracy in different regimes. We obtain a precise characterization which distinguishes between regimes where robustness is achievable without hurting standard accuracy and regimes where a tradeoff might be unavoidable. Our findings are empirically confirmed with simple experiments that represent a variety of settings. This work applies to feature covariance matrices and attack norms of any nature, and extends beyond previous works in this area.
This paper establishes universal formulas describing the global asymptotics of two different models of discrete $\beta$-ensembles in high, low and fixed temperature regimes. Our results affirmatively answer a question posed by the second author and \'Sniady. We first consider the Jack measures on Young diagrams of arbitrary size, which depend on the inverse temperature parameter $\beta>0$ and specialize to Schur measures when $\beta=2$. We introduce a class of Jack measures of Plancherel-type and prove a law of large numbers and central limit theorem in the three regimes. In each regime, we provide explicit formulas for polynomial observables of the limit shape and Gaussian fluctuations around the limit shape. These formulas have surprising positivity properties and are expressed in terms of weighted lattice paths. We also establish connections between these measures and the work of Kerov-Okounkov-Olshanski on Jack-positive specializations and show that this is a rich class of measures parametrized by the elements in the Thoma cone. Second, we show that the formulas from limits of Plancherel-type Jack measures are universal: they also describe the limit shape and Gaussian fluctuations for the second model of random Young diagrams of a fixed size defined by Jack characters with the approximate factorization property (AFP) studied by the second author and \'Sniady. Finally, we discuss the limit shape in the high/low-temperature regimes and show that, contrary to the continuous case of $\beta$-ensembles, there is a phase transition phenomenon in passing from the fixed temperature regime to the high/low temperature regimes. We note that the relation we find between the two different models of random Young diagrams appears to be new, even in the special case of $\beta=2$ that relates Schur measures to the character measures with the AFP studied by Biane and \'Sniady.