Kaon flavour physics played a very important role in the construction of the Standard Model (SM) in the 1960s and 1970s, and in the 1980s and 1990s in SM tests with the help of CP violation in $K_L\to\pi\pi$ decays, represented by $\varepsilon_K$ and the ratio $\varepsilon'/\varepsilon$. In this millennium this role has been taken over by $B_{s,d}$ and $D$ mesons. However, there is no doubt that in the coming years we will witness the return of kaon flavour physics, with the highlights being the measurements of the theoretically clean branching ratios for the rare decays $K^+\rightarrow \pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$, and the improved SM predictions for the ratio $\varepsilon'/\varepsilon$, for $\varepsilon_K$, and for the $K^0-\bar K^0$ mixing mass difference $\Delta M_K$. Theoretical progress on the decays $K_{L,S}\to\mu^+\mu^-$ and $K_L\to\pi^0\ell^+\ell^-$ is also expected. All of these observables are very sensitive to new physics (NP) contributions, and the correlations between them should help us to identify new dynamics at very short distance scales. These studies will be enriched when the theory of the $K\to\pi\pi$ isospin amplitudes ${\rm Re} A_0$ and ${\rm Re} A_2$ improves. This talk summarizes several aspects of this exciting field. In particular, we emphasize the role of the Dual QCD approach in gaining insight into the numerical Lattice QCD results on $K^0-\bar K^0$ mixing and $K\to\pi\pi$ decays.
Hypergraphs are mathematical models for many problems in data sciences. In recent decades, the topological properties of hypergraphs have been studied and various kinds of (co)homologies have been constructed (cf. [3, 4, 12]). In this paper, generalising the usual homology of simplicial complexes, we define the embedded homology of hypergraphs as well as the persistent embedded homology of sequences of hypergraphs. As a generalisation of the Mayer-Vietoris sequence for the homology of simplicial complexes, we give a Mayer-Vietoris sequence for the embedded homology of hypergraphs. Moreover, as applications of the embedded homology, we study acyclic hypergraphs and construct some indices for the data analysis of hyper-networks.
We investigate the applicability of machine learning technologies to the development of parsimonious, interpretable, catchment-scale hydrologic models using directed-graph architectures based on the mass-conserving perceptron (MCP) as the fundamental computational unit. Here, we focus on architectural complexity (depth) at a single location, rather than universal applicability (breadth) across large samples of catchments. The goal is to discover a minimal representation (numbers of cell-states and flow paths) that represents the dominant processes that can explain the input-state-output behaviors of a given catchment, with particular emphasis given to simulating the full range (high, medium, and low) of flow dynamics. We find that a HyMod-like architecture with three cell-states and two major flow pathways achieves such a representation at our study location, but that the additional incorporation of an input-bypass mechanism significantly improves the timing and shape of the hydrograph, while the inclusion of bi-directional groundwater mass exchanges significantly enhances the simulation of baseflow. Overall, our results demonstrate the importance of using multiple diagnostic metrics for model evaluation, while highlighting the need to properly select and design the training metrics based on information-theoretic foundations that are better suited to extracting information across the full range of flow dynamics. This study sets the stage for interpretable regional-scale MCP-based hydrological modeling (using large sample data) by using neural architecture search to determine appropriate minimal representations for catchments in different hydroclimatic regimes.
Recently, hyperspectral image (HSI) classification approaches based on deep learning (DL) models have been proposed and have shown promising performance. However, because of very limited available training samples and massive model parameters, DL methods may suffer from overfitting. In this paper, we propose an end-to-end 3-D lightweight convolutional neural network (CNN) (abbreviated as 3-D-LWNet) for limited-samples HSI classification. Compared with conventional 3-D-CNN models, the proposed 3-D-LWNet has a deeper network structure, fewer parameters, and lower computational cost, resulting in better classification performance. To further alleviate the small-sample problem, we also propose two transfer learning strategies: 1) a cross-sensor strategy, in which we pretrain a 3-D model on source HSI data sets containing a greater number of labeled samples and then transfer it to the target HSI data sets, and 2) a cross-modal strategy, in which we pretrain a 3-D model on 2-D RGB image data sets containing a large number of samples and then transfer it to the target HSI data sets. In contrast to previous approaches, we do not impose restrictions on the source data sets, which do not have to be collected by the same sensors as the target data sets. Experiments on three public HSI data sets captured by different sensors demonstrate that our model achieves competitive performance for HSI classification compared to several state-of-the-art methods.
We construct inhomogeneous isoparametric families of hypersurfaces with non-austere focal set on each symmetric space of non-compact type and rank greater than or equal to 3. If the rank is greater than or equal to 4, there are infinitely many such examples. Our construction yields the first examples of isoparametric families on any Riemannian manifold known to have a non-austere focal set. They can be obtained from a new general extension method of submanifolds from Euclidean spaces to symmetric spaces of non-compact type. This method preserves the mean curvature and isoparametricity, among other geometric properties.
The unbound nucleus $^{12}$Li is evaluated by studying three-neutron one-proton excitations within the multistep shell model in the complex energy plane. It is found that the ground state of this system consists of an antibound $2^-$ state. A number of narrow states at low energy are found which ensue from the coupling of resonances in $^{11}$Li to continuum states close to threshold.
An approach to studying lattice gauge models in the weak coupling region is proposed. Conceptually, it is based on the crucial role of the original Z(N) symmetry and the invariant gauge group measure. As an example, we calculate an effective model from the compact Wilson formulation of the SU(2) gauge theory in $d=3$ dimensions at zero temperature. Confining properties and the phase structure of the effective model are studied in detail.
Background: Men and women with a migration background comprise an increasing proportion of incident HIV cases across Western Europe. Several studies indicate that a substantial proportion acquire HIV post-migration. Methods: We used partial HIV consensus sequences with linked demographic and clinical data from the opt-out ATHENA cohort of people with HIV in the Netherlands to quantify population-level sources of transmission to Dutch-born and foreign-born Amsterdam men who have sex with men (MSM) between 2010-2021. We identified phylogenetically and epidemiologically possible transmission pairs in local transmission chains and interpreted these in the context of estimated infection dates, quantifying transmission dynamics between sub-populations by world region of birth. Results: We estimate that the majority of Amsterdam MSM who acquired their infection locally had a Dutch-born Amsterdam MSM source (56% [53-58%]). Dutch-born MSM were the predominant source population of infections among almost all foreign-born Amsterdam MSM sub-populations. Stratifying by two-year intervals indicated shifts in transmission dynamics, with a majority of infections originating from foreign-born MSM since 2018, although uncertainty ranges remained wide. Conclusions: In the context of declining HIV incidence among Amsterdam MSM, our data suggest that, whilst native-born MSM predominantly drove transmission in 2010-2021, the contribution from foreign-born MSM living in Amsterdam is increasing.
Higgs Computed Axial Tomography, an excerpt. Taking a closer look at the camel-shaped tail of the light Higgs boson resonance and at the transformation of the (camel-shaped) signal into a square-root-shaped signal plus interference, with particular emphasis on residual theoretical uncertainties.
The finite-sample as well as the asymptotic distribution of Leung and Barron's (2006) model averaging estimator are derived in the context of a linear regression model. An impossibility result regarding the estimation of the finite-sample distribution of the model averaging estimator is obtained.
We construct special Lagrangian fibrations for log Calabi-Yau surfaces, and scattering diagrams from Lagrangian Floer theory of the fibres. Then we prove that the scattering diagrams recover the scattering diagrams of Gross-Pandharipande-Siebert and the canonical scattering diagrams of Gross-Hacking-Keel. With an additional assumption on the non-negativity of boundary divisors, we compute the disc potentials of the Lagrangian torus fibres via a holomorphic/tropical correspondence. As an application, we provide a version of mirror symmetry for rank two cluster varieties.
Membrane budding and wrapping of particles, such as viruses and nano-particles, play a key role in intracellular transport and have been studied for a variety of biological and soft matter systems. We study nano-particle wrapping by numerical minimization of bending, surface tension, and adhesion energies. We calculate deformation and adhesion energies as a function of membrane elastic parameters and adhesion strength to obtain wrapping diagrams. We predict unwrapped, partially-wrapped, and completely-wrapped states for prolate and oblate ellipsoids for various aspect ratios and particle sizes. In contrast to spherical particles, where partially-wrapped states exist only for finite surface tensions, partially-wrapped states for ellipsoids occur already for tensionless membranes. In addition, the partially-wrapped states are long-lived, because of an increased energy cost for wrapping of the highly-curved tips. Our results suggest a lower uptake rate of ellipsoidal particles by cells and thereby a higher virulence of tubular viruses compared with icosahedral viruses, as well as co-operative budding of ellipsoidal particles on membranes.
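The abstract above contrasts spheres and ellipsoids for tensionless membranes. As an illustration (a minimal sketch, not the paper's numerical minimization), the following Python snippet works out the simplest case it alludes to: for a spherical particle and a tensionless membrane, the Helfrich bending plus adhesion energy is linear in the wrapped area fraction, so wrapping is all-or-none and no stable partially wrapped state exists. All parameter values are illustrative.

```python
# Minimal sketch: energy bookkeeping for wrapping a SPHERE of radius R by a
# tensionless membrane. The wrapped cap has mean curvature H = 1/R, so
#   E(f) = 8*pi*kappa*f - 4*pi*R**2 * w * f,  f = wrapped area fraction,
# which is linear in f: no stable partially wrapped state at zero tension,
# consistent with the contrast to ellipsoids drawn in the abstract.
import numpy as np

kappa = 20.0   # bending rigidity in units of kT (typical for lipid bilayers)
R = 1.0        # particle radius (arbitrary units)

def wrapping_energy(f, w):
    """Deformation plus adhesion energy for wrapped area fraction f."""
    bending = 8.0 * np.pi * kappa * f        # 2*kappa*(1/R)^2 * 4*pi*R^2 * f
    adhesion = -4.0 * np.pi * R**2 * w * f   # w = adhesion strength per area
    return bending + adhesion

w_crit = 2.0 * kappa / R**2                  # E(1) = 0: threshold adhesion
f = np.linspace(0.0, 1.0, 5)
for w in (0.5 * w_crit, w_crit, 2.0 * w_crit):
    print(f"w/w_crit = {w / w_crit:.1f}:",
          np.array2string(wrapping_energy(f, w), precision=1))
```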
The occurrence of the extensively used herbicide diuron in the environment poses a severe threat to the ecosystem and human health. Four different ligninolytic fungi were studied as biodegradation candidates for the removal of diuron. Among them, T. versicolor was the most effective species, rapidly degrading not only diuron (83%) but also the major metabolite 3,4-dichloroaniline (100%) after 7-day incubation. During diuron degradation, five transformation products (TPs) were found to be formed, and structures for three of them are tentatively proposed. According to the identified TPs, a hydroxylated intermediate 3-(3,4-dichlorophenyl)-1-hydroxymethyl-1-methylurea (DCPHMU) was further metabolized into the N-dealkylated compounds 3-(3,4-dichlorophenyl)-1-methylurea (DCPMU) and 3,4-dichlorophenylurea (DCPU). The discovery of DCPHMU suggests a relevant role of hydroxylation for subsequent N-demethylation, helping to better understand the main reaction mechanisms of diuron detoxification. Experiments also evidenced that degradation reactions may occur intracellularly and be catalyzed by the cytochrome P450 system. A response surface method, established by central composite design, assisted in evaluating the effect of operational variables on diuron removal in a trickle-bed bioreactor immobilized with T. versicolor. The best performance was obtained at low recycling ratios and influent flow rates. Furthermore, the results indicate that the contact time between the contaminant and the immobilized fungi plays a crucial role in diuron removal. This study represents a pioneering step forward among techniques for the bioremediation of pesticide-contaminated waters using fungal reactors at real scale.
Sums play a prominent role in the formalisms of quantum mechanics, be it for mixing and superposing states, or for composing state spaces. Surprisingly, a conceptual analysis of quantum measurement seems to suggest that quantum mechanics can be done without direct sums, expressed entirely in terms of the tensor product. The corresponding axioms define classical spaces as objects that allow copying and deleting data. Indeed, the information exchange between the quantum and the classical worlds is essentially determined by their distinct capabilities to copy and delete data. The sums turn out to be an implicit implementation of these capabilities. Realizing them through explicit axioms not only dispenses with the unnecessary structural baggage, but also allows a simple and intuitive graphical calculus. In category-theoretic terms, classical data types are dagger-compact Frobenius algebras, and quantum spectra underlying quantum measurements are Eilenberg-Moore coalgebras induced by these Frobenius algebras.
If the electroweak symmetry-breaking sector becomes strongly interacting at high energies, it can be probed through longitudinal $W$ scattering. We present a model with many inelastic channels in the $W_L W_L$ scattering process, corresponding to the production of heavy fermion pairs. These heavy fermions affect the elastic scattering of $W_L$'s by propagating in loops, greatly reducing the amplitudes in some charge channels. We conclude that the symmetry-breaking sector cannot be fully explored by using, for example, the $W_L^\pm W_L^\pm$ mode alone, even when no resonance is present; all $W_L W_L \to W_L W_L$ scattering modes must be measured.
We give a quantum chemical description of bridge photoisomerization reaction of green fluorescent protein (GFP) chromophores using a representation over three diabatic states. Bridge photoisomerization leads to non-radiative decay, and competes with fluorescence in these systems. In the protein, this pathway is suppressed, leading to fluorescence. Understanding the electronic structure of the photoisomerization is a prerequisite to understanding how the protein suppresses this pathway and preserves the emitting state of the chromophore. We present a solution to the state-averaged complete active space problem, which is spanned at convergence by three fragment-localized orbitals. We generate the diabatic-state representation by applying a block diagonalization transformation to the Hamiltonian calculated for the anionic chromophore model HBDI with multi-reference, multi-state perturbation theory. The diabatic states that emerge are charge-localized structures with a natural valence-bond interpretation. At planar geometries, the diabatic picture recaptures the charge transfer resonance of the anion. The strong S0-S1 excitation at these geometries is reasonably described within a two-state model, but extension to a three-state model is necessary to describe decay via two possible pathways associated with photoisomerization of the (methine) bridge. Parametric Hamiltonians based on the three-state ansatz can be fit directly to data generated using the underlying active space. We provide an illustrative example of such a parametric Hamiltonian.
In surgical oncology, screening colonoscopy plays a pivotal role in providing diagnostic assistance, such as biopsy, and facilitating surgical navigation, particularly in polyp detection. Computer-assisted endoscopic surgery has recently gained attention and amalgamated various 3D computer vision techniques, including camera localization, depth estimation, surface reconstruction, etc. Neural Radiance Fields (NeRFs) and Neural Implicit Surfaces (NeuS) have emerged as promising methodologies for deriving accurate 3D surface models from sets of registered images, addressing the limitations of existing colon reconstruction approaches stemming from constrained camera movement. However, inadequate tissue texture representation and scale ambiguity in monocular colonoscopic image reconstruction still impede the progress of the final rendering results. In this paper, we introduce a novel method for colon section reconstruction by leveraging NeuS applied to endoscopic images, supplemented by a single frame of depth map. Notably, we pioneer the use of a single-frame depth map in photorealistic reconstruction and neural rendering applications; this depth map can be easily obtained from other monocular depth estimation networks with an object scale. Through rigorous experimentation and validation on phantom imagery, our approach demonstrates exceptional accuracy in completely rendering colon sections, even capturing unseen portions of the surface. This breakthrough opens avenues for achieving stable and consistently scaled reconstructions, promising enhanced quality in cancer screening procedures and treatment interventions.
Recently, a few peculiar Type Ia supernovae (SNe) that show exceptionally large peak luminosity have been discovered. Their luminosity requires more than 1 Msun of 56Ni ejected during the explosion, suggesting that they might have originated from super-Chandrasekhar mass white dwarfs. However, the nature of these objects is not yet well understood. In particular, no data have been taken at late phases, about one year after the explosion. We report on Subaru and Keck optical spectroscopic and photometric observations of the SN Ia 2006gz, which had been classified as being one of these "overluminous" SNe Ia. The late-time behavior is distinctly different from that of normal SNe Ia, reinforcing the argument that SN 2006gz belongs to a different subclass than normal SNe Ia. However, the peculiar features found at late times are not readily connected to a large amount of 56Ni; the SN is faint, and it lacks [Fe II] and [Fe III] emission. If the bulk of the radioactive energy escapes the SN ejecta as visual light, as is the case in normal SNe Ia, the mass of 56Ni does not exceed ~ 0.3 Msun. We discuss several possibilities to remedy the problem. With the limited observations, however, we are unable to conclusively identify which process is responsible. An interesting possibility is that the bulk of the emission might be shifted to longer wavelengths, unlike the case in other SNe Ia, which might be related to dense C-rich regions as indicated by the early-phase data. Alternatively, it might be the case that SN 2006gz, though peculiar, was actually not substantially overluminous at early times.
We report studies of ultrafast electron nanocrystallography on size-selected Au nanoparticles (2-20 nm) supported on a molecular interface. Reversible surface melting, melting, and recrystallization were investigated with dynamical full-profile radial distribution functions determined with sub-picosecond and picometer accuracies. In ultrafast photoinduced melting, the nanoparticles are driven to a non-equilibrium transformation, characterized by initial lattice deformations, nonequilibrium electron-phonon coupling, and, upon melting, collective bonding and debonding, transforming nanocrystals into shelled nanoliquids. The displacive structural excitation at premelting and the coherent transformation with crystal/liquid coexistence during photomelting differ from the reciprocal behavior of recrystallization, where a hot lattice forms from the liquid and then thermally contracts. The degree of structural change and the thermodynamics of melting are found to depend on the size of the nanoparticles.
A critical comparison is made between recent predictions of the cross sections for diffractive Higgs production at the Tevatron and the LHC. We show that the huge spread of the predictions arises either because different diffractive processes are studied or because important effects are overlooked. Exclusive production offers a reliable, viable Higgs signal at the LHC provided that proton taggers are installed.
We have investigated $uudd\bar{s}$ pentaquarks by employing quark models with the meson exchange and the effective gluon exchange as $qq$ and $q\bar{q}$ interactions. The five-quark system is dynamically solved; two quarks are allowed to have a diquark-like $qq$ correlation. It is found that the lowest mass of the pentaquark is about 1947-2144 MeV. There are parameter sets for which the mass of the lowest positive-parity state becomes lower than that of the negative-parity states. Which parity corresponds to the observed peak is still an open question. The relative distance of two quarks with the attractive interaction is found to be about 1.2-1.3 times smaller than that for the repulsive one. The diquark-like quark correlation seems to play an important role in pentaquark systems.
We introduce a new quantity, that we term recoverable information, defined for stabilizer Hamiltonians. For such models, the recoverable information provides a measure of the topological information, as well as a physical interpretation, which is complementary to topological entanglement entropy. We discuss three different ways to calculate the recoverable information, and prove their equivalence. To demonstrate its utility, we compute recoverable information for fracton models using all three methods where appropriate. From the recoverable information, we deduce the existence of emergent $Z_2$ Gauss-law type constraints, which in turn imply emergent $Z_2$ conservation laws for point-like quasiparticle excitations of an underlying topologically ordered phase.
Our aim in this paper is to investigate the asymptotic behavior of solutions of the perturbed linear fractional differential system. We show that if the original linear autonomous system is asymptotically stable then under the action of small (either linear or nonlinear) nonautonomous perturbations the trivial solution of the perturbed system is also asymptotically stable.
Recent advances in data augmentation enable one to translate images by learning the mapping between a source domain and a target domain. Existing methods tend to learn the distributions by training a model on a variety of datasets, with results evaluated largely in a subjective manner. Relatively few works in this area, however, study the potential use of image synthesis methods for recognition tasks. In this paper, we propose and explore the problem of image translation for data augmentation. We first propose a lightweight yet efficient model for translating texture to augment images based on a single input of source texture, allowing for fast training and testing, referred to as Single Image Texture Translation for data Augmentation (SITTA). Then we explore the use of augmented data in long-tailed and few-shot image classification tasks. We find the proposed augmentation method and workflow are capable of translating the texture of input data into a target domain, leading to consistently improved image recognition performance. Finally, we examine how SITTA and related image translation methods can provide a basis for a data-efficient, "augmentation engineering" approach to model training. Codes are available at https://github.com/Boyiliee/SITTA.
In the era of Exascale computing, writing efficient parallel programs is indispensable, and, at the same time, writing sound parallel programs is very difficult. Specifying parallelism with frameworks such as OpenMP is relatively easy, but data races in these programs are an important source of bugs. In this paper, we propose LLOV, a fast, lightweight, language agnostic, and static data race checker for OpenMP programs based on the LLVM compiler framework. We compare LLOV with other state-of-the-art data race checkers on a variety of well-established benchmarks. We show that the precision, accuracy, and F1 score of LLOV are comparable to other checkers while being orders of magnitude faster. To the best of our knowledge, LLOV is the only tool among the state-of-the-art data race checkers that can verify a C/C++ or FORTRAN program to be data race free.
In this paper, we study 2d Floquet conformal field theory, where the external periodic driving is described by iterated logistic or tent maps. These maps are known to be typical examples of dynamical systems exhibiting the order-chaos transition, and we show that, as a result of such driving, the entanglement entropy scaling develops fractal features when the corresponding dynamical system approaches the chaotic regime. For the driving set by the logistic map, the fractal contribution to the scaling dominates, making the entanglement entropy a highly oscillating function of the subsystem size.
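The underlying dynamical system driving the CFT above is elementary to reproduce. The sketch below (illustrative, not the paper's Floquet protocol) iterates the logistic map and estimates its Lyapunov exponent, whose sign change marks the order-chaos transition the abstract refers to.

```python
# Minimal sketch of the logistic-map driving sequence: iterate
# x -> r*x*(1-x) and estimate the Lyapunov exponent
# lambda = <log|f'(x)|> = <log|r*(1-2x)|>, which changes sign at the
# order-chaos transition. Parameter values are illustrative.
import numpy as np

def logistic_orbit(r, x0=0.4, n=10_000, burn=1_000):
    x = x0
    orbit = []
    for i in range(n + burn):
        x = r * x * (1.0 - x)
        if i >= burn:          # discard transient iterations
            orbit.append(x)
    return np.array(orbit)

def lyapunov(r, x0=0.4, n=10_000, burn=1_000):
    xs = logistic_orbit(r, x0, n, burn)
    return np.mean(np.log(np.abs(r * (1.0 - 2.0 * xs))))

for r in (3.2, 3.5, 3.83, 4.0):   # periodic, periodic, window, fully chaotic
    print(f"r = {r}: lambda = {lyapunov(r):+.3f}")
```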
The equivalence group is determined for systems of linear ordinary differential equations in both the standard form and the normal form. It is then shown that the normal form of linear systems reducible by an invertible point transformation to the canonical equation $\mathbf{y}^{(n)}=0$ consists of copies of the same iterative equation. Other properties of iterative linear systems are also derived, as well as the superposition formula for their general solution.
In this paper, a cooperative decision-making approach is presented, which is suitable for intention-aware automated vehicle functions. With an increasing number of highly automated and autonomous vehicles on public roads, trust is a very important issue regarding their acceptance in our society. The most challenging scenarios arise at low driving speeds of these highly automated and autonomous vehicles, where interactions with vulnerable road users are likely to occur. Such interactions must be addressed by the automation of the vehicle. The novelties of this paper are the adaptation of a general cooperative and shared control framework to this novel use case and the application of an explicit prediction model of the pedestrian. An extensive comparison with state-of-the-art algorithms is provided in a simplified test environment. The results show the superiority of the proposed model-based algorithm compared to state-of-the-art solutions and its suitability for real-world applications due to its real-time capability.
We study the dust properties of 192 nearby galaxies from the JINGLE survey using photometric data in the 22-850 micron range. We derive the total dust mass, temperature T, and emissivity index beta of the galaxies through the fitting of their spectral energy distribution (SED) using a single modified black-body model (SMBB). We apply a hierarchical Bayesian approach that reduces the known degeneracy between T and beta. Applying the hierarchical approach, the strength of the T-beta anti-correlation is reduced from a Pearson correlation coefficient R=-0.79 to R=-0.52. For the JINGLE galaxies we measure dust temperatures in the range 17-30 K and dust emissivity indices beta in the range 0.6-2.2. We compare the SMBB model with the broken emissivity modified black-body (BMBB) and the two modified black-bodies (TMBB) models. The results derived with the SMBB and TMBB are in good agreement, thus applying the SMBB, which comes with fewer free parameters, does not penalize the measurement of the cold dust properties in the JINGLE sample. We investigate the relation between T and beta and other global galaxy properties in the JINGLE and Herschel Reference Survey (HRS) sample. We find that beta correlates with the stellar mass surface density (R=0.62) and anti-correlates with the HI mass fraction (M(HI)/M*, R=-0.65), whereas the dust temperature correlates strongly with the SFR normalized by the dust mass (R=0.73). These relations can be used to estimate T and beta in galaxies with insufficient photometric data available to measure them directly through SED fitting.
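For concreteness, the sketch below shows the non-hierarchical SMBB baseline that the hierarchical analysis above improves upon: a single modified black-body S_nu = A (nu/nu0)^beta B_nu(T) fitted to synthetic 22-850 micron fluxes with scipy. The paper's hierarchical Bayesian machinery, which mitigates the T-beta degeneracy, is not reproduced; amplitudes and noise levels are illustrative.

```python
# Minimal, non-hierarchical single-modified-black-body (SMBB) fit:
# S_nu = A * (nu/nu0)^beta * B_nu(T), with B_nu the Planck function.
import numpy as np
from scipy.optimize import curve_fit

h, kB, c = 6.626e-34, 1.381e-23, 2.998e8     # SI units
nu0 = c / 350e-6                             # reference frequency (350 micron)

def smbb(nu, logA, T, beta):
    planck = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (kB * T))
    return 10**logA * (nu / nu0)**beta * planck

wavelengths = np.array([22, 70, 100, 160, 250, 350, 500, 850]) * 1e-6  # metres
nu = c / wavelengths

rng = np.random.default_rng(0)
truth = (12.0, 22.0, 1.8)                    # logA, T [K], beta (illustrative)
flux = smbb(nu, *truth) * (1 + 0.05 * rng.standard_normal(nu.size))

popt, pcov = curve_fit(smbb, nu, flux, p0=(11.0, 25.0, 1.5),
                       sigma=0.05 * flux, absolute_sigma=True)
print("fitted  logA, T, beta:", np.round(popt, 2))
print("1-sigma uncertainties :", np.round(np.sqrt(np.diag(pcov)), 2))
```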
Supplementing earlier literature by e.g. Tipler, Clarke, & Ellis (1980), Israel (1987), Thorne (1994), Earman (1999), Senovilla & Garfinkle (2015), Curiel (2019ab), and Landsman (2021ab), I provide a historical and conceptual analysis of Penrose's path-breaking 1965 singularity (or incompleteness) theorem. The emphasis is on the nature and historical origin of the assumptions and definitions used in (or otherwise relevant to) the theorem, as well as on the discrepancy between the (astro)physical goals of the theorem and its actual content: even if its assumptions are met, the theorem fails to prove the existence or formation of black holes. Penrose himself was well aware of this gap, which he subsequently tried to overcome with his visionary and influential cosmic censorship conjectures. Roughly speaking, to infer from (null) geodesic incompleteness that there is a "black" object one needs weak cosmic censorship, whereas in addition a "hole" exists (as opposed to a boundary of an extendible space-time causing the incompleteness of geodesics) if strong cosmic censorship holds.
Neural radiance fields (NeRFs) are a deep learning technique that can generate novel views of 3D scenes using sparse 2D images from different viewing directions and camera poses. As an extension of conventional NeRFs to underwater environments, where light can be absorbed and scattered by water, SeaThru-NeRF was proposed to separate the clean appearance and geometric structure of an underwater scene from the effects of the scattering medium. Since the quality of the appearance and structure of underwater scenes is crucial for downstream tasks such as underwater infrastructure inspection, the reliability of the 3D reconstruction model should be considered and evaluated. Nonetheless, owing to the lack of ability to quantify uncertainty in 3D reconstruction of underwater scenes under natural ambient illumination, the practical deployment of NeRFs in unmanned autonomous underwater navigation is limited. To address this issue, we introduce a spatial perturbation field D_omega based on Bayes' Rays in SeaThru-NeRF and perform a Laplace approximation to obtain a Gaussian distribution N(0,Sigma) of the parameters omega, where the diagonal elements of Sigma correspond to the uncertainty at each spatial location. We also employ a simple thresholding method to remove artifacts from the rendered results of underwater scenes. Numerical experiments are provided to demonstrate the effectiveness of this approach.
Quantum matrix models in the large-N limit arise in many physical systems like Yang-Mills theory with or without supersymmetry, quantum gravity, string-bit models, various low energy effective models of string theory, M(atrix) theory, quantum spin chain models, and strongly correlated electron systems like the Hubbard model. We introduce, in a unifying fashion, a hierarchy of infinite-dimensional Lie superalgebras of quantum matrix models. One of these superalgebras pertains to the open string sector and another to the closed string sector. Physical observables of quantum matrix models like the Hamiltonian can be expressed as elements of these Lie superalgebras. This indicates that the Lie superalgebras describe the symmetry of quantum matrix models. We present the structure of these Lie superalgebras, such as their Cartan subalgebras, root vectors, ideals and subalgebras. They are generalizations of well-known algebras like the Cuntz algebra, the Virasoro algebra, the Toeplitz algebra, the Witt algebra and the Onsager algebra. Just as we learnt a lot about critical phenomena and string theory through their conformal symmetry described by the Virasoro algebra, we may learn a lot about quantum chromodynamics, quantum gravity and condensed matter physics through this symmetry of quantum matrix models described by these Lie superalgebras.
Obstructive sleep apnea syndrome is now considered a major health care topic. An in-vitro setup which reproduces and simplifies the upper airway geometry has been the basis for studying the fluid/wall interaction that leads to an apnea. It consists of a rigid pipe (the pharynx) in contact with a deformable latex cylinder filled with water (the tongue). Air flows out of the rigid pipe and induces pressure forces on the cylinder. We present a numerical model of this setup: a finite element model of the latex cylinder in interaction with a fluid model. Simulation of a hypopnea (partial collapse of the airway) has been possible, in agreement with observations from the in-vitro setup. The same phenomenon has been simulated on a soft palate model obtained from a patient's sagittal radiograph. These first results encourage us to improve the model so that it can reproduce the complete apnea phenomenon and be used for planning purposes in sleep apnea surgery.
Kernel design for Multi-output Gaussian Processes (MOGP) has received increased attention recently. In particular, the Multi-Output Spectral Mixture kernel (MOSM) arXiv:1709.01298 approach has been praised as a general model, in the sense that it extends other approaches such as the Linear Model of Coregionalization, the Intrinsic Coregionalization Model and the Cross-Spectral Mixture. MOSM relies on Cram\'er's theorem to parametrise the power spectral densities (PSD) as a Gaussian mixture, and thus has a structural restriction: by assuming the existence of a PSD, the method is only suited for multi-output stationary applications. We develop a nonstationary extension of MOSM by proposing the family of harmonizable kernels for MOGPs, a class of kernels that contains both stationary and a vast majority of non-stationary processes. A main contribution of the proposed harmonizable kernels is that they automatically identify possible nonstationary behaviour, meaning that practitioners do not need to choose between stationary and non-stationary kernels. The proposed method is first validated on synthetic data with the purpose of illustrating the key properties of our approach, and then compared to existing MOGP methods on two real-world settings from finance and electroencephalography.
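To make the stationary building block concrete, the sketch below evaluates the single-output spectral mixture kernel that corresponds, via Bochner's theorem, to a Gaussian-mixture PSD; this is the construction that MOSM generalizes to multiple outputs and that the harmonizable family extends beyond stationarity. The mixture parameters are illustrative, and the multi-output/harmonizable machinery itself is the paper's.

```python
# Minimal sketch: a Gaussian-mixture power spectral density corresponds to
#   k(tau) = sum_q w_q * exp(-2*pi^2*sigma_q^2*tau^2) * cos(2*pi*mu_q*tau),
# the (single-output, stationary) spectral mixture kernel.
import numpy as np

def spectral_mixture_kernel(tau, weights, means, scales):
    tau = np.asarray(tau, dtype=float)
    k = np.zeros_like(tau)
    for w, mu, sigma in zip(weights, means, scales):
        k += w * np.exp(-2 * np.pi**2 * sigma**2 * tau**2) \
               * np.cos(2 * np.pi * mu * tau)
    return k

tau = np.linspace(0.0, 5.0, 6)
print(spectral_mixture_kernel(tau,
                              weights=[1.0, 0.5],
                              means=[0.8, 0.1],     # spectral peak locations
                              scales=[0.3, 0.05]))  # spectral peak widths
```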
Along with the development of AI democratization, the machine learning approach, in particular neural networks, has been applied to a wide range of applications. In different application scenarios, the neural network will be accelerated on a tailored computing platform. The acceleration of neural networks on classical computing platforms, such as CPU, GPU, FPGA, and ASIC, has been widely studied; however, when the scale of the application consistently grows, the memory bottleneck becomes obvious, widely known as the memory wall. In response to such a challenge, advanced quantum computing, which can represent 2^N states with N quantum bits (qubits), is regarded as a promising solution. It is pressing to know how to design the quantum circuit for accelerating neural networks. Most recently, there have been initial works studying how to map neural networks to actual quantum processors. To better understand the state-of-the-art design and inspire new design methodology, this paper carries out a case study to demonstrate an end-to-end implementation. On the neural network side, we employ the multilayer perceptron to complete image classification tasks using the standard and widely used MNIST dataset. On the quantum computing side, we target IBM Quantum processors, which can be programmed and simulated by using IBM Qiskit. This work targets the acceleration of the inference phase of a trained neural network on the quantum processor. Along with the case study, we will demonstrate the typical procedure for mapping neural networks to quantum circuits.
Recently, gauged supergravities in three dimensions with Yang-Mills and Chern-Simons type interactions have been constructed. In this article, we demonstrate that any gauging of Yang-Mills type with semisimple gauge group G_0, possibly including extra couplings to massive Chern-Simons vectors, is equivalent on-shell to a pure Chern-Simons type gauging with non-semisimple gauge group $G_0 \ltimes T \subset G$, where T is a certain translation group, and where G is the maximal global symmetry group of the ungauged theory. We discuss several examples.
Entangling two quantum bits by letting them interact is a crucial requirement for building a quantum processor. For qubits based on the spin of the electron, these two-qubit gates are typically performed via the exchange interaction of electrons captured in two nearby quantum dots. Since the exchange interaction relies on tunneling of the electrons, the range of interaction for conventional approaches is severely limited, as the tunneling amplitude decays exponentially with the length of the tunneling barrier. Here, we present a novel approach to coupling two spin qubits via a superconducting coupler. In essence, the superconducting coupler provides a tunneling barrier for the electrons that can be tuned with exquisite precision. We show that, as a result, exchange couplings over a distance of several microns become realistic, thus enabling flexible designs of multi-qubit systems.
The Origins, Spectral Interpretation, Resource Identification, and Security Regolith Explorer (OSIRIS-REx) spacecraft arrived at its target, near-Earth asteroid 101955 Bennu, in December 2018. After one year of operating in proximity, the team selected a primary site for sample collection. In October 2020, the spacecraft descended to the surface of Bennu and collected a sample. The spacecraft departed Bennu in May 2021 and will return the sample to Earth in September 2023. The analysis of the returned sample will produce key data to determine the history of this B-type asteroid and that of its components and precursor objects. The main goal of the OSIRIS-REx Sample Analysis Plan is to provide a framework for the Sample Analysis Team to meet the Level 1 mission requirement to analyze the returned sample to determine presolar history, formation age, nebular and parent-body alteration history, relation to known meteorites, organic history, space weathering, resurfacing history, and energy balance in the regolith of Bennu. To achieve this goal, this plan establishes a hypothesis-driven framework for coordinated sample analyses; defines the analytical instrumentation and techniques to be applied to the returned sample; provides guidance on the analysis strategy for baseline, overguide, and threshold amounts of returned sample, including a rare or unique lithology; describes the data storage, management, retrieval, and archiving system; establishes a protocol for the implementation of a micro-geographical information system to facilitate co-registration and coordinated analysis of sample science data; outlines the plans for Sample Analysis Readiness Testing; and provides guidance for the transfer of samples from curation to the Sample Analysis Team.
Traditional pruning methods are known to be challenging to apply to Large Language Models (LLMs) for Generative AI because of their unaffordable training process and large computational demands. For the first time, we introduce the information entropy of hidden state features into a pruning metric design, namely E-Sparse, to improve the accuracy of N:M sparsity on LLMs. E-Sparse employs the information richness to leverage the channel importance, and further incorporates several novel techniques to put it into effect: (1) it introduces information entropy to enhance the significance of parameter weights and input feature norms as a novel pruning metric, and performs N:M sparsity without modifying the remaining weights; (2) it designs a global naive shuffle and a local block shuffle to quickly optimize the information distribution and adequately cope with the impact of N:M sparsity on LLMs' accuracy. E-Sparse is implemented as a Sparse-GEMM on FasterTransformer and runs on NVIDIA Ampere GPUs. Extensive experiments on the LLaMA family and OPT models show that E-Sparse can significantly speed up model inference over the dense model (up to 1.53X) and obtain significant memory savings (up to 43.52%), with acceptable accuracy loss.
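To illustrate the N:M mechanics mentioned above, the following sketch applies 2:4 semi-structured pruning in numpy: within every group of four consecutive weights, the two with the smallest saliency are zeroed and the survivors are left unmodified. E-Sparse's actual saliency combines weight magnitude with the information entropy of hidden-state features plus shuffling steps; the generic per-channel importance vector below is a stand-in assumption, not the paper's formula.

```python
# Minimal sketch of N:M semi-structured pruning (here 2:4) with a
# magnitude-times-channel-importance saliency; remaining weights are kept as-is.
import numpy as np

def nm_prune(W, channel_importance, n=2, m=4):
    """Apply N:M sparsity row-wise to a 2-D weight matrix W (out x in)."""
    rows, cols = W.shape
    assert cols % m == 0
    saliency = np.abs(W) * channel_importance[None, :]  # per-input-channel
    Wp = W.copy()
    groups = saliency.reshape(rows, cols // m, m)
    # indices of the (m - n) smallest-saliency weights in each group
    drop = np.argsort(groups, axis=2)[:, :, : m - n]
    flat = Wp.reshape(rows, cols // m, m)
    np.put_along_axis(flat, drop, 0.0, axis=2)          # zero them in place
    return flat.reshape(rows, cols)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
imp = rng.uniform(0.5, 1.5, size=8)   # stand-in channel importance (assumed)
print(nm_prune(W, imp))               # every 4-block keeps exactly 2 nonzeros
```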
The transformation of organic molecules into the simplest self-replicating living system, a microorganism, was accomplished through a unique event, or rare events, that occurred early in the Universe. The subsequent dispersal on cosmic scales and evolution of life is guaranteed, being determined by well-understood processes of physics and biology. Entire galaxies and clusters of galaxies can be considered as connected biospheres, with lateral gene transfers, as initially theorized by Joseph (2000), providing for genetic mixing and Darwinian evolution on a cosmic scale. Big bang cosmology modified by modern fluid mechanics suggests that the beginning and wide intergalactic dispersal of life occurred immediately after the end of the plasma epoch, when the gas of protogalaxies in clusters fragmented into clumps of planets. Stars are born from binary mergers of such planets within such clumps. When stars devour their surrounding planets to excess, they explode, distributing the necessary fertilizing chemicals, created only in stars, together with panspermial templates, created only in adjacent planets, moons, and comets, to be gravitationally collected by the planets and further converted to living organisms. Recent infrared images of nearby star forming regions suggest that life formation on planets like Earth is possible, but not inevitable.
We establish a connection between the representation theory of certain noncommutative singular varieties and two-dimensional lattice models. Specifically, we consider noncommutative biparametric deformations of the fiber product of two Kleinian singularities of type $A$. Special examples are closely related to Lie-Heisenberg algebras, the affine Lie algebra $A_1^{(1)}$, and a finite W-algebra associated to $\mathfrak{sl}_4$. The algebras depend on two scalars and two polynomials that must satisfy the Mazorchuk-Turowska Equation (MTE), which we re-interpret as a quantization of the ice rule (local current conservation) in statistical mechanics. Solutions to the MTE, previously classified by the author and D. Rosso, can accordingly be expressed in terms of multisets of higher spin vertex configurations on a twisted cylinder. We first reduce the problem of describing the category of weight modules to the case of a single configuration $\mathscr{L}$. Secondly, we classify all simple weight modules over the corresponding algebras $\mathcal{A}(\mathscr{L})$, in terms of the connected components of the cylinder minus $\mathscr{L}$. Lastly, we prove that $\mathcal{A}(\mathscr{L})$ are crystalline graded rings (as defined by Nauwelaerts and Van Oystaeyen), and describe the center of $\mathcal{A}(\mathscr{L})$ explicitly in terms of $\mathscr{L}$. Along the way we prove several new results about twisted generalized Weyl algebras and their simple weight modules.
Semiconducting single-wall carbon nanotubes (s-SWNTs) have proved to be a promising material for nanophotonics and optoelectronics. Due to the possibility of tuning their direct band gap and controlling excitonic recombination in the near-infrared wavelength range, s-SWNTs can be used as efficient light emitters. We report the first experimental demonstration of room-temperature intrinsic optical gain, as high as 190 cm-1, at a wavelength of 1.3 {\mu}m in a thin film doped with s-SWNTs. These results constitute a significant milestone toward the development of laser sources based on carbon nanotubes for future high-performance integrated circuits.
We generalize the notion of a Magnus expansion of a free group in order to extend each of the Johnson homomorphisms defined on a decreasing filtration of the Torelli group for a surface with one boundary component to the whole of the automorphism group of a free group $\operatorname{Aut}(F_{n})$. The extended ones are {\it not} homomorphisms, but satisfy an infinite sequence of coboundary relations, so that we call them {\it the Johnson maps}. In this paper we confine ourselves to studying the first and the second relations, which have cohomological consequences about the group $\operatorname{Aut}(F_{n})$ and the mapping class groups for surfaces. The first one means that the first Johnson map is a twisted 1-cocycle of the group $\operatorname{Aut}(F_{n})$. Its cohomology class coincides with ``the unique elementary particle" of all the Morita-Mumford classes on the mapping class group for a surface [Ka1] [KM1]. The second one restricted to the mapping class group is equal to a fundamental relation among twisted Morita-Mumford classes proposed by Garoufalidis and Nakamura [GN] and established by Morita and the author [KM2]. This means we give a simple and coherent proof of the fundamental relation. The first Johnson map gives the abelianization of the induced automorphism group $IA_n$ of a free group in an explicit way.
A 3D fluid-structure interaction solver based on an improved weakly-compressible moving particle simulation (WC-MPS) method and a geometrically nonlinear shell structural model is developed and applied to hydro-elastic free-surface flows. The fluid-structure coupling is performed by a polygon wall boundary model that can handle particles and finite elements of distinct sizes. In WC-MPS, a tuning-free diffusive term is introduced into the continuity equation to mitigate non-physical pressure oscillations. Discrete divergence operators are derived and applied to the polygon wall boundary, whose numerical stability is enhanced by a repulsive Lennard-Jones force. Additionally, an efficient technique to deal with the interaction between fluid particles placed at opposite sides of zero-thickness walls is proposed. The geometrically nonlinear shell is modeled by an unstructured mesh of six-node triangular elements. Finite rotations are considered with Rodrigues parameters, and a hyperelastic constitutive model is adopted. Benchmark examples involving free-surface flows and thin-walled structures demonstrate that the proposed model is robust and numerically stable and offers more efficient computation by allowing a mesh size larger than that of the fluid particles.
Proof-theoretic semantics (P-tS) is the paradigm of semantics in which meaning in logic is based on proof (as opposed to truth). A particular instance of P-tS for intuitionistic propositional logic (IPL) is its base-extension semantics (B-eS). This semantics is given by a relation called support, explaining the meaning of the logical constants, which is parameterized by systems of rules called bases that provide the semantics of atomic propositions. In this paper, we interpret bases as collections of definite formulae and use the operational view of the latter as provided by uniform proof-search -- the proof-theoretic foundation of logic programming (LP) -- to establish the completeness of IPL for the B-eS. This perspective allows negation, a subtle issue in P-tS, to be understood in terms of the negation-as-failure protocol in LP. Specifically, while the denial of a proposition is traditionally understood as the assertion of its negation, in B-eS we may understand the denial of a proposition as the failure to find a proof of it. In this way, assertion and denial are both prime concepts in P-tS.
Learning from Demonstrations, the field that proposes to learn robot behavior models from data, is gaining popularity with the emergence of deep generative models. Although the problem has been studied for years under names such as Imitation Learning, Behavioral Cloning, or Inverse Reinforcement Learning, classical methods have relied on models that do not capture complex data distributions well or do not scale well to large numbers of demonstrations. In recent years, the robot learning community has shown increasing interest in using deep generative models to capture the complexity of large datasets. In this survey, we aim to provide a unified and comprehensive review of recent years' progress in the use of deep generative models in robotics. We present the different types of models that the community has explored, such as energy-based models, diffusion models, action value maps, or generative adversarial networks. We also present the different types of applications in which deep generative models have been used, from grasp generation to trajectory generation or cost learning. One of the most important elements of generative models is generalization out of distribution. In our survey, we review the different decisions the community has made to improve the generalization of the learned models. Finally, we highlight the research challenges and propose a number of future directions for learning deep generative models in robotics.
Biomedical literature is growing rapidly, making it challenging to curate and extract knowledge manually. Biomedical natural language processing (BioNLP) techniques that can automatically extract information from biomedical literature help alleviate this burden. Recently, Large Language Models (LLMs), such as GPT-3 and GPT-4, have gained significant attention for their impressive performance. However, their effectiveness in BioNLP tasks and impact on method development and downstream users remain understudied. This pilot study (1) establishes the baseline performance of GPT-3 and GPT-4 in both zero-shot and one-shot settings on eight BioNLP datasets across four applications: named entity recognition, relation extraction, multi-label document classification, and semantic similarity and reasoning; (2) examines the errors produced by the LLMs and categorizes them into three types: missingness, inconsistencies, and unwanted artificial content; and (3) provides suggestions for using LLMs in BioNLP applications. We make the datasets, baselines, and results publicly available to the community via https://github.com/qingyu-qc/gpt_bionlp_benchmark.
We investigate the validity of the assertion that eternal inflation populates the landscape of string theory. We verify that bubble solutions do not satisfy the Klein-Gordon equation for the landscape potential. Solutions to the landscape potential within the formalism of quantum cosmology are Anderson-localized wavefunctions, which are inconsistent with inflating bubble solutions. The physical reasons behind the failure of a relation between eternal inflation and the landscape are rooted in quantum phenomena such as interference between wavefunctions concentrated around the various vacua in the landscape.
The pair-correlation functions for fluid ionic mixtures in arbitrary spatial dimensions are computed in the hypernetted chain (HNC) approximation. In the primitive model, all ions are approximated as non-overlapping hyperspheres with Coulomb interactions. Our spectral HNC solver is based on a Fourier-Bessel transform introduced by Talman [J. Comput. Phys., 29, 35 (1978)], with logarithmically spaced computational grids. Numerical efficiency for arbitrary spatial dimensions is a commonly exploited virtue of this transform method. Here, we highlight another advantage of logarithmic grids, consisting in the efficient sampling of pair-correlation functions for highly asymmetric ionic mixtures. For three-dimensional fluids, ion size and charge ratios larger than one thousand can be treated, corresponding to hitherto computationally inaccessible micrometer-sized colloidal spheres in a 1-1 electrolyte. Effective colloidal charge numbers are extracted from our primitive model results. For moderately large ion size and charge asymmetries, we present Molecular Dynamics simulation results that agree well with the approximate HNC pair correlations.
Multi-task learning (MTL) aims to improve the performance of multiple related prediction tasks by leveraging useful information from them. Due to their flexibility and ability to substantially reduce the number of unknown coefficients, task-clustering-based MTL approaches have attracted considerable attention. Motivated by the idea of semisoft clustering of data, we propose a semisoft task clustering approach, which can simultaneously reveal the task cluster structure for both pure and mixed tasks and select the relevant features. The main assumption behind our approach is that each cluster has some pure tasks, and each mixed task can be represented by a linear combination of pure tasks in different clusters. To solve the resulting non-convex constrained optimization problem, we design an efficient three-step algorithm. Experimental results based on synthetic and real-world datasets validate the effectiveness and efficiency of the proposed approach. Finally, we extend the proposed approach to a robust task clustering problem.
We present a comparative network-theoretic analysis of the two largest global transportation networks: the worldwide air-transportation network (WAN) and the global cargo-ship network (GCSN). We show that both networks exhibit striking statistical similarities despite significant differences in topology and connectivity. Both networks exhibit a discontinuity in node and link betweenness distributions, which implies that these networks naturally segregate into two different classes of nodes and links. We introduce a technique based on effective distances, shortest paths and shortest-path trees for strongly weighted symmetric networks and show that in a shortest-path-tree representation the most significant features of both networks can be readily seen. We show that effective shortest-path distance, unlike conventional geographic distance measures, strongly correlates with node centrality measures. Using the new technique we show that network resilience can be investigated more precisely than with contemporary techniques that are based on percolation theory. We extract a functional relationship between node characteristics and resilience to network disruption. Finally, we discuss the results and their implications and conclude that dynamic processes that evolve on both networks are expected to share universal dynamic characteristics.
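The effective-distance idea above can be sketched in a few lines. The snippet below follows the commonly used form d(m|n) = 1 - log P(m|n), with P(m|n) the fraction of traffic leaving node n toward m, and builds a shortest-path tree with Dijkstra's algorithm via networkx; whether the paper uses exactly this functional form is an assumption, and the toy flux matrix is illustrative.

```python
# Minimal sketch: effective distances from a traffic-flux matrix and the
# resulting shortest-path tree rooted at one node.
import numpy as np
import networkx as nx

flux = np.array([[0, 10, 1, 0],      # traffic volumes between 4 toy nodes
                 [10, 0, 5, 1],
                 [1, 5, 0, 8],
                 [0, 1, 8, 0]], dtype=float)

G = nx.DiGraph()
for n in range(flux.shape[0]):
    out = flux[n].sum()
    for m in range(flux.shape[1]):
        if flux[n, m] > 0:
            p = flux[n, m] / out                      # P(m|n)
            G.add_edge(n, m, weight=1.0 - np.log(p))  # effective distance >= 1

# Shortest-path tree from node 0: the union of effective-shortest paths.
paths = nx.single_source_dijkstra_path(G, 0, weight="weight")
tree_edges = {(p[i], p[i + 1]) for p in paths.values()
              for i in range(len(p) - 1)}
print("shortest-path tree from node 0:", sorted(tree_edges))
```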
Suppose that we are given independent, identically distributed samples $x_l$ from a mixture $\mu$ of no more than $k$ of $d$-dimensional spherical gaussian distributions $\mu_i$ with variance $1$, such that the minimum $\ell_2$ distance between two distinct centers $y_l$ and $y_j$ is greater than $\sqrt{d} \Delta$ for some $c \leq \Delta $, where $c\in (0,1)$ is a small positive universal constant. We develop a randomized algorithm that learns the centers $y_l$ of the gaussians, to within an $\ell_2$ distance of $\delta < \frac{\Delta\sqrt{d}}{2}$ and the weights $w_l$ to within $cw_{min}$ with probability greater than $1 - \exp(-k/c)$. The number of samples and the computational time is bounded above by $poly(k, d, \frac{1}{\delta})$. Such a bound on the sample and computational complexity was previously unknown when $\omega(1) \leq d \leq O(\log k)$. When $d = O(1)$, this follows from work of Regev and Vijayaraghavan. These authors also show that the sample complexity of learning a random mixture of gaussians in a ball of radius $\Theta(\sqrt{d})$ in $d$ dimensions, when $d$ is $\Theta( \log k)$ is at least $poly(k, \frac{1}{\delta})$, showing that our result is tight in this case.
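The algorithm above comes with rigorous guarantees down to constant separation Delta; the sketch below is only an empirical baseline showing the task in the easy, well-separated regime: draw samples from k spherical unit-variance Gaussians and recover the centers with k-means from scipy. All sizes and the separation scale are illustrative and much more generous than the regime the theorem addresses.

```python
# Minimal empirical baseline (NOT the paper's algorithm): center recovery for
# well-separated spherical Gaussians via k-means.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
k, d, n_per = 4, 16, 500
true_centers = rng.standard_normal((k, d)) * 3.0 * np.sqrt(d)  # well separated
samples = np.vstack([c + rng.standard_normal((n_per, d)) for c in true_centers])

est_centers, labels = kmeans2(samples, k, minit="++", seed=2)

# Match each true center to its nearest estimate and report the l2 error,
# to be compared against the sqrt(d)-scale separation of the centers.
for c in true_centers:
    err = np.min(np.linalg.norm(est_centers - c, axis=1))
    print(f"l2 error of recovered center: {err:.3f} (sqrt(d) = {np.sqrt(d):.1f})")
```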
Symmetry Adapted Perturbation Theory (SAPT) has become an important tool when predicting and analyzing intermolecular interactions. Unfortunately, DFT-SAPT, which uses Density Functional Theory (DFT) for the underlying monomers, has some arbitrariness concerning the exchange-correlation potential and the exchange-correlation kernel involved. By using ab initio Brueckner Doubles densities and constructing Kohn-Sham orbitals via the Zhao-Morrison-Parr (ZMP) method, we are able to lift the dependence of DFT-SAPT on DFT exchange-correlation potential models in first order. This way, we can compute the monomers at the Coupled-Cluster level of theory and utilize SAPT for the intermolecular interaction energy. The resulting ZMP-SAPT approach is tested for small dimer systems involving rare gas atoms, cations, and anions and shown to compare well with the Tang-Toennies model and coupled cluster results.
Let $\Bbb Z$ and $\Bbb N$ be the set of integers and the set of positive integers, respectively. For $a_1,a_2,\ldots,a_k,n\in\Bbb N$ let $N(a_1,a_2,\ldots,a_k;n)$ be the number of representations of $n$ by $a_1x_1^2+a_2x_2^2+\cdots+a_kx_k^2$, and let $t(a_1,a_2,\ldots,a_k;n)$ be the number of representations of $n$ by $a_1\frac{x_1(x_1-1)}2+a_2\frac{x_2(x_2-1)}2+\cdots+a_k\frac{x_k(x_k-1)}2$ ($x_1,\ldots,x_k\in\Bbb Z$). In this paper, by using Ramanujan's theta functions $\varphi(q)$ and $\psi(q)$ we reveal many relations between $t(a_1,a_2,\ldots,a_k;n)$ and $N(a_1,a_2,\ldots,a_k;8n+a_1+\cdots+a_k)$ for $k=3,4$.
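The two counting functions above can be checked by direct enumeration. The sketch below implements the definitions verbatim and tabulates the classical case $a_1=a_2=a_3=1$, where $t(1,1,1;n)$ equals $N(1,1,1;8n+3)$; the search bounds follow from the defining equations.

```python
# Brute-force counts of representations by weighted squares (N) and by
# weighted triangular-type numbers x(x-1)/2 (t), exactly as defined above.
from itertools import product

def N(a, n):
    k = len(a)
    bound = int(max(n // ai for ai in a) ** 0.5) + 1   # |x_i| <= sqrt(n/a_i)
    return sum(1 for xs in product(range(-bound, bound + 1), repeat=k)
               if sum(ai * x * x for ai, x in zip(a, xs)) == n)

def t(a, n):
    k = len(a)
    bound = 1
    while min(a) * bound * (bound - 1) // 2 <= n:      # grow until too large
        bound += 1
    return sum(1 for xs in product(range(-bound, bound + 1), repeat=k)
               if sum(ai * x * (x - 1) // 2 for ai, x in zip(a, xs)) == n)

for n in range(6):
    print(n, t((1, 1, 1), n), N((1, 1, 1), 8 * n + 3))   # columns agree
```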
Quantitative data is frequently represented using color, yet designing effective color mappings is a challenging task, requiring one to balance perceptual standards with personal color preference. Current design tools either overwhelm novices with complexity or offer limited customization options. We present ColorMaker, a mixed-initiative approach for creating colormaps. ColorMaker combines fluid user interaction with real-time optimization to generate smooth, continuous color ramps. Users specify their loose color preferences while leaving the algorithm to generate precise color sequences, meeting both designer needs and established guidelines. ColorMaker can create new colormaps, including designs accessible for people with color-vision deficiencies, starting from scratch or with only partial input, thus supporting ideation and iterative refinement. We show that our approach can generate designs with similar or superior perceptual characteristics to standard colormaps. A user study demonstrates how designers of varying skill levels can use this tool to create custom, high-quality colormaps. ColorMaker is available at https://colormaker.org
We analyze the modular properties of the effective CFT description for paired states, proposed in cond-mat/0003453, corresponding to the non-standard filling nu = 1/(p+1). We construct its characters for the twisted and the untwisted sector and the diagonal partition function. We show that the degrees of freedom entering our partition function naturally complete a Z_2-orbifold construction of the CFT for the Halperin state. The different behaviours of the p even and p odd cases are also studied. Finally, it is shown that the tunneling phenomenon selects out a twist-invariant CFT, which is identified with the Moore-Read model.
In the present work, we provide results from specific heat, magnetic susceptibility, dielectric constant, ac conductivity, and electrical polarization measurements performed on the lacunar spinel GaV4Se8. With decreasing temperature, we observe a transition from the paraelectric and paramagnetic cubic state into a polar, probably ferroelectric state at 42 K followed by magnetic ordering at 18 K. The polar transition is likely driven by the Jahn-Teller effect due to the degeneracy of the V4 cluster orbitals. The excess polarization arising in the magnetic phase indicates considerable magnetoelectric coupling. Overall, the behavior of GaV4Se8 in many respects is similar to that of the skyrmion host GaV4S8, exhibiting a complex interplay of orbital, spin, lattice, and polar degrees of freedom. However, its dielectric behavior at the polar transition markedly differs from that of the Jahn-Teller driven ferroelectric GeV4S8, which can be ascribed to the dissimilar electronic structure of the Ge compound.
Recent proposals by C.S. Unnikrishnan concerning locality and Bell's theorem are critically analysed.
Bayesian analyses are often performed using so-called noninformative priors, with a view to achieving objective inference about unknown parameters on which the available data depend. Noninformative priors depend on the relationship of the data to the parameters over the sample space. Standard Bayesian updating (multiplying an existing posterior density for the parameters by a likelihood function derived from independent new data and renormalizing), combined with the use of noninformative priors, gives rise to an inconsistency when the existing and new data depend on continuous parameters in different ways. In such cases, the noninformative priors for inference from only the existing data and from only the new data differ, so Bayesian updating gives different final posterior densities depending on which data set is used to derive the initial posterior and which is used to update it. I propose a revised Bayesian updating method, which resolves this inconsistency by updating the prior as well as the likelihood function, and which involves only a single application of Bayes' theorem. The revised method is also applicable where actual prior information about the parameter values exists and inference that objectively reflects the existing information as well as the new data is sought. I demonstrate by numerical testing, in two cases, the superior probability-matching performance of the proposed revised updating method.
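Schematically (notation mine, not the paper's): let $\pi_x$, $\pi_y$, and $\pi_{x,y}$ denote the noninformative priors appropriate to the existing data $x$, the new data $y$, and the combined experiment, with likelihoods $L_x$ and $L_y$. Standard updating gives

$$p_1(\theta\mid x,y)\propto\pi_x(\theta)\,L_x(\theta)\,L_y(\theta)\quad\text{or}\quad p_2(\theta\mid x,y)\propto\pi_y(\theta)\,L_x(\theta)\,L_y(\theta),$$

depending on which data set is treated as "existing", and these differ whenever $\pi_x\neq\pi_y$. A single application of Bayes' theorem with the prior matched to the combined data,

$$p(\theta\mid x,y)\propto\pi_{x,y}(\theta)\,L_x(\theta)\,L_y(\theta),$$

is order-invariant by construction, which is the essence of the revised method described above.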
Based on a variant of the frequency function, we improve the vanishing order of solutions of Schr\"{o}dinger equations, which describes the quantitative behavior of the strong unique continuation property. For the first time, we investigate the quantitative uniqueness of higher order elliptic equations and establish the vanishing order of solutions. Furthermore, strong unique continuation is established for higher order elliptic equations using this variant of the frequency function.
We show that negative refraction with minimal absorption can be obtained by means of quantum interference effects similar to electromagnetically induced transparency. Coupling a magnetic dipole transition coherently with an electric dipole transition leads to electromagnetically induced chirality, which can provide negative refraction without requiring negative permeability, and also suppresses absorption. This technique allows negative refraction in the optical regime at densities where the magnetic susceptibility is still small and with refraction/absorption ratios that are orders of magnitude larger than those achievable previously. Furthermore, the value of the refractive index can be fine-tuned via external laser fields, which is essential for practical realization of sub-diffraction-limit imaging.
Temporally and spatially resolved measurements of protein transport inside cells provide important clues to the functional architecture and dynamics of biological systems. The Fluorescence Recovery After Photobleaching (FRAP) technique has been used over the past three decades to measure the mobility of macromolecules, as well as protein transport and interaction with immobile structures, inside the cell nucleus. A theoretical model is presented that aims to describe protein transport inside the nucleus, a process influenced by the presence of a boundary (i.e. the membrane). A set of reaction-diffusion equations is employed to model both the diffusion of proteins and their interaction with immobile binding sites. The proposed model is designed for biological samples imaged with a Confocal Laser Scanning Microscope (CLSM) capable of bleaching regions with a scanning beam that has a radially Gaussian intensity profile. The proposed model leads to FRAP curves that depend on the on- and off-rates. Semi-analytical expressions are used to define the boundaries of the on- (off-) rate parameter space in simplified cases where molecules move within a bounded domain. The theoretical model can be used in conjunction with experimental data acquired by CLSM to investigate the biophysical properties of proteins in living cells.
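As a toy illustration of how on- and off-rates shape a FRAP curve, the sketch below evaluates the well-known reaction-dominant recovery $1-C_{\rm eq}e^{-k_{\rm off}t}$ for a well-mixed two-state (free/bound) model; this neglects diffusion and the boundary effects that are central to the model above, and the rate values are assumptions.

```python
# Toy illustration (assumed rate values): reaction-dominant FRAP recovery for a
# well-mixed two-state (free/bound) model; diffusion and boundaries neglected.
import numpy as np
import matplotlib.pyplot as plt

k_on, k_off = 0.5, 0.1            # hypothetical pseudo-on and off rates (1/s)
C_eq = k_on / (k_on + k_off)      # equilibrium bound fraction
t = np.linspace(0.0, 60.0, 300)   # time after the bleach (s)
frap = 1.0 - C_eq * np.exp(-k_off * t)

plt.plot(t, frap)
plt.xlabel('time after bleach (s)')
plt.ylabel('normalized fluorescence')
plt.title('reaction-dominant FRAP recovery (toy model)')
plt.show()
```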
Elastic stress concentration at tips of long slender objects moving in viscoelastic fluids has been observed in numerical simulations, but despite the prevalence of flagellated motion in complex fluids in many biological functions, the physics of stress accumulation near tips has not been analyzed. Here we theoretically investigate elastic stress development at tips of slender objects by computing the leading order viscoelastic correction to the equilibrium viscous flow around long cylinders, using the weak-coupling limit. In this limit, nonlinearities in the fluid are retained, allowing us to study the biologically relevant parameter regime of high Weissenberg number. We calculate a stretch rate from the viscous flow around cylinders to predict when large elastic stress develops at tips, find orientation-dependent thresholds for large stress development, and show that stress accumulates more strongly near the tips of cylinders oriented parallel to the motion than of those oriented perpendicular to it.
We compare the quenched hadron spectrum on blocked and unblocked lattices for the Wilson quark action, the clover action and the tadpole-improved clover action. The latter gives a spectrum markedly closer to the original one, even though the cutoff is 1/a ~ 500 MeV.
We revisit recent claims that there is a "cold spot" in both number counts and brightness of radio sources in the NVSS survey, with location coincident with the previously detected cold spot in WMAP. Such matching cold spots would be difficult if not impossible to explain in the standard LCDM cosmological model. Contrary to the claim, we find no significant evidence for the radio cold spot, after including systematic effects in NVSS, and carefully accounting for the effect of a posteriori choices when assessing statistical significance.
We explicitly show how the chiral superstring amplitudes can be obtained through factorisation of the higher genus chiral measure induced by suitable degenerations of Riemann surfaces. This powerful tool also allows one to derive, at any genus, consistency relations involving the amplitudes and the measure. A key point concerns the choice of the local coordinate at the node on degenerate Riemann surfaces, which greatly simplifies the computations. As a first application, starting from recent ansaetze for the chiral measure up to genus five, we compute the chiral two-point function for massless Neveu-Schwarz states at genus two, three and four. For genus higher than three, these computations include some new corrections to the conjectural formulae that have appeared so far in the literature. After GSO projection, the two-point function vanishes at genus two and three, as expected from space-time supersymmetry arguments, but not at genus four. This suggests that the ansatz for the superstring measure should be corrected for genus higher than four.
We present the direct imaging discovery of a low-mass companion to the nearby accelerating F star, HIP 5319, using SCExAO coupled with the CHARIS, VAMPIRES, and MEC instruments in addition to Keck/NIRC2 imaging. CHARIS $JHK$ (1.1-2.4 $\mu$m) spectroscopic data combined with VAMPIRES 750 nm, MEC $Y$, and NIRC2 $L_{\rm p}$ photometry is best matched by an M3--M7 object with an effective temperature of T=3200 K and surface gravity log($g$)=5.5. Using the relative astrometry for HIP 5319 B from CHARIS and NIRC2 and absolute astrometry for the primary from $Gaia$ and $Hipparcos$ and adopting a log-normal prior assumption for the companion mass, we measure a dynamical mass for HIP 5319 B of $31^{+35}_{-11}M_{\rm J}$, a semimajor axis of $18.6^{+10}_{-4.1}$ au, an inclination of $69.4^{+5.6}_{-15}$ degrees, and an eccentricity of $0.42^{+0.39}_{-0.29}$. However, using an alternate prior for our dynamical model yields a much higher mass of 128$^{+127}_{-88}M_{\rm J}$. Using data taken with the LCOGT NRES instrument we also show that the primary HIP 5319 A is a single star in contrast to previous characterizations of the system as a spectroscopic binary. This work underscores the importance of assumed priors in dynamical models for companions detected with imaging and astrometry and the need to have an updated inventory of system measurements.
We define a new algebra of noncommutative differential forms for any Hopf algebra with an invertible antipode. We prove that there is a one-to-one correspondence between anti-Yetter-Drinfeld modules, which serve as coefficients for the Hopf cyclic (co)homology, and modules which admit a flat connection with respect to our differential calculus. Thus we show that these coefficient modules can be regarded as ``flat bundles'' in the sense of Connes' noncommutative differential geometry.
Consider the following interacting particle system on the $d$-ary tree, known as the frog model: Initially, one particle is awake at the root and i.i.d. Poisson many particles are sleeping at every other vertex. Particles that are awake perform simple random walks, awakening any sleeping particles they encounter. We prove that there is a phase transition between transience and recurrence as the initial density of particles increases, and we give the order of the transition up to a logarithmic factor.
Because of the significant increase in the size and complexity of networks, the distributed computation of eigenvalues and eigenvectors of graph matrices has become very challenging, yet it remains as important as before. In this paper we develop efficient distributed algorithms to detect, with high resolution, closely situated eigenvalues and the corresponding eigenvectors of symmetric graph matrices. We model graph spectral computation as a physical system with Lagrangian and Hamiltonian dynamics. The spectrum of the Laplacian matrix, in particular, is framed as a classical spring-mass system with Lagrangian dynamics. The spectrum of any general symmetric graph matrix turns out to have a simple connection with quantum systems, and it can thus be formulated as the solution to a Schr\"odinger-type differential equation. Taking into account the high resolution required in the spectrum computation and the related stability issues in the numerical solution of the underlying differential equation, we propose applying symplectic integrators to the calculation of the eigenspectrum. The effectiveness of the proposed techniques is demonstrated with numerical simulations on real-world networks of different sizes and complexities.
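A minimal sketch of the spring-mass idea (an assumed toy setup, not the paper's distributed algorithm): integrate $\ddot{x}=-Lx$ with the symplectic leapfrog scheme and read the Laplacian eigenvalues off the oscillation frequencies $\sqrt{\lambda_i}$ in the Fourier spectrum of a trajectory.

```python
# Illustrative sketch (assumed toy setup): the Laplacian spectrum via the
# classical spring-mass analogy x'' = -L x, integrated with the symplectic
# leapfrog (velocity Verlet) scheme; eigenvalues appear as FFT peaks at
# angular frequencies sqrt(lambda_i).
import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(30, 0.2, seed=1)
L = nx.laplacian_matrix(G).toarray().astype(float)
n = L.shape[0]

dt, steps = 0.01, 2**14
rng = np.random.default_rng(0)
x = rng.normal(size=n)
x -= x.mean()                     # remove the zero-mode (constant eigenvector)
v = np.zeros(n)
trace = np.empty(steps)

for s in range(steps):            # leapfrog integration, symplectic and stable
    v += 0.5 * dt * (-L @ x)
    x += dt * v
    v += 0.5 * dt * (-L @ x)
    trace[s] = x[0]

omega = 2 * np.pi * np.fft.rfftfreq(steps, d=dt)   # angular frequencies
power = np.abs(np.fft.rfft(trace))
peaks = omega[power > 0.1 * power.max()]           # coarse peak read-off
print('estimated sqrt(eigenvalues):', np.unique(np.round(peaks, 1)))
print('exact sqrt(eigenvalues):   ',
      np.round(np.sqrt(np.clip(np.linalg.eigvalsh(L), 0, None)), 1))
```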
In 1992, Agashe and Chafle introduced the concept of a semi-symmetric non-metric connection [1]. This semi-symmetric non-metric connection does not satisfy Schur's theorem. The purpose of the present paper is to study some properties of a new semi-symmetric non-metric connection satisfying Schur's theorem in a Riemannian manifold. We also give a necessary and sufficient condition for a Riemannian manifold with a semi-symmetric non-metric connection to be a manifold of constant curvature.
We report the formation of bound star clusters in a sample of high-resolution cosmological zoom-in simulations of z>5 galaxies from the FIRE project. We find that bound clusters preferentially form in high-pressure clouds with gas surface densities over 10^4 Msun pc^-2, where the cloud-scale star formation efficiency is near unity and young stars born in these regions are gravitationally bound at birth. These high-pressure clouds are compressed by feedback-driven winds and/or collisions of smaller clouds/gas streams in highly gas-rich, turbulent environments. The newly formed clusters follow a power-law mass function of dN/dM~M^-2. The cluster formation efficiency is similar across galaxies with stellar masses of ~10^7-10^10 Msun at z>5. The age spread of cluster stars is typically a few Myr and increases with cluster mass. The metallicity dispersion of cluster members is ~0.08 dex in [Z/H] and does not depend significantly on cluster mass. Our findings support the scenario that present-day old globular clusters (GCs) were formed during relatively normal star formation in high-redshift galaxies. Simulations with a stricter/looser star formation model form a factor of a few more/fewer bound clusters per stellar mass formed, while the shape of the mass function is unchanged. Simulations with a lower local star formation efficiency form more stars in bound clusters. The simulated clusters are larger than observed GCs due to finite resolution. Our simulations are among the first cosmological simulations that form bound clusters self-consistently in a wide range of high-redshift galaxies.
We test a model of inflation with a fast-rolling kinetic-dominated initial condition against data from Planck using Markov chain Monte Carlo parameter estimation. We test both an $m^2 \phi^2$ potential and the $R+R^2$ gravity model and perform a full numerical calculation of both the scalar and tensor primordial power spectra. We find a slight (though not significant) improvement in fit for this model over the standard eternal slow-roll case.
In many models of the Calvin cycle of photosynthesis it is observed that there are solutions where concentrations of key substances belonging to the cycle tend to zero at late times, a phenomenon known as overload breakdown. In this paper we prove theorems about the existence and non-existence of solutions of this type and obtain information on which concentrations tend to zero when overload breakdown occurs. As a starting point we take a model of Pettersson and Ryde-Pettersson which seems to be prone to overload breakdown and a modification of it due to Poolman which was intended to avoid this effect.
A conceptual framework for variational formulations of physical theories is proposed. Such a framework is displayed here just for statics, but it is designed to be subsequently adapted to variational formulations of static field theories and dynamics.
Power systems are growing rapidly, due to the ever-increasing demand for electrical power. These systems require novel methodologies and modern tools and technologies to perform better, particularly for communication among their different parts. Power systems are therefore facing new challenges, such as energy trading and marketing and cyber threats. Using blockchain in power systems is one of the newest proposed solutions. Most studies aim to investigate innovative approaches to blockchain application in power systems. Although many articles have been published in support of these research activities, there has been no bibliometric analysis identifying the research trends. This paper presents a bibliometric analysis of the literature on blockchain applications in power systems indexed in the Web of Science (WoS) database between January 2009 and July 2019. The paper discusses the research activity and provides a detailed analysis of the number of articles published, citations, institutions, research areas, and authors. From the analysis, it is concluded that research activity in China and the USA has had a significantly greater impact than that of other countries.
We study the spherical collapse of an over-density of a barotropic fluid with linear equation of state in a cosmological background. Fully relativistic simulations are performed by using the Baumgarte-Shibata-Shapiro-Nakamura formalism jointly with the Valencia formulation of the hydrodynamics. This permits us to test the universality of the critical collapse with respect to the matter type by considering the constant equation of state parameter $\omega$ as a control parameter. We exhibit, for a fixed radial profile of the energy-density contrast, the existence of a critical value $\omega^*$ of the equation of state parameter below which the fluctuation collapses to a black hole and above which it dilutes. It is shown numerically that the mass of the formed black hole, for subcritical solutions, obeys a scaling law $M\propto |\omega - \omega^*|^\gamma$ with a critical exponent $\gamma$ independent of the matter type, revealing the universality. This universal scaling law is shown to hold in the empty Minkowski and de Sitter space-times. For the matter-filled Einstein-de Sitter universe, the universality is not observed if conformally flat sub-horizon initial conditions are used, but when super-horizon initial conditions computed from the long-wavelength approximation are used, the universality appears to hold.
Let g and p be non-negative integers. Let A(g,p) denote the group consisting of all those automorphisms of the free group on {t_1,...,t_p, x_1,...,x_g, y_1,...,y_g} which fix the element t_1t_2...t_p[x_1,y_1]...[x_g,y_g] and permute the set of conjugacy classes {[t_1],...,[t_p]}. Labru\`ere and Paris, building on work of Artin, Magnus, Dehn, Nielsen, Lickorish, Zieschang, Birman, Humphries, and others, showed that A(g,p) is generated by a set that is called the ADLH set. We use methods of Zieschang and McCool to give a self-contained, algebraic proof of this result. Labru\`ere and Paris also gave defining relations for the ADLH set in A(g,p); we do not know an algebraic proof of this for g > 1. Consider an orientable surface S(g,p) of genus g with p punctures, such that (g,p) is not (0,0) or (0,1). The algebraic mapping-class group of S(g,p), denoted M(g,p), is defined as the group of all those outer automorphisms of the one-relator group with generating set {t_1,...,t_p, x_1,...,x_g, y_1,...,y_g} and relator t_1t_2...t_p[x_1,y_1]...[x_g,y_g] which permute the set of conjugacy classes {[t_1],...,[t_p]}. It now follows from a result of Nielsen that M(g,p) is generated by the image of the ADLH set together with a reflection. This gives a new way of seeing that M(g,p) equals the (topological) mapping-class group of S(g,p), along lines suggested by Magnus, Karrass, and Solitar in 1966.
In this work we improve existing calculations of radiative energy loss by computing, in a rigorous way, corrections that implement energy-momentum conservation, which previously was imposed only a posteriori. Using the path-integral formalism, we compute in-medium splittings allowing transverse motion of all particles in the emission process, thus relaxing the assumption that only the softest particle is permitted such movement. This work constitutes the extension of the computation carried out for x$\rightarrow$1 in Phys. Lett. B718 (2012) 160-168 to all values of x, the momentum fraction of the energy of the parent parton carried by the emitted gluon. In order to accomplish a general description of the whole in-medium showering process, in this work we allow for arbitrary formation times for the emitted gluon. We provide general expressions and their realisation in the path-integral formalism within the harmonic oscillator approximation.
We investigate the birth and evolution of Galactic isolated radio pulsars. We begin by estimating their birth space velocity distribution from proper motion measurements of Brisken et al. (2002, 2003). We find no evidence for multimodality of the distribution and favor one in which the absolute one-dimensional velocity components are exponentially distributed, with a three-dimensional mean velocity of 380^{+40}_{-60} km s^-1. We then proceed with a Monte Carlo-based population synthesis, modelling the birth properties of the pulsars, their time evolution, and their detection in the Parkes and Swinburne Multibeam surveys. We present a population model that appears generally consistent with the observations. Our results suggest that pulsars are born in the spiral arms, with a Galactocentric radial distribution that is well described by the functional form proposed by Yusifov & Kucuk (2004), in which the pulsar surface density peaks at radius ~3 kpc. The birth spin period distribution extends to several hundred milliseconds, with no evidence of multimodality. Models which assume the radio luminosities of pulsars to be independent of the spin periods and period derivatives are inadequate, as they lead to the detection of too many old pulsars in our simulations. Dithered radio luminosities proportional to the square root of the spin-down luminosity accommodate the observations well and provide a natural mechanism for the pulsars to dim uniformly as they approach the death line, avoiding an observed pile-up on the latter. There is no evidence for significant torque decay (due to magnetic field decay or otherwise) over the lifetime of the pulsars as radio sources (~100 Myr). Finally, we estimate the pulsar birthrate and total number of pulsars in the Galaxy.
Stellar-mass primordial black holes (PBHs) from the early Universe can directly contribute to the gravitational wave (GW) events observed by LIGO, but can only comprise a subdominant component of the dark matter (DM). The primary DM constituent will generically form massive halos around seeding stellar-mass PBHs. We demonstrate that gravitational lensing of sources at cosmological ($\gtrsim$Gpc) distances can directly probe the DM halos dressing PBHs, which is challenging for lensing of local sources in the vicinity of the Milky Way. Strong lensing analysis of fast radio bursts detected by the CHIME survey already starts to probe the parameter space of dressed stellar-mass PBHs, and upcoming searches can efficiently explore dressed PBHs over the $\sim 10-10^5 M_{\odot}$ mass range and provide a stringent test of the PBH scenario for the GW events. Our findings establish a general test for a broad class of DM models with stellar-mass PBHs, including those where QCD axions or WIMP-like particles comprise the predominant DM. The results open a new route for exploring dressed PBHs with various types of lensing events at cosmological distances, such as supernovae and caustic crossings.
Similarity search in high-dimensional spaces is a pivotal operation found in a variety of database applications. Recently, there has been increased interest in similarity search for online content-based multimedia services. These services, however, introduce new challenges: very large volumes of data must be indexed and searched, and the response times observed by end-users must be minimized. Additionally, users interact with the systems dynamically, creating fluctuating query request rates and requiring the search algorithm to adapt in order to better utilize the underlying hardware and reduce response times. To address these challenges, we introduce Hypercurves, a flexible framework for answering approximate k-nearest neighbor (kNN) queries over very large multimedia databases, aimed at online content-based multimedia services. Hypercurves executes in hybrid CPU--GPU environments and is able to employ both devices cooperatively to support massive query request rates. To keep response times optimal as the request rate varies, it employs a novel dynamic scheduler to partition the work between CPU and GPU. Hypercurves was thoroughly evaluated using a large database of multimedia descriptors. Its cooperative CPU--GPU execution achieved performance improvements of up to 30x compared to the single CPU-core version. The dynamic work-partition mechanism reduces observed query response times by about 50% compared to the best static CPU--GPU task-partition configuration. In addition, Hypercurves achieves superlinear scalability in distributed (multi-node) executions, while maintaining a high guarantee of equivalence with its sequential version, thanks to a proof of probabilistic equivalence, which supported its aggressive parallelization design.
We present a sample of 8321 candidate Field Horizontal-Branch (FHB) stars selected by automatic spectral classification in the digital database of the Hamburg/ESO objective-prism survey. The stars are distributed over 8225 square degrees of the southern sky, at |b| > 30 deg. The average distance of the sample, assuming that they are all FHB stars, is 9.8 kpc, and distances of up to ~30 kpc are reached. Moderate-resolution spectroscopic follow-up observations and UBV photometry of 125 test sample stars demonstrate that the contamination of the full candidate sample with main-sequence A-type stars is < 16%, while it would be up to 50% in a flux-limited sample at high galactic latitudes. Hence more than ~6800 of our FHB candidates are expected to be genuine FHB stars. The candidates are being used as distance probes for high-velocity clouds and for studies of the structure and kinematics of the Galactic halo.
We propose a simple theory for the dynamics of model glass-forming fluids, which should be solvable using a mean-field-like approach. The theory is based on transparent physical assumptions, which can be tested in computer simulations. The theory predicts an ergodicity-breaking transition that is identical to the so-called dynamic transition predicted within the replica approach. Thus, it can provide the missing dynamic component of the random first order transition framework. In the large-dimensional limit the theory reproduces the result of a recent exact calculation of Maimbourg et al. [PRL 116, 015902 (2016)]. Our approach provides an alternative, physically motivated derivation of this result.
We investigate the influence of external fields on the stability of spin helix states in an XXZ Heisenberg model. Exact diagonalization on finite systems shows that random transverse fields in the x and y directions drive the transition from integrability to nonintegrability. This results in the fast decay of a static helix state, which is an eigenstate of the unperturbed XXZ Heisenberg model. However, in the presence of a uniform z field, the static helix state becomes a dynamic helix state with a relatively long lifetime.
We study polar coding for stochastic processes with memory. For example, a process may be defined by the joint distribution of the input and output of a channel. The memory may be present in the channel, the input, or both. We show that $\psi$-mixing processes polarize under the standard Ar\i{}kan transform, under a mild condition. We further show that the rate of polarization of the \emph{low-entropy} synthetic channels is roughly $O(2^{-\sqrt{N}})$, where $N$ is the blocklength. That is, essentially the same rate as in the memoryless case.
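For orientation, the standard memoryless-case construction referenced above: Ar\i{}kan's single-step transform combines two inputs as $(u_1,u_2)\mapsto(u_1\oplus u_2,\,u_2)$, and the blocklength-$N$ transform is

$$G_N=B_N\begin{pmatrix}1&0\\1&1\end{pmatrix}^{\otimes n},\qquad N=2^n,$$

where $B_N$ is the bit-reversal permutation. The conditional entropies of the synthetic channels polarize towards $0$ or $1$; the result above is that this picture survives for $\psi$-mixing processes, with the low-entropy channels polarizing at rate roughly $O(2^{-\sqrt{N}})$, essentially as in the memoryless case.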
PG 0014+067 is one of the most promising pulsating subdwarf B stars for seismic analysis, as it has a rich pulsation spectrum. The richness of its pulsations, however, poses a fundamental challenge to understanding the pulsations of these stars, as the mode density is too complex to be explained only with radial and nonradial low degree (l < 3) p-modes without rotational splittings. One proposed solution, for the case of PG 0014+067 in particular, assigns some modes with high degree (l=3). On the other hand, theoretical models of sdB stars suggest that they may retain rapidly rotating cores, and so the high mode density may result from the presence of a few rotationally-split triplet (l=1), quintuplet (l=2) modes, along with radial (l=0) p-modes. To examine alternative theoretical models for these stars, we need better frequency resolution and denser longitude coverage. Therefore, we observed this star with the Whole Earth Telescope for two weeks in October 2004. In this paper we report the results of Whole Earth Telescope observations of the pulsating subdwarf B star PG 0014+067. We find that the frequencies seen in PG 0014+067 do not appear to fit any theoretical model currently available; however, we find a simple empirical relation that is able to match all of the well-determined frequencies in this star.
We study the response of a thin superconducting amorphous InO film with variable oxygen content to a parallel magnetic field. A field-induced superconductor-insulator transition (SIT) is observed that is very similar to the one in normal magnetic fields. As the boson-vortex duality, which is the keystone of the theory of the field-induced SIT, is obviously absent in the parallel configuration, we are led to conclude that the theory is insufficient.
Universe structure emerges in the unreduced, complex-dynamic interaction process with the simplest initial configuration (two attracting homogeneous fields, quant-ph/9902015). The unreduced interaction analysis gives intrinsically creative cosmology, describing the real, explicitly emerging world structure with dynamic randomness on each scale. Without imposing any postulates or entities, we obtain physically real space, time, elementary particles with their detailed structure and intrinsic properties, causally complete and unified version of quantum and relativistic behaviour, the origin and number of naturally unified fundamental forces, and classical behaviour emergence in closed systems (gr-qc/9906077). Main problems of standard cosmology and astrophysics are consistently solved in this extended picture (without introduction of any additional entities), including those of quantum cosmology and gravity, entropy growth and time, "hierarchy" of elementary particles, "anthropic" difficulties, space-time flatness, and "missing" ("dark") mass and energy. The observed universe structure and laws can be presented as manifestations of the universal symmetry (conservation) of complexity providing the unified, irregular, but exact (never "broken") Order of the World (physics/0404006).
Let $G$~be a real Lie group and let $G^\circ$ be the identity component of~$G$. Let $G$~act on a $C^\infty$ real manifold~$M$. Assume the action is $C^\infty$. Assume that the fixpoint set of any nontrivial element of~$G^\circ$ has empty interior in~$M$. Let $n:=\dim G$. Assume $n\ge1$. Let $F$ be the frame bundle of~$M$ of order $n-1$. We prove: there exists a $G$-invariant dense open subset~$Q$ of~$F$ such that the $G$-action on $Q$ has discrete stabilizers.
We use the shear transformation zone (STZ) theory of dynamic plasticity to study the necking instability in a two-dimensional strip of amorphous solid. Our Eulerian description of large-scale deformation allows us to follow the instability far into the nonlinear regime. We find a strong rate dependence; the higher the applied strain rate, the further the strip extends before the onset of instability. The material hardens outside the necking region, but the description of plastic flow within the neck is distinctly different from that of conventional time-independent theories of plasticity.
We applied computer analysis to classify the broad morphological type of ~3,000,000 SDSS galaxies. The catalog provides for each galaxy the DR8 object ID, right ascension, declination, and the certainty of the automatic classification to spiral or elliptical. The certainty of the classification allows controlling the accuracy of a subset of galaxies by sacrificing some of the least certain classifications. The accuracy of the catalog was tested using galaxies that were classified by the manually annotated Galaxy Zoo catalog. The results show that the catalog contains ~900,000 spiral galaxies and ~600,000 elliptical galaxies with classification certainty that has a statistical agreement rate of ~98% with Galaxy Zoo debiased 'superclean' dataset. That also demonstrates the ability of computers to turn large datasets of galaxy images into structured catalogs of galaxy morphology. The catalog can be downloaded at http://vfacstaff.ltu.edu/lshamir/data/morph_catalog , and can be accessed through public tables on CAS: public.broadMorph.LargeGM, public.broadMorph.LargeWnnGM, and public.broadMorph.SpectraGM. The image analysis software that was used to create the catalog is also publicly available.
We derive the braid relations of the charged anyons interacting with a magnetic field on Riemann surfaces. The braid relations are used to calculate the quasiparticle's spin in the fractional quantum Hall states on Riemann surfaces. The quasiparticle's spin is found to be topologically independent and to satisfy physical restrictions.
Active fire detection in satellite imagery is of critical importance to the management of environmental conservation policies, supporting decision-making and law enforcement. This is a well-established field, with many techniques having been proposed over the years, usually based on pixel- or region-level comparisons involving sensor-specific thresholds and neighborhood statistics. In this paper, we address the problem of active fire detection using deep learning techniques. In recent years, deep learning techniques have enjoyed enormous success in many fields, but their use for active fire detection is relatively new, with open questions and a demand for datasets and architectures for evaluation. This paper addresses these issues by introducing a new large-scale dataset for active fire detection, with over 150,000 image patches (more than 200 GB of data) extracted from Landsat-8 images captured around the world in August and September 2020, containing wildfires in several locations. The dataset was split into two parts: the first contains 10-band spectral images with associated outputs produced by three well-known handcrafted algorithms for active fire detection, and the second contains manually annotated masks. We also present a study on how different convolutional neural network architectures can be used to approximate these handcrafted algorithms, and how models trained on automatically segmented patches can be combined to achieve better performance than the original algorithms, with the best combination having 87.2% precision and 92.4% recall on our manually annotated dataset. The proposed dataset, source codes, and trained models are available on GitHub (https://github.com/pereira-gha/activefire), creating opportunities for further advances in the field.
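As one hypothetical illustration of combining per-algorithm network outputs (the paper's actual fusion rule may differ), a pixel-wise majority vote over thresholded binary fire masks could look like the following; shapes and the random stand-in predictions are assumptions.

```python
# Hypothetical sketch: majority voting over binary fire masks predicted by
# networks trained to approximate the three handcrafted algorithms.
import numpy as np

def combine_masks(masks: np.ndarray) -> np.ndarray:
    """Majority vote over an (n_models, H, W) stack of {0,1} masks."""
    votes = masks.sum(axis=0)
    return (votes >= (masks.shape[0] + 1) // 2).astype(np.uint8)

rng = np.random.default_rng(0)
preds = (rng.random((3, 256, 256)) > 0.5).astype(np.uint8)  # stand-in outputs
fire_mask = combine_masks(preds)
print('fraction of pixels flagged as fire:', fire_mask.mean())
```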
The fourth generation of cell phones, marketed as 4G/LTE (Long-Term Evolution), is being quickly adopted worldwide. Given the mobile and wireless nature of the involved communications, security is crucial. This paper includes both a theoretical study and a practical analysis of the SNOW 3G generator, included in such a standard for protecting confidentiality and integrity. From its implementation and performance evaluation on mobile devices, several conclusions about how to improve its efficiency are obtained.
While statisticians and quantitative social scientists typically study the "effects of causes" (EoC), Lawyers and the Courts are more concerned with understanding the "causes of effects" (CoE). EoC can be addressed using experimental design and statistical analysis, but it is less clear how to incorporate statistical or epidemiological evidence into CoE reasoning, as might be required for a case at Law. Some form of counterfactual reasoning, such as the "potential outcomes" approach championed by Rubin, appears unavoidable, but this typically yields "answers" that are sensitive to arbitrary and untestable assumptions. We must therefore recognise that a CoE question simply might not have a well-determined answer. It is nevertheless possible to use statistical data to set bounds within which any answer must lie. With less than perfect data these bounds will themselves be uncertain, leading to a compounding of different kinds of uncertainty. Still further care is required in the presence of possible confounding factors. In addition, even identifying the relevant "counterfactual contrast" may be a matter of Policy as much as of Science. Defining the question is as non-trivial a task as finding a route towards an answer. This paper develops some technical elaborations of these philosophical points, and illustrates them with an analysis of a case study in child protection. Keywords: benfluorex, causes of effects, counterfactual, child protection, effects of causes, Fréchet bound, potential outcome, probability of causation
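For a binary exposure $X$ and outcome $Y$, one standard form of such bounds on the probability of causation $\mathrm{PC}$ is the Fréchet-type (Tian-Pearl) bound under exogeneity, shown here only for orientation (the paper's own bounds, incorporating data uncertainty and confounding, may differ):

$$\max\!\left\{0,\;1-\frac{P(Y=1\mid X=0)}{P(Y=1\mid X=1)}\right\}\;\le\;\mathrm{PC}\;\le\;\min\!\left\{1,\;\frac{P(Y=0\mid X=0)}{P(Y=1\mid X=1)}\right\}.$$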
Dataflow matrix machines are self-referential generalized recurrent neural nets. The self-referential mechanism is provided via a stream of matrices defining the connectivity and weights of the network in question. A natural question is: what should play the role of untyped lambda-calculus for this programming architecture? The proposed answer is a discipline of programming with only one kind of streams, namely the streams of appropriately shaped matrices. This yields Pure Dataflow Matrix Machines which are networks of transformers of streams of matrices capable of defining a pure dataflow matrix machine.
We show that any element of the universal Teichm\"uller space is realized by a unique minimal Lagrangian diffeomorphism from the hyperbolic plane to itself. The proof uses maximal surfaces in the 3-dimensional anti-de Sitter space. We show that, in $AdS^{n+1}$, any subset $E$ of the boundary at infinity which is the boundary at infinity of a space-like hypersurface bounds a maximal space-like hypersurface. In $AdS^3$, if $E$ is the graph of a quasi-symmetric homeomorphism, then this maximal surface is unique, and it has negative sectional curvature. As a by-product, we find a simple characterization of quasi-symmetric homeomorphisms of the circle in terms of 3-dimensional projective geometry.
A long-duration gamma-ray burst (GRB) marks the violent end of a massive star. GRBs are rare in the universe, and their progenitor stars are thought to possess unique physical properties such as low metal content and rapid rotation, while the supernovae (SNe) that are associated with GRBs are expected to be highly aspherical. To date, it has been unclear whether GRB-SNe could be used as standardizable candles, with contrasting conclusions found by different teams. In this paper I present evidence that GRB-SNe have the potential to be used as standardizable candles, and show that a statistically significant relation exists between the brightness and width of their decomposed light curves relative to a template supernova. Every single nearby spectroscopically identified GRB-SN, for which the rest-frame and host contributions have been accurately determined, follows this relation. Additionally, it is shown that not only GRB-SNe, but perhaps all supernovae whose explosion is powered by a central engine, may eventually be used as a standardizable candle. Finally, I suggest that the use of GRB-SNe as standardizable candles likely arises from a combination of the viewing angle and a similar explosion geometry in each event, the latter of which is influenced by the explosion mechanism of GRB-SNe.
We consider a problem of recovering a high-dimensional vector $\mu$ observed in white noise, where the unknown vector $\mu$ is assumed to be sparse. The objective of the paper is to develop a Bayesian formalism which gives rise to a family of $l_0$-type penalties. The penalties are associated with various choices of the prior distributions $\pi_n(\cdot)$ on the number of nonzero entries of $\mu$ and, hence, are easy to interpret. The resulting Bayesian estimators lead to a general thresholding rule which accommodates many of the known thresholding and model selection procedures as particular cases corresponding to specific choices of $\pi_n(\cdot)$. Furthermore, they achieve optimality in a rather general setting under very mild conditions on the prior. We also specify the class of priors $\pi_n(\cdot)$ for which the resulting estimator is adaptively optimal (in the minimax sense) for a wide range of sparse sequences and consider several examples of such priors.
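Schematically (notation assumed here, not quoted from the paper), estimators of this family take the complexity-penalized least-squares form

$$\hat\mu=\arg\min_{\mu\in\mathbb{R}^n}\Big\{\|x-\mu\|_2^2+\mathrm{pen}\big(\|\mu\|_0\big)\Big\},$$

where the penalty $\mathrm{pen}(p)$ is determined by the prior $\pi_n(p)$ on the number of nonzero entries; choices such as $\mathrm{pen}(p)=\lambda p$ recover AIC/BIC-style model selection and hard-thresholding rules as special cases.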