Dataset schema (one record per paper: a title, an abstract, and six binary subject labels):

    title                  string   (length 7 to 239)
    abstract               string   (length 7 to 2.76k)
    cs                     int64    (0 or 1)
    phy                    int64    (0 or 1)
    math                   int64    (0 or 1)
    stat                   int64    (0 or 1)
    quantitative biology   int64    (0 or 1)
    quantitative finance   int64    (0 or 1)
Direct frequency-comb spectroscopy of $6S_{1/2}$-$8S_{1/2}$ transitions of atomic cesium
Direct frequency-comb spectroscopy is used to probe the absolute frequencies of $6S_{1/2}$-$8S_{1/2}$ two-photon transitions of atomic cesium in a hot-vapor environment. By utilizing a coherent-control method that temporally splits the laser spectrum above and below the two-photon resonance frequency, Doppler-free absorption is established at two spatially distinct locations and imaged for high-precision spectroscopy. Theoretical analysis finds that these transition lines are measured with uncertainty below $5\times10^{-10}$, dominated by the laser-induced AC Stark shift.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Aggregation and Disaggregation of Energetic Flexibility from Distributed Energy Resources
A variety of energy resources have been identified as flexible in their electric energy consumption or generation. This energetic flexibility can be used for various purposes, such as minimizing energy procurement costs or providing ancillary services to power grids. To fully leverage the flexibility available from distributed small-scale resources, their flexibility must be quantified and aggregated. This paper introduces a generic and scalable approach that lets flexible energy systems quantitatively describe and price their flexibility based on zonotopic sets. The proposed description allows aggregators to efficiently pool the flexibility of large numbers of systems and to make control and market decisions at the aggregate level. In addition, an algorithm is presented that distributes aggregate-level control decisions among the individual systems of the pool in an economically fair and computationally efficient way. Finally, it is shown how the zonotopic description of flexibility enables efficient computation of aggregate regulation-power bid curves.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
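The abstract above rests on pooling zonotopic flexibility sets. Below is a minimal sketch of the core operation such approaches rely on, the Minkowski sum of zonotopes, which is exact and cheap (add the centers, concatenate the generators); the Zonotope class and the toy two-step power profiles are illustrative assumptions, not the paper's code.

    import numpy as np

    class Zonotope:
        """Z = {c + G @ xi : ||xi||_inf <= 1}; c is the center, G the generator matrix."""
        def __init__(self, center, generators):
            self.c = np.asarray(center, dtype=float)
            self.G = np.asarray(generators, dtype=float)

        def minkowski_sum(self, other):
            # Minkowski sums of zonotopes are exact: add centers, stack generators.
            return Zonotope(self.c + other.c, np.hstack([self.G, other.G]))

    def aggregate(zonotopes):
        """Pool the flexibility sets of many systems into one aggregate zonotope."""
        agg = zonotopes[0]
        for z in zonotopes[1:]:
            agg = agg.minkowski_sum(z)
        return agg

    # Two toy 2-step power profiles (kW per time step), each with its own flexibility.
    z1 = Zonotope([1.0, 1.0], [[0.5, 0.0], [0.0, 0.5]])
    z2 = Zonotope([2.0, 0.0], [[0.2, 0.1], [0.1, 0.2]])
    print(aggregate([z1, z2]).c)   # aggregate baseline profile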
Mapping Walls of Indoor Environment using RGB-D Sensor
Inferring the wall configuration of an indoor environment can help a robot "understand" the environment better. This allows the robot to execute tasks that involve inter-room navigation, such as picking up an object in the kitchen. In this paper, we present a method for inferring the wall configuration from a moving RGB-D sensor. Our goal is to combine a simple wall-configuration model with a fast wall-detection method in order to obtain a system that works online, runs in real time, and does not need a Manhattan-world assumption. We tested our preliminary work, i.e., wall detection and measurement from a moving RGB-D sensor, on the MIT Stata Center dataset. The performance of our method is reported in terms of accuracy and speed of execution.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Effects of Memory Replay in Reinforcement Learning
Experience replay is a key technique behind many recent advances in deep reinforcement learning. Allowing the agent to learn from earlier memories can speed up learning and break undesirable temporal correlations. Despite its widespread application, very little is understood about the properties of experience replay. How does the amount of memory kept affect learning dynamics? Does it help to prioritize certain experiences? In this paper, we address these questions by formulating a dynamical-systems ODE model of Q-learning with experience replay. We derive analytic solutions of the ODE for a simple setting. We show that even in this very simple setting, the amount of memory kept can substantially affect the agent's performance. Too much or too little memory both slow down learning. Moreover, we characterize regimes where prioritized replay harms the agent's learning. We show that our analytic solutions have excellent agreement with experiments. Finally, we propose a simple algorithm for adaptively changing the memory buffer size which achieves consistently good empirical performance.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
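For readers unfamiliar with the mechanism the abstract analyzes, here is a minimal replay-buffer sketch; the paper's ODE analysis and its specific adaptive sizing rule are not reproduced here, so resize is only a placeholder hook for such a scheme.

    import random
    from collections import deque

    class ReplayBuffer:
        """FIFO experience replay; capacity is the knob the abstract says matters."""
        def __init__(self, capacity):
            self.buffer = deque(maxlen=capacity)

        def push(self, state, action, reward, next_state):
            self.buffer.append((state, action, reward, next_state))

        def sample(self, batch_size):
            # Uniform sampling; prioritized variants reweight this draw.
            return random.sample(self.buffer, min(batch_size, len(self.buffer)))

        def resize(self, capacity):
            # Hook for an adaptive buffer size: keep the most recent
            # experiences when shrinking (the paper's rule is not given here).
            self.buffer = deque(self.buffer, maxlen=capacity)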
Portfolio Optimization for Cointelated Pairs: SDEs vs. Machine Learning
We investigate the problem of dynamic portfolio optimization in a continuous-time, finite-horizon setting for a portfolio of two stocks and one risk-free asset, where the stocks follow the Cointelation model. The proposed optimization methods are twofold. In what we call the Stochastic Differential Equation (SDE) approach, we compute the optimal weights using the mean-variance criterion and power-utility maximization. We show that dynamically switching between these two optimal strategies by introducing a triggering function can further improve the portfolio returns. We contrast this with a machine learning clustering methodology inspired by the band-wise Gaussian mixture model. The first benefit of the machine learning approach over the SDE approach is that we are able to achieve the same results through a simpler channel; the second is its flexibility with respect to regime change.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
High-mass Starless Clumps in the inner Galactic Plane: the Sample and Dust Properties
We report a sample of 463 high-mass starless clump (HMSC) candidates within $-60°<l<60°$ and $-1°<b<1°$. This sample has been singled out from 10861 ATLASGAL clumps. None of these sources is associated with any known star-forming activity collected in SIMBAD or with young stellar objects identified using color-based criteria. We also make sure that the HMSC candidates have neither point sources at 24 and 70 $\mu$m nor strong extended emission at 24 $\mu$m. Most of the identified HMSCs are infrared ($\le24$ $\mu$m) dark and some are even dark at 70 $\mu$m. Their distribution shows crowding in Galactic spiral arms and toward the Galactic center and some well-known star-forming complexes. Many HMSCs are associated with large-scale filaments. Some basic parameters were obtained from column density and dust temperature maps constructed by fitting far-infrared and submillimeter continuum data with modified blackbodies. The HMSC candidates have sizes, masses, and densities similar to those of clumps associated with Class II methanol masers and HII regions, suggesting that they will evolve into star-forming clumps. More than 90% of the HMSC candidates have densities above some proposed thresholds for forming high-mass stars. With dust temperatures and luminosity-to-mass ratios significantly lower than those of star-forming sources, the HMSC candidates are externally heated and genuinely at very early stages of high-mass star formation. Twenty sources with equivalent radius $r_\mathrm{eq}<0.15$ pc and mass surface density $\Sigma>0.08$ g cm$^{-2}$ could be possible high-mass starless cores. Further investigations toward these HMSCs will undoubtedly shed light on a comprehensive understanding of the birth of high-mass stars.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Biologically Plausible Online Principal Component Analysis Without Recurrent Neural Dynamics
Artificial neural networks that learn to perform Principal Component Analysis (PCA) and related tasks using strictly local learning rules have been previously derived based on the principle of similarity matching: similar pairs of inputs should map to similar pairs of outputs. However, the operation of these networks (and of similar networks) requires a fixed-point iteration to determine the output corresponding to a given input, which means that dynamics must operate on a faster time scale than the variation of the input. Further, during these fast dynamics such networks typically "disable" learning, updating synaptic weights only once the fixed-point iteration has been resolved. Here, we derive a network for PCA-based dimensionality reduction that avoids this fast fixed-point iteration. The key novelty of our approach is a modification of the similarity matching objective to encourage near-diagonality of a synaptic weight matrix. We then approximately invert this matrix using a Taylor series approximation, replacing the previous fast iterations. In the offline setting, our algorithm corresponds to a dynamical system, the stability of which we rigorously analyze. In the online setting (i.e., with stochastic gradients), we map our algorithm to a familiar neural network architecture and give numerical results showing that our method converges at a competitive rate. The computational complexity per iteration of our online algorithm is linear in the total degrees of freedom, which is in some sense optimal.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=1, quantitative finance=0
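The authors' Taylor-inverted similarity-matching network cannot be reconstructed from the abstract alone. For orientation only, here is the classical Oja rule, a different but standard biologically plausible online PCA update with strictly local weight changes; data shape and learning rate are arbitrary.

    import numpy as np

    def oja_first_pc(X, lr=0.01, epochs=10, seed=0):
        """Classical Oja's rule: online estimation of the first principal component."""
        rng = np.random.default_rng(seed)
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(epochs):
            for x in X:
                y = w @ x
                w += lr * y * (x - y * w)   # Hebbian term minus decay keeps ||w|| near 1
        return w / np.linalg.norm(w)

    # Zero-mean toy data whose first principal direction is the first axis.
    X = np.random.default_rng(1).normal(size=(500, 5)) * np.array([3., 1., 1., 1., 1.])
    print(oja_first_pc(X))   # aligns (up to sign) with the first coordinate axis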
Exploring to learn visual saliency: The RL-IAC approach
The problem of object localization and recognition on autonomous mobile robots is still an active research topic. In this context, we tackle the problem of learning a model of visual saliency directly on a robot. This model, learned and improved on the fly during the robot's exploration, provides an efficient tool for localizing relevant objects within the environment. The proposed approach includes two intertwined components. On the one hand, we describe a method for learning and incrementally updating a model of visual saliency from a depth-based object detector. This saliency model can also be exploited to produce bounding-box proposals around objects of interest. On the other hand, we investigate an autonomous exploration technique to efficiently learn such a saliency model. The proposed exploration, called Reinforcement Learning-Intelligent Adaptive Curiosity (RL-IAC), is able to drive the robot's exploration so that samples selected by the robot are likely to improve the current model of saliency. We then demonstrate that such a saliency model learned directly on a robot outperforms several state-of-the-art saliency techniques, and that RL-IAC can drastically decrease the time required to learn a reliable saliency model.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Fast and Robust Shortest Paths on Manifolds Learned from Data
We propose a fast, simple and robust algorithm for computing shortest paths and distances on Riemannian manifolds learned from data. This amounts to solving a system of ordinary differential equations (ODEs) subject to boundary conditions. Here standard solvers perform poorly because they require well-behaved Jacobians of the ODE, and manifolds learned from data usually imply unstable and ill-conditioned Jacobians. Instead, we propose a fixed-point iteration scheme for solving the ODE that avoids Jacobians. This enhances the stability of the solver while reducing the computational cost. In experiments involving both Riemannian metric learning and deep generative models we demonstrate significant improvements in speed and stability over both general-purpose state-of-the-art solvers and specialized solvers.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
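The geodesic ODE and the authors' exact scheme are not given in the abstract. As a toy illustration of the underlying idea, a Jacobian-free fixed-point (Picard-type) iteration for a discretized second-order boundary value problem, here the assumed equation x'' = f(x) with an assumed damping constant:

    import numpy as np

    def fixed_point_bvp(f, a, b, n=50, iters=100, damping=0.5):
        """Solve x'' = f(x) on [0,1] with x(0)=a, x(1)=b by a Jacobian-free
        fixed-point iteration: repeatedly solve the *linear* Laplacian system
        with the nonlinearity frozen at the previous iterate."""
        h = 1.0 / (n + 1)
        # Tridiagonal 1D Laplacian acting on the interior points.
        A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1))
        x = np.linspace(a, b, n + 2)[1:-1]           # straight-line initial path
        bc = np.zeros(n); bc[0] = a; bc[-1] = b      # boundary contributions
        for _ in range(iters):
            rhs = h**2 * f(x) - bc
            x_new = np.linalg.solve(A, rhs)
            x = (1 - damping) * x + damping * x_new  # damped update for stability
        return x

    # Toy nonlinearity: x'' = sin(x), x(0)=0, x(1)=1.
    sol = fixed_point_bvp(np.sin, 0.0, 1.0)
    print(sol[:5])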
Typed Graph Networks
Recently, the deep learning community has given growing attention to neural architectures engineered to learn problems in relational domains. Convolutional Neural Networks employ parameter sharing over the image domain, tying the weights of neural connections on a grid topology and thus enforcing the learning of a number of convolutional kernels. By instantiating trainable neural modules and assembling them in varied configurations (apart from grids), one can enforce parameter sharing over graphs, yielding models which can effectively be fed with relational data. In this context, vertices in a graph can be projected into a hyperdimensional real space and iteratively refined over many message-passing iterations in an end-to-end differentiable architecture. Architectures of this family have been referred to by several names in the literature, such as Graph Neural Networks, Message-passing Neural Networks, Relational Networks and Graph Networks. In this paper, we revisit the original Graph Neural Network model and show that it generalises many of the recent models, which in turn benefit from the insight of thinking about vertex \textbf{types}. To illustrate the generality of the original model, we present a Graph Neural Network formalisation which partitions the vertices of a graph into a number of types. Each type represents an entity in the ontology of the problem one wants to learn. This allows one, for instance, to assign embeddings to edges, hyperedges, and any number of global attributes of the graph. As a companion to this paper we provide a Python/TensorFlow library to facilitate the development of such architectures, with which we instantiate the formalisation to reproduce a number of models proposed in the current literature.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
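A minimal sketch of the message-passing computation that this family of architectures shares; the matrix sizes and the ReLU choice are assumptions, and this is not the companion library's API.

    import numpy as np

    def mp_layer(H, A, W_self, W_msg):
        """One message-passing step: each vertex aggregates neighbor embeddings
        (A is the adjacency matrix), then applies a shared linear map + ReLU.
        Parameter sharing over the graph mirrors convolutional weight tying."""
        M = A @ H                                   # sum of neighbor messages
        return np.maximum(H @ W_self + M @ W_msg, 0.0)

    rng = np.random.default_rng(0)
    A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
    H = rng.normal(size=(3, 4))                    # vertex embeddings
    H = mp_layer(H, A, rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
    print(H.shape)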
The effect of different in-chain impurities on the magnetic properties of the spin chain compound SrCuO$_2$ probed by NMR
The S=1/2 Heisenberg spin-chain compound SrCuO2 doped with different amounts of nickel (Ni), palladium (Pd), zinc (Zn) and cobalt (Co) has been studied by means of Cu nuclear magnetic resonance (NMR). Replacing only a few of the S=1/2 Cu ions with Ni, Pd, Zn, or Co has a major impact on the magnetic properties of the spin chain system. In the case of Ni, Pd and Zn, an unusual line broadening in the low-temperature NMR spectra reveals the existence of an impurity-induced local alternating magnetization (LAM), while spin-lattice relaxation rates $T_1^{-1}$ that decay exponentially towards low temperatures indicate the opening of spin gaps. A distribution of gap magnitudes is proven by a stretched spin-lattice relaxation and a variation of $T_1^{-1}$ within the broad resonance lines. These observations depend strongly on the impurity concentration and can therefore be understood using the model of finite segments of the spin-1/2 antiferromagnetic Heisenberg chain, i.e., pure chain segmentation due to S=0 impurities. This is surprising for Ni, as it was previously assumed to be a magnetic impurity with S=1 that is screened by the neighboring copper spins. In order to confirm the S=0 state of Ni, we performed X-ray absorption spectroscopy (XAS) and compared the measurements to simulated XAS spectra based on multiplet ligand-field theory. Furthermore, Zn doping leads to much smaller effects on both the NMR spectra and the spin-lattice relaxation rates, indicating that Zn avoids occupying Cu sites. For magnetic Co impurities, $T_1^{-1}$ does not obey the gap-like decrease, and the low-temperature spectra become very broad. This could be related to the increase of the Néel temperature observed by recent $\mu$SR and susceptibility measurements, and is most likely an effect of the impurity spin $S\neq0$.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
L-groups and the Langlands program for covering groups: a historical introduction
In this joint introduction to an Asterisque volume, we give a short discussion of the historical developments in the study of nonlinear covering groups, touching on their structure theory, representation theory and the theory of automorphic forms. This serves as a historical motivation and sets the scene for the papers in the volume. Our discussion is necessarily subjective and will undoubtedly leave out the contributions of many authors, to whom we apologize in earnest.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Uncertainty Reduction for Stochastic Processes on Complex Networks
Many real-world systems are characterized by stochastic dynamical rules where a complex network of interactions among individual elements probabilistically determines their state. Even with full knowledge of the network structure and of the stochastic rules, the ability to predict system configurations is generally characterized by a large uncertainty. Selecting a fraction of the nodes and observing their state may help to reduce the uncertainty about the unobserved nodes. However, choosing these points of observation in an optimal way is a highly nontrivial task, depending on the nature of the stochastic process and on the structure of the underlying interaction pattern. In this paper, we introduce a computationally efficient algorithm to determine quasioptimal solutions to the problem. The method leverages network sparsity to reduce computational complexity from exponential to almost quadratic, thus allowing the straightforward application of the method to mid-to-large-size systems. Although the method is exact only for equilibrium stochastic processes defined on trees, it turns out to be effective also for out-of-equilibrium processes on sparse loopy networks.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Light propagation in Extreme Conditions - The role of optically clear tissues and scattering layers in optical biomedical imaging
The field of biomedical imaging has undergone rapid growth in recent years, mostly due to the implementation of ad hoc designed experimental setups, theoretical support methods and numerical reconstructions. Especially for biological samples, the high number of scattering events occurring during photon propagation limits the penetration depth and the possibility of performing direct imaging in thicker, non-transparent samples. In this thesis, we examine the scattering process theoretically and experimentally from two opposite points of view, motivated by specific challenges in the emerging science of optical imaging. First, we discuss light propagation in diffusive biological tissues, considering the particular case of optically transparent regions enclosed in a highly scattering environment. Correctly including this information can ultimately lead to higher-resolution reconstruction, especially in neuroimaging. We then examine the extreme case of three-dimensional imaging of a totally hidden sample, in which the phase has been scrambled by a random scattering layer. Using appropriate numerical methods, we show how such a hidden reconstruction can be carried out very efficiently, opening the path toward the unexplored field of three-dimensional hidden imaging. Finally, we present how the properties observed while addressing these problems led us to develop a novel alignment-free three-dimensional tomographic technique that we refer to as Phase-Retrieved Tomography. We used this technique to study the fluorescence distribution in a three-dimensional spherical tumor model, the cancer-cell spheroid, one of the most important biological models for the study of this disease.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Design discussion on the ISDA Common Domain Model
A new initiative from the International Swaps and Derivatives Association (ISDA) aims to establish a "Common Domain Model" (ISDA CDM): a new standard for data and process representation across the full range of derivatives instruments. Design of the ISDA CDM is at an early stage and the draft definition contains considerable complexity. This paper contributes by offering insight, analysis and discussion relating to key topics in the design space such as data lineage, timestamps, consistency, operations, events, state and state transitions.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Preserving Intermediate Objectives: One Simple Trick to Improve Learning for Hierarchical Models
Hierarchical models are utilized in a wide variety of problems characterized by task hierarchies, where predictions on smaller subtasks are useful for predicting a final task. Typically, neural networks are first trained for the subtasks, and the predictions of these networks are subsequently used as additional features when training a model and doing inference for the final task. In this work, we focus on improving learning for such hierarchical models and demonstrate our method on the task of speaker trait prediction. Speaker trait prediction aims to computationally identify which personality traits a speaker might be perceived to have, and has been of great interest to both the Artificial Intelligence and Social Science communities. Persuasiveness prediction in particular has been of interest, as persuasive speakers have a large amount of influence on our thoughts, opinions and beliefs. In this work, we examine how leveraging the relationship between related speaker traits in a hierarchical structure can help improve our ability to predict how persuasive a speaker is. We present a novel algorithm that allows us to backpropagate through this hierarchy. This hierarchical model achieves a 25% relative error reduction in classification accuracy over current state-of-the-art methods on the publicly available POM dataset.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Atomistic simulations of dislocation/precipitation interactions in Mg-Al alloys and implications for precipitation hardening
Atomistic simulations were carried out to analyze the interaction between $\langle a\rangle$ basal dislocations and precipitates in Mg-Al alloys and the associated strengthening mechanisms.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Recovering Dense Tissue Multispectral Signal from in vivo RGB Images
Hyperspectral/multispectral imaging (HSI/MSI) contains rich information for clinical applications, such as 1) narrow-band imaging for vascular visualisation; 2) oxygen saturation for intraoperative perfusion monitoring and clinical decision making [1]; 3) tissue classification and identification of pathology [2]. Current systems providing pixel-level HSI/MSI signals can generally be divided into two types: spatial scanning and spectral scanning. However, the trade-off between spatial/spectral resolution, acquisition time, and hardware complexity hampers implementation in real-world applications, especially intra-operatively. Acquiring high-resolution images in real time is important for HSI/MSI in intra-operative imaging, to alleviate side effects caused by breathing, heartbeat, and other sources of motion. Therefore, we developed an algorithm to recover a pixel-level MSI stack using only snapshot RGB images captured with a normal camera. We refer to this technique as "super-spectral-resolution". The proposed method enables recovery of pixel-level-dense MSI signals with 24 spectral bands at ~11 frames per second (FPS) on a GPU. Multispectral data captured from porcine bowel and sheep/rabbit uteri in vivo were used for training, and the algorithm has been validated using unseen in vivo animal experiments.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Atomically thin gallium layers from solid-melt exfoliation
Among the large number of promising two-dimensional (2D) atomic layer crystals, true metallic layers are rare. Through combined theoretical and experimental approaches, we report on the stability and successful exfoliation of atomically thin gallenene sheets, having two distinct atomic arrangements along crystallographic twin directions of the parent alpha-gallium. Utilizing the weak interface between the solid and molten phases of gallium, a solid-melt interface exfoliation technique is developed to extract these layers. Phonon dispersion calculations show that gallenene can be stabilized with bulk gallium lattice parameters. The electronic band structure of gallenene shows a combination of a partially filled Dirac cone and a non-linear dispersive band near the Fermi level, suggesting that gallenene should behave as a metallic layer. Furthermore, it is observed that the strong interaction of gallenene with other 2D semiconductors induces semiconducting-to-metallic phase transitions in the latter, paving the way for using gallenene as an interesting metallic contact in 2D devices.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Novel Universality Classes in Ferroelectric Liquid Crystals
Starting from a Langevin formulation of a thermally perturbed nonlinear elastic model of the ferroelectric smectic-C$^*$ (SmC$^*$) liquid crystals in the presence of an electric field, this article characterizes the hitherto unexplored dynamical phase transition from a thermo-electrically forced ferroelectric SmC$^*$ phase to a chiral nematic liquid crystalline phase and vice versa. The theoretical analysis is based on a combination of dynamic renormalization group (DRG) methods and numerical simulation of the emergent model. While the DRG architecture predicts a generic transition to the Kardar-Parisi-Zhang (KPZ) universality class at dynamic equilibrium, in agreement with recent experiments, the numerical simulations of the model show the simultaneous existence of two phases: a "subdiffusive" (SD) phase characterized by a dynamical exponent value of 1, and a KPZ phase characterized by a dynamical exponent value of 1.5. The SD phase flows over to the KPZ phase with increased external forcing, offering a new universality paradigm, hitherto unexplored in the context of ferroelectric liquid crystals.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Feature functional theory - binding predictor (FFT-BP) for the blind prediction of binding free energies
We present a feature functional theory - binding predictor (FFT-BP) for protein-ligand binding affinity prediction. The underpinning assumptions of FFT-BP are as follows: i) representability: there exists a microscopic feature vector that can uniquely characterize and distinguish one protein-ligand complex from another; ii) feature-function relationship: the macroscopic features of a complex, including the binding free energy, are functionals of the microscopic feature vectors; and iii) similarity: molecules with similar microscopic features have similar macroscopic features, such as binding affinity. Physical models, such as implicit solvent models and quantum theory, are utilized to extract microscopic features, while machine learning algorithms are employed to rank the similarity among protein-ligand complexes. A large variety of numerical validations and tests confirms the accuracy and robustness of the proposed FFT-BP model. The root mean square errors (RMSEs) of FFT-BP blind predictions on a benchmark set of 100 complexes, the PDBBind v2007 core set of 195 complexes and the PDBBind v2015 core set of 195 complexes are 1.99, 2.02 and 1.92 kcal/mol, respectively. Their corresponding Pearson correlation coefficients are 0.75, 0.80, and 0.78, respectively.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Combining and Steganography of 3D Face Textures
One of the serious issues in communication between people is hiding information from others, and the best way to do this is to deceive them. Since face images are nowadays mostly used in three-dimensional format, in this paper we apply steganography to 3D face images, making detection by curious observers impossible. As only the texture matters for detecting a face, we separate the texture from the shape matrices; to eliminate half of the extra information, steganography is applied only to the face texture, and any other shape can be used to reconstruct the 3D face. Moreover, we show how two 3D faces can be combined by using two textures. For a complete description of the process, 2D faces are first used as input for building 3D faces, and then the 3D textures are hidden within other images.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Degree of sequentiality of weighted automata
Weighted automata (WA) are an important formalism for describing quantitative properties. Obtaining equivalent deterministic machines is a longstanding research problem. In this paper we consider WA with a set semantics, meaning that the semantics is given by the set of weights of accepting runs. We focus on multi-sequential WA, defined as finite unions of sequential WA. The problem we address is to minimize the size of this union. We call this minimum the degree of sequentiality of (the relation realized by) the WA. For a given positive integer k, we provide multiple characterizations of relations realized by a union of k sequential WA over an infinitary finitely generated group: a Lipschitz-like machine-independent property, a pattern on the automaton (a new twinning property), and a subclass of cost-register automata. When possible, we effectively translate a WA into an equivalent union of k sequential WA. We also provide a decision procedure for our twinning property for commutative computable groups, thus allowing us to compute the degree of sequentiality. Lastly, we show that these results also hold for word transducers and that the associated decision problem is PSPACE-complete.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
On the orders of the non-Frattini elements of a finite group
Let $G$ be a finite group and let $p_1,\dots,p_n$ be distinct primes. If $G$ contains an element of order $p_1\cdots p_n,$ then there is an element in $G$ which is not contained in the Frattini subgroup of $G$ and whose order is divisible by $p_1\cdots p_n.$
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Varieties of general type with small volumes
Generalizing Kobayashi's examples for the Noether inequality in dimension three, we provide examples of n-folds of general type with small volumes.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Translations in the exponential Orlicz space with Gaussian weight
We study the continuity of space translations on non-parametric exponential families based on the exponential Orlicz space with Gaussian reference density.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Shrinkage Estimation Strategies in Generalized Ridge Regression Models Under Low/High-Dimension Regime
In this study, we propose shrinkage methods based on {\it generalized ridge regression} (GRR) estimation that are suitable both for multicollinearity and for high-dimensional problems with a small number of samples (large $p$, small $n$). We also derive theoretical properties of the proposed estimators in the low- and high-dimensional regimes. Furthermore, the performance of the listed estimators is demonstrated by both simulation studies and real-data analysis, and compared with existing penalty methods. We show that the proposed methods compare well to competing regularization techniques.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
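A sketch of the classical generalized ridge estimator on which GRR-based shrinkage methods build, with per-coordinate penalties K = diag(k); the specific shrinkage rules proposed in the paper are not reproduced here, and the data dimensions are arbitrary.

    import numpy as np

    def grr(X, y, k):
        """Generalized ridge regression: beta = (X'X + K)^{-1} X'y with a
        separate penalty k_i per coordinate (K positive definite, so the
        system is solvable even when p > n)."""
        K = np.diag(k)
        return np.linalg.solve(X.T @ X + K, X.T @ y)

    rng = np.random.default_rng(0)
    n, p = 20, 50                       # large p, small n, as in the abstract
    X = rng.normal(size=(n, p))
    y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=n)
    beta = grr(X, y, k=np.full(p, 5.0))
    print(beta[0])                      # shrunken estimate of the true coefficient 2.0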
A path integral approach to Bayesian inference in Markov processes
We formulate Bayesian updates in Markov processes by means of path integral techniques and derive the imaginary-time Schrödinger equation for the posterior probability distribution, in which the likelihood that directs the inference is incorporated as a potential.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Strong-coupling charge density wave in a one-dimensional topological metal
Scanning tunnelling microscopy and low-energy electron diffraction show a dimerization-like reconstruction in the one-dimensional atomic chains on Bi(114) at low temperatures. While one-dimensional systems are generally unstable against such a distortion, its observation is not expected for this particular surface, since several factors should prevent it: one is the particular spin texture of the Fermi surface, which resembles a one-dimensional topological state, such that spin protection should prevent the formation of the reconstruction; the second is the very short nesting vector $2 k_F$, which is inconsistent with the observed lattice distortion. A nesting-driven mechanism of the reconstruction is indeed excluded by the absence of any changes in the electronic structure near the Fermi surface, as observed by angle-resolved photoemission spectroscopy. However, distinct changes in the electronic structure at higher binding energies are found to accompany the structural phase transition. This, as well as the observed short correlation length of the pairing distortion, suggests that the transition is of the strong-coupling type and driven by phonon entropy rather than electronic entropy.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
$\mathfrak A$-principal Hopf hypersurfaces in complex quadrics
A real hypersurface in the complex quadric $Q^m=SO_{m+2}/SO_mSO_2$ is said to be $\mathfrak A$-principal if its unit normal vector field is singular of type $\mathfrak A$-principal everywhere. In this paper, we show that an $\mathfrak A$-principal Hopf hypersurface in $Q^m$, $m\geq3$, is an open part of a tube around a totally geodesic $Q^{m-1}$ in $Q^m$. We also show that such real hypersurfaces are the only contact real hypersurfaces in $Q^m$. The classification of pseudo-Einstein real hypersurfaces in $Q^m$, $m\geq3$, is also obtained.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On the stochastic phase stability of Ti2AlC-Cr2AlC
The quest to expand the MAX design space has accelerated with the recent discovery of several solid-solution and ordered phases involving at least two MAX end members. Going beyond the nominal MAX compounds enables not only fine-tuning of existing properties but also entirely new functionality. This search, however, has mostly been done through painstaking experiments, as knowledge of the phase stability of the relevant systems is rather scarce. In this work, we report the first attempt to evaluate the finite-temperature pseudo-binary phase diagram of the Ti2AlC-Cr2AlC system via a first-principles-guided Bayesian CALPHAD framework that accounts for uncertainties not only in ab initio calculations and thermodynamic models but also in the synthesis conditions of reported experiments. The phase-stability analyses show good agreement with previous experiments. This work points toward a promising way of investigating phase stability in other MAX-phase systems, providing the knowledge necessary to elucidate possible synthesis routes for MAX systems with unprecedented properties.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Note on Attacking Object Detectors with Adversarial Stickers
Deep learning has proven to be a powerful tool for computer vision and has seen widespread adoption for numerous tasks. However, deep learning algorithms are known to be vulnerable to adversarial examples. These adversarial inputs are created such that, when provided to a deep learning algorithm, they are very likely to be mislabeled. This can be problematic when deep learning is used to assist in safety-critical decisions. Recent research has shown that classifiers can be attacked by physical adversarial examples under various physical conditions. Given that state-of-the-art object detection algorithms are harder to fool with the same set of adversarial examples, here we show that these detectors can also be attacked by physical adversarial examples. In this note, we briefly show both static and dynamic test results. We design an algorithm that produces physical adversarial inputs that can fool the YOLO object detector and can also attack Faster R-CNN with a relatively high success rate based on transferability. Furthermore, our algorithm can compress the adversarial inputs to stickers that, when attached to the targeted object, result in the detector either mislabeling or not detecting the object a high percentage of the time. This note provides a small set of results. Our upcoming paper will contain a thorough evaluation on other object detectors and will present the algorithm.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
All or Nothing Caching Games with Bounded Queries
We determine the value of some search games in which our goal is to find all of several hidden treasures using queries of bounded size. The answer to a query is either empty, in which case we lose, or a location that contains a treasure. We prove that if we need to find $d$ treasures at $n$ possible locations with queries of size at most $k$, then our chance of winning is $\frac{k^d}{\binom nd}$ if each treasure is at a different location, and, for large enough $n$, $\frac{k^d}{\binom{n+d-1}d}$ if each location might hide several treasures. Our work builds on results of Csóka, who studied a continuous version of this problem known as Alpern's Caching Game; we also prove that the value of Alpern's Caching Game is $\frac{k^d}{\binom{n+d-1}d}$ for integer $k$ and large enough $n$.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
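The stated game values are easy to evaluate numerically; a small helper computing both formulas (the second is only claimed for large enough n, and the example parameters are arbitrary):

    from math import comb

    def value_distinct(n, k, d):
        """Winning probability when the d treasures occupy distinct locations."""
        return k**d / comb(n, d)

    def value_multi(n, k, d):
        """Winning probability when a location may hide several treasures
        (valid for large enough n, per the abstract)."""
        return k**d / comb(n + d - 1, d)

    print(value_distinct(10, 2, 3))   # 8/120
    print(value_multi(10, 2, 3))      # 8/220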
HATS-43b, HATS-44b, HATS-45b, and HATS-46b: Four Short Period Transiting Giant Planets in the Neptune-Jupiter Mass Range
We report the discovery of four short-period extrasolar planets transiting moderately bright stars, from photometric measurements of the HATSouth network coupled with additional spectroscopic and photometric follow-up observations. While the planet masses range from 0.26 to 0.90 M$_J$, the radii are all approximately one Jupiter radius, resulting in a wide range of bulk densities. The orbital periods of the planets range from 2.7 d to 4.7 d, with HATS-43b having an orbit that appears to be marginally non-circular (e = 0.173$\pm$0.089). HATS-44 is notable for its high metallicity ([Fe/H] = 0.320$\pm$0.071). The host stars' spectral types range from late F to early K, and all of them are moderately bright (13.3<V<14.4), allowing the execution of future detailed follow-up observations. HATS-43b and HATS-46b, with expected transmission signals of 2350 ppm and 1500 ppm, respectively, are particularly well-suited targets for atmospheric characterisation via transmission spectroscopy.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Well-balanced mesh-based and meshless schemes for the shallow-water equations
We formulate a general criterion for the exact preservation of the "lake at rest" solution in general mesh-based and meshless numerical schemes for the strong form of the shallow-water equations with bottom topography. The main idea is a careful mimetic design for the spatial derivative operators in the momentum flux equation that is paired with a compatible averaging rule for the water column height arising in the bottom topography source term. We prove consistency of the mimetic difference operators analytically and demonstrate the well-balanced property numerically using finite difference and RBF-FD schemes in the one- and two-dimensional cases.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
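The design criterion in the abstract can be demonstrated in one dimension: with a centered difference D and the compatible neighbor-mean average, the identity D(h^2/2) = avg(h)*D(h) holds exactly in exact arithmetic, so the momentum right-hand side vanishes for the lake at rest. The operators below are a standard illustrative choice, not necessarily the paper's mimetic or RBF-FD operators.

    import numpy as np

    def D(u, dx):
        """Centered difference on interior points."""
        return (u[2:] - u[:-2]) / (2 * dx)

    def avg(u):
        """Averaging rule compatible with D: the neighbor mean, chosen so that
        D(u**2 / 2) == avg(u) * D(u)."""
        return 0.5 * (u[2:] + u[:-2])

    g, n = 9.81, 200
    x = np.linspace(0, 1, n)
    dx = x[1] - x[0]
    b = 0.2 * np.sin(4 * np.pi * x)      # wavy bottom topography
    h = 1.0 - b                          # lake at rest: h + b = const
    rhs = -g * (D(h**2 / 2, dx) + avg(h) * D(b, dx))
    print(np.abs(rhs).max())             # ~1e-13: steady state preserved to machine precision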
An Enhanced Initial Margin Methodology to Manage Warehoused Credit Risk
The use of CVA to cover credit risk is widespread, but has its limitations. Namely, dealers face the problem of the illiquidity of instruments used for hedging it, and are hence forced to warehouse credit risk. As a result, dealers tend to offer only a limited OTC derivatives market to highly risky counterparties. Consequently, those highly risky entities rarely have access to hedging services precisely when they need them most. In this paper we propose a method to overcome this limitation. We propose to extend the CVA risk-neutral framework to compute an initial margin (IM) specific to each counterparty, which depends on the credit quality of the entity at stake, transforming the effective credit rating of a given netting set to AAA regardless of the credit rating of the counterparty. By transforming CVA requirements into IM ones, as proposed in this paper, an institution could rely on the existing mechanisms for posting and calling IM, hence ensuring the operational viability of this new form of managing warehoused risk. The main difference from the currently standard framework is the creation of a Specific Initial Margin, which depends on the credit rating of the counterparty and the characteristics of the netting set in question. In this paper we propose a methodology for carrying out such a transformation in a sound manner, and hence this method overcomes some of the limitations of the CVA framework.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Ensemble Pruning based on Objection Maximization with a General Distributed Framework
Ensemble pruning, the selection of a subset of individual learners from an original ensemble, alleviates the time and space deficiencies of ensemble learning. Accuracy and diversity serve as two crucial factors, yet they usually conflict with each other. To balance the two, we formalize the ensemble pruning problem as an objection maximization problem based on information entropy. We then propose an ensemble pruning method comprising a centralized version and a distributed version, where the latter speeds up the former's execution. Finally, we extract a general distributed framework for ensemble pruning, which is widely applicable to most existing ensemble pruning methods and achieves lower time consumption without much decline in accuracy. Experimental results validate the efficiency of our framework and methods, particularly with regard to a remarkable improvement in execution speed, accompanied by gratifying accuracy performance.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Psychophysical laws as reflection of mental space properties
The paper is devoted to the relationship between psychophysics and the physics of mind. The basic trends in the development of psychophysics are briefly discussed, with special attention focused on Teghtsoonian's hypotheses. These hypotheses pose the concept of the universality of inner psychophysics and make it possible to speak about psychological space as an individual object with its own properties. Turning to the two-component description of human behavior (I. Lubashevsky, Physics of the Human Mind, Springer, 2017), the notion of mental space is formulated, and human perception of external stimuli is treated as the emergence of the corresponding images in the mental space. On one hand, these images are caused by external stimuli and their magnitude bears information about the intensity of the corresponding stimuli. On the other hand, the individual structure of such images as well as their subsistence after emergence is determined only by the properties of the mental space on its own. Finally, the mental operations of image comparison and scaling are defined in a way that allows for the bounded capacity of human cognition. As demonstrated, the developed theory of stimulus perception is able to explain the basic regularities of psychophysics, e.g., (i) the regression and range effects leading to the overestimation of weak stimuli and the underestimation of strong stimuli, (ii) scalar variability (Weber's and Ekman's laws), and (iii) the sequential (memory) effects. As the final result, a solution to the Fechner-Stevens dilemma is proposed. This solution posits that Fechner's logarithmic law is not a consequence of Weber's law but stems from the interplay of uncertainty in evaluating stimulus intensities and the multi-step scaling required to overcome stimulus incommensurability.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Shimura curves in the Prym locus
We study Shimura curves of PEL type in $\mathsf{A}_g$ generically contained in the Prym locus. We study both the unramified Prym locus, obtained using étale double covers, and the ramified Prym locus, corresponding to double covers ramified at two points. In both cases we consider the family of all double covers compatible with a fixed group action on the base curve. We restrict to the case where the family is 1-dimensional and the quotient of the base curve by the group is $\mathbb{P}^1$. We give a simple criterion for the image of these families under the Prym map to be a Shimura curve. Using computer algebra we check all the examples obtained in this way up to genus 28. We obtain 43 Shimura curves generically contained in the unramified Prym locus and 9 families generically contained in the ramified Prym locus. Most of these curves are not generically contained in the Jacobian locus.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Latent Laplacian Maximum Entropy Discrimination for Detection of High-Utility Anomalies
Data-driven anomaly detection methods suffer from the drawback of detecting all instances that are statistically rare, irrespective of whether the detected instances have real-world significance or not. In this paper, we are interested in the problem of specifically detecting anomalous instances that are known to have high real-world utility, while ignoring the low-utility statistically anomalous instances. To this end, we propose a novel method called Latent Laplacian Maximum Entropy Discrimination (LatLapMED) as a potential solution. This method uses the EM algorithm to simultaneously incorporate the Geometric Entropy Minimization principle for identifying statistical anomalies, and the Maximum Entropy Discrimination principle to incorporate utility labels, in order to detect high-utility anomalies. We apply our method in both simulated and real datasets to demonstrate that it has superior performance over existing alternatives that independently pre-process with unsupervised anomaly detection algorithms before classifying.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
View-Invariant Recognition of Action Style Self-Dissimilarity
Self-similarity was recently introduced as a measure of inter-class congruence for classification of actions. Herein, we investigate the dual problem of intra-class dissimilarity for classification of action styles. We introduce self-dissimilarity matrices that discriminate between same actions performed by different subjects regardless of viewing direction and camera parameters. We investigate two frameworks using these invariant style dissimilarity measures based on Principal Component Analysis (PCA) and Fisher Discriminant Analysis (FDA). Extensive experiments performed on IXMAS dataset indicate remarkably good discriminant characteristics for the proposed invariant measures for gender recognition from video data.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Conducting Highly Principled Data Science: A Statistician's Job and Joy
Highly Principled Data Science insists on methodologies that are: (1) scientifically justified, (2) statistically principled, and (3) computationally efficient. An astrostatistics collaboration, together with some reminiscences, illustrates the increased roles statisticians can and should play to ensure this trio, and to advance the science of data along the way.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Regularizing nonlinear Schroedinger equations through partial off-axis variations
We study a class of focusing nonlinear Schroedinger-type equations derived recently by Dumas, Lannes and Szeftel within the mathematical description of high intensity laser beams [7]. These equations incorporate the possibility of a (partial) off-axis variation of the group velocity of such laser beams through a second order partial differential operator acting in some, but not necessarily all, spatial directions. We study the well-posedness theory for such models and obtain a regularizing effect, even in the case of only partial off-axis dependence. This provides an answer to an open problem posed in [7].
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On h-Lexicalized Restarting Automata
Following previous studies on restarting automata, we introduce a refined model: the h-lexicalized restarting automaton (h-RLWW). We argue that this model is useful for expressing lexicalized syntax in computational linguistics. We compare the input languages, which are the languages traditionally considered in automata theory, to the so-called basic and h-proper languages, which are (implicitly) used by categorial grammars, the original tool for the description of lexicalized syntax. The basic and h-proper languages allow us to stress several nice properties of h-lexicalized restarting automata, and they are suitable for modeling the analysis by reduction and, subsequently, for the development of categories of a lexicalized syntax. Based on the fact that a two-way deterministic monotone restarting automaton can be transformed into an equivalent deterministic monotone RL-automaton in (Marcus) contextual form, we obtain a transformation from monotone RLWW-automata that recognize the class CFL of context-free languages as their input languages to deterministic monotone h-RLWW-automata that recognize CFL through their h-proper languages. Through this transformation we obtain automata with the complete correctness preserving property and an infinite hierarchy within CFL, based on the size of the read/write window. Additionally, we consider h-RLWW-automata that are allowed to perform multiple rewrite steps per cycle, and we establish another infinite hierarchy above CFL that is based on the number of rewrite steps that may be executed within a cycle. The corresponding separation results and their proofs illustrate the transparency of h-RLWW-automata that work with the (complete or cyclic) correctness preserving property.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Bayesian Nonparametric Inference for M/G/1 Queueing Systems
In this work, nonparametric statistical inference is provided for the continuous-time M/G/1 queueing model from a Bayesian point of view. The inference is based on observations of the inter-arrival and service times. Besides other characteristics of the system, particular interest is in the waiting time distribution, which is not accessible in closed form. We therefore use an indirect statistical approach that exploits the Pollaczek-Khinchine formula for the Laplace transform of the waiting time distribution. On this basis, an estimator is defined and its frequentist validation, in terms of posterior consistency and posterior normality, is studied. It turns out that we can make inference for the observables separately and subsequently compose the results by suitable techniques.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
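A sketch of the Pollaczek-Khinchine transform that the indirect approach exploits, sanity-checked on M/M/1 where the mean wait has the closed form rho/(mu - lam); the rate values are arbitrary assumptions.

    def num_derivative(f, s, eps=1e-6):
        return (f(s + eps) - f(s - eps)) / (2 * eps)

    def pk_waiting_lt(s, lam, service_lt):
        """Pollaczek-Khinchine formula: Laplace transform of the stationary
        M/G/1 waiting time, given the service-time transform S*(s)."""
        rho = lam * -num_derivative(service_lt, 0.0)   # E[S] = -dS*/ds at s = 0
        return (1 - rho) * s / (s - lam * (1 - service_lt(s)))

    lam, mu = 1.0, 2.0                                 # arrival and service rates
    exp_lt = lambda s: mu / (mu + s)                   # exponential service times

    # E[W] = -dW*/ds at s = 0; evaluate near 0 since the formula is 0/0 at s = 0.
    mean_w = -num_derivative(lambda s: pk_waiting_lt(s, lam, exp_lt), 1e-4)
    print(mean_w, "vs closed form", (lam / mu) / (mu - lam))  # both ~0.5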
Symbol Invariant of Partition and the Construction
The symbol is used to describe the Springer correspondence for the classical groups. We propose equivalent definitions of symbols for rigid partitions in the $B_n$, $C_n$, and $D_n$ theories uniformly. Analysing the new definition of the symbol in detail, we give rules for constructing the symbol of a partition which are easy to remember and to operate with. We introduce formal operations on a partition, which reduce the difficulties in the proof of the construction rules. According to these rules, we give a closed formula for the symbols of the different theories uniformly. As applications, previous results can be illustrated more clearly by the construction rules of the symbol.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Fairness in representation: quantifying stereotyping as a representational harm
While harms of allocation have been increasingly studied as part of the subfield of algorithmic fairness, harms of representation have received considerably less attention. In this paper, we formalize two notions of stereotyping and show how they manifest in later allocative harms within the machine learning pipeline. We also propose mitigation strategies and demonstrate their effectiveness on synthetic datasets.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Probing the possibility of hotspots on the central neutron star in HESS J1731-347
The X-ray spectra of the neutron stars located in the centers of supernova remnants Cas A and HESS J1731-347 are well fit with carbon atmosphere models. These fits yield plausible neutron star sizes for the known or estimated distances to these supernova remnants. The evidence in favor of the presence of a pure carbon envelope at the neutron star surface is rather indirect and is based on the assumption that the emission is generated uniformly by the entire stellar surface. Although this assumption is supported by the absence of pulsations, the observational upper limit on the pulsed fraction is not very stringent. In an attempt to quantify this evidence, we investigate the possibility that the observed spectrum of the neutron star in HESS J1731-347 is a combination of the spectra produced in a hydrogen atmosphere of the hotspots and of the cooler remaining part of the neutron star surface. The lack of pulsations in this case has to be explained either by a sufficiently small angle between the neutron star spin axis and the line of sight, or by a sufficiently small angular distance between the hotspots and the neutron star rotation poles. As the observed flux from a non-uniformly emitting neutron star depends on the angular distribution of the radiation emerging from the atmosphere, we have computed two new grids of pure carbon and pure hydrogen atmosphere model spectra accounting for Compton scattering. Using new hydrogen models, we have evaluated the probability of a geometry that leads to a pulsed fraction below the observed upper limit to be about 8.2 %. Such a geometry thus seems to be rather improbable but cannot be excluded at this stage.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Some Elementary Partition Inequalities and Their Implications
We prove various inequalities between the numbers of partitions with a bound on the largest part and some restrictions on the occurrences of parts. We explore many interesting consequences of these partition inequalities. In particular, we show that for $L\geq 1$, the number of partitions with $l-s \leq L$ and $s=1$ is greater than the number of partitions with $l-s\leq L$ and $s>1$. Here $l$ and $s$ are the largest part and the smallest part of the partition, respectively.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
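The stated inequality can be probed by brute force, assuming (as the abstract suggests but does not spell out) that both counts run over partitions of the same integer n; the choices of n and L below are arbitrary.

    def partitions(n, max_part=None):
        """Generate all partitions of n as non-increasing tuples."""
        if max_part is None:
            max_part = n
        if n == 0:
            yield ()
            return
        for first in range(min(n, max_part), 0, -1):
            for rest in partitions(n - first, first):
                yield (first,) + rest

    def count_split(n, L):
        """Count partitions of n with largest - smallest <= L, split by smallest part."""
        s_eq_1 = s_gt_1 = 0
        for p in partitions(n):
            l, s = p[0], p[-1]
            if l - s <= L:
                if s == 1:
                    s_eq_1 += 1
                else:
                    s_gt_1 += 1
        return s_eq_1, s_gt_1

    for n in range(5, 16):
        print(n, count_split(n, L=2))   # first count should dominate the second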
The terrestrial late veneer from core disruption of a lunar-sized impactor
Overabundances in highly siderophile elements (HSEs) of Earth's mantle can be explained by conveyance from a singular, immense (3000 km in diameter) "Late Veneer" impactor of chondritic composition, subsequent to lunar formation and terrestrial core closure. Such rocky objects of approximately lunar mass (about 0.01 M_E) ought to be differentiated, such that nearly all of their HSE payload is sequestered into their iron cores. Here, we analyze the mechanical and chemical fate of the core of such a Late Veneer impactor, and trace how its HSEs become suspended in, and thus pollute, the mantle. For the statistically most likely oblique collision (about 45 degrees), the impactor's core elongates and thereafter disintegrates into a metallic hail of small particles (about 10 m). Some strike the orbiting Moon as sesquinary impactors, but most re-accrete to Earth as secondaries with further fragmentation. We show that a single oblique impactor provides an adequate amount of HSEs to the primordial terrestrial silicate reservoirs via oxidation of (<m-sized) metal particles by a hydrous, pre-impact, early Hadean Earth.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Bayesian model for lithology/fluid class prediction using a Markov mesh prior fitted from a training image
We consider a Bayesian model for inversion of observed amplitude variation with offset (AVO) data into lithology/fluid classes, and study in particular how the choice of prior distribution for the lithology/fluid classes influences the inversion results. Two distinct prior distributions are considered: a simple manually specified Markov random field prior with a first-order neighborhood, and a Markov mesh model with a much larger neighborhood estimated from a training image. They are chosen to model both the horizontal connectivity and the vertical thickness distribution of the lithology/fluid classes, and are compared on an offshore clastic oil reservoir in the North Sea. We combine both priors with the same linearised Gaussian likelihood function based on a convolved linearised Zoeppritz relation and estimate properties of the resulting two posterior distributions by simulating from these distributions with the Metropolis-Hastings algorithm. The influence of the prior on the marginal posterior probabilities for the lithology/fluid classes is clearly observable, but modest. The importance of the prior for the connectivity properties of the posterior realisations, however, is much stronger. The larger neighborhood of the Markov mesh prior enables it to identify and model connectivity and curvature much better than the first-order neighborhood Markov random field prior can. As a result, we conclude that the posterior realisations based on the Markov mesh prior appear with much higher lateral connectivity, which is geologically plausible.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Correct Brillouin zone and electronic structure of BiPd
A promising route to the realization of Majorana fermions is in non-centrosymmetric superconductors, in which spin-orbit coupling lifts the spin degeneracy of both bulk and surface bands. A detailed assessment of the electronic structure is critical for evaluating their suitability for this purpose, by establishing the topological properties of the electronic structure. This requires correct identification of the time-reversal-invariant momenta. One such material is BiPd, a recently rediscovered non-centrosymmetric superconductor, which can be grown in large, high-quality single crystals and has been studied by several groups using angle-resolved photoemission to establish its surface electronic structure. Many of the published electronic-structure studies of this material are based on a reciprocal unit cell which is not the actual Brillouin zone of the material. We show here the consequences of this for the electronic structure and how the inferred topological nature of the material is affected.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Even faster sorting of (not only) integers
In this paper we introduce RADULS2, the fastest parallel sorter based on the radix algorithm. It is optimized to process huge amounts of data, making use of modern multicore CPUs. The main novelties include an extremely optimized algorithm for handling tiny arrays (up to about a hundred records), which can appear billions of times as subproblems, and improved processing of larger subarrays with better use of non-temporal memory stores.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
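RADULS2 itself is tuned C++; below is a toy MSD radix sort illustrating the two ideas the abstract names, byte-wise radix passes plus a cheap routine for tiny subarrays (the cutoff of 64 is an arbitrary stand-in for the paper's "about a hundred records").

    def insertion_sort(a, lo, hi):
        """Cheap handling of tiny subarrays, the case RADULS2 optimizes heavily."""
        for i in range(lo + 1, hi):
            key, j = a[i], i - 1
            while j >= lo and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key

    def msd_radix_sort(a, lo=0, hi=None, shift=56, cutoff=64):
        """MSD radix sort on 64-bit non-negative ints, one byte per pass."""
        if hi is None:
            hi = len(a)
        if hi - lo <= cutoff or shift < 0:
            insertion_sort(a, lo, hi)
            return
        counts = [0] * 257                   # histogram of the current byte
        for i in range(lo, hi):
            counts[((a[i] >> shift) & 0xFF) + 1] += 1
        for b in range(1, 257):
            counts[b] += counts[b - 1]       # prefix sums -> bucket offsets
        out = a[lo:hi]
        starts = counts[:]
        for v in a[lo:hi]:                   # distribute into buckets
            b = (v >> shift) & 0xFF
            out[starts[b]] = v
            starts[b] += 1
        a[lo:hi] = out
        for b in range(256):                 # recurse on each bucket
            msd_radix_sort(a, lo + counts[b], lo + counts[b + 1], shift - 8, cutoff)

    import random
    data = [random.getrandbits(64) for _ in range(10000)]
    msd_radix_sort(data)
    assert data == sorted(data)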
Evidential Deep Learning to Quantify Classification Uncertainty
Deterministic neural nets have been shown to learn effective predictors on a wide range of machine learning problems. However, as the standard approach is to train the network to minimize a prediction loss, the resultant model remains ignorant of its prediction confidence. Orthogonally to Bayesian neural nets that indirectly infer prediction uncertainty through weight uncertainties, we propose explicit modeling of the same using the theory of subjective logic. By placing a Dirichlet distribution on the class probabilities, we treat predictions of a neural net as subjective opinions and learn the function that collects the evidence leading to these opinions by a deterministic neural net from data. The resultant predictor for a multi-class classification problem is another Dirichlet distribution whose parameters are set by the continuous output of a neural net. We provide a preliminary analysis on how the peculiarities of our new loss function drive improved uncertainty estimation. We observe that our method achieves unprecedented success on detection of out-of-distribution queries and endurance against adversarial perturbations.
0
0
0
1
0
0
Neutron interference in the Earth's gravitational field
This work relates to the famous experiments, performed in 1975 and 1979 by Werner et al., measuring neutron interference and neutron Sagnac effects in the earth's gravitational field. Employing the method of Stodolsky in its weak field approximation, explicit expressions are derived for the two phase shifts, which turn out to be in agreement with the experiments and with the previously obtained expressions derived from semi-classical arguments: these expressions are simply modified by relativistic correction factors.
0
1
0
0
0
0
Solving the Shortest Paths Problem within Logarithmic Runtime
We argue that the Shortest Paths Problem (SPP) need no longer be regarded as unresolved. For large instances of this problem, we often cannot even tell whether a given algorithm will complete the computation, and cutting-edge methods still perform poorly. Pursuing a best-first-search strategy, moreover, runs into a technical barrier from another field: the database, with its online-oriented capabilities. In this paper we introduce a synthesis of several modules, including such a database, that solves SPP in logarithmic runtime. Through experiments on three typical mega-scale instances run on an ordinary laptop, we demonstrate a robust, tractable, and practical method that is readily applicable to other projects.
1
0
0
0
0
0
A short proof of the error term in Simpson's rule
In this paper we present a short and elementary proof for the error in Simpson's rule.
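For reference, the classical statement being proved: if $f \in C^4[a,b]$, then there is a $\xi \in (a,b)$ such that

```latex
\int_a^b f(x)\,dx
  - \frac{b-a}{6}\left[f(a) + 4f\!\left(\frac{a+b}{2}\right) + f(b)\right]
  = -\frac{(b-a)^5}{2880}\, f^{(4)}(\xi).
```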
0
0
1
0
0
0
Neutrino mass and dark energy constraints from redshift-space distortions
Cosmology in the near future promises a measurement of the sum of neutrino masses, a fundamental Standard Model parameter, as well as substantially improved constraints on the dark energy. We use the shape of the BOSS redshift-space galaxy power spectrum, in combination with CMB and supernova data, to constrain the neutrino masses and the dark energy. Essential to this calculation are several recent advances in non-linear cosmological perturbation theory, including FFT methods, redshift space distortions, and scale-dependent growth. Our 95% confidence upper bound of 200 meV on the sum of masses degrades substantially to 770 meV when the dark energy equation of state and its first derivative are also allowed to vary, representing a significant challenge to current constraints. We also study the impact of additional galaxy bias parameters, finding that a velocity bias or a more complicated scale-dependent density bias shifts the preferred neutrino mass values 20%-30% lower while minimally impacting the other cosmological parameters.
0
1
0
0
0
0
Dielectrophoretic assembly of liquid-phase-exfoliated TiS3 nanoribbons for photodetecting applications
Liquid-phase exfoliation is a technique capable of producing large quantities of two-dimensional materials in suspension. Despite many efforts in the optimization of the exfoliation process itself, not much has been done towards the integration of liquid-phase-exfoliated materials in working solid-state devices. In this article, we use dielectrophoresis to direct the assembly of liquid-phase-exfoliated TiS3 nanoribbons between two gold electrodes to produce photodetectors working in the visible. Through electrical and optical measurements we characterize the responsivity of the device, finding values as large as 3.8 mA/W, which improves on the state of the art for devices based on liquid-phase-exfoliated two-dimensional materials assembled by drop-casting or ink-jet methods by more than one order of magnitude.
0
1
0
0
0
0
Motion optimization and parameter identification for a human and lower-back exoskeleton model
Designing an exoskeleton to reduce the risk of low-back injury during lifting is challenging. Computational models of the human-robot system coupled with predictive movement simulations can help to simplify this design process. Here, we present a study that models the interaction between a human model actuated by muscles and a lower-back exoskeleton. We provide a computational framework for identifying the spring parameters of the exoskeleton using an optimal control approach and forward-dynamics simulations. This is applied to generate dynamically consistent bending and lifting movements in the sagittal plane. Our computations are able to predict motions and forces of the human and exoskeleton that are within the torque limits of a subject. The identified exoskeleton could also yield a considerable reduction of the peak lower-back torques as well as the cumulative lower-back load during the movements. This work is relevant to the research communities working on human-robot interaction, and can be used as a basis for a better human-centered design process.
1
0
0
0
0
0
A dataset for Computer-Aided Detection of Pulmonary Embolism in CTA images
Today, researchers in the field of Pulmonary Embolism (PE) analysis need a publicly available dataset to assess and compare their methods. Different systems have been designed for the detection of PE, but none of them has used a public dataset; all papers have relied on their own private data. In order to fill this gap, we collected 5160 slices of computed tomography angiography (CTA) images acquired from 20 patients and, after the images were labeled by experts in this field, provided a reliable dataset which is now publicly available. In some situations PE detection can be difficult, for example when it occurs in the peripheral branches or when patients have pulmonary diseases (such as parenchymal disease). The efficiency of CAD systems therefore depends strongly on the dataset. In the given dataset, 66% of PE are located in peripheral branches, and different pulmonary diseases are also included.
1
0
0
0
0
0
Comment on Ben-Amotz and Honig, "Average entropy dissipation in irreversible mesoscopic processes," Phys. Rev. Lett. 96, 020602 (2006)
We point out that most of the classical thermodynamics results in the paper have been known in the literature, see Kestin and Woods, for quite some time and are not new, contrary to what the authors imply. As shown by Kestin, these results are valid for quasistatic irreversible processes only and not for arbitrary irreversible processes as suggested in the paper. Thus, the application to the Jarzynski process is limited.
0
1
0
0
0
0
High-transmissivity Silicon Visible-wavelength Metasurface Designs based on Truncated-cone Nanoantennae
High-transmissivity all-dielectric metasurfaces have recently attracted attention towards the realization of ultra-compact optical devices and systems. Silicon-based metasurfaces, in particular, are highly promising considering the possibility of monolithic integration with VLSI circuits. The realization of silicon-based metasurfaces operating at visible wavelengths remains a challenge. A numerical study of silicon metasurfaces based on stepped truncated-cone-shaped nanoantenna elements is presented. Metasurfaces based on the stepped conical geometry can be designed for operation in the 700 nm to 800 nm wavelength window and achieve a full-cycle phase response (0 to 2$\pi$) with improved transmittance in comparison with the previously reported cylindrical geometry [1]. A systematic study of the influence of various geometrical parameters on the achievable amplitude and phase coverage is reported.
0
1
0
0
0
0
Probabilistic Multigraph Modeling for Improving the Quality of Crowdsourced Affective Data
We propose a probabilistic approach to joint modeling of participants' reliability and humans' regularity in crowdsourced affective studies. Reliability measures how likely a subject is to respond to a question seriously; regularity measures how often a human will agree with other seriously-entered responses coming from a targeted population. Crowdsourcing-based studies or experiments, which rely on human self-reported affect, pose additional challenges as compared with typical crowdsourcing studies that attempt to acquire concrete non-affective labels of objects. The reliability of participants has been studied extensively for typical non-affective crowdsourcing studies, whereas the regularity of humans in an affective experiment in its own right has not been thoroughly considered. It has often been observed that different individuals exhibit different feelings on the same test question, which does not have a single correct response in the first place. High reliability of responses from one individual thus cannot conclusively result in high consensus across individuals. Instead, globally testing consensus of a population is of interest to investigators. Built upon the agreement multigraph among tasks and workers, our probabilistic model differentiates subject regularity from population reliability. We demonstrate the method's effectiveness for in-depth robust analysis of large-scale crowdsourced affective data, including emotion and aesthetic assessments collected by presenting visual stimuli to human subjects.
1
0
0
1
0
0
Quantum key distribution protocol with pseudorandom bases
Quantum key distribution (QKD) offers a way for establishing information-theoretically secure communications. An important part of QKD technology is a high-quality random number generator (RNG) for quantum states preparation and for post-processing procedures. In the present work, we consider a novel class of prepare-and-measure QKD protocols, utilizing additional pseudorandomness in the preparation of quantum states. We study one of such protocols and analyze its security against the intercept-resend attack. We demonstrate that, for single-photon sources, the considered protocol gives better secret key rates than the BB84 and the asymmetric BB84 protocol. However, the protocol strongly requires single-photon sources.
1
0
0
0
0
0
Deep clustering of longitudinal data
Deep neural networks are a family of computational models that have led to a dramatic improvement of the state of the art in several domains such as image, voice, or text analysis. These methods provide a framework to model complex, non-linear interactions in large datasets, and are naturally suited to the analysis of hierarchical data such as, for instance, longitudinal data with the use of recurrent neural networks. On the other hand, cohort studies have become an important tool in epidemiology. In such studies, variables are measured repeatedly over time, allowing the practitioner to study their temporal evolution as trajectories, and, as such, as longitudinal data. This paper investigates the application of the advanced modelling techniques provided by the deep learning framework to the analysis of the longitudinal data provided by cohort studies. Methods: A method for visualizing and clustering longitudinal datasets is proposed, and compared to other widely used approaches to the problem on both real and simulated datasets. Results: The proposed method is shown to be coherent with the preexisting procedures on simple tasks, and to outperform them on more complex tasks such as the partitioning of longitudinal datasets into non-spherical clusters. Conclusion: Deep artificial neural networks can be used to visualize longitudinal data in a low-dimensional manifold that is much simpler to interpret than traditional longitudinal plots. Consequently, practitioners should start considering the use of deep artificial neural networks for the analysis of their longitudinal data in studies to come.
0
0
0
1
0
0
Economic Implications of Blockchain Platforms
In an economy with asymmetric information, the smart contract in the blockchain protocol mitigates uncertainty. Since, as a new trading platform, the blockchain triggers segmentation of the market and differentiation of agents on both the sell and buy sides of the market, it recomposes the asymmetric information and generates spreads in asset price and quality between itself and a traditional platform. We show that marginal innovation and sophistication of the smart contract have non-monotonic effects on the trading value in the blockchain platform, its fundamental value, the price of cryptocurrency, and consumers' welfare. Moreover, a blockchain manager who controls the level of the innovation of the smart contract has an incentive to keep it lower than the first best when the underlying information asymmetry is not severe, leading to welfare loss for consumers.
0
0
0
0
0
1
Proof Theory and Ordered Groups
Ordering theorems, characterizing when partial orders of a group extend to total orders, are used to generate hypersequent calculi for varieties of lattice-ordered groups (l-groups). These calculi are then used to provide new proofs of theorems arising in the theory of ordered groups. More precisely: an analytic calculus for abelian l-groups is generated using an ordering theorem for abelian groups; a calculus is generated for l-groups and new decidability proofs are obtained for the equational theory of this variety and extending finite subsets of free groups to right orders; and a calculus for representable l-groups is generated and a new proof is obtained that free groups are orderable.
0
0
1
0
0
0
DF-SLAM: A Deep-Learning Enhanced Visual SLAM System based on Deep Local Features
As the foundation of driverless vehicles and intelligent robots, Simultaneous Localization and Mapping (SLAM) has attracted much attention these days. However, non-geometric modules of traditional SLAM algorithms are limited by data association tasks and have become a bottleneck preventing the development of SLAM. To deal with such problems, many researchers turn to Deep Learning for help. But most of these studies are limited to virtual datasets or specific environments, and even sacrifice efficiency for accuracy. Thus, they are not practical enough. We propose the DF-SLAM system, which uses deep local feature descriptors obtained by a neural network as a substitute for traditional hand-made features. Experimental results demonstrate its improvements in efficiency and stability. DF-SLAM outperforms popular traditional SLAM systems in various scenes, including challenging scenes with intense illumination changes. Its versatility and mobility fit well with the need for exploring new environments. Since we adopt a shallow network to extract local descriptors and keep the rest the same as in the original SLAM systems, our DF-SLAM can still run in real time on a GPU.
1
0
0
0
0
0
A-NICE-MC: Adversarial Training for MCMC
Existing Markov Chain Monte Carlo (MCMC) methods are either based on general-purpose and domain-agnostic schemes which can lead to slow convergence, or hand-crafting of problem-specific proposals by an expert. We propose A-NICE-MC, a novel method to train flexible parametric Markov chain kernels to produce samples with desired properties. First, we propose an efficient likelihood-free adversarial training method to train a Markov chain and mimic a given data distribution. Then, we leverage flexible volume preserving flows to obtain parametric kernels for MCMC. Using a bootstrap approach, we show how to train efficient Markov chains to sample from a prescribed posterior distribution by iteratively improving the quality of both the model and the samples. A-NICE-MC provides the first framework to automatically design efficient domain-specific MCMC proposals. Empirical results demonstrate that A-NICE-MC combines the strong guarantees of MCMC with the expressiveness of deep neural networks, and is able to significantly outperform competing methods such as Hamiltonian Monte Carlo.
1
0
0
1
0
0
Starobinsky-like Inflation, Supercosmology and Neutrino Masses in No-Scale Flipped SU(5)
We embed a flipped ${\rm SU}(5) \times {\rm U}(1)$ GUT model in a no-scale supergravity framework, and discuss its predictions for cosmic microwave background observables, which are similar to those of the Starobinsky model of inflation. Measurements of the tilt in the spectrum of scalar perturbations in the cosmic microwave background, $n_s$, constrain significantly the model parameters. We also discuss the model's predictions for neutrino masses, and pay particular attention to the behaviours of scalar fields during and after inflation, reheating and the GUT phase transition. We argue in favor of strong reheating in order to avoid excessive entropy production which could dilute the generated baryon asymmetry.
0
1
0
0
0
0
'Viral' Turing Machines, Computation from Noise and Combinatorial Hierarchies
The interactive computation paradigm is reviewed and a particular example is extended to form the stochastic analog of a computational process via a transcription of a minimal Turing Machine into an equivalent asynchronous Cellular Automaton with an exponential waiting times distribution of effective transitions. Furthermore, a special toolbox for analytic derivation of recursive relations of important statistical and other quantities is introduced in the form of an Inductive Combinatorial Hierarchy.
1
1
0
0
0
0
Estimating model evidence using ensemble-based data assimilation with localization - The model selection problem
In recent years, there has been a growing interest in applying data assimilation (DA) methods, originally designed for state estimation, to the model selection problem. In this setting, Carrassi et al. (2017) introduced the contextual formulation of model evidence (CME) and showed that CME can be efficiently computed using a hierarchy of ensemble-based DA procedures. Although Carrassi et al. (2017) analyzed the DA methods most commonly used for operational atmospheric and oceanic prediction worldwide, they did not study these methods in conjunction with localization to a specific domain. Yet any application of ensemble DA methods to realistic geophysical models requires the implementation of some form of localization. The present study extends the theory for estimating CME to ensemble DA methods with domain localization. The domain-localized CME (DL-CME) developed herein is tested for model selection with two models: (i) the Lorenz 40-variable mid-latitude atmospheric dynamics model (L95); and (ii) the simplified global atmospheric SPEEDY model. The CME is compared to the root-mean-square-error (RMSE) as a metric for model selection. The experiments show that CME improves systematically over the RMSE, and that this skill improvement is further enhanced by applying localization in the estimate of the CME, using the DL-CME. The potential use and range of applications of the CME and DL-CME as a model selection metric are also discussed.
0
0
0
1
0
0
An efficient spectral-Galerkin approximation and error analysis for Maxwell transmission eigenvalue problems in spherical geometries
We propose and analyze an efficient spectral-Galerkin approximation for the Maxwell transmission eigenvalue problem in spherical geometry. Using a vector spherical harmonic expansion, we reduce the problem to a sequence of equivalent one-dimensional TE and TM modes that can be solved individually in parallel. For the TE mode, we derive associated generalized eigenvalue problems and corresponding pole conditions. Then we introduce weighted Sobolev spaces based on the pole condition and prove error estimates for the generalized eigenvalue problem. The TM mode is a coupled system with four unknown functions, which is challenging for numerical calculation. To handle it, we design an effective algorithm using Legendre-type vector basis functions. Finally, we provide some numerical experiments to validate our theoretical results and demonstrate the efficiency of the algorithms.
0
0
1
0
0
0
Mendelian randomization with fine-mapped genetic data: choosing from large numbers of correlated instrumental variables
Mendelian randomization uses genetic variants to make causal inferences about the effect of a risk factor on an outcome. With fine-mapped genetic data, there may be hundreds of genetic variants in a single gene region any of which could be used to assess this causal relationship. However, using too many genetic variants in the analysis can lead to spurious estimates and inflated Type 1 error rates. But if only a few genetic variants are used, then the majority of the data is ignored and estimates are highly sensitive to the particular choice of variants. We propose an approach based on summarized data only (genetic association and correlation estimates) that uses principal components analysis to form instruments. This approach has desirable theoretical properties: it takes the totality of data into account and does not suffer from numerical instabilities. It also has good properties in simulation studies: it is not particularly sensitive to varying the genetic variants included in the analysis or the genetic correlation matrix, and it does not have greatly inflated Type 1 error rates. Overall, the method gives estimates that are not so precise as those from variable selection approaches (such as using a conditional analysis or pruning approach to select variants), but are more robust to seemingly arbitrary choices in the variable selection step. Methods are illustrated by an example using genetic associations with testosterone for 320 genetic variants to assess the effect of sex hormone-related pathways on coronary artery disease risk, in which variable selection approaches give inconsistent inferences.
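A rough sketch of the summarized-data construction: instruments are taken as leading principal components of a weighted genetic correlation matrix. The weighting below (associations scaled by their standard errors) is one plausible convention, not necessarily the authors' exact choice, and all names are ours:

```python
import numpy as np

def pca_instruments(beta_x, se_x, corr, var_explained=0.99):
    """Principal-components instruments from summary data and a correlation matrix."""
    w = np.asarray(beta_x) / np.asarray(se_x)   # weight variants by precision-scaled effect
    psi = corr * np.outer(w, w)                 # weighted genetic correlation matrix
    vals, vecs = np.linalg.eigh(psi)
    order = np.argsort(vals)[::-1]              # eigenvalues in decreasing order
    vals, vecs = vals[order], vecs[:, order]
    k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), var_explained)) + 1
    return vecs[:, :k]                          # loadings defining k instruments
```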
0
0
0
1
0
0
Building competitive direct acoustics-to-word models for English conversational speech recognition
Direct acoustics-to-word (A2W) models in the end-to-end paradigm have received increasing attention compared to conventional sub-word based automatic speech recognition models using phones, characters, or context-dependent hidden Markov model states. This is because A2W models recognize words from speech without any decoder, pronunciation lexicon, or externally-trained language model, making training and decoding with such models simple. Prior work has shown that A2W models require orders of magnitude more training data in order to perform comparably to conventional models. Our work also showed this accuracy gap when using the English Switchboard-Fisher data set. This paper describes a recipe to train an A2W model that closes this gap and is at-par with state-of-the-art sub-word based models. We achieve a word error rate of 8.8%/13.9% on the Hub5-2000 Switchboard/CallHome test sets without any decoder or language model. We find that model initialization, training data order, and regularization have the most impact on the A2W model performance. Next, we present a joint word-character A2W model that learns to first spell the word and then recognize it. This model provides a rich output to the user instead of simple word hypotheses, making it especially useful in the case of words unseen or rarely-seen during training.
1
0
0
1
0
0
Convolutional Dictionary Learning: Acceleration and Convergence
Convolutional dictionary learning (CDL or sparsifying CDL) has many applications in image processing and computer vision. There has been growing interest in developing efficient algorithms for CDL, mostly relying on the augmented Lagrangian (AL) method or the variant alternating direction method of multipliers (ADMM). When their parameters are properly tuned, AL methods have shown fast convergence in CDL. However, the parameter tuning process is not trivial due to its data dependence and, in practice, the convergence of AL methods depends on the AL parameters for nonconvex CDL problems. To mitigate these problems, this paper proposes a new practically feasible and convergent Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The BPG-M-based CDL is investigated with different block updating schemes and majorization matrix designs, and further accelerated by incorporating some momentum coefficient formulas and restarting techniques. All of the methods investigated incorporate a boundary artifacts removal (or, more generally, sampling) operator in the learning model. Numerical experiments show that, without needing any parameter tuning process, the proposed BPG-M approach converges more stably to desirable solutions of lower objective values than the existing state-of-the-art ADMM algorithm and its memory-efficient variant do. Compared to the ADMM approaches, the BPG-M method using a multi-block updating scheme is particularly useful in single-threaded CDL algorithms handling large datasets, due to its lower memory requirement and no polynomial computational complexity. Image denoising experiments show that, for relatively strong additive white Gaussian noise, the filters learned by BPG-M-based CDL outperform those trained by the ADMM approach.
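The momentum and restarting devices mentioned here follow the standard accelerated proximal-gradient pattern; the sketch below shows that generic pattern with a gradient-based restart test, not the paper's BPG-M, whose majorization matrices are problem-specific:

```python
import numpy as np

def accel_prox_grad(grad, prox, x0, step, n_iter=200):
    """FISTA-style accelerated proximal gradient with adaptive restart."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox(y - step * grad(y), step)        # proximal gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2      # momentum coefficient update
        if np.dot(y - x_new, x_new - x) > 0:          # restart if momentum points uphill
            t_new, y = 1.0, x_new
        else:
            y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```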
1
0
1
0
0
0
The late-time light curve of the type Ia supernova SN 2011fe
We present late-time optical $R$-band imaging data from the Palomar Transient Factory (PTF) for the nearby type Ia supernova SN 2011fe. The stacked PTF light curve provides densely sampled coverage down to $R\simeq22$ mag over 200 to 620 days past explosion. Combining with literature data, we estimate the pseudo-bolometric light curve for this event from 200 to 1600 days after explosion, and constrain the likely near-infrared contribution. This light curve shows a smooth decline consistent with radioactive decay, except over ~450 to ~600 days where the light curve appears to decrease faster than expected based on the radioactive isotopes presumed to be present, before flattening at around 600 days. We model the 200-1600d pseudo-bolometric light curve with the luminosity generated by the radioactive decay chains of $^{56}$Ni, $^{57}$Ni and $^{55}$Co, and find it is not consistent with models that have full positron trapping and no infrared catastrophe (IRC); some additional energy escape other than optical/near-IR photons is required. However, the light curve is consistent with models that allow for positron escape (reaching 75% by day 500) and/or an IRC (with 85% of the flux emerging in non-optical wavelengths by day 600). The presence of the $^{57}$Ni decay chain is robustly detected, but the $^{55}$Co decay chain is not formally required, with an upper mass limit estimated at 0.014 M$_{\odot}$. The measurement of the $^{57}$Ni/$^{56}$Ni mass ratio is subject to significant systematic uncertainties, but all of our fits require a high ratio >0.031 (>1.3 in solar abundances).
0
1
0
0
0
0
Convergence analysis of the block Gibbs sampler for Bayesian probit linear mixed models with improper priors
In this article, we consider Markov chain Monte Carlo (MCMC) algorithms for exploring the intractable posterior density associated with Bayesian probit linear mixed models under improper priors on the regression coefficients and variance components. In particular, we construct the two-block Gibbs sampler using the data augmentation (DA) techniques. Furthermore, we prove geometric ergodicity of the Gibbs sampler, which is the foundation for building central limit theorems for MCMC based estimators and subsequent inferences. The conditions for geometric convergence are similar to those guaranteeing posterior propriety. We also provide conditions for posterior propriety when the design matrices take commonly observed forms. In general, the Haar parameter expansion for DA (PX-DA) algorithm is an improvement of the DA algorithm and it has been shown that it is theoretically at least as good as the DA algorithm. Here we construct a Haar PX-DA algorithm, which has essentially the same computational cost as the two-block Gibbs sampler.
0
0
1
1
0
0
The complexity of the Multiple Pattern Matching Problem for random strings
We generalise a multiple string pattern matching algorithm, recently proposed by Fredriksson and Grabowski [J. Discr. Alg. 7, 2009], to deal with arbitrary dictionaries on an alphabet of size $s$. If $r_m$ is the number of words of length $m$ in the dictionary, and $\phi(r) = \max_m \ln(s\, m\, r_m)/m$, the complexity rate for the string characters to be read by this algorithm is at most $\kappa_{{}_\textrm{UB}}\, \phi(r)$ for some constant $\kappa_{{}_\textrm{UB}}$. On the other hand, we generalise the classical lower bound of Yao [SIAM J. Comput. 8, 1979], for the problem with a single pattern, to deal with arbitrary dictionaries, and determine it to be at least $\kappa_{{}_\textrm{LB}}\, \phi(r)$. This proves the optimality of the algorithm, improving and correcting previous claims.
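The complexity rate $\phi(r)$ is given explicitly in the abstract and is easy to evaluate for a concrete dictionary; a small helper (our code, not the authors'):

```python
import math
from collections import Counter

def phi(dictionary, s):
    """phi(r) = max over word lengths m of ln(s * m * r_m) / m."""
    r = Counter(len(w) for w in dictionary)   # r_m: number of words of length m
    return max(math.log(s * m * r_m) / m for m, r_m in r.items())

print(phi(["abra", "cadabra", "ab"], s=26))
```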
1
0
0
0
0
0
Functoriality and uniformity in Hrushovski's groupoid-cover correspondence
The correspondence between definable connected groupoids in a theory $T$ and internal generalised imaginary sorts of $T$, established by Hrushovski in ["Groupoids, imaginaries and internal covers," Turkish Journal of Mathematics, 2012], is here extended in two ways: First, it is shown that the correspondence is in fact an equivalence of categories, with respect to appropriate notions of morphism. Secondly, the equivalence of categories is shown to vary uniformly in definable families, with respect to an appropriate relativisation of these categories. Some elaborations on Hrushovski's original constructions are also included.
0
0
1
0
0
0
Simple Conditions for Metastability of Continuous Markov Chains
A family $\{Q_{\beta}\}_{\beta \geq 0}$ of Markov chains is said to exhibit $\textit{metastable mixing}$ with $\textit{modes}$ $S_{\beta}^{(1)},\ldots,S_{\beta}^{(k)}$ if its spectral gap (or some other mixing property) is very close to the worst conductance $\min(\Phi_{\beta}(S_{\beta}^{(1)}), \ldots, \Phi_{\beta}(S_{\beta}^{(k)}))$ of its modes. We give simple sufficient conditions for a family of Markov chains to exhibit metastability in this sense, and verify that these conditions hold for a prototypical Metropolis-Hastings chain targeting a mixture distribution. Our work differs from existing work on metastability in that, for the class of examples we are interested in, it gives an asymptotically exact formula for the spectral gap (rather than a bound that can be very far from sharp) while at the same time giving technical conditions that are easier to verify for many statistical examples. Our bounds from this paper are used in a companion paper to compare the mixing times of the Hamiltonian Monte Carlo algorithm and a random walk algorithm for multimodal target distributions.
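The conductance entering these bounds is directly computable for a finite chain; a minimal sketch with the usual convention $\Phi(S)=\sum_{i\in S,\,j\notin S}\pi_i P_{ij}/\pi(S)$ (our code, illustrative only):

```python
import numpy as np

def conductance(P, pi, S):
    """Conductance of a vertex set S under transition matrix P and stationary pi."""
    mask = np.zeros(len(pi), dtype=bool)
    mask[list(S)] = True
    flow = (pi[mask][:, None] * P[np.ix_(mask, ~mask)]).sum()  # probability flow out of S
    return flow / pi[mask].sum()

# Two weakly coupled states: the tiny conductance signals metastability.
P = np.array([[0.99, 0.01], [0.01, 0.99]])
print(conductance(P, np.array([0.5, 0.5]), [0]))
```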
0
0
0
1
0
0
Deterministic preparation of highly non-classical macroscopic quantum states
We present a scheme to deterministically prepare non-classical quantum states of a massive mirror, including highly non-Gaussian states exhibiting sizeable negativity of the Wigner function. This is achieved by exploiting the non-linear light-matter interaction in an optomechanical cavity by driving the system with optimally designed frequency patterns. Our scheme proves to be resilient against mechanical and optical damping, as well as mechanical thermal noise and imperfections in the driving scheme. Our proposal thus opens a promising route for table-top experiments to explore and exploit macroscopic quantum phenomena.
0
1
0
0
0
0
On Local Optimizers of Acquisition Functions in Bayesian Optimization
Bayesian optimization is a sample-efficient method for finding a global optimum of an expensive-to-evaluate black-box function. A global solution is found by accumulating pairs of query points and corresponding function values, repeating two procedures: (i) learning a surrogate model for the objective function using the data observed so far; (ii) maximizing an acquisition function to determine where next to query the objective function. Convergence guarantees are only valid when the global optimizer of the acquisition function is found and selected as the next query point. In practice, however, local optimizers of acquisition functions are also used, since searching for the exact optimizer of the acquisition function is often a non-trivial or time-consuming task. In this paper we present an analysis of the behavior of local optimizers of acquisition functions, in terms of instantaneous regrets over global optimizers. We also present a performance analysis for the case where multi-started local optimizers are used to find the maximum of the acquisition function. Numerical experiments confirm the validity of our theoretical analysis.
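For concreteness, the multi-started local optimization analyzed here can be sketched with scipy's L-BFGS-B applied to a generic acquisition function; the acquisition acq is a placeholder supplied by the caller, and the names are ours:

```python
import numpy as np
from scipy.optimize import minimize

def maximize_acquisition(acq, bounds, n_starts=10, seed=0):
    """Multi-start local maximization of an acquisition function over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    best_x, best_val = None, -np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)                  # random restart inside the box
        res = minimize(lambda x: -acq(x), x0, bounds=bounds, method="L-BFGS-B")
        if -res.fun > best_val:                   # keep the best local optimum found
            best_x, best_val = res.x, -res.fun
    return best_x, best_val
```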
1
0
0
1
0
0
One Password: An Encryption Scheme for Hiding Users' Register Information
In recent years, attacks that leverage register information (e.g., accounts and passwords) leaked from third-party applications to break into other applications have become common and serious. We call this attack "database collision". Traditionally, people have to keep dozens of accounts and passwords for different applications to prevent this attack. In this paper, we propose a novel encryption scheme for hiding users' register information and preventing this attack. Specifically, we first hash the register information using an existing safe hash function. The hash string is then hidden; instead, a coefficient vector is stored for verification. Coefficient vectors of the same register information are generated randomly for different applications. Hence, the original information can hardly be cracked by dictionary-based attacks or database collision in practice. Using our encryption scheme, each user only needs to keep one password for dozens of applications.
1
0
0
0
0
0
The RoPES project with HARPS and HARPS-N. I. A system of super-Earths orbiting the moderately active K-dwarf HD 176986
We report the discovery of a system of two super-Earths orbiting the moderately active K-dwarf HD 176986. This work is part of the RoPES RV program of G- and K-type stars, which combines radial velocities (RVs) from the HARPS and HARPS-N spectrographs to search for short-period terrestrial planets. HD 176986 b and c are super-Earth planets with masses of 5.74 and 9.18 M$_{\oplus}$, orbital periods of 6.49 and 16.82 days, and distances of 0.063 and 0.119 AU in orbits that are consistent with circular. The host star is a K2.5 dwarf, and despite its modest level of chromospheric activity ($\log(R'_{HK}) = -4.90 \pm 0.04$), it shows a complex activity pattern. Along with the discovery of the planets, we study the magnetic cycle and rotation of the star. HD 176986 proves to be suitable for testing the available RV analysis techniques and furthering our understanding of stellar activity.
0
1
0
0
0
0
On the possibility of developing quasi-cw high-power high-pressure laser on 4p-4s transition of ArI with electron beam - optical pumping: quenching of 4s (3P2) lower laser level
A new electron-beam-optical procedure is proposed for quasi-cw pumping of a high-pressure large-volume He-Ar laser on the 4p[1/2]1 - 4s[3/2]2 argon atom transition at the wavelength of 912.5 nm. It consists of the creation and maintenance of the necessary density of the 4s[3/2]2 metastable state in the gain medium by a fast electron beam and subsequent optical pumping of the upper laser level via the classical three-level scheme using a laser diode. Absorption probing is used to study collisional quenching of the Ar* metastable in electron-beam-excited high-pressure He-Ar mixtures with a low content of argon. The rate constants for the plasma-chemical reactions Ar* + He + Ar -> Ar2* + He ($(3.6 \pm 0.4)\times10^{-33}$ cm$^6$/s), Ar* + 2He -> HeAr* + He ($(4.4 \pm 0.9)\times10^{-36}$ cm$^6$/s), and Ar* + He -> Products + He ($(2.4 \pm 0.3)\times10^{-15}$ cm$^3$/s) are measured for the first time.
0
1
0
0
0
0
An Adaptive Characteristic-wise Reconstruction WENOZ scheme for Gas Dynamic Euler Equations
Due to its excellent shock-capturing capability and high resolution, the WENO scheme family has been widely used in a variety of compressible flow simulations. However, for problems containing strong shocks and contact discontinuities, such as the Lax shock tube problem, the WENO scheme still produces numerical oscillations. To avoid such numerical oscillations, the characteristic-wise construction method should be applied. Compared to component-wise reconstruction, characteristic-wise reconstruction leads to much higher computational cost and is thus not suitable for large-scale simulations such as direct numerical simulation of turbulence. In this paper, an adaptive characteristic-wise reconstruction WENO scheme, i.e. the AdaWENO scheme, is proposed to improve the computational efficiency of the characteristic-wise reconstruction method. The new scheme performs characteristic-wise reconstruction near discontinuities while switching to component-wise reconstruction in smooth regions. Meanwhile, a new calculation strategy for the WENO smoothness indicators is implemented to reduce the overall computational cost. Several one-dimensional and two-dimensional numerical tests are performed to validate and evaluate the AdaWENO scheme. Numerical results show that AdaWENO maintains an essentially non-oscillatory flow field near discontinuities, as the characteristic-wise reconstruction method does. Besides, compared to component-wise reconstruction, AdaWENO is about 40% faster, which indicates its excellent efficiency.
0
1
0
0
0
0
A level set-based structural optimization code using FEniCS
This paper presents an educational code written using FEniCS, based on the level set method, to perform compliance minimization in structural optimization. We use the concept of the distributed shape derivative to compute a descent direction for the compliance, which is defined as a shape functional. The use of the distributed shape derivative is facilitated by FEniCS, which makes it possible to handle complicated partial differential equations with a simple implementation. The code is written for compliance minimization in the framework of linearized elasticity, and can be easily adapted to tackle other functionals and partial differential equations. We also provide an extension of the code for compliant mechanisms. We start by explaining how to compute shape derivatives, and discuss the differences between the distributed and boundary expressions of the shape derivative. Then we describe the implementation in detail, and show the application of this code to some classical benchmarks of topology optimization. The code is available at this http URL, and the main file is also given in the appendix.
0
0
1
0
0
0
Transaction Support over Redis: An Overview
This document outlines the approach to supporting cross-node transactions over a Redis cluster.
1
0
0
0
0
0
Subgraphs and motifs in a dynamic airline network
How does the small-scale topological structure of an airline network behave as the network evolves? To address this question, we study the dynamic and spatial properties of small undirected subgraphs using 15 years of data on Southwest Airlines' domestic route service. We find that this real-world network has much in common with random graphs, and describe a possible power-law scaling between subgraph counts and the number of edges in the network, which appears to be quite robust to changes in network density and size. We use analytic formulae to identify statistically over- and under-represented subgraphs, known as motifs and anti-motifs, and discover the existence of substantial topology transitions. We propose a simple subgraph-based node ranking measure that is not always highly correlated with standard node centrality and can identify important nodes relative to specific topologies, and we investigate the spatial "distribution" of the triangle subgraph using graphical tools. Our results have implications for the way in which subgraphs can be used to analyze real-world networks.
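Subgraph counting of this kind is routine with networkx; as a small illustration (our code, on a synthetic graph rather than airline data), triangles can be counted and compared against edge counts:

```python
import networkx as nx

def triangle_count(G):
    """Total number of triangle subgraphs in an undirected graph."""
    return sum(nx.triangles(G).values()) // 3   # each triangle is seen at 3 vertices

G = nx.gnm_random_graph(100, 400, seed=1)       # stand-in for a route network
print(triangle_count(G), G.number_of_edges())
```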
1
0
0
0
0
0
On the Glitch Phenomenon
The Principle of the Glitch states that for any device which makes a discrete decision based upon a continuous range of possible inputs, there are inputs for which it will take arbitrarily long to reach a decision. The appropriate mathematical setting for studying this principle is described. This involves defining the concept of continuity for mappings on sets of functions. It can then be shown that the glitch principle follows from the continuous behavior of the device.
1
0
1
0
0
0
Asymptotic properties and approximation of Bayesian logspline density estimators for communication-free parallel computing methods
In this article we perform an asymptotic analysis of Bayesian parallel density estimators which are based on logspline density estimation. The parallel estimator we introduce is in the spirit of a kernel density estimator introduced in recent studies. We provide a numerical procedure that produces the density estimator itself in place of the sampling algorithm. We then derive an error bound for the mean integrated squared error for the full data posterior density estimator. We also investigate the parameters that arise from logspline density estimation and the numerical approximation procedure. Our investigation identifies specific choices of parameters for logspline density estimation that result in the error bound scaling appropriately in relation to these choices.
0
0
1
1
0
0
Tunable Weyl and Dirac states in the nonsymmorphic compound $\rm\mathbf{CeSbTe}$
Recent interest in topological semimetals has led to the proposal of many new topological phases that can be realized in real materials. Next to Dirac and Weyl systems, these include more exotic phases based on manifold band degeneracies in the bulk electronic structure. The exotic states in topological semimetals are usually protected by some sort of crystal symmetry and the introduction of magnetic order can influence these states by breaking time reversal symmetry. Here we show that we can realize a rich variety of different topological semimetal states in a single material, $\rm CeSbTe$. This compound can exhibit different types of magnetic order that can be accessed easily by applying a small field. It allows, therefore, for tuning the electronic structure and can drive it through a manifold of topologically distinct phases, such as the first nonsymmorphic magnetic topological material with an eight-fold band crossing at a high symmetry point. Our experimental results are backed by a full magnetic group theory analysis and ab initio calculations. This discovery introduces a realistic and promising platform for studying the interplay of magnetism and topology.
0
1
0
0
0
0
Personalised Query Suggestion for Intranet Search with Temporal User Profiling
Recent research has shown the usefulness of using collective user interaction data (e.g., query logs) to recommend query modification suggestions for Intranet search. However, most of the query suggestion approaches for Intranet search follow a "one size fits all" strategy, whereby different users who submit an identical query would get the same query suggestion list. This is problematic, as even with the same query, different users may have different topics of interest, which may change over time in response to the user's interaction with the system. We address the problem by proposing a personalised query suggestion framework for Intranet search. For each search session, we construct two temporal user profiles: a click user profile using the user's clicked documents and a query user profile using the user's submitted queries. We then use the two profiles to re-rank the non-personalised query suggestion list returned by a state-of-the-art query suggestion method for Intranet search. Experimental results on a large-scale query logs collection show that our personalised framework significantly improves the quality of suggested queries.
1
0
0
0
0
0
Navigability of Random Geometric Graphs in the Universe and Other Spacetimes
Random geometric graphs in hyperbolic spaces explain many common structural and dynamical properties of real networks, yet they fail to predict the correct values of the exponents of power-law degree distributions observed in real networks. In that respect, random geometric graphs in asymptotically de Sitter spacetimes, such as the Lorentzian spacetime of our accelerating universe, are more attractive as their predictions are more consistent with observations in real networks. Yet another important property of hyperbolic graphs is their navigability, and it remains unclear if de Sitter graphs are as navigable as hyperbolic ones. Here we study the navigability of random geometric graphs in three Lorentzian manifolds corresponding to universes filled only with dark energy (de Sitter spacetime), only with matter, and with a mixture of dark energy and matter as in our universe. We find that these graphs are navigable only in the manifolds with dark energy. This result implies that, in terms of navigability, random geometric graphs in asymptotically de Sitter spacetimes are as good as random hyperbolic graphs. It also establishes a connection between the presence of dark energy and navigability of the discretized causal structure of spacetime, which provides a basis for a different approach to the dark energy problem in cosmology.
1
1
0
0
0
0
Learning Geometric Concepts with Nasty Noise
We study the efficient learnability of geometric concept classes - specifically, low-degree polynomial threshold functions (PTFs) and intersections of halfspaces - when a fraction of the data is adversarially corrupted. We give the first polynomial-time PAC learning algorithms for these concept classes with dimension-independent error guarantees in the presence of nasty noise under the Gaussian distribution. In the nasty noise model, an omniscient adversary can arbitrarily corrupt a small fraction of both the unlabeled data points and their labels. This model generalizes well-studied noise models, including the malicious noise model and the agnostic (adversarial label noise) model. Prior to our work, the only concept class for which efficient malicious learning algorithms were known was the class of origin-centered halfspaces. Specifically, our robust learning algorithm for low-degree PTFs succeeds under a number of tame distributions -- including the Gaussian distribution and, more generally, any log-concave distribution with (approximately) known low-degree moments. For LTFs under the Gaussian distribution, we give a polynomial-time algorithm that achieves error $O(\epsilon)$, where $\epsilon$ is the noise rate. At the core of our PAC learning results is an efficient algorithm to approximate the low-degree Chow-parameters of any bounded function in the presence of nasty noise. To achieve this, we employ an iterative spectral method for outlier detection and removal, inspired by recent work in robust unsupervised learning. Our aforementioned algorithm succeeds for a range of distributions satisfying mild concentration bounds and moment assumptions. The correctness of our robust learning algorithm for intersections of halfspaces makes essential use of a novel robust inverse independence lemma that may be of broader interest.
1
0
0
0
0
0
The relationship between the number of editorial board members and the scientific output of universities in the chemistry field
Editorial board members, who are considered the gatekeepers of scientific journals, play an important role in academia, and may directly or indirectly affect the scientific output of a university. In this article, we used the quantile regression method on a sample of 1,387 universities in the chemistry field to characterize the correlation between the number of editorial board members and the scientific output of their universities. Furthermore, we used time-series data and the Granger causality test to explore the causal relationship between the number of editorial board members and the number of articles of some top universities. Our results suggest that the number of editorial board members is positively and significantly related to the scientific output (as measured by the number of articles, total number of citations, citations per paper, and h-index) of their universities. However, the Granger causality test results suggest that the causal relationship between the number of editorial board members and the number of articles of some top universities is not obvious. Combining these findings with the results of qualitative interviews with editorial board members, we discuss the causal relationship between the number of editorial board members and the scientific output of their universities.
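Quantile regression of the kind used in the study is available off the shelf; a sketch with statsmodels on synthetic data (the variable names and generated numbers are ours and purely illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"board_members": rng.poisson(5, 500)})
df["articles"] = 50 + 12 * df["board_members"] + rng.normal(0, 25, 500)

# Fit the conditional median (q=0.5); other quantiles probe other parts of the distribution.
fit = smf.quantreg("articles ~ board_members", df).fit(q=0.5)
print(fit.params)
```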
1
0
0
1
0
0
A Non-standard Standard Model
This paper examines the Standard Model under the strong-electroweak gauge group $SU_S(3)\times U_{EW}(2)$ subject to the condition $u_{EW}(2)\not\cong su_I(2)\oplus u_Y(1)$. Physically, the condition ensures that all electroweak gauge bosons interact with each other prior to symmetry breaking --- as one might expect from $U(2)$ invariance. This represents a crucial shift in the notion of physical gauge bosons: Unlike the Standard Model which posits a change of Lie algebra basis induced by spontaneous symmetry breaking, here the basis is unaltered and $A,\,Z^0,\,W^\pm$ represent (modulo $U_{EW}(2)$ gauge transformations) the physical bosons both \emph{before} and after spontaneous symmetry breaking. Our choice of $u_{EW}(2)$ basis requires some modification of the matter field sector of the Standard Model. Careful attention to the product group structure calls for strong-electroweak degrees of freedom in the $(\mathbf{3},\mathbf{2})$ and the $(\mathbf{3},\overline{\mathbf{2}})$ of $SU_S(3)\times U_{EW}(2)$ that possess integer electric charge just like leptons. These degrees of freedom play the role of quarks, and they lead to a modified Lagrangian that nevertheless reproduces transition rates and cross sections equivalent to the Standard Model. The close resemblance between quark and lepton electroweak doublets in this picture suggests a mechanism for a phase transition between quarks and leptons that stems from the product structure of the gauge group. Our hypothesis is that the strong and electroweak bosons see each other as a source of decoherence. In effect, leptons get identified with the $SU_S(3)$-trace of quark representations. This mechanism allows for possible extensions of the Standard Model that don't require large inclusive multiplets of matter fields. As an example, we propose and investigate a model that turns out to have some promising cosmological implications.
0
1
0
0
0
0
Robust functional regression model for marginal mean and subject-specific inferences
We introduce flexible robust functional regression models, using various heavy-tailed processes, including a Student $t$-process. We propose efficient algorithms for estimating parameters for the marginal mean inferences and for predicting conditional means as well as interpolation and extrapolation for the subject-specific inferences. We develop bootstrap prediction intervals for conditional mean curves. Numerical studies show that the proposed model provides robust analysis against data contamination or distribution misspecification, and the proposed prediction intervals maintain the nominal confidence levels. A real data application is presented as an illustrative example.
0
0
0
1
0
0