Schema (field — type, value range):

title — string, length 7 to 239
abstract — string, length 7 to 2.76k
cs — int64, 0 or 1
phy — int64, 0 or 1
math — int64, 0 or 1
stat — int64, 0 or 1
quantitative biology — int64, 0 or 1
quantitative finance — int64, 0 or 1
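The schema above describes a multi-label record: two string fields plus six binary topic flags. As a hypothetical illustration (the concrete file format is not specified here, and the helper name `make_record` is an assumption), one record can be sketched as a plain Python dictionary keyed by the schema's field names:

```python
# Hypothetical sketch of one record in this multi-label dataset.
# Field names follow the schema above; the construction itself is assumed.

LABELS = ["cs", "phy", "math", "stat",
          "quantitative biology", "quantitative finance"]

def make_record(title, abstract, flags):
    """Build a record dict from a title, an abstract, and six 0/1 topic flags."""
    if len(flags) != len(LABELS):
        raise ValueError("expected one flag per label")
    record = {"title": title, "abstract": abstract}
    record.update(zip(LABELS, flags))  # attach each 0/1 flag to its label name
    return record

# Example using the flags of one record from this dump.
rec = make_record(
    "Efficient Decomposition of High-Rank Tensors",
    "Tensors are a natural way to express correlations...",
    [1, 1, 0, 0, 0, 0],
)
topics = [name for name in LABELS if rec[name] == 1]  # -> ["cs", "phy"]
```

Because the labels are independent binary flags, a record may carry several topics at once (as in the example above) or, in principle, none.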
Generalized weighted Ostrowski and Ostrowski-Grüss type inequalities on time scales via a parameter function
We prove generalized weighted Ostrowski and Ostrowski--Grüss type inequalities on time scales via a parameter function. In particular, our result extends a result of Dragomir and Barnett. Furthermore, we apply our results to the continuous, discrete, and quantum cases, to obtain some interesting new inequalities.
cs: 0, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
"Found in Translation": Predicting Outcomes of Complex Organic Chemistry Reactions using Neural Sequence-to-Sequence Models
There is an intuitive analogy between an organic chemist's understanding of a compound and a language speaker's understanding of a word. Consequently, it is possible to introduce the basic concepts and analyze the potential impact of linguistic analysis on the world of organic chemistry. In this work, we cast the reaction prediction task as a translation problem by introducing a template-free sequence-to-sequence model, trained end-to-end and fully data-driven. We propose a novel way of tokenization, which is arbitrarily extensible with reaction information. With this approach, we demonstrate results superior to the state-of-the-art solution by a significant margin in top-1 accuracy. Specifically, our approach achieves an accuracy of 80.1% without relying on auxiliary knowledge such as reaction templates. Also, 66.4% accuracy is reached on a larger and noisier dataset.
cs: 1, phy: 0, math: 0, stat: 1, quantitative biology: 0, quantitative finance: 0
Efficient Decomposition of High-Rank Tensors
Tensors are a natural way to express correlations among many physical variables, but storing tensors in a computer naively requires memory which scales exponentially in the rank of the tensor. This is not optimal, as the required memory is actually set not by the rank but by the mutual information amongst the variables in question. Representations such as the tensor tree perform near-optimally when the tree decomposition is chosen to reflect the correlation structure in question, but making such a choice is non-trivial and good heuristics remain highly context-specific. In this work I present two new algorithms for choosing efficient tree decompositions, independent of the physical context of the tensor. The first is a brute-force algorithm which produces optimal decompositions up to truncation error but is generally impractical for high-rank tensors, as the number of possible choices grows exponentially in rank. The second is a greedy algorithm, and while it is not optimal it performs extremely well in numerical experiments while having runtime which makes it practical even for tensors of very high rank.
cs: 1, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Universality in Chaos: Lyapunov Spectrum and Random Matrix Theory
We propose the existence of a new universality in classical chaotic systems when the number of degrees of freedom is large: the statistical property of the Lyapunov spectrum is described by Random Matrix Theory. We demonstrate it by studying the finite-time Lyapunov exponents of the matrix model of a stringy black hole and the mass deformed models. The massless limit, which has a dual string theory interpretation, is special in that the universal behavior can be seen already at t=0, while in other cases it sets in at late time. The same pattern is demonstrated also in the product of random matrices.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Fixed points of competitive threshold-linear networks
Threshold-linear networks (TLNs) are models of neural networks that consist of simple, perceptron-like neurons and exhibit nonlinear dynamics that are determined by the network's connectivity. The fixed points of a TLN, including both stable and unstable equilibria, play a critical role in shaping its emergent dynamics. In this work, we provide two novel characterizations for the set of fixed points of a competitive TLN: the first is in terms of a simple sign condition, while the second relies on the concept of domination. We apply these results to a special family of TLNs, called combinatorial threshold-linear networks (CTLNs), whose connectivity matrices are defined from directed graphs. This leads us to prove a series of graph rules that enable one to determine fixed points of a CTLN by analyzing the underlying graph. Additionally, we study larger networks composed of smaller "building block" subnetworks, and prove several theorems relating the fixed points of the full network to those of its components. Our results provide the foundation for a kind of "graphical calculus" to infer features of the dynamics from a network's connectivity.
cs: 0, phy: 0, math: 0, stat: 0, quantitative biology: 1, quantitative finance: 0
A Decision Procedure for Herbrand Formulae without Skolemization
This paper describes a decision procedure for disjunctions of conjunctions of anti-prenex normal forms of pure first-order logic (FOLDNFs) that do not contain $\vee$ within the scope of quantifiers. The disjuncts of these FOLDNFs are equivalent to prenex normal forms whose quantifier-free parts are conjunctions of atomic and negated atomic formulae (= Herbrand formulae). In contrast to the usual algorithms for Herbrand formulae, neither skolemization nor unification algorithms with function symbols are applied. Instead, a procedure is described that rests on nothing but equivalence transformations within pure first-order logic (FOL). This procedure involves the application of a calculus for negative normal forms (the NNF-calculus) with $A \dashv\vdash A \wedge A$ (= $\wedge$I) as the sole rule that increases the complexity of given FOLDNFs. The described algorithm illustrates how, in the case of Herbrand formulae, decision problems can be solved through a systematic search for proofs that reduce the number of applications of the rule $\wedge$I to a minimum in the NNF-calculus. In the case of Herbrand formulae, it is even possible to entirely abstain from applying $\wedge$I. Finally, it is shown how the described procedure can be used within an optimized general search for proofs of contradiction and what kind of questions arise for a $\wedge$I-minimal proof strategy in the case of a general search for proofs of contradiction.
cs: 1, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
The role of relativistic many-body theory in probing new physics beyond the standard model via the electric dipole moments of diamagnetic atoms
The observation of electric dipole moments (EDMs) in atomic systems due to parity and time-reversal violating (P,T-odd) interactions can probe new physics beyond the standard model and also provide insights into the matter-antimatter asymmetry in the Universe. The EDMs of open-shell atomic systems are sensitive to the electron EDM and the P,T-odd scalar-pseudoscalar (S-PS) semi-leptonic interaction, but the dominant contributions to the EDMs of diamagnetic atoms come from the hadronic and tensor-pseudotensor (T-PT) semi-leptonic interactions. Several diamagnetic atoms like $^{129}$Xe, $^{171}$Yb, $^{199}$Hg, $^{223}$Rn, and $^{225}$Ra are candidates for the experimental search for the possible existence of EDMs, and among these $^{199}$Hg has yielded the lowest limit till date. The T or CP violating coupling constants of the aforementioned interactions can be extracted from these measurements by combining with atomic and nuclear calculations. In this work, we report the calculations of the EDMs of the above atoms by including both the electromagnetic and P,T-odd violating interactions simultaneously. These calculations are performed by employing relativistic many-body methods based on the random phase approximation (RPA) and the singles and doubles coupled-cluster (CCSD) method starting with the Dirac-Hartree-Fock (DHF) wave function in both cases. The differences in the results from both the methods shed light on the importance of the non-core-polarization electron correlation effects that are accounted for by the CCSD method. We also determine electric dipole polarizabilities of these atoms, which have computational similarities with EDMs and compare them with the available experimental and other theoretical results to assess the accuracy of our calculations.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Security Incident Recognition and Reporting (SIRR): An Industrial Perspective
Reports and press releases highlight that security incidents continue to plague organizations. While researchers and practitioners alike endeavor to identify and implement realistic security solutions to prevent incidents from occurring, the ability to initially identify a security incident is paramount when researching a security incident lifecycle. Hence, this research investigates the ability of employees in a Global Fortune 500 financial organization, through internal electronic surveys, to recognize and report security incidents to pursue a more holistic security posture. The research contribution is an initial insight into security incident perceptions by employees in the financial sector as well as serving as an initial guide for future security incident recognition and reporting initiatives.
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
On attainability of optimal controls in coefficients for system of Hammerstein type with anisotropic p-Laplacian
In this paper we consider an optimal control problem for the coupled system of a nonlinear monotone Dirichlet problem with anisotropic p-Laplacian and matrix-valued nonsmooth controls in its coefficients and a nonlinear equation of Hammerstein type. Using the direct method in the calculus of variations, we prove the existence of an optimal control for the considered problem and provide a sensitivity analysis for a specific case of this problem with respect to two-parameter regularization.
cs: 0, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
Adversarial Variational Bayes Methods for Tweedie Compound Poisson Mixed Models
The Tweedie Compound Poisson-Gamma model is routinely used for modeling non-negative continuous data with a discrete probability mass at zero. Mixed models with random effects account for the covariance structure related to the grouping hierarchy in the data. An important application of Tweedie mixed models is pricing insurance policies, e.g. car insurance. However, the intractable likelihood function, the unknown variance function, and the hierarchical structure of mixed effects have presented considerable challenges for drawing inferences on Tweedie models. In this study, we tackle the Bayesian Tweedie mixed-effects models via variational inference approaches. In particular, we empower the posterior approximation by implicit models trained in an adversarial setting. To reduce the variance of gradients, we reparameterize random effects and integrate out one local latent variable of Tweedie. We also employ a flexible hyperprior to ensure the richness of the approximation. Our method is evaluated on both simulated and real-world data. Results show that the proposed method has smaller estimation bias on the random effects compared to traditional inference methods including MCMC; it also achieves state-of-the-art predictive performance, while offering a richer estimation of the variance function.
cs: 0, phy: 0, math: 0, stat: 1, quantitative biology: 0, quantitative finance: 0
Sampling of Temporal Networks: Methods and Biases
Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Personalizing Path-Specific Effects
Unlike classical causal inference, which often has an average causal effect of a treatment within a population as a target, in settings such as personalized medicine, the goal is to map a given unit's characteristics to a treatment tailored to maximize the expected outcome for that unit. Obtaining high-quality mappings of this type is the goal of the dynamic regime literature (Chakraborty and Moodie 2013), with connections to reinforcement learning and experimental design. Aside from the average treatment effects, mechanisms behind causal relationships are also of interest. A well-studied approach to mechanism analysis is establishing average effects along with a particular set of causal pathways, in the simplest case the direct and indirect effects. Estimating such effects is the subject of the mediation analysis literature (Robins and Greenland 1992; Pearl 2001). In this paper, we consider how unit characteristics may be used to tailor a treatment assignment strategy that maximizes a particular path-specific effect. In healthcare applications, finding such a policy is of interest if, for instance, we are interested in maximizing the chemical effect of a drug on an outcome (corresponding to the direct effect), while assuming drug adherence (corresponding to the indirect effect) is set to some reference level. To solve our problem, we define counterfactuals associated with path-specific effects of a policy, give a general identification algorithm for these counterfactuals, give a proof of completeness, and show how classification algorithms in machine learning (Chen, Zeng, and Kosorok 2016) may be used to find a high-quality policy. We validate our approach via a simulation study.
cs: 0, phy: 0, math: 0, stat: 1, quantitative biology: 0, quantitative finance: 0
Bridge Programs as an approach to improving diversity in physics
In most physical sciences, students from underrepresented minority (URM) groups constitute a small percentage of earned degrees at the undergraduate and graduate levels. Bridge programs can serve as an initiative to increase the number of URM students that gain access to graduate school and earn advanced degrees in physics. This talk discussed levels of representation in physical sciences as well as some results and best practices of current bridge programs in physics. The APS Bridge Program has enabled over 100 students to be placed into Bridge or graduate programs in physics, while retaining 88% of those placed.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
NEURAL: quantitative features for newborn EEG using Matlab
Background: For newborn infants in critical care, continuous monitoring of brain function can help identify infants at-risk of brain injury. Quantitative features allow a consistent and reproducible approach to EEG analysis, but only when all implementation aspects are clearly defined. Methods: We detail quantitative features frequently used in neonatal EEG analysis and present a Matlab software package together with exact implementation details for all features. The feature set includes stationary features that capture amplitude and frequency characteristics and features of inter-hemispheric connectivity. The software, a Neonatal Eeg featURe set in mAtLab (NEURAL), is open source and freely available. The software also includes a pre-processing stage with a basic artefact removal procedure. Conclusions: NEURAL provides a common platform for quantitative analysis of neonatal EEG. This will support reproducible research and enable comparisons across independent studies. These features present summary measures of the EEG that can also be used in automated methods to determine brain development and health of the newborn in critical care.
cs: 0, phy: 1, math: 0, stat: 1, quantitative biology: 0, quantitative finance: 0
False Positive Reduction by Actively Mining Negative Samples for Pulmonary Nodule Detection in Chest Radiographs
Generating large quantities of quality labeled data in medical imaging is very time-consuming and expensive. The performance of supervised algorithms for various tasks on imaging has improved drastically over the years; however, the availability of data to train these algorithms has become one of the main bottlenecks for implementation. To address this, we propose a semi-supervised learning method where pseudo-negative labels from unlabeled data are used to further refine the performance of a pulmonary nodule detection network in chest radiographs. After training with the proposed network, the false positive rate was reduced to 0.1266 from 0.4864 while maintaining sensitivity at 0.89.
cs: 0, phy: 0, math: 0, stat: 1, quantitative biology: 0, quantitative finance: 0
WASP-12b: A Mass-Losing Extremely Hot Jupiter
WASP-12b is an extreme hot Jupiter in a 1 day orbit, suffering profound irradiation from its F type host star. The planet is surrounded by a translucent exosphere which overfills the Roche lobe and produces line-blanketing absorption in the near-UV. The planet is losing mass. Another unusual property of the WASP-12 system is that observed chromospheric emission from the star is anomalously low: WASP-12 is an extreme outlier amongst thousands of stars when the log $R^{'}_{HK}$ chromospheric activity indicator is considered. Occam's razor suggests these two extremely rare properties coincide in this system because they are causally related. The absence of the expected chromospheric emission is attributable to absorption by a diffuse circumstellar gas shroud which surrounds the entire planetary system and fills our line of sight to the chromospherically active regions of the star. This circumstellar gas shroud is probably fed by mass loss from WASP-12b. The orbital eccentricity of WASP-12b is small but may be non-zero. The planet is part of a hierarchical quadruple system; its current orbit is consistent with prior secular dynamical evolution leading to a highly eccentric orbit followed by tidal circularization. When compared with the Galaxy's population of planets, WASP-12b lies on the upper boundary of the sub-Jovian desert in both the $(M_{\rm P}, P)$ and $(R_{\rm P}, P)$ planes. Determining the mass loss rate for WASP-12b will illuminate the mechanism(s) responsible for the sub-Jovian desert.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
On the status of the Born-Oppenheimer expansion in molecular systems theory
It is shown that the adiabatic Born-Oppenheimer expansion does not satisfy the necessary condition for the applicability of perturbation theory. A simple example of an exact solution of a problem that cannot be obtained from the Born-Oppenheimer expansion is given. A new version of perturbation theory for molecular systems is proposed.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Computing Tropical Prevarieties in Parallel
The computation of the tropical prevariety is the first step in the application of polyhedral methods to compute positive dimensional solution sets of polynomial systems. In particular, pretropisms are candidate leading exponents for the power series developments of the solutions. The computation of the power series may start as soon as one pretropism is available, so our parallel computation of the tropical prevariety has an application in a pipelined solver. We present a parallel implementation of dynamic enumeration. Our first distributed memory implementation with forked processes achieved good speedups, but quite often resulted in large variations in the execution times of the processes. The shared memory multithreaded version applies work stealing to reduce the variability of the run time. Our implementation applies the thread safe Parma Polyhedral Library (PPL), in exact arithmetic with the GNU Multiprecision Arithmetic Library (GMP), aided by the fast memory allocations of TCMalloc. Our parallel implementation is capable of computing the tropical prevariety of the cyclic 16-roots problem. We also report on computational experiments on the $n$-body and $n$-vortex problems; our computational results compare favorably with Gfan.
cs: 1, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
3k-4 theorem for ordered groups
Recently, G. A. Freiman, M. Herzog, P. Longobardi, and M. Maj proved two `structure theorems' for ordered groups \cite{FHLM}. We give elementary proofs of these two theorems.
cs: 0, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
The Extinction Properties of and Distance to the Highly Reddened Type Ia Supernova SN 2012cu
Correction of Type Ia Supernova brightnesses for extinction by dust has proven to be a vexing problem. Here we study the dust foreground to the highly reddened SN 2012cu, which is projected onto a dust lane in the galaxy NGC 4772. The analysis is based on multi-epoch, spectrophotometric observations spanning 3,300 - 9,200 {\AA}, obtained by the Nearby Supernova Factory. Phase-matched comparison of the spectroscopically twinned SN 2012cu and SN 2011fe across 10 epochs results in the best-fit color excess of (E(B-V), RMS) = (1.00, 0.03) and total-to-selective extinction ratio of (RV , RMS) = (2.95, 0.08) toward SN 2012cu within its host galaxy. We further identify several diffuse interstellar bands, and compare the 5780 {\AA} band with the dust-to-band ratio for the Milky Way. Overall, we find the foreground dust-extinction properties for SN 2012cu to be consistent with those of the Milky Way. Furthermore we find no evidence for significant time variation in any of these extinction tracers. We also compare the dust extinction curve models of Cardelli et al. (1989), O'Donnell (1994), and Fitzpatrick (1999), and find the predictions of Fitzpatrick (1999) fit SN 2012cu the best. Finally, the distance to NGC 4772, the host of SN 2012cu, at a redshift of z = 0.0035, often assigned to the Virgo Southern Extension, is determined to be 16.6$\pm$1.1 Mpc. We compare this result with distance measurements in the literature.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
On Whitham and related equations
The aim of this paper is to study, via theoretical analysis and numerical simulations, the dynamics of Whitham and related equations. In particular we establish rigorous bounds between solutions of the Whitham and KdV equations and provide some insights into the dynamics of the Whitham equation in different regimes, some of them being outside the range of validity of the Whitham equation as a water waves model.
cs: 0, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
Analytic Connectivity in General Hypergraphs
In this paper we extend the known results of analytic connectivity to non-uniform hypergraphs. We prove a modified Cheeger's inequality and also give a bound on analytic connectivity with respect to the degree sequence and diameter of a hypergraph.
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Cosmological searches for a non-cold dark matter component
We explore an extended cosmological scenario where the dark matter is an admixture of cold and additional non-cold species. The mass and temperature of the non-cold dark matter particles are extracted from a number of cosmological measurements. Among others, we consider tomographic weak lensing data and Milky Way dwarf satellite galaxy counts. We also study the potential of these scenarios in alleviating the existing tensions between local measurements and Cosmic Microwave Background (CMB) estimates of the $S_8$ parameter, with $S_8=\sigma_8\sqrt{\Omega_m}$, and of the Hubble constant $H_0$. In principle, a sub-dominant, non-cold dark matter particle with a mass $m_X\sim$~keV, could achieve the goals above. However, the preferred ranges for its temperature and its mass are different when extracted from weak lensing observations and from Milky Way dwarf satellite galaxy counts, since these two measurements require suppressions of the matter power spectrum at different scales. Therefore, solving simultaneously the CMB-weak lensing tensions and the small scale crisis in the standard cold dark matter picture via only one non-cold dark matter component seems to be challenging.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
F-pure threshold and height of quasi-homogeneous polynomials
We consider a quasi-homogeneous polynomial $f \in \mathbb{Z}[x_0, \ldots, x_N]$ of degree $w$ equal to the degree of $x_0 \cdots x_N$ and show that the $F$-pure threshold of the reduction $f_p \in \mathbb{F}_p[x_0, \ldots, x_N]$ is equal to the log canonical threshold if and only if the height of the Artin-Mazur formal group associated to $H^{N-1}\left( X, {\mathbb{G}}_{m,X} \right)$, where $X$ is the hypersurface given by $f$, is equal to 1. We also prove that a similar result holds for Fermat hypersurfaces of degree $>N+1$. Furthermore, we give examples of weighted Delsarte surfaces which show that other values of the $F$-pure threshold of a quasi-homogeneous polynomial of degree $w$ cannot be characterized by the height.
cs: 0, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
Optimal VWAP execution under transient price impact
We solve the problem of optimal liquidation with volume weighted average price (VWAP) benchmark when the market impact is linear and transient. Our setting is indeed more general as it considers the case when the trading interval is not necessarily coincident with the benchmark interval: Implementation Shortfall and Target Close execution are shown to be particular cases of our setting. We find explicit solutions in continuous and discrete time considering risk averse investors having a CARA utility function. Finally, we show that, contrary to what is observed for Implementation Shortfall, the optimal VWAP solution contains both buy and sell trades also when the decay kernel is convex.
cs: 0, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 1
Image-derived generative modeling of pseudo-macromolecular structures - towards the statistical assessment of Electron CryoTomography template matching
Cellular Electron CryoTomography (CECT) is a 3D imaging technique that captures information about the structure and spatial organization of macromolecular complexes within single cells, in near-native state and at sub-molecular resolution. Although template matching is often used to locate macromolecules in a CECT image, it is insufficient as it only measures the relative structural similarity. Therefore, it is preferable to assess the statistical credibility of the decision through hypothesis testing, requiring many templates derived from a diverse population of macromolecular structures. Due to the very limited number of known structures, we need a generative model to efficiently and reliably sample pseudo-structures from the complex distribution of macromolecular structures. To address this challenge, we propose a novel image-derived approach for performing hypothesis testing for template matching by constructing generative models using the generative adversarial network. Finally, we conducted hypothesis testing experiments for template matching on both simulated and experimental subtomograms, allowing us to conclude the identity of subtomograms with high statistical credibility and significantly reducing false positives.
cs: 0, phy: 0, math: 0, stat: 1, quantitative biology: 1, quantitative finance: 0
Bayesian Alignments of Warped Multi-Output Gaussian Processes
We propose a novel Bayesian approach to modelling nonlinear alignments of time series based on latent shared information. We apply the method to the real-world problem of finding common structure in the sensor data of wind turbines introduced by the underlying latent and turbulent wind field. The proposed model allows for both arbitrary alignments of the inputs and non-parametric output warpings to transform the observations. This gives rise to multiple deep Gaussian process models connected via latent generating processes. We present an efficient variational approximation based on nested variational compression and show how the model can be used to extract shared information between dependent time series, recovering an interpretable functional decomposition of the learning problem. We show results for an artificial data set and real-world data of two wind turbines.
cs: 1, phy: 0, math: 0, stat: 1, quantitative biology: 0, quantitative finance: 0
Fast and unsupervised methods for multilingual cognate clustering
In this paper we explore the use of unsupervised methods for detecting cognates in multilingual word lists. We use online EM to train sound segment similarity weights for computing similarity between two words. We tested our online systems on sixteen geographically spread language groups of the world and show that the Online PMI system (Pointwise Mutual Information) outperforms an HMM-based system and two linguistically motivated systems: LexStat and ALINE. Our results suggest that a PMI system trained in an online fashion can be used by historical linguists for fast and accurate identification of cognates in less well-studied language families.
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Structure, magnetic susceptibility and specific heat of the spin-orbital-liquid candidate FeSc2S4: Influence of Fe off-stoichiometry
We report structural, susceptibility and specific heat studies of stoichiometric and off-stoichiometric poly- and single crystals of the A-site spinel compound FeSc2S4. In stoichiometric samples no long-range magnetic order is found down to 1.8 K. The magnetic susceptibility of these samples is field independent in the temperature range 10 - 400 K and does not show irreversible effects at low temperatures. In contrast, the magnetic susceptibility of samples with iron excess shows substantial field dependence at high temperatures and manifests a pronounced magnetic irreversibility at low temperatures with a difference between ZFC and FC susceptibilities and a maximum at 10 K reminiscent of a magnetic transition. Single crystal x-ray diffraction of the stoichiometric samples revealed a single phase spinel structure without site inversion. In single crystalline samples with Fe excess besides the main spinel phase a second ordered single-crystal phase was detected with the diffraction pattern of a vacancy-ordered superstructure of iron sulfide, close to the 5C polytype Fe9S10. Specific heat studies reveal a broad anomaly, which evolves below 20 K in both stoichiometric and off-stoichiometric crystals. We show that the low-temperature specific heat can be well described by considering the low-lying spin-orbital electronic levels of Fe2+ ions. Our results demonstrate significant influence of excess Fe ions on intrinsic magnetic behavior of FeSc2S4 and provide support for the spin-orbital liquid scenario proposed in earlier studies for the stoichiometric compound.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Directed Information as Privacy Measure in Cloud-based Control
We consider cloud-based control scenarios in which clients with local control tasks outsource their computational or physical duties to a cloud service provider. In order to address privacy concerns in such a control architecture, we first investigate the issue of finding an appropriate privacy measure for clients who desire to keep local state information as private as possible during the control operation. Specifically, we justify the use of Kramer's notion of causally conditioned directed information as a measure of privacy loss based on an axiomatic argument. Then we propose a methodology to design an optimal "privacy filter" that minimizes privacy loss while a given level of control performance is guaranteed. We show in particular that the optimal privacy filter for cloud-based Linear-Quadratic-Gaussian (LQG) control can be synthesized by a Linear-Matrix-Inequality (LMI) algorithm. The trade-off in the design is illustrated by a numerical example.
cs: 0, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
Classification of simple linearly compact Kantor triple systems over the complex numbers
Simple finite dimensional Kantor triple systems over the complex numbers are classified in terms of Satake diagrams. We prove that every simple and linearly compact Kantor triple system has finite dimension and give an explicit presentation of all the classical and exceptional systems.
cs: 0, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
Code-division multiplexed resistive pulse sensor networks for spatio-temporal detection of particles in microfluidic devices
Spatial separation of suspended particles based on contrast in their physical or chemical properties forms the basis of various biological assays performed on lab-on-a-chip devices. To electronically acquire this information, we have recently introduced a microfluidic sensing platform, called Microfluidic CODES, which combines resistive pulse sensing with code division multiple access in multiplexing a network of integrated electrical sensors. In this paper, we enhance the multiplexing capacity of Microfluidic CODES by employing sensors that generate non-orthogonal code waveforms and a new decoding algorithm that combines machine learning techniques with minimum mean-squared error estimation. As a proof of principle, we fabricated a microfluidic device with a network of 10 code-multiplexed sensors and characterized it using cells suspended in phosphate buffered saline solution.
0
0
0
1
0
0
Navigability evaluation of complex networks by greedy routing efficiency
Network navigability is a key feature of complex networked systems. For a network embedded in a geometrical space, maximization of greedy routing (GR) measures based on the node geometrical coordinates can ensure efficient greedy navigability. In PNAS, Seguin et al. (PNAS 2018, vol. 115, no. 24) define a measure for quantifying the efficiency of brain network navigability in the Euclidean space, referred to as the efficiency ratio, whose formula exactly coincides with the GR-score (GR-efficiency) previously published by Muscoloni et al. (Nature Communications 2017, vol. 8, no. 1615). In this Letter, we point out potential flaws in the study of Seguin et al. regarding the discussion of the GR evaluation. In particular, we revise the concept of GR navigability, together with a careful discussion of the advantage offered by the newly proposed GR-efficiency measure in comparison to the main measures previously adopted in the literature. Finally, we clarify and standardize the GR-efficiency terminology in order to simplify and facilitate the discussion in future studies.
1
0
0
0
0
0
Fermion condensation and super pivotal categories
We study fermionic topological phases using the technique of fermion condensation. We give a prescription for performing fermion condensation in bosonic topological phases which contain a fermion. Our approach to fermion condensation can roughly be understood as coupling the parent bosonic topological phase to a phase of physical fermions, and condensing pairs of physical and emergent fermions. There are two distinct types of objects in fermionic theories, which we call "m-type" and "q-type" particles. The endomorphism algebras of q-type particles are complex Clifford algebras, and they have no analogues in bosonic theories. We construct a fermionic generalization of the tube category, which allows us to compute the quasiparticle excitations in fermionic topological phases. We then prove a series of results relating data in condensed theories to data in their parent theories; for example, if $\mathcal{C}$ is a modular tensor category containing a fermion, then the tube category of the condensed theory satisfies $\textbf{Tube}(\mathcal{C}/\psi) \cong \mathcal{C} \times (\mathcal{C}/\psi)$. We also study how modular transformations, fusion rules, and coherence relations are modified in the fermionic setting, prove a fermionic version of the Verlinde dimension formula, construct a commuting projector lattice Hamiltonian for fermionic theories, and write down a fermionic version of the Turaev-Viro-Barrett-Westbury state sum. A large portion of this work is devoted to three detailed examples of performing fermion condensation to produce fermionic topological phases: we condense fermions in the Ising theory, the $SO(3)_6$ theory, and the $\frac{1}{2}\text{E}_6$ theory, and compute the quasiparticle excitation spectrum in each of these examples.
0
1
1
0
0
0
PIMKL: Pathway Induced Multiple Kernel Learning
Reliable identification of molecular biomarkers is essential for accurate patient stratification. While state-of-the-art machine learning approaches for sample classification continue to push boundaries in terms of performance, most of these methods are not able to integrate different data types and lack generalization power, limiting their application in a clinical setting. Furthermore, many methods behave as black boxes, and we have very little understanding about the mechanisms that lead to the prediction. While opaqueness concerning machine behaviour might not be a problem in deterministic domains, in health care, providing explanations about the molecular factors and phenotypes that are driving the classification is crucial to build trust in the performance of the predictive system. We propose Pathway Induced Multiple Kernel Learning (PIMKL), a novel methodology to reliably classify samples that can also help gain insights into the molecular mechanisms that underlie the classification. PIMKL exploits prior knowledge in the form of a molecular interaction network and annotated gene sets, by optimizing a mixture of pathway-induced kernels using a Multiple Kernel Learning (MKL) algorithm, an approach that has demonstrated excellent performance in different machine learning applications. After optimizing the combination of kernels for prediction of a specific phenotype, the model provides a stable molecular signature that can be interpreted in the light of the ingested prior knowledge and that can be used in transfer learning tasks.
0
0
0
1
1
0
pyRecLab: A Software Library for Quick Prototyping of Recommender Systems
This paper introduces pyRecLab, a software library written in C++ with Python bindings which allows developers to quickly train, test and develop recommender systems. Although there are several software libraries for this purpose, only a few let developers get started quickly with the most traditional methods, permitting them to try different parameters and approach several tasks without a significant loss of performance. Among the few libraries that have all these features, they are available in languages such as Java, Scala or C#, which is a disadvantage for less experienced programmers more used to the popular Python programming language. In this article we introduce details of pyRecLab, showing as well a performance analysis in terms of error metrics (MAE and RMSE) and train/test time. We benchmark it against the popular Java-based library LibRec, showing similar results. We expect programmers with little experience and people interested in quickly prototyping recommender systems to benefit from pyRecLab.
1
0
0
0
0
0
A unified theory of adaptive stochastic gradient descent as Bayesian filtering
We formulate stochastic gradient descent (SGD) as a Bayesian filtering problem. Inference in the Bayesian setting naturally gives rise to BRMSprop and BAdam: Bayesian variants of RMSprop and Adam. Remarkably, the Bayesian approach recovers many features of state-of-the-art adaptive SGD methods, including amongst others root-mean-square normalization, Nesterov acceleration and AdamW. As such, the Bayesian approach provides one explanation for the empirical effectiveness of state-of-the-art adaptive SGD algorithms. Empirically comparing BRMSprop and BAdam with naive RMSprop and Adam on MNIST, we find that Bayesian methods have the potential to considerably reduce test loss and classification error.
0
0
0
1
0
0
Radiative effects during the assembly of direct collapse black holes
We perform a post-processing radiative feedback analysis on a 3D ab initio cosmological simulation of an atomic cooling halo under the direct collapse black hole (DCBH) scenario. We maintain the spatial resolution of the simulation by incorporating native ray-tracing on unstructured mesh data, including Monte Carlo Lyman-alpha (Ly{\alpha}) radiative transfer. DCBHs are born in gas-rich, metal-poor environments with the possibility of Compton-thick conditions, $N_H \gtrsim 10^{24} {\rm cm}^{-2}$. Therefore, the surrounding gas is capable of experiencing the full impact of the bottled-up radiation pressure. In particular, we find that multiple scattering of Ly{\alpha} photons provides an important source of mechanical feedback after the gas in the sub-parsec region becomes partially ionized, avoiding the bottleneck of destruction via the two-photon emission mechanism. We provide detailed discussion of the simulation environment, expansion of the ionization front, emission and escape of Ly{\alpha} radiation, and Compton scattering. A sink particle prescription allows us to extract approximate limits on the post-formation evolution of the radiative feedback. Fully coupled Ly{\alpha} radiation hydrodynamics will be crucial to consider in future DCBH simulations.
0
1
0
0
0
0
Spin-orbit interactions in optically active materials
We investigate the inherent influence of light polarization on the intensity distribution in anisotropic media undergoing a local inhomogeneous rotation of the principal axes. Whereas in general such configuration implies a complicated interaction between geometric and dynamic phase, we show that, in a medium showing an inhomogeneous circular birefringence, the geometric phase vanishes. Due to the spin-orbit interaction, the two circular polarizations perceive reversed spatial distribution of the dynamic phase. Based upon this effect, polarization-selective lens, waveguides and beam deflectors are proposed.
0
1
0
0
0
0
Automatic White-Box Testing of First-Order Logic Ontologies
Formal ontologies are axiomatizations in a logic-based formalism. The development of formal ontologies, and their important role in the Semantic Web area, is generating considerable research on the use of automated reasoning techniques and tools that help in ontology engineering. One of the main aims is to refine and to improve axiomatizations for enabling automated reasoning tools to efficiently infer reliable information. Defects in the axiomatization can not only cause wrong inferences, but can also hinder the inference of expected information, either by increasing the computational cost of, or even preventing, the inference. In this paper, we introduce a novel, fully automatic white-box testing framework for first-order logic ontologies. Our methodology is based on the detection of inference-based redundancies in the given axiomatization. The application of the proposed testing method is fully automatic since a) the automated generation of tests is guided only by the syntax of axioms and b) the evaluation of tests is performed by automated theorem provers. Our proposal enables the detection of defects and serves to certify the grade of suitability --for reasoning purposes-- of every axiom. We formally define the set of tests that are generated from any axiom and prove that every test is logically related to redundancies in the axiom from which the test has been generated. We have implemented our method and used this implementation to automatically detect several non-trivial defects that were hidden in various first-order logic ontologies. Throughout the paper we provide illustrative examples of these defects, explain how they were found, and how each proof --given by an automated theorem-prover-- provides useful hints on the nature of each defect. Additionally, by correcting all the detected defects, we have obtained an improved version of one of the tested ontologies: Adimen-SUMO.
1
0
0
0
0
0
Critical values in Bak-Sneppen type models
In the Bak-Sneppen model, the lowest fitness particle and its two nearest neighbors are renewed at each temporal step with a uniform (0,1) fitness distribution. The model presents a critical value that depends on the interaction criteria (two nearest neighbors) and on the update procedure (uniform). Here we calculate the critical value for models where one or both properties are changed. We study models with non-uniform updates, models with random neighbors and models with binary fitness and obtain exact results for the average fitness and for $p_c$.
0
1
0
0
0
0
On the correspondence of deviances and maximum likelihood and interval estimates from log-linear to logistic regression modelling
Consider a set of categorical variables $\mathcal{P}$ where at least one, denoted by $Y$, is binary. The log-linear model that describes the counts in the resulting contingency table implies a specific logistic regression model, with the binary variable as the outcome. Extending results in Christensen (1997), by also considering the case where factors present in the contingency table disappear from the logistic regression model, we prove that the Maximum Likelihood Estimate (MLE) for the parameters of the logistic regression equals the MLE for the corresponding parameters of the log-linear model. We prove that, asymptotically, standard errors for the two sets of parameters are also equal. Subsequently, Wald confidence intervals are asymptotically equal. These results demonstrate the extent to which inferences from the log-linear framework can be translated to inferences within the logistic regression framework, on the magnitude of main effects and interactions. Finally, we prove that the deviance of the log-linear model is equal to the deviance of the corresponding logistic regression, provided that the latter is fitted to a dataset where no cell observations are merged when one or more factors in $\mathcal{P} \setminus \{ Y \}$ become obsolete. We illustrate the derived results with the analysis of a real dataset.
0
0
0
1
0
0
ASIC Implementation of Time-Domain Digital Backpropagation with Deep-Learned Chromatic Dispersion Filters
We consider time-domain digital backpropagation with chromatic dispersion filters jointly optimized and quantized using machine-learning techniques. Compared to the baseline implementations, we show improved BER performance and >40% power dissipation reductions in 28-nm CMOS.
0
0
0
1
0
0
A formalization of convex polyhedra based on the simplex method
We present a formalization of convex polyhedra in the proof assistant Coq. The cornerstone of our work is a complete implementation of the simplex method, together with the proof of its correctness and termination. This allows us to define the basic predicates over polyhedra in an effective way (i.e., as programs), and relate them with the corresponding usual logical counterparts. To this end, we make an extensive use of the Boolean reflection methodology. The benefit of this approach is that we can easily derive the proof of several fundamental results on polyhedra, such as Farkas' Lemma, the duality theorem of linear programming, and Minkowski's Theorem.
1
0
1
0
0
0
Weight Spectrum of Quasi-Perfect Binary Codes with Distance 4
We consider the weight spectrum of a class of quasi-perfect binary linear codes with code distance 4. For example, extended Hamming code and Panchenko code are the known members of this class. Also, it is known that in many cases Panchenko code has the minimal number of weight 4 codewords. We give exact recursive formulas for the weight spectrum of quasi-perfect codes and their dual codes. As an example of application of the weight spectrum we derive a lower estimate for the conditional probability of correction of erasure patterns of high weights (equal to or greater than code distance).
1
0
0
0
0
0
Ultra-High Electro-Optic Activity Demonstrated in a Silicon-Organic Hybrid (SOH) Modulator
Efficient electro-optic (EO) modulators crucially rely on advanced materials that exhibit strong electro-optic activity and that can be integrated into high-speed and efficient phase shifter structures. In this paper, we demonstrate ultra-high in-device EO figures of merit of up to n3r33 = 2300 pm/V achieved in a silicon-organic hybrid (SOH) Mach-Zehnder Modulator (MZM) using the EO chromophore JRD1. This is the highest material-related in-device EO figure of merit hitherto achieved in a high-speed modulator at any operating wavelength. The {\pi}-voltage of the 1.5 mm-long device amounts to 210 mV, leading to a voltage-length product of U{\pi}L = 320 V{\mu}m - the lowest value reported for MZM that are based on low-loss dielectric waveguides. The viability of the devices is demonstrated by generating high-quality on-off-keying (OOK) signals at 40 Gbit/s with Q factors in excess of 8 at a drive voltage as low as 140 mVpp. We expect that efficient high-speed EO modulators will not only have major impact in the field of optical communications, but will also open new avenues towards ultra-fast photonic-electronic signal processing.
0
1
0
0
0
0
How the Experts Do It: Assessing and Explaining Agent Behaviors in Real-Time Strategy Games
How should an AI-based explanation system explain an agent's complex behavior to ordinary end users who have no background in AI? Answering this question is an active research area, for if an AI-based explanation system could effectively explain intelligent agents' behavior, it could enable the end users to understand, assess, and appropriately trust (or distrust) the agents attempting to help them. To provide insights into this question, we turned to human expert explainers in the real-time strategy domain, "shoutcasters", to understand (1) how they foraged in an evolving strategy game in real time, (2) how they assessed the players' behaviors, and (3) how they constructed pertinent and timely explanations out of their insights and delivered them to their audience. The results provided insights into shoutcasters' foraging strategies for gleaning information necessary to assess and explain the players; a characterization of the types of implicit questions shoutcasters answered; and implications for creating explanations by using the patterns.
1
0
0
0
0
0
Entanglement Entropy of Eigenstates of Quadratic Fermionic Hamiltonians
In a seminal paper [D. N. Page, Phys. Rev. Lett. 71, 1291 (1993)], Page proved that the average entanglement entropy of subsystems of random pure states is $S_{\rm ave}\simeq\ln{\cal D}_{\rm A} - (1/2) {\cal D}_{\rm A}^2/{\cal D}$ for $1\ll{\cal D}_{\rm A}\leq\sqrt{\cal D}$, where ${\cal D}_{\rm A}$ and ${\cal D}$ are the Hilbert space dimensions of the subsystem and the system, respectively. Hence, typical pure states are (nearly) maximally entangled. We develop tools to compute the average entanglement entropy $\langle S\rangle$ of all eigenstates of quadratic fermionic Hamiltonians. In particular, we derive exact bounds for the most general translationally invariant models $\ln{\cal D}_{\rm A} - (\ln{\cal D}_{\rm A})^2/\ln{\cal D} \leq \langle S \rangle \leq \ln{\cal D}_{\rm A} - [1/(2\ln2)] (\ln{\cal D}_{\rm A})^2/\ln{\cal D}$. Consequently we prove that: (i) if the subsystem size is a finite fraction of the system size then $\langle S\rangle<\ln{\cal D}_{\rm A}$ in the thermodynamic limit, i.e., the average over eigenstates of the Hamiltonian departs from the result for typical pure states, and (ii) in the limit in which the subsystem size is a vanishing fraction of the system size, the average entanglement entropy is maximal, i.e., typical eigenstates of such Hamiltonians exhibit eigenstate thermalization.
0
1
0
0
0
0
Focusing light through dynamical samples using fast closed-loop wavefront optimization
We describe a fast closed-loop optimization wavefront shaping system able to focus light through dynamic scattering media. A MEMS-based spatial light modulator (SLM), a fast photodetector and FPGA electronics are combined to implement a closed-loop optimization of a wavefront with a single mode optimization rate of 4.1 kHz. The system performances are demonstrated by focusing light through colloidal solutions of TiO2 particles in glycerol with tunable temporal stability.
0
1
0
0
0
0
The curvature estimates for convex solutions of some fully nonlinear Hessian type equations
Curvature estimates for quotient curvature equations do not always exist, even in the convex setting \cite{GRW}. Thus it is a natural question to find other types of elliptic equations possessing curvature estimates. In this paper, we discuss the existence of curvature estimates for fully nonlinear elliptic equations defined by symmetric polynomials, mainly linear combinations of elementary symmetric polynomials.
0
0
1
0
0
0
The Compressed Model of Residual CNDS
Convolutional neural networks have achieved great success in recent years. However, methods for maximizing the performance of convolutional neural networks are still in their infancy, and the optimization of the size of the networks and the time needed to train them remains far from the researchers' ambitions. In this paper, we propose a new convolutional neural network that combines several techniques to improve the speed and size of convolutional neural networks. We take our previous model, Residual-CNDS (ResCNDS), which solved the problems of slow convergence, overfitting, and degradation, and compress it. The resulting model, called Residual-Squeeze-CNDS (ResSquCNDS), demonstrates our technique for adding residual learning and our approach to compressing convolutional neural networks. Our compression approach is adapted from the SqueezeNet model, but is more generalizable: it can be applied to almost any neural network model and is fully integrated with residual learning, which addresses the problem of degradation very successfully. We trained our proposed model on the very large-scale MIT Places365-Standard scene dataset, supporting our hypothesis that the new compressed model inherits the best of the previous ResCNDS8 model and achieves almost the same Top-1 and Top-5 validation accuracy, while being 87.64% smaller in size and 13.33% faster in training time.
1
0
0
0
0
0
Chow Rings of Mp_{0,2}(N,d) and Mbar_{0,2}(P^{N-1},d) and Gromov-Witten Invariants of Projective Hypersurfaces of Degree 1 and 2
In this paper, we prove formulas that represent two-pointed Gromov-Witten invariant <O_{h^a}O_{h^b}>_{0,d} of projective hypersurfaces with d=1,2 in terms of Chow ring of Mbar_{0,2}(P^{N-1},d), the moduli spaces of stable maps from genus 0 stable curves to projective space P^{N-1}. Our formulas are based on representation of the intersection number w(O_{h^a}O_{h^b})_{0,d}, which was introduced by Jinzenji, in terms of Chow ring of Mp_{0,2}(N,d), the moduli space of quasi maps from P^1 to P^{N-1} with two marked points. In order to prove our formulas, we use the results on Chow ring of Mbar_{0,2}(P^{N-1},d), that were derived by A. Mustata and M. Mustata. We also present explicit toric data of Mp_{0,2}(N,d) and prove relations of Chow ring of Mp_{0,2}(N,d).
0
0
1
0
0
0
A Real-Time Autonomous Highway Accident Detection Model Based on Big Data Processing and Computational Intelligence
Due to increasing urban population and growing number of motor vehicles, traffic congestion is becoming a major problem of the 21st century. One of the main reasons behind traffic congestion is accidents which can not only result in casualties and losses for the participants, but also in wasted and lost time for the others that are stuck behind the wheels. Early detection of an accident can save lives, provides quicker road openings, hence decreases wasted time and resources, and increases efficiency. In this study, we propose a preliminary real-time autonomous accident-detection system based on computational intelligence techniques. Istanbul City traffic-flow data for the year 2015 from various sensor locations are populated using big data processing methodologies. The extracted features are then fed into a nearest neighbor model, a regression tree, and a feed-forward neural network model. For the output, the possibility of an occurrence of an accident is predicted. The results indicate that even though the number of false alarms dominates the real accident cases, the system can still provide useful information that can be used for status verification and early reaction to possible accidents.
1
0
0
1
0
0
Asymptotic Eigenfunctions for a class of Difference Operators
We analyze a general class of difference operators $H_\varepsilon = T_\varepsilon + V_\varepsilon$ on $\ell^2(\varepsilon \mathbb{Z}^d)$, where $V_\varepsilon$ is a one-well potential and $\varepsilon$ is a small parameter. We construct formal asymptotic expansions of WKB-type for eigenfunctions associated with the low lying eigenvalues of $H_\varepsilon$. These are obtained from eigenfunctions or quasimodes for the operator $H_\varepsilon$, acting on $L^2(\mathbb{R}^d)$, via restriction to the lattice $\varepsilon\mathbb{Z}^d$.
0
0
1
0
0
0
Common Glass-Forming Spin-Liquid State in the Pyrochlore Magnets Dy$_2$Ti$_2$O$_7$ and Ho$_2$Ti$_2$O$_7$
Despite a well-ordered pyrochlore crystal structure and strong magnetic interactions between the Dy$^{3+}$ or Ho$^{3+}$ ions, no long range magnetic order has been detected in the pyrochlore titanates Ho$_2$Ti$_2$O$_7$ and Dy$_2$Ti$_2$O$_7$. To explore the actual magnetic phase formed by cooling these materials, we measure their magnetization dynamics using toroidal, boundary-free magnetization transport techniques. We find that the dynamical magnetic susceptibility of both compounds has the same distinctive phenomenology, that is indistinguishable in form from that of the dielectric permittivity of dipolar glass-forming liquids. Moreover, Ho$_2$Ti$_2$O$_7$ and Dy$_2$Ti$_2$O$_7$ both exhibit microscopic magnetic relaxation times that increase along the super-Arrhenius trajectories analogous to those observed in glass-forming dipolar liquids. Thus, upon cooling below about 2K, Dy$_2$Ti$_2$O$_7$ and Ho$_2$Ti$_2$O$_7$ both appear to enter the same magnetic state exhibiting the characteristics of a glass-forming spin-liquid.
0
1
0
0
0
0
Britannia Rule the Waves
The students are introduced to navigation in general and the longitude problem in particular. A few videos provide insight into scientific and historical facts related to the issue. Then, the students learn in two steps how longitude can be derived from time measurements. They first build a Longitude Clock that visualises the math behind the concept. They use it to determine the longitudes corresponding to five time measurements. In the second step, they assume the position of James Cook's navigator and plot the location of seven destinations on Cook's second voyage between 1772 and 1775.
0
1
0
0
0
0
Error Forward-Propagation: Reusing Feedforward Connections to Propagate Errors in Deep Learning
We introduce Error Forward-Propagation, a biologically plausible mechanism to propagate error feedback forward through the network. Architectural constraints on connectivity are virtually eliminated for error feedback in the brain; systematic backward connectivity is not used or needed to deliver error feedback. Feedback as a means of assigning credit to neurons earlier in the forward pathway for their contribution to the final output is thought to be used in learning in the brain. How the brain solves the credit assignment problem is unclear. In machine learning, error backpropagation is a highly successful mechanism for credit assignment in deep multilayered networks. Backpropagation requires symmetric reciprocal connectivity for every neuron. From a biological perspective, there is no evidence of such an architectural constraint, which makes backpropagation implausible for learning in the brain. This architectural constraint is reduced with the use of random feedback weights. Models using random feedback weights require backward connectivity patterns for every neuron, but avoid symmetric weights and reciprocal connections. In this paper, we practically remove this architectural constraint, requiring only a backward loop connection for effective error feedback. We propose reusing the forward connections to deliver the error feedback by feeding the outputs into the input receiving layer. This mechanism, Error Forward-Propagation, is a plausible basis for how error feedback occurs deep in the brain independent of and yet in support of the functionality underlying intricate network architectures. We show experimentally that recurrent neural networks with two and three hidden layers can be trained using Error Forward-Propagation on the MNIST and Fashion MNIST datasets, achieving $1.90\%$ and $11\%$ generalization errors respectively.
0
0
0
0
1
0
CANA: A python package for quantifying control and canalization in Boolean Networks
Logical models offer a simple but powerful means to understand the complex dynamics of biochemical regulation, without the need to estimate kinetic parameters. However, even simple automata components can lead to collective dynamics that are computationally intractable when aggregated into networks. In previous work we demonstrated that automata network models of biochemical regulation are highly canalizing, whereby many variable states and their groupings are redundant (Marques-Pita and Rocha, 2013). The precise charting and measurement of such canalization simplifies these models, making even very large networks amenable to analysis. Moreover, canalization plays an important role in the control, robustness, modularity and criticality of Boolean network dynamics, especially those used to model biochemical regulation (Gates and Rocha, 2016; Gates et al., 2016; Manicka, 2017). Here we describe a new publicly-available Python package that provides the necessary tools to extract, measure, and visualize canalizing redundancy present in Boolean network models. It extracts the pathways most effective in controlling dynamics in these models, including their effective graph and dynamics canalizing map, as well as other tools to uncover minimum sets of control variables.
1
0
0
0
1
0
$α$-Variational Inference with Statistical Guarantees
We propose a family of variational approximations to Bayesian posterior distributions, called $\alpha$-VB, with provable statistical guarantees. The standard variational approximation is a special case of $\alpha$-VB with $\alpha=1$. When $\alpha \in(0,1]$, a novel class of variational inequalities are developed for linking the Bayes risk under the variational approximation to the objective function in the variational optimization problem, implying that maximizing the evidence lower bound in variational inference has the effect of minimizing the Bayes risk within the variational density family. Operating in a frequentist setup, the variational inequalities imply that point estimates constructed from the $\alpha$-VB procedure converge at an optimal rate to the true parameter in a wide range of problems. We illustrate our general theory with a number of examples, including the mean-field variational approximation to (low)-high-dimensional Bayesian linear regression with spike and slab priors, mixture of Gaussian models, latent Dirichlet allocation, and (mixture of) Gaussian variational approximation in regular parametric models.
0
0
1
1
0
0
Stable monoenergetic ion acceleration by a two color laser tweezer
In the past decades, the phenomenal progress in the development of ultraintense lasers has opened up many exciting new frontiers in laser matter physics, including laser plasma ion acceleration. Currently a major challenge in this frontier is to find simple methods to stably produce monoenergetic ion beams with sufficient charge for real applications. Here, we propose a novel scheme using a two color laser tweezer to fulfill this goal. In this scheme, two circularly polarized lasers with different wavelengths collide right on a thin nano-foil target containing mixed ion species. The radiation pressure of this laser pair acts like a tweezer to pinch and fully drag the electrons out, forming a stable uniform accelerating field for the ions. Scaling laws and three-dimensional particle-in-cell simulations confirm that high energy (10-1000 MeV) high charge ($\sim 10^{10}$) proton beams with narrow energy spread ($\sim4\%-20\%$) can be obtained by commercially available lasers. Such a scheme may open up a new route for compact high quality ion sources for various applications.
0
1
0
0
0
0
Experimental demonstration of an ultra-compact on-chip polarization controlling structure
We demonstrated a novel on-chip polarization controlling structure, fabricated by standard 0.18-um foundry technology. It achieved polarization rotation with a size of 0.726 um * 5.27 um and can be easily extended into dynamic polarization controllers.
0
1
0
0
0
0
Letter-Based Speech Recognition with Gated ConvNets
In the recent literature, "end-to-end" speech systems often refer to letter-based acoustic models trained in a sequence-to-sequence manner, either via a recurrent model or via a structured output learning approach (such as CTC). In contrast to traditional phone (or senone)-based approaches, these "end-to-end'' approaches alleviate the need of word pronunciation modeling, and do not require a "forced alignment" step at training time. Phone-based approaches remain however state of the art on classical benchmarks. In this paper, we propose a letter-based speech recognition system, leveraging a ConvNet acoustic model. Key ingredients of the ConvNet are Gated Linear Units and high dropout. The ConvNet is trained to map audio sequences to their corresponding letter transcriptions, either via a classical CTC approach, or via a recent variant called ASG. Coupled with a simple decoder at inference time, our system matches the best existing letter-based systems on WSJ (in word error rate), and shows near state of the art performance on LibriSpeech.
1
0
0
0
0
0
The Frequent Network Neighborhood Mapping of the Human Hippocampus Shows Much More Frequent Neighbor Sets in Males Than in Females
In the study of the human connectome, the vertices and the edges of the network of the human brain are analyzed: the vertices of the graphs are the anatomically identified gray matter areas of the subjects; this set is exactly the same for all the subjects. The edges of the graphs correspond to the axonal fibers connecting these areas. In the biological applications of graph theory, it happens very rarely that scientists examine numerous large graphs on the very same, labeled vertex set. Exactly this is the case in the study of the connectomes. Because of the particularity of these sets of graphs, novel, robust methods need to be developed for their analysis. Here we introduce the new method of Frequent Network Neighborhood Mapping for the connectome, which serves as a robust identification of the neighborhoods of given vertices of special interest in the graph. We apply the novel method to map the neighborhoods of the human hippocampus and discover strong statistical asymmetries between the connectomes of the sexes, computed from the Human Connectome Project. We analyze 413 braingraphs, each with 463 nodes. We show that the hippocampi of men have many more significantly frequent neighbor sets than those of women; therefore, in a sense, the connections of the hippocampi are more regularly distributed in men and more varied in women. Our results are in contrast to the volumetric studies of the human hippocampus, where it was shown that the relative volume of the hippocampus is the same in men and women.
0
0
0
0
1
0
Recurrent Neural Networks as Weighted Language Recognizers
We investigate the computational complexity of various problems for simple recurrent neural networks (RNNs) as formal models for recognizing weighted languages. We focus on the single-layer, ReLU-activation, rational-weight RNNs with softmax, which are commonly used in natural language processing applications. We show that most problems for such RNNs are undecidable, including consistency, equivalence, minimization, and the determination of the highest-weighted string. However, for consistent RNNs the last problem becomes decidable, although the solution length can surpass all computable bounds. If additionally the string is limited to polynomial length, the problem becomes NP-complete and APX-hard. In summary, this shows that approximations and heuristic algorithms are necessary in practical applications of those RNNs.
1
0
0
0
0
0
Classifying and Qualifying GUI Defects
Graphical user interfaces (GUIs) are integral parts of software systems that require interactions from their users. Software testers have paid special attention to GUI testing in the last decade, and have devised techniques that are effective in finding several kinds of GUI errors. However, the introduction of new types of interactions in GUIs (e.g., direct manipulation) presents new kinds of errors that are not targeted by current testing techniques. We believe that to advance GUI testing, the community needs a comprehensive and high-level GUI fault model, which incorporates all types of interactions. The work detailed in this paper makes four contributions: 1) a GUI fault model designed to identify and classify GUI faults; 2) an empirical analysis assessing the relevance of the proposed fault model against failures found in real GUIs; 3) an empirical assessment of two GUI testing tools (i.e., GUITAR and Jubula) against those failures; 4) GUI mutants we have developed according to our fault model. These mutants are freely available and can be reused by developers for benchmarking their GUI testing tools.
1
0
0
0
0
0
A mechanistic model of connector hubs, modularity, and cognition
The human brain network is modular--composed of communities of tightly interconnected nodes. This network contains local hubs, which have many connections within their own communities, and connector hubs, which have connections diversely distributed across communities. A mechanistic understanding of these hubs and how they support cognition has not been demonstrated. Here, we leveraged individual differences in hub connectivity and cognition. We show that a model of hub connectivity accurately predicts the cognitive performance of 476 individuals in four distinct tasks. Moreover, there is a general optimal network structure for cognitive performance--individuals with diversely connected hubs and consequently modular brain networks exhibit increased cognitive performance, regardless of the task. Critically, we find evidence consistent with a mechanistic model in which connector hubs tune the connectivity of their neighbors to be more modular while allowing for task-appropriate information integration across communities, which increases global modularity and cognitive performance.
0
0
0
0
1
0
The OGLE Collection of Variable Stars. Over 450 000 Eclipsing and Ellipsoidal Binary Systems Toward the Galactic Bulge
We present a collection of 450 598 eclipsing and ellipsoidal binary systems detected in the OGLE fields toward the Galactic bulge. The collection consists of binary systems of all types: detached, semi-detached, and contact eclipsing binaries, RS CVn stars, cataclysmic variables, HW Vir binaries, double periodic variables, and even planetary transits. For all stars we provide the I- and V-band time-series photometry obtained during the OGLE-II, OGLE-III, and OGLE-IV surveys. We discuss methods used to identify binary systems in the OGLE data and present several objects of particular interest.
0
1
0
0
0
0
The discrete moment problem with nonconvex shape constraints
The discrete moment problem is a foundational problem in distribution-free robust optimization, where the goal is to find a worst-case distribution that satisfies a given set of moments. This paper studies the discrete moment problem with additional "shape constraints" that guarantee the worst-case distribution is either log-concave or has an increasing failure rate. These classes of shape constraints have not previously been studied in the literature, in part due to their inherent nonconvexities. Nonetheless, these classes of distributions are useful in practice. We characterize the structure of optimal extreme point distributions by developing new results in reverse convex optimization, a lesser-known tool previously employed in designing global optimization algorithms. We are able to show, for example, that an optimal extreme point solution to a moment problem with $m$ moments and log-concave shape constraints is piecewise geometric with at most $m$ pieces. Moreover, this structure allows us to design an exact algorithm for computing optimal solutions in a low-dimensional space of parameters. Finally, we describe a computational approach to solving these low-dimensional problems, including numerical results for a representative set of instances.
0
0
1
1
0
0
Some studies using capillary for flow control in a closed loop gas recirculation system
A pilot unit of a closed loop system (CLS) for gas mixing and distribution for the INO project was designed and is being operated with (1.8 x 1.9) m^2 glass RPCs (Resistive Plate Chambers). Since the performance of an RPC depends on the quality and quantity of the gas mixture being used, a number of studies on controlling the flow and optimizing the gas mixture are being carried out. In this paper, the effect of a capillary as a dynamic impedance element on the differential pressure across the RPC detector in a closed loop gas system is highlighted. The flow versus pressure variation with different types of capillaries, and also with the different gases used in an RPC, is presented. An attempt is also made to measure the transient time of the gas flow through the capillary.
0
1
0
0
0
0
Optimal Tuning of Two-Dimensional Keyboards
We give a new analysis of a tuning problem in music theory, pertaining specifically to the approximation of harmonics on a two-dimensional keyboard. We formulate the question as a linear programming problem on families of constraints and provide exact solutions for many new keyboard dimensions. We also show that an optimal tuning for harmonic approximation can be obtained for any keyboard of given width, provided sufficiently many rows of octaves.
1
0
0
0
0
0
Ultra Reliable Short Message Relaying with Wireless Power Transfer
We consider a dual-hop wireless network where an energy-constrained relay node first harvests energy through the received radio-frequency signal from the source, and then uses the harvested energy to forward the source's information to the destination node. The throughput and delay metrics are investigated for a decode-and-forward relaying mechanism in the finite blocklength regime and delay-limited transmission mode. We consider ultra-reliable communication scenarios under discussion for the next, fifth generation of wireless systems, with error and latency constraints. The impact of the blocklength, number of information bits, and relay position on these metrics is investigated.
1
0
0
1
0
0
Verifiable Light-Weight Monitoring for Certificate Transparency Logs
Trust in publicly verifiable Certificate Transparency (CT) logs is reduced through cryptography, gossip, auditing, and monitoring. The role of a monitor is to observe each and every log entry, looking for suspicious certificates that interest the entity running the monitor. While anyone can run a monitor, it requires continuous operation and copies of the logs to be inspected. This has led to the emergence of monitoring-as-a-service: a trusted party runs the monitor and provides registered subjects with selective certificate notifications, e.g., "notify me of all foo.com certificates". We present a CT/bis extension for verifiable light-weight monitoring that enables subjects to verify the correctness of such notifications, reducing the trust that is placed in these monitors. Our extension supports verifiable monitoring of wild-card domains and piggybacks on CT's existing gossip-audit security model.
1
0
0
0
0
0
Chainspace: A Sharded Smart Contracts Platform
Chainspace is a decentralized infrastructure, known as a distributed ledger, that supports user-defined smart contracts and executes user-supplied transactions on their objects. The correct execution of smart contract transactions is verifiable by all. The system is scalable, by sharding state and the execution of transactions, and by using S-BAC, a distributed commit protocol, to guarantee consistency. Chainspace is secure against subsets of nodes trying to compromise its integrity or availability properties through Byzantine Fault Tolerance (BFT), and offers extremely high auditability and non-repudiation through `blockchain' techniques. Even when BFT fails, auditing mechanisms are in place to trace malicious participants. We present the design, rationale, and details of Chainspace; we evaluate an implementation of the system to argue for its scaling and other features; and we illustrate a number of privacy-friendly smart contracts for smart metering, polling, and banking, and measure their performance.
1
0
0
0
0
0
On Certain Properties of Convex Functions
This note deals with certain properties of convex functions. We provide results on the convexity of the set of minima of these functions, the behaviour of their subgradient set under restriction, and optimization of these functions over an affine subspace.
0
0
1
0
0
0
The detection of variable radio emission from the fast rotating magnetic hot B-star HR7355 and evidence for its X-ray aurorae
In this paper we investigate the multiwavelength properties of the magnetic early B-type star HR7355. We present its radio light curves at several frequencies, taken with the Jansky Very Large Array, and X-ray spectra, taken with the XMM X-ray telescope. Modeling of the radio light curves for Stokes I and V provides a quantitative analysis of the HR7355 magnetosphere. A comparison between HR7355 and a similar analysis for the Ap star CU Vir allows us to study how the different physical parameters of the two stars affect the structure of the respective magnetospheres where the non-thermal electrons originate. Our model includes a cold thermal plasma component that accumulates at high magnetic latitudes and influences the radio regime, but does not give rise to X-ray emission. Instead, the thermal X-ray emission arises from shocks generated by wind stream collisions close to the magnetic equatorial plane. The analysis of the X-ray spectrum of HR7355 also suggests the presence of non-thermal radiation. Comparison of the spectral index of the power-law X-ray energy distribution with the non-thermal electron energy distribution indicates that the non-thermal X-ray component could be the auroral signature of the non-thermal electrons that impact the stellar surface, the same non-thermal electrons that are responsible for the observed radio emission. On the basis of our analysis, we suggest a novel model that simultaneously explains the X-ray and the radio features of HR7355 and is likely relevant for the magnetospheres of other magnetic early-type stars.
0
1
0
0
0
0
In silico optimization of critical currents in superconductors
For many technological applications of superconductors, the performance of a material is determined by the highest current it can carry losslessly - the critical current. In turn, the critical current can be controlled by adding non-superconducting defects in the superconductor matrix. Here we report a systematic comparison of different local and global optimization strategies for predicting optimal structures of pinning centers leading to the highest possible critical currents. We demonstrate the performance of these methods for a superconductor with randomly placed spherical, elliptical, and columnar defects.
0
1
0
0
0
0
Attracting sequences of holomorphic automorphisms that agree to a certain order
The basin of attraction of a uniformly attracting sequence of holomorphic automorphisms that agree to a certain order at the common fixed point is biholomorphic to $\mathbb{C}^n$. We also give sufficient estimates on how large this order has to be.
0
0
1
0
0
0
Repulsive Fermi polarons with negative effective mass
A recent LENS experiment on a 3D Fermi gas has reported a negative effective mass ($m^*<0$) of Fermi polarons in the strongly repulsive regime. There naturally arises the question of whether the negative $m^*$ is a precursor of the instability towards phase separation (or itinerant ferromagnetism). In this work, we make use of exact solutions to study the ground state and excitation properties of repulsive Fermi polarons in 1D, which can also exhibit a negative $m^*$ in the super Tonks-Girardeau regime. By analyzing the total spin, quasi-momentum distribution, and pair correlations, we conclude that the negative $m^*$ is irrelevant to the instability towards ferromagnetism or phase separation, but is rather an intrinsic feature of collective excitations for fermions in the strongly repulsive regime. Surprisingly, for large and negative $m^*$, such an excitation is accompanied by a spin density modulation in which the majority fermions move closer to the impurity rather than being repelled far away, contrary to the picture of phase separation. These results suggest an alternative interpretation of the negative $m^*$ observed in the recent LENS experiment.
0
1
0
0
0
0
Entropy Formula for Random $\mathbb{Z}^k$-actions
In this paper, entropies, including measure-theoretic entropy and topological entropy, are considered for random $\mathbb{Z}^k$-actions which are generated by random compositions of the generators of $\mathbb{Z}^k$-actions. Applying Pesin's theory for commutative diffeomorphisms, we obtain a measure-theoretic entropy formula for $C^{2}$ random $\mathbb{Z}^k$-actions via the Lyapunov spectra of the generators. Some formulas and bounds of topological entropy for certain random $\mathbb{Z}^k$ (or $\mathbb{Z}_+^k$)-actions generated by more general maps, such as Lipschitz maps, continuous maps on finite graphs, and $C^{1}$ expanding maps, are also obtained. Moreover, as an application, we give a formula for Friedland's entropy of certain $C^{2}$ $\mathbb{Z}^k$-actions.
0
0
1
0
0
0
Kohn anomalies in momentum dependence of magnetic susceptibility of some three-dimensional systems
We study the question of the presence of Kohn points, which yield at low temperatures a non-analytic momentum dependence of the magnetic susceptibility near its maximum, in the electronic spectrum of some three-dimensional systems. In particular, we consider a one-band model on the face-centered cubic lattice with hopping between nearest and next-nearest neighbors, which models some aspects of the dispersion of ZrZn$_2$, and a two-band model on the body-centered cubic lattice, modeling the dispersion of chromium. For the former model it is shown that Kohn points yielding maxima of the susceptibility exist in a certain (sufficiently wide) region of electronic concentrations; the dependence of the wave vectors corresponding to the maxima on the chemical potential is investigated. For the two-band model we show the existence of lines of Kohn points, yielding a maximum of the susceptibility, whose position agrees with the results of band structure calculations and experimental data on the wave vector of antiferromagnetism of chromium.
0
1
0
0
0
0
Stability conditions, $\tau$-tilting Theory and Maximal Green Sequences
Extending the notion of maximal green sequences to an abelian category, we characterize the stability functions, as defined by Rudakov, that induce a maximal green sequence in an abelian length category. Furthermore, we use $\tau$-tilting theory to give a description of the wall and chamber structure of any finite-dimensional algebra. Finally, we introduce the notion of green paths in the wall and chamber structure of an algebra and show that green paths serve as a geometrical generalization of maximal green sequences in this context.
0
0
1
0
0
0
The thermal phase curve offset on tidally- and non-tidally-locked exoplanets: A shallow water model
Using a shallow water model with time-dependent forcing, we show that the peak of an exoplanet thermal phase curve is, in general, offset from secondary eclipse when the planet is rotating. That is, the planetary hot spot is offset from the point of maximal heating (the substellar point) and may lead or lag the forcing; the extent and sign of the offset is a function of both the rotation rate and orbital period of the planet. We also find that the system reaches a steady state in the reference frame of the moving forcing. The model is an extension of the well-studied Matsuno-Gill model into a full spherical geometry and with a planetary-scale translating forcing representing the insolation received on an exoplanet from a host star. The speed of the gravity waves in the model is shown to be a key metric in evaluating the phase curve offset. If the velocity of the substellar point (relative to the planet's surface) exceeds that of the gravity waves, then the hot spot will lag the substellar point, as might be expected by consideration of forced gravity wave dynamics. However, when the substellar point is moving slower than the internal wavespeed of the system, the hottest point can lead the passage of the forcing. We provide an interpretation of this result by consideration of the Rossby and Kelvin wave dynamics as well as, in the very slowly rotating case, a one-dimensional model that yields an analytic solution. Finally, we consider the inverse problem of constraining planetary rotation rate from an observed phase curve.
0
1
0
0
0
0
On Asymptotic Standard Normality of the Two Sample Pivot
The asymptotic solution to the problem of comparing the means of two heteroscedastic populations, based on two random samples from the populations, hinges on the pivot underpinning the construction of the confidence interval and the test statistic being asymptotically standard Normal. This is known to happen if the two samples are independent and the ratio of the sample sizes converges to a finite positive number. This restriction on the asymptotic behavior of the ratio of the sample sizes carries the risk of rendering the asymptotic justification of the finite sample approximation invalid. It turns out that neither the restriction on the asymptotic behavior of the ratio of the sample sizes nor the assumption of cross-sample independence is necessary for the pivotal convergence in question to take place. If the joint distribution of the standardized sample means converges to a spherically symmetric distribution, then that distribution must be bivariate standard Normal (which can happen without the assumption of cross-sample independence), and the aforementioned pivotal convergence holds.
0
0
1
1
0
0
Breakthrough revisited: investigating the requirements for growth of dust beyond the bouncing barrier
For grain growth to proceed effectively and lead to planet formation a number of barriers to growth must be overcome. One such barrier, relevant for compact grains in the inner regions of the disc, is the `bouncing barrier' in which large grains ($\sim$ mm size) tend to bounce off each other rather than sticking. However, by maintaining a population of small grains it has been suggested that cm-size particles may grow rapidly by sweeping up these small grains. We present the first numerically resolved investigation into the conditions under which grains may be lucky enough to grow beyond the bouncing barrier by a series of rare collisions leading to growth (so-called `breakthrough'). Our models support previous results, and show that in simple models breakthrough requires the mass ratio at which high velocity collisions transition to growth instead of causing fragmentation to be low, $\phi \lesssim 50$. However, in models that take into account the dependence of the fragmentation threshold on mass-ratio, we find breakthrough occurs more readily, even if mass transfer is relatively inefficient. This suggests that bouncing may only slow down growth, rather than preventing growth beyond a threshold barrier. However, even when growth beyond the bouncing barrier is possible, radial drift will usually prevent growth to arbitrarily large sizes.
0
1
0
0
0
0
Extremely high magnetoresistance and conductivity in the type-II Weyl semimetals WP2 and MoP2
The peculiar band structure of semimetals exhibiting Dirac and Weyl crossings can lead to spectacular electronic properties such as large mobilities accompanied by extremely high magnetoresistance. In particular, two closely neighbouring Weyl points of the same chirality are protected from annihilation by structural distortions or defects, thereby significantly reducing the scattering probability between them. Here we present the electronic properties of the transition metal diphosphides WP2 and MoP2, which are type-II Weyl semimetals with robust Weyl points. We present transport and angle-resolved photoemission spectroscopy measurements, and first-principles calculations. Our single crystals of WP2 display an extremely low residual low-temperature resistivity of 3 n$\Omega$ cm accompanied by an enormous and highly anisotropic magnetoresistance above 200 million % at 63 T and 2.5 K. These properties are likely a consequence of the novel Weyl fermions expressed in this compound. We observe a large suppression of charge carrier backscattering in WP2 from transport measurements.
0
1
0
0
0
0
Density and current profiles in $U_q(A^{(1)}_2)$ zero range process
The stochastic $R$ matrix for $U_q(A^{(1)}_n)$ introduced recently gives rise to an integrable zero range process of $n$ classes of particles in one dimension. For $n=2$ we investigate how finitely many first class particles fixed as defects influence the grand canonical ensemble of the second class particles. By using the matrix product stationary probabilities involving infinite products of $q$-bosons, exact formulas are derived for the local density and current of the second class particles in the large volume limit.
0
1
0
0
0
0
Crystal structure, site selectivity, and electronic structure of layered chalcogenide LaOBiPbS3
We have investigated the crystal structure of LaOBiPbS3 using neutron diffraction and synchrotron X-ray diffraction. From structural refinements, we found that the two metal sites, occupied by Bi and Pb, were differently surrounded by the sulfur atoms. Calculated bond valence sums suggested that one metal site was nearly trivalent and the other was nearly divalent. Neutron diffraction also revealed site selectivity of Bi and Pb in the LaOBiPbS3 structure. These results suggested that the crystal structure of LaOBiPbS3 can be regarded as alternate stacks of rock-salt-type Pb-rich sulfide layers and LaOBiS2-type Bi-rich layers. From band calculations for an ideal (LaOBiS2)(PbS) system, we found that the S bands of the PbS layer were hybridized with the Bi bands of the BiS plane at around the Fermi energy, which resulted in electronic characteristics different from those of LaOBiS2. Stacking rock-salt-type sulfide (chalcogenide) layers with the BiS2-based layered structure could be a new strategy for the exploration of new BiS2-based layered compounds, exotic two-dimensional electronic states, or novel functionality.
0
1
0
0
0
0
Responses of Pre-transitional Materials with Stress-Generating Defects to External Stimuli: Superelasticity, Supermagnetostriction, Invar and Elinvar Effects
We considered a generic case of pre-transitional materials with static stress-generating defects, dislocations and coherent nano-precipitates, at temperatures close to but above the starting temperature of martensitic transformation, Ms. Using the Phase Field Microelasticity theory and 3D simulation, we demonstrated that the local stress generated by these defects produces equilibrium nano-size martensitic embryos (MEs) in the pre-transitional state, these embryos being orientation variants of martensite. This is a new type of equilibrium: the thermoelastic equilibrium between the MEs and the parent phase, in which the total volume of MEs and their size are equilibrium internal thermodynamic parameters. This thermoelastic equilibrium exists only in the presence of the stress-generating defects. Cooling the pre-transitional state towards Ms or applying external stimuli, stress or magnetic field, results in a shift of the thermoelastic equilibrium provided by a reversible anhysteretic growth of MEs, which produces a giant ME-generated macroscopic strain. In particular, this effect can be associated with the diffuse phase transformations observed in some ferroelectrics above the Curie point. It is shown that the ME-generated strain is giant and describes superelasticity if the applied field is stress. It describes supermagnetostriction if the martensite (or austenite) is ferromagnetic and the applied field is a magnetic field. In general, the material with defects can be a multiferroic with a giant multiferroic response if the parent and martensitic phases have different ferroic properties. Finally, the ME-generated strain may explain or, at least, contribute to the Invar and Elinvar effects that are typically observed in pre-transitional austenite. The thermoelastic equilibrium and all these effects exist only if the interaction between the defects and MEs is infinite-range.
0
1
0
0
0
0
On The Robustness of Epsilon Skew Extension for Burr III Distribution on Real Line
The Burr III distribution is used in a wide variety of fields of lifetime data analysis, reliability theory, financial literature, etc. It is defined on the positive axis and has two shape parameters, say $c$ and $k$. These shape parameters make the distribution quite flexible. They also control the tail behavior of the distribution. In this study, we extend the Burr III distribution to the real axis and also add a skewness parameter, say $\varepsilon$, with the epsilon-skew extension approach. When the parameters $c$ and $k$ satisfy $ck \approx 1$ or $ck < 1$, the distribution is skewed unimodal. Otherwise, it is skewed bimodal with the same level of peaks on the negative and positive sides of the real line. Thus, the ESBIII distribution can capture various data sets even though the number of parameters is three. The location-scale form of this distribution is also given. Some distributional properties of the new distribution are investigated. The maximum likelihood (ML) estimation method for the parameters of ESBIII is considered. The robustness properties of the ML estimators are studied and the tail behaviour of the ESBIII distribution is examined. Applications to real data are considered to illustrate the modeling capacity of this distribution within the class of bimodal distributions.
0
0
1
1
0
0
Logics for Word Transductions with Synthesis
We introduce a logic, called LT, to express properties of transductions, i.e. binary relations from input to output (finite) words. In LT, the input/output dependencies are modelled via an origin function which associates to any position of the output word the input position from which it originates. LT is well-suited to express relations (which are not necessarily functional), and can express all regular functional transductions, i.e. transductions definable for instance by deterministic two-way transducers. Despite its high expressive power, LT has decidable satisfiability and equivalence problems, with tight non-elementary and elementary complexities, depending on the specific representation of LT-formulas. Our main contribution is a synthesis result: from any transduction R defined in LT, it is possible to synthesise a regular functional transduction f such that for all input words u in the domain of R, f(u) is defined and (u,f(u)) belongs to R. As a consequence, we obtain that a functional transduction is regular iff it is LT-definable. We also investigate the algorithmic and expressiveness properties of several extensions of LT, and make explicit a correspondence between transductions and data words. As a side result, we obtain a new decidable logic for data words.
1
0
0
0
0
0
Topology and strong four fermion interactions in four dimensions
We study massless fermions interacting through a particular four fermion term in four dimensions. Exact symmetries prevent the generation of bilinear fermion mass terms. We determine the structure of the low energy effective action for the auxiliary field needed to generate the four fermion term and find that it has a novel structure that admits topologically non-trivial defects with non-zero Hopf invariant. We show that fermions propagating in such a background pick up a mass without breaking symmetries. Furthermore, pairs of such defects experience a logarithmic interaction. We argue that a phase transition separates a phase where these defects proliferate from a broken phase where they are bound tightly. We conjecture that by tuning one additional operator the broken phase can be eliminated, with a single BKT-like phase transition separating the massless from massive phases.
0
1
0
0
0
0
Augmentor: An Image Augmentation Library for Machine Learning
The generation of artificial data based on existing observations, known as data augmentation, is a technique used in machine learning to improve model accuracy, generalisation, and to control overfitting. Augmentor is a software package, available in both Python and Julia versions, that provides a high level API for the expansion of image data using a stochastic, pipeline-based approach which effectively allows for images to be sampled from a distribution of augmented images at runtime. Augmentor provides methods for most standard augmentation practices as well as several advanced features such as label-preserving, randomised elastic distortions, and provides many helper functions for typical augmentation tasks used in machine learning.
1
0
0
1
0
0
Multilevel Sequential${}^2$ Monte Carlo for Bayesian Inverse Problems
The identification of parameters in mathematical models using noisy observations is a common task in uncertainty quantification. We employ the framework of Bayesian inversion: we combine monitoring and observational data with prior information to estimate the posterior distribution of a parameter. Specifically, we are interested in the distribution of a diffusion coefficient of an elliptic PDE. In this setting, the sample space is high-dimensional, and each sample of the PDE solution is expensive. To address these issues we propose and analyse a novel Sequential Monte Carlo (SMC) sampler for the approximation of the posterior distribution. Classical, single-level SMC constructs a sequence of measures, starting with the prior distribution, and finishing with the posterior distribution. The intermediate measures arise from a tempering of the likelihood, or, equivalently, a rescaling of the noise. The resolution of the PDE discretisation is fixed. In contrast, our estimator employs a hierarchy of PDE discretisations to decrease the computational cost. We construct a sequence of intermediate measures either by decreasing the temperature or by increasing the discretisation level. This idea builds on and generalises the multi-resolution sampler proposed in [P.S. Koutsourelakis, J. Comput. Phys., 228 (2009), pp. 6184-6211] where a bridging scheme is used to transfer samples from coarse to fine discretisation levels. Importantly, our choice between tempering and bridging is fully adaptive. We present numerical experiments in 2D space, comparing our estimator to single-level SMC and the multi-resolution sampler.
0
0
0
1
0
0
Multiparameter actuation of a neutrally-stable shell: a flexible gear-less motor
We have designed and tested experimentally a morphing structure consisting of a neutrally stable thin cylindrical shell driven by a multiparameter piezoelectric actuation. The shell is obtained by plastically deforming an initially flat copper disk, so as to induce large isotropic and almost uniform inelastic curvatures. Following the plastic deformation, in a perfectly isotropic system, the shell is theoretically neutrally stable, possessing a continuous manifold of stable cylindrical shapes corresponding to the rotation of the axis of maximal curvature. Small imperfections render the actual structure bistable, giving preferred orientations. A three-parameter piezoelectric actuation, exerted through micro-fiber-composite actuators, allows us to add a small perturbation to the plastic inelastic curvature and to control the direction of maximal curvature. This actuation law is designed through a geometrical analogy based on a fully non-linear inextensible uniform-curvature shell model. We report on the fabrication, identification, and experimental testing of a prototype and demonstrate the effectiveness of the piezoelectric actuators in controlling its shape. The resulting motion is an apparent rotation of the shell, controlled by the voltages as in a "gear-less motor", which is, in reality, a precession of the axis of principal curvature.
0
1
0
0
0
0
An Evolutionary Game for User Access Mode Selection in Fog Radio Access Networks
The fog radio access network (F-RAN) is a promising paradigm for fifth generation wireless communication systems to provide high spectral efficiency and energy efficiency. Enabling users to select an appropriate communication mode between fog access point (F-AP) and device-to-device (D2D) transmission in F-RANs is critical for performance optimization. Using evolutionary game theory, we investigate the dynamics of user access mode selection in F-RANs. Specifically, the competition among groups of potential users is formulated as a dynamic evolutionary game, and the evolutionary equilibrium is the solution to this game. A stochastic geometry tool is used to derive the payoff expressions for both F-AP and D2D users, taking into account the different node locations and cache sizes as well as the delay cost. The analytical results obtained from the game model are evaluated via simulations, which show that the evolutionary game based access mode selection algorithm can reach a much higher payoff than the max rate based algorithm.
1
0
0
0
0
0
Latent Estimation of GDP, GDP per capita, and Population from Historic and Contemporary Sources
The concepts of Gross Domestic Product (GDP), GDP per capita, and population are central to the study of political science and economics. However, a growing literature suggests that existing measures of these concepts contain considerable error or are based on overly simplistic modeling choices. We address these problems by creating a dynamic, three-dimensional latent trait model, which uses observed information about GDP, GDP per capita, and population to estimate posterior prediction intervals for each of these important concepts. By combining historical and contemporary sources of information, we are able to extend the temporal and spatial coverage of existing datasets for country-year units from 1500 A.D. through 2015 A.D. and, because the model makes use of multiple indicators of the underlying concepts, we are able to estimate the relative precision of the different country-year estimates. Overall, our latent variable model offers a principled method for incorporating information from different historic and contemporary data sources. It can be expanded or refined as researchers discover new or alternative sources of information about these concepts.
0
0
0
1
0
0
Portfolio Optimization under Fast Mean-reverting and Rough Fractional Stochastic Environment
Fractional stochastic volatility models have been widely used to capture the non-Markovian structure revealed by financial time series of realized volatility. On the other hand, empirical studies have identified scales in stock price volatility: both a fast time scale on the order of days and a slow scale on the order of months. It is therefore natural to study the portfolio optimization problem under the effects of dependence behavior, which we model by fractional Brownian motions with Hurst index $H$, in the fast or slow regimes characterized by small parameters $\epsilon$ or $\delta$. For the slowly varying volatility with $H \in (0,1)$, it was shown that the first order correction to the problem value contains two terms of order $\delta^H$, one random component and one deterministic function of the state processes, while for the fast varying case with $H > 1/2$, the same form holds at order $\epsilon^{1-H}$. This paper is dedicated to the remaining case of a fast-varying rough environment ($H < 1/2$), which exhibits a different behavior. We show that, in the expansion, only one deterministic term of order $\sqrt{\epsilon}$ appears in the first order correction.
0
0
0
0
0
1
Triangle Generative Adversarial Networks
A Triangle Generative Adversarial Network ($\Delta$-GAN) is developed for semi-supervised cross-domain joint distribution matching, where the training data consists of samples from each domain, and supervision of domain correspondence is provided by only a few paired samples. $\Delta$-GAN consists of four neural networks, two generators and two discriminators. The generators are designed to learn the two-way conditional distributions between the two domains, while the discriminators implicitly define a ternary discriminative function, which is trained to distinguish real data pairs and two kinds of fake data pairs. The generators and discriminators are trained together using adversarial learning. Under mild assumptions, in theory the joint distributions characterized by the two generators concentrate to the data distribution. In experiments, three different kinds of domain pairs are considered, image-label, image-image and image-attribute pairs. Experiments on semi-supervised image classification, image-to-image translation and attribute-based image generation demonstrate the superiority of the proposed approach.
1
0
0
1
0
0
Learning Distributions of Meant Color
When a speaker says the name of a color, the color that they picture is not necessarily the same as the listener imagines. Color is a grounded semantic task, but that grounding is not a mapping of a single word (or phrase) to a single point in color-space. Proper understanding of color language requires the capacity to map a sequence of words to a probability distribution in color-space. A distribution is required as there is no clear agreement between people as to what a particular color describes -- different people have a different idea of what it means to be `very dark orange'. We propose a novel GRU-based model to handle this case. Learning how each word in a color name contributes to the color described allows for knowledge sharing between uses of the words in different color names. This knowledge sharing significantly improves predictive capacity for color names with sparse training data. The extreme case of this challenge in data sparsity is for color names without any direct training data. Our model is able to predict reasonable distributions for these cases, as evaluated on a held-out dataset consisting only of such terms.
1
0
0
0
0
0
Bose - Einstein condensation of triplons with a weakly broken U(1) symmetry
The low-temperature properties of certain quantum magnets can be described in terms of a Bose-Einstein condensation (BEC) of magnetic quasiparticles (triplons). Some mean-field approaches (MFA) to describe these systems, based on the standard grand canonical ensemble, do not take the anomalous density into account and lead to an internal inconsistency, as has been shown by Hohenberg and Martin, and may therefore produce unphysical results. Moreover, an explicit breaking of the U(1) symmetry, as observed, for example, in TlCuCl3, makes the application of MFA more complicated. In the present work, we develop a self-consistent MFA, similar to the Hartree-Fock-Bogolyubov approximation in the notion of representative statistical ensembles, including the effect of a weakly broken U(1) symmetry. We apply our results to experimental data on the quantum magnet TlCuCl3 and show that magnetization curves and the energy dispersion can be well described within this approximation, assuming that the BEC scenario is still valid. We predict that the shift of the critical temperature $T_c$ due to a finite exchange anisotropy is rather substantial even when the anisotropy parameter $\gamma$ is small, e.g., $\Delta T_c \approx 10\%$ of $T_c$ at $H = 6$ T and for $\gamma \approx 4\,\mu$eV.
0
1
0
0
0
0