Dataset schema (one record per arXiv paper; each record lists the title, the abstract, and a "Labels:" line with six binary category flags):
  title: string, length 7 to 239 characters
  abstract: string, length 7 to 2.76k characters
  cs: int64, 0 or 1
  phy: int64, 0 or 1
  math: int64, 0 or 1
  stat: int64, 0 or 1
  quantitative biology (q-bio): int64, 0 or 1
  quantitative finance (q-fin): int64, 0 or 1
J0906+6930: a radio-loud quasar in the early Universe
Radio-loud high-redshift quasars (HRQs), although only a few of them are known to date, are crucial for studying the growth of supermassive black holes (SMBHs) and the evolution of active galactic nuclei (AGN) at early cosmological epochs. Radio jets offer direct evidence of SMBHs, and their radio structures can be studied with the highest angular resolution using Very Long Baseline Interferometry (VLBI). Here we report on observations of three HRQs (J0131-0321, J0906+6930, J1026+2542) at z>5 made with the Korean VLBI Network (KVN) and the VLBI Exploration of Radio Astrometry (VERA) arrays (together known as KaVA), with the purpose of studying their pc-scale jet properties. The observations were carried out at 22 and 43 GHz in 2016 January among the first-batch open-use experiments of KaVA. The quasar J0906+6930 was detected at 22 GHz but not at 43 GHz. The other two sources were not detected, and upper limits to their compact radio emission are given. Archival VLBI imaging data and a single-dish 15-GHz monitoring light curve of J0906+6930 were also acquired as complementary information. J0906+6930 shows moderate variability at 15 GHz. The radio image is characterized by a core-jet structure with a total detectable size of ~5 pc in projection. The brightness temperature, $1.9\times10^{11}$ K, indicates relativistic beaming of the jet. The radio properties of J0906+6930 are consistent with those of a blazar. Follow-up VLBI observations will be helpful for determining its structural variation.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
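The beaming argument above turns on the measured brightness temperature exceeding typical intrinsic limits. As a hedged aside (the standard VLBI estimator, not an equation quoted from this paper), the brightness temperature of a compact component of flux density $S_\nu$ in Jy, fitted angular size $\theta$ in mas, and observing frequency $\nu$ in GHz at redshift $z$ is commonly computed as

$$ T_{\rm b} \simeq 1.22\times10^{12}\;\frac{S_\nu\,(1+z)}{\theta^{2}\,\nu^{2}}\ {\rm K}, $$

and values well above the $\sim 5\times10^{10}$ K equipartition limit, such as the $1.9\times10^{11}$ K quoted here, are conventionally read as evidence of Doppler boosting.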
Adaptive Stochastic Dual Coordinate Ascent for Conditional Random Fields
This work investigates the training of conditional random fields (CRFs) via the stochastic dual coordinate ascent (SDCA) algorithm of Shalev-Shwartz and Zhang (2016). SDCA enjoys a linear convergence rate and strong empirical performance for binary classification problems, but it has never been used to train CRFs, even though it benefits from an `exact' line search with a single marginalization oracle call, unlike previous approaches. In this paper, we adapt SDCA to train CRFs and enhance it with an adaptive non-uniform sampling strategy based on block duality gaps. We perform experiments on four standard sequence prediction tasks. SDCA demonstrates performance on par with the state of the art, and improves over it on three of the four datasets, which have in common the use of sparse features.
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
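A minimal sketch of SDCA with duality-gap-based non-uniform sampling, transplanted to ridge regression where the coordinate update is closed-form (the paper's CRF setting replaces this with marginalization-oracle updates over blocks; `lam`, the step budget, and the gap smoothing constant are our assumptions):

```python
import numpy as np

def sdca_adaptive(X, y, lam=0.1, n_steps=2000, rng=np.random.default_rng(0)):
    """SDCA for ridge regression, sampling examples in proportion to their
    per-example duality gaps (a toy stand-in for the paper's block gaps)."""
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)                        # maintained as X.T @ alpha / (lam*n)
    sq = (X ** 2).sum(axis=1)
    for _ in range(n_steps):
        m = X @ w                          # recomputed for clarity; cache in practice
        gaps = 0.5 * (alpha + m - y) ** 2  # Fenchel gaps for squared loss, >= 0
        p = (gaps + 1e-8) / (gaps + 1e-8).sum()
        i = rng.choice(n, p=p)             # big-gap examples sampled more often
        delta = (y[i] - alpha[i] - m[i]) / (1.0 + sq[i] / (lam * n))
        alpha[i] += delta                  # exact 1-D maximization of the dual
        w += delta * X[i] / (lam * n)
    return w
```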
Accelerating Innovation Through Analogy Mining
The availability of large idea repositories (e.g., the U.S. patent database) could significantly accelerate innovation and discovery by providing people with inspiration from solutions to analogous problems. However, finding useful analogies in these large, messy, real-world repositories remains a persistent challenge for both human and automated methods. Previous approaches include costly hand-created databases that have high relational structure (e.g., predicate calculus representations) but are very sparse. Simpler machine-learning/information-retrieval similarity metrics can scale to large, natural-language datasets, but struggle to account for structural similarity, which is central to analogy. In this paper we explore the viability and value of learning simpler structural representations, specifically "problem schemas", which specify the purpose of a product and the mechanisms by which it achieves that purpose. Our approach combines crowdsourcing and recurrent neural networks to extract purpose and mechanism vector representations from product descriptions. We demonstrate that these learned vectors allow us to find analogies with higher precision and recall than traditional information-retrieval methods. In an ideation experiment, analogies retrieved by our models significantly increased people's likelihood of generating creative ideas compared to analogies retrieved by traditional methods. Our results suggest that a promising approach to enabling computational analogy at scale is to learn and leverage weaker structural representations.
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
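As a hedged illustration of how learned purpose and mechanism vectors could drive retrieval (the crowdsourcing/RNN extraction step is out of scope here; the scoring rule and the 0.5 trade-off weight are our assumptions, not the paper's):

```python
import numpy as np

def analogy_scores(purpose, mechanism, query_idx):
    """Rank products as analogies for one query: reward purpose similarity,
    penalize mechanism similarity, so top matches solve the same problem in a
    different way. purpose/mechanism: (N, D) arrays with unit-normalized rows."""
    p_sim = purpose @ purpose[query_idx]
    m_sim = mechanism @ mechanism[query_idx]
    scores = p_sim - 0.5 * m_sim          # 0.5: assumed trade-off weight
    scores[query_idx] = -np.inf           # exclude the query itself
    return np.argsort(-scores)            # best analogy candidates first
```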
$η$-Ricci solitons in $(\varepsilon)$-almost paracontact metric manifolds
The object of this paper is to study $\eta$-Ricci solitons on $(\varepsilon)$-almost paracontact metric manifolds. We investigate $\eta$-Ricci solitons in the case when its potential vector field is exactly the characteristic vector field $\xi$ of the $(\varepsilon)$-almost paracontact metric manifold and when the potential vector field is torse-forming. We also study Einstein-like and $(\varepsilon)$-para Sasakian manifolds admitting $\eta$-Ricci solitons. Finally we obtain some results for $\eta$-Ricci solitons on $(\varepsilon)$-almost paracontact metric manifolds with a special view towards parallel symmetric (0,2)-tensor fields.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Entropy facilitated active transport
We show how active transport of ions can be interpreted as an entropy facilitated process. In this interpretation, the pore geometry through which substrates are transported can give rise to a driving force. This gives a direct link between the geometry and the changes in Gibbs energy required. Quantifying the size of this effect for several proteins we find that the entropic contribution from the pore geometry is significant and we discuss how the effect can be used to interpret variations in the affinity at the binding site.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Consistency of the Predicative Calculus of Cumulative Inductive Constructions (pCuIC)
In order to avoid well-known paradoxes associated with self-referential definitions, higher-order dependent type theories stratify the theory using a countably infinite hierarchy of universes (also known as sorts), Type$_0$ : Type$_1$ : $\cdots$. Such type systems are called cumulative if for any type $A$ we have that $A$ : Type$_{i}$ implies $A$ : Type$_{i+1}$. The predicative calculus of inductive constructions (pCIC), which forms the basis of the Coq proof assistant, is one such system. In this paper we present and establish the soundness of the predicative calculus of cumulative inductive constructions (pCuIC), which extends the cumulativity relation to inductive types.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Private Information Retrieval from MDS Coded Data with Colluding Servers: Settling a Conjecture by Freij-Hollanti et al.
A $(K, N, T, K_c)$ instance of the MDS-TPIR problem is comprised of $K$ messages and $N$ distributed servers. Each message is separately encoded through a $(K_c, N)$ MDS storage code. A user wishes to retrieve one message, as efficiently as possible, while revealing no information about the desired message index to any colluding set of up to $T$ servers. The fundamental limit on the efficiency of retrieval, i.e., the capacity of MDS-TPIR is known only at the extremes where either $T$ or $K_c$ belongs to $\{1,N\}$. The focus of this work is a recent conjecture by Freij-Hollanti, Gnilke, Hollanti and Karpuk which offers a general capacity expression for MDS-TPIR. We prove that the conjecture is false by presenting as a counterexample a PIR scheme for the setting $(K, N, T, K_c) = (2,4,2,2)$, which achieves the rate $3/5$, exceeding the conjectured capacity, $4/7$. Insights from the counterexample lead us to capacity characterizations for various instances of MDS-TPIR including all cases with $(K, N, T, K_c) = (2,N,T,N-1)$, where $N$ and $T$ can be arbitrary.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
ADaPTION: Toolbox and Benchmark for Training Convolutional Neural Networks with Reduced Numerical Precision Weights and Activation
Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs) are useful for many practical tasks in machine learning. Synaptic weights, as well as neuron activation functions within the deep network, are typically stored with high-precision formats, e.g. 32-bit floating point. However, since storage capacity is limited and each memory access consumes power, both storage capacity and memory access are crucial factors in these networks. Here we present a method and the accompanying ADaPTION toolbox, which extends the popular deep learning library Caffe to support training of deep CNNs with reduced numerical precision of weights and activations using fixed-point notation. ADaPTION includes tools to measure the dynamic range of weights and activations. Using the ADaPTION tools, we quantized several CNNs, including VGG16, down to 16-bit weights and activations with only a 0.8% drop in Top-1 accuracy. The quantization, especially of the activations, leads to an increase of up to 50% in sparsity, especially in early and intermediate layers, which we exploit to skip multiplications with zero and thus perform faster and computationally cheaper inference.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
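A rough sketch of range-driven fixed-point quantization in the spirit described above (this is not the ADaPTION API; the bit-allocation heuristic is a common one we assume for illustration):

```python
import numpy as np

def quantize_fixed_point(x, total_bits=16):
    """Quantize an array to signed fixed point, choosing the integer/fractional
    bit split from the measured dynamic range, then rounding and saturating."""
    int_bits = max(1, int(np.ceil(np.log2(np.abs(x).max() + 1e-12))) + 1)  # +1 sign bit
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits
    qmin, qmax = -2 ** (total_bits - 1), 2 ** (total_bits - 1) - 1
    return np.clip(np.round(x * scale), qmin, qmax) / scale

w = np.random.randn(64, 64).astype(np.float32)
w16 = quantize_fixed_point(w)       # 16-bit weights, as in the VGG16 result above
print(np.abs(w - w16).max())        # error bounded by half a quantization step
```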
Graph Attention Networks
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
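A minimal single-head attention layer in the spirit of the abstract, written with NumPy (a sketch, not the authors' implementation; the source/destination decomposition of the attention vector and the 0.2 LeakyReLU slope are standard choices we assume):

```python
import numpy as np

def gat_layer(H, A, W, a_src, a_dst):
    """Single-head graph attention layer. H: (N, F) node features,
    A: (N, N) binary adjacency with self-loops, W: (F, F') weights,
    a_src/a_dst: (F',) halves of the attention vector."""
    Z = H @ W                                          # transformed features
    e = np.add.outer(Z @ a_src, Z @ a_dst)             # e_ij = a_src.z_i + a_dst.z_j
    e = np.where(A > 0, np.maximum(0.2 * e, e), -1e9)  # LeakyReLU, mask non-edges
    alpha = np.exp(e - e.max(axis=1, keepdims=True))   # row-wise softmax...
    alpha *= (A > 0)
    alpha /= alpha.sum(axis=1, keepdims=True)          # ...over each neighbourhood
    return np.maximum(alpha @ Z, 0)                    # aggregate neighbours, ReLU
```

Stacking such layers, and averaging or concatenating several heads, recovers the multi-head architecture the abstract describes.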
Social Media Would Not Lie: Prediction of the 2016 Taiwan Election via Online Heterogeneous Data
The prevalence of online media has attracted researchers from various domains to explore human behavior and make interesting predictions. In this research, we leverage heterogeneous social media data collected from various online platforms to predict Taiwan's 2016 presidential election. In contrast to most existing research, we take a "signal" view of heterogeneous information and adopt the Kalman filter to fuse multiple signals into daily vote predictions for the candidates. We also consider events that influenced the election in a quantitative manner, based on the so-called event study model that originated in the field of financial research. We obtained the following interesting findings. First, public opinion in online media dominates traditional polls in Taiwan election prediction in terms of both predictive power and timeliness, but offline polls can still help alleviate the sample bias of online opinions. Second, although online signals converge as election day approaches, the simple Facebook "Like" is consistently the strongest indicator of the election result. Third, most influential events have a strong connection to cross-strait relations, and the Chou Tzu-yu flag incident, followed by the apology video one day before the election, increased the vote share of Tsai Ing-Wen by 3.66%. This research justifies the predictive power of online media in politics and the advantages of information fusion. The combined use of the Kalman filter and the event study method contributes to the data-driven political analytics paradigm for both prediction and attribution purposes.
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
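A hedged sketch of the fusion step: a one-dimensional random-walk Kalman filter that folds several noisy daily signals into a single latent vote-share estimate (the paper's actual state and observation models are not given in the abstract; the process noise `q` and the per-signal variances are our assumptions):

```python
import numpy as np

def kalman_fuse(signals, obs_var, q=1e-3):
    """signals: (T, K) daily readings from K online sources; obs_var: (K,)
    assumed observation variances. Returns a (T,) fused vote-share track."""
    x, p = signals[0].mean(), 1.0            # initial state and variance
    out = []
    for z in signals:
        p += q                               # predict: latent share drifts
        for zi, r in zip(z, obs_var):        # update sequentially per signal
            k = p / (p + r)                  # Kalman gain
            x += k * (zi - x)
            p *= (1 - k)
        out.append(x)
    return np.array(out)
```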
The stratified micro-randomized trial design: sample size considerations for testing nested causal effects of time-varying treatments
Technological advancements in the field of mobile devices and wearable sensors have helped overcome obstacles in the delivery of care, making it possible to deliver behavioral treatments anytime and anywhere. Increasingly, the delivery of these treatments is triggered by predictions of risk or engagement, which may have been impacted by prior treatments. Furthermore, the treatments are often designed to have an impact on individuals over a span of time during which subsequent treatments may be provided. Here we discuss our work on the design of a mobile health smoking cessation experimental study in which two challenges arose: first, the randomizations to treatment should occur at times of stress, and second, the outcome of interest accrues over a period that may include subsequent treatment. To address these challenges we develop the "stratified micro-randomized trial," in which each individual is randomized among treatments at times determined by predictions constructed from outcomes to prior treatment and with randomization probabilities depending on these outcomes. We define both conditional and marginal proximal treatment effects. Depending on the scientific goal, these effects may be defined over a period of time during which subsequent treatments may be provided. We develop a primary analysis method and associated sample size formulae for testing these effects.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Multiplicative Convolution of Real Asymmetric and Real Antisymmetric Matrices
The singular values of products of standard complex Gaussian random matrices, or sub-blocks of Haar distributed unitary matrices, have the property that their probability distribution has an explicit, structured form referred to as a polynomial ensemble. It is furthermore the case that the corresponding bi-orthogonal system can be determined in terms of Meijer G-functions, and the correlation kernel given as an explicit double contour integral. It has recently been shown that the Hermitised product $X_M \cdots X_2 X_1A X_1^T X_2^T \cdots X_M^T$, where each $X_i$ is a standard real Gaussian matrix and $A$ is real anti-symmetric, exhibits analogous properties. Here we use the theory of spherical functions and transforms to present a theory which, for even dimensions, includes these properties of the latter product as a special case. As an example we show that the theory also allows for a treatment of this class of Hermitised products when the $X_i$ are chosen as sub-blocks of Haar distributed real orthogonal matrices.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
An approach to Griffiths conjecture
The Griffiths conjecture asserts that every ample vector bundle $E$ over a compact complex manifold $S$ admits a hermitian metric with positive curvature in the sense of Griffiths. In this article we give a sufficient condition for a positive hermitian metric on $\mathcal{O}_{\mathbb{P}(E^*)}(1)$ to induce a Griffiths positive $L^2$-metric on the vector bundle $E$. This result suggests studying the relative Kähler-Ricci flow on $\mathcal{O}_{\mathbb{P}(E^*)}(1)$ for the fibration $\mathbb{P}(E^*)\to S$. We define such a flow and give arguments for its convergence.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
On Detecting Adversarial Perturbations
Machine learning, and deep learning in particular, has advanced tremendously on perceptual tasks in recent years. However, these systems remain vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to a human. In this work, we propose to augment deep neural networks with a small "detector" subnetwork which is trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. Our method is orthogonal to prior work on addressing adversarial perturbations, which has mostly focused on making the classification network itself more robust. We show empirically that adversarial perturbations can be detected surprisingly well even though they are quasi-imperceptible to humans. Moreover, while the detectors have been trained to detect only a specific adversary, they generalize to similar and weaker adversaries. In addition, we propose an adversarial attack that fools both the classifier and the detector, and a novel training procedure for the detector that counteracts this attack.
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
It's Time to Consider "Time" when Evaluating Recommender-System Algorithms [Proposal]
In this position paper, we question the current practice of calculating evaluation metrics for recommender systems as single numbers (e.g. precision p=.28 or mean absolute error MAE=1.21). We argue that single numbers express only average effectiveness over a usually rather long period (e.g. a year or even longer), which provides only a vague and static view of the data. We propose that recommender-system researchers should instead calculate metrics over time intervals such as weeks or months, and plot the results in e.g. a line chart. This way, results show how algorithms' effectiveness develops over time, and hence allow drawing more meaningful conclusions about how an algorithm will perform in the future. In this paper, we explain our reasoning, provide an example to illustrate it, and present suggestions for what the community should do next.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
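A small sketch of the proposal: computing a metric per time bucket instead of one global number (the function name and the monthly granularity are ours):

```python
from collections import defaultdict

def precision_by_month(events):
    """events: iterable of (day, hit) pairs, where day is a datetime.date and
    hit says whether the recommendation served that day was relevant.
    Returns {(year, month): precision} instead of one global average."""
    buckets = defaultdict(lambda: [0, 0])
    for day, hit in events:
        b = buckets[(day.year, day.month)]
        b[0] += int(hit)
        b[1] += 1
    return {k: hits / n for k, (hits, n) in buckets.items()}
```

Plotting the returned dictionary as a line chart shows whether an algorithm's effectiveness is stable, improving, or decaying over time, which is exactly the view the single-number practice hides.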
Anomalous metals -- failed superconductors
The observation of metallic ground states in a variety of two-dimensional electronic systems poses a fundamental challenge for the theory of electron fluids. Here, we analyze evidence for the existence of a regime, which we call the "anomalous metal regime," in diverse 2D superconducting systems driven through a quantum superconductor to metal transition (QSMT) by tuning physical parameters such as the magnetic field, the gate voltage in the case of systems with a MOSFET geometry, or the degree of disorder. The principal phenomenological observation is that in the anomalous metal, as a function of decreasing temperature, the resistivity first drops as if the system were approaching a superconducting ground state, but then saturates at low temperatures to a value that can be orders of magnitude smaller than the Drude value. The anomalous metal also shows a giant positive magneto-resistance. Thus, it behaves as if it were a "failed superconductor." This behavior is observed in a broad range of parameters. We moreover exhibit, by theoretical solution of a model of superconducting grains embedded in a metallic matrix, that as a matter of principle such anomalous metallic behavior can occur in the neighborhood of a QSMT. However, we also argue that the robustness and ubiquitous nature of the observed phenomena are difficult to reconcile with any existing theoretical treatment, and speculate about the character of a more fundamental theoretical framework.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Energy Optimization of Automatic Hybrid Sailboat
Autonomous Surface Vehicles (ASVs) provide an effective way to carry out applications such as environmental monitoring, search and rescue, and scientific research. However, conventional ASVs depend heavily on stored energy. A Hybrid Sailboat, mainly powered by the wind, can solve this problem by using an auxiliary propulsion system. The electric energy cost of the Hybrid Sailboat needs to be optimized to achieve ocean automatic cruise missions. Based on adjusted settings of the sails and rudders, this paper seeks the optimal trajectory for autonomous cruising, reducing the energy cost by changing the heading angle when sailing upwind. The experimental results confirm that the heading angle accounts for the energy cost, and the trajectory with the best heading angle saves up to 23.7% of energy compared with other conditions. Furthermore, the energy-time line can be used to predict the energy cost for long-duration sailing.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Estimating the Operating Characteristics of Ensemble Methods
In this paper we present a technique for using the bootstrap to estimate the operating characteristics and their variability for certain types of ensemble methods. Bootstrapping a model can require a huge amount of work if the training data set is large. Fortunately in many cases the technique lets us determine the effect of infinite resampling without actually refitting a single model. We apply the technique to the study of meta-parameter selection for random forests. We demonstrate that alternatives to bootstrap aggregation and to considering $\sqrt{d}$ features to split each node, where $d$ is the number of features, can produce improvements in predictive accuracy.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Plasma turbulence at ion scales: a comparison between PIC and Eulerian hybrid-kinetic approaches
Kinetic-range turbulence in magnetized plasmas and, in particular, in the context of solar-wind turbulence has been extensively investigated over the past decades via numerical simulations. Among others, one of the widely adopted reduced plasma models is the so-called hybrid-kinetic model, where the ions are fully kinetic and the electrons are treated as a neutralizing (inertial or massless) fluid. Within the same model, different numerical methods and/or approaches to turbulence development have been employed. In the present work, we present a comparison between two-dimensional hybrid-kinetic simulations of plasma turbulence obtained with two complementary approaches spanning about two decades in wavenumber - from the MHD inertial range to scales well below the ion gyroradius - with state-of-the-art accuracy. One approach employs hybrid particle-in-cell (HPIC) simulations of freely-decaying Alfvénic turbulence, whereas the other consists of Eulerian hybrid Vlasov-Maxwell (HVM) simulations of turbulence continuously driven with partially-compressible large-scale fluctuations. Despite the completely different initialization and injection/drive at large scales, the same properties of turbulent fluctuations at $k_\perp\rho_i\gtrsim1$ are observed. The system indeed self-consistently "reprocesses" the turbulent fluctuations while they are cascading towards smaller and smaller scales, in a way which actually depends on the plasma beta parameter. Small-scale turbulence has been found to be mainly populated by kinetic Alfvén wave (KAW) fluctuations for $\beta\geq1$, whereas KAW fluctuations are only sub-dominant for low-$\beta$.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Computational determination of the largest lattice polytope diameter
A lattice $(d, k)$-polytope is the convex hull of a set of points in dimension $d$ whose coordinates are integers between 0 and $k$. Let $\delta(d, k)$ be the largest diameter over all lattice $(d, k)$-polytopes. We develop a computational framework to determine $\delta(d, k)$ for small instances. We show that $\delta(3, 4) = 7$ and $\delta(3, 5) = 9$; that is, we verify for $(d, k) = (3, 4)$ and $(3, 5)$ the conjecture whereby $\delta(d, k)$ is at most $\lfloor (k + 1)d/2 \rfloor$ and is achieved, up to translation, by a Minkowski sum of lattice vectors.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
A high resolution ion microscope for cold atoms
We report on an ion-optical system that serves as a microscope for ultracold ground state and Rydberg atoms. The system is designed to achieve a magnification of up to 1000 and a spatial resolution in the 100 nm range, thereby surpassing many standard imaging techniques for cold atoms. The microscope consists of four electrostatic lenses and a microchannel plate in conjunction with a delay line detector in order to achieve single particle sensitivity with high temporal and spatial resolution. We describe the design process of the microscope including ion-optical simulations of the imaging system and characterize aberrations and the resolution limit. Furthermore, we present the experimental realization of the microscope in a cold atom setup and investigate its performance by patterned ionization with a structure size down to 2.7 $\mu$m. The microscope meets the requirements for studying various many-body effects, ranging from correlations in cold quantum gases up to Rydberg molecule formation.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Lock-Free Parallel Perceptron for Graph-based Dependency Parsing
Dependency parsing is an important NLP task. A popular approach to dependency parsing is the structured perceptron. However, graph-based dependency parsing has a time complexity of $O(n^3)$, so it suffers from slow training. To deal with this problem, we propose a parallel algorithm called the parallel perceptron. The parallel algorithm can make full use of a multi-core computer, which saves a lot of training time. Based on experiments, we observe that dependency parsing with the parallel perceptron can achieve 8-fold faster training than traditional structured perceptron methods when using 10 threads, with no loss at all in accuracy.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
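A hedged sketch of the lock-free update pattern (Hogwild-style shared weights without synchronization). CPython's GIL means plain threads only illustrate the pattern rather than deliver true parallel speedups; `decode` is a placeholder for a graph-based parser such as Eisner's algorithm, and all names are ours:

```python
import threading
import numpy as np

def train_lockfree(data, decode, w, epochs=3, n_threads=10):
    """Structured perceptron with unsynchronized shared weights. data is a
    list of (sentence, gold_feature_indices); decode(sentence, w) returns the
    feature indices of the current highest-scoring parse."""
    def worker(shard):
        for _ in range(epochs):
            for x, gold in shard:
                pred = decode(x, w)
                if pred != gold:
                    for i in gold:
                        w[i] += 1.0        # no lock: sparse updates rarely collide
                    for i in pred:
                        w[i] -= 1.0
    shards = [data[t::n_threads] for t in range(n_threads)]
    threads = [threading.Thread(target=worker, args=(s,)) for s in shards]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return w
```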
Finite groups with systems of $K$-$\frak{F}$-subnormal subgroups
Let $\frak{F}$ be a class of groups. A subgroup $A$ of a finite group $G$ is said to be $K$-$\mathfrak{F}$-subnormal in $G$ if there is a subgroup chain $$A=A_{0} \leq A_{1} \leq \cdots \leq A_{n}=G$$ such that either $A_{i-1} \trianglelefteq A_{i}$ or $A_{i}/(A_{i-1})_{A_{i}} \in \mathfrak{F}$ for all $i=1, \ldots , n$. A formation $\frak{F}$ is said to be $K$-lattice provided that in every finite group $G$ the set of all its $K$-$\mathfrak{F}$-subnormal subgroups forms a sublattice of the lattice of all subgroups of $G$. In this paper we consider some new applications of the theory of $K$-lattice formations. In particular, we prove the following Theorem A. Let $\mathfrak{F}$ be a hereditary $K$-lattice saturated formation containing all nilpotent groups. (i) If every $\mathfrak{F}$-critical subgroup $H$ of $G$ is $K$-$\mathfrak{F}$-subnormal in $G$ with $H/F(H)\in {\mathfrak{F}}$, then $G/F(G)\in {\mathfrak{F}}$. (ii) If every Schmidt subgroup of $G$ is $K$-$\mathfrak{F}$-subnormal in $G$, then $G/G_{\mathfrak{F}}$ is abelian.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Actions Speak Louder Than Goals: Valuing Player Actions in Soccer
Assessing the impact of the individual actions performed by soccer players during games is a crucial aspect of the player recruitment process. Unfortunately, most traditional metrics fall short in addressing this task as they either focus on rare events like shots and goals alone or fail to account for the context in which the actions occurred. This paper introduces a novel advanced soccer metric for valuing any type of individual player action on the pitch, be it with or without the ball. Our metric values each player action based on its impact on the game outcome while accounting for the circumstances under which the action happened. When applied to on-the-ball actions like passes, dribbles, and shots alone, our metric identifies Argentine forward Lionel Messi, French teenage star Kylian Mbappé, and Belgian winger Eden Hazard as the most effective players during the 2016/2017 season.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Partitioning the Outburst Energy of a Low Eddington Accretion Rate AGN at the Center of an Elliptical Galaxy: the Recent 12 Myr History of the Supermassive Black Hole in M87
M87, the active galaxy at the center of the Virgo cluster, is ideal for studying the interaction of a supermassive black hole (SMBH) with a hot, gas-rich environment. A deep Chandra observation of M87 exhibits an approximately circular shock front (13 kpc radius, in projection) driven by the expansion of the central cavity (filled by the SMBH with relativistic radio-emitting plasma) with projected radius $\sim$1.9 kpc. We combine constraints from X-ray and radio observations of M87 with a shock model to derive the properties of the outburst that created the 13 kpc shock. Principal constraints for the model are 1) the measured Mach number ($M$$\sim$1.2), 2) the radius of the 13 kpc shock, and 3) the observed size of the central cavity/bubble (the radio-bright cocoon) that serves as the piston to drive the shock. We find that an outburst of $\sim$5$\times$$10^{57}$ ergs that began about 12 Myr ago and lasted $\sim$2 Myr matches all the constraints. In this model, $\sim$22% of the energy is carried by the shock as it expands. The remaining $\sim$80% of the outburst energy is available to heat the core gas. More than half the total outburst energy initially goes into the enthalpy of the central bubble, the radio cocoon. As the buoyant bubble rises, much of its energy is transferred to the ambient thermal gas. For an outburst repetition period of about 12 Myr (the age of the outburst), 80% of the outburst energy is sufficient to balance the radiative cooling.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
On the missing link between pressure drop, viscous dissipation, and the turbulent energy spectrum
After decades of experimental, theoretical, and numerical research in fluid dynamics, many aspects of turbulence remain poorly understood. The main reason for this is often attributed to the multiscale nature of turbulent flows, which poses a formidable challenge. There are, however, properties of these flows whose roles and inter-connections have never been clarified fully. In this article, we present a new connection between the pressure drop, viscous dissipation, and the turbulent energy spectrum, which, to the best of our knowledge, has never been established prior to our work. We use this finding to show analytically that viscous dissipation in laminar pipe flows cannot increase the temperature of the fluid, and to also reproduce qualitatively Nikuradse's experimental results involving pressure drops in turbulent flows in rough pipes.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Discrete Local Induction Equation
The local induction equation, or the binormal flow on space curves, is a well-known model of deformation of space curves, as it describes the dynamics of vortex filaments, and the complex curvature is governed by the nonlinear Schrödinger equation. In this paper, we present its discrete analogue, namely, a model of deformation of discrete space curves by the discrete nonlinear Schrödinger equation. We also present explicit formulas for both smooth and discrete curves in terms of tau functions of the two-component KP hierarchy.
Labels: cs=0, phy=1, math=1, stat=0, q-bio=0, q-fin=0
A sharp lower bound for the lifespan of small solutions to the Schrödinger equation with a subcritical power nonlinearity
Let $T_{\epsilon}$ be the lifespan for the solution to the Schrödinger equation on $\mathbb{R}^d$ with a power nonlinearity $\lambda |u|^{2\theta/d}u$ ($\lambda \in \mathbb{C}$, $0<\theta<1$) and the initial data in the form $\epsilon \varphi(x)$. We provide a sharp lower bound estimate for $T_{\epsilon}$ as $\epsilon \to +0$ which can be written explicitly in terms of $\lambda$, $d$, $\theta$, $\varphi$ and $\epsilon$. This is an improvement of the previous result by H. Sasaki [Adv. Diff. Eq. 14 (2009), 1021--1039].
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
State Space Reduction for Reachability Graph of CSM Automata
Classical CTL temporal logics are built over systems with an interleaving model of concurrency. Many attempts have been made to fight the state space explosion problem (for instance, compositional model checking). There are some methods for reducing a state space based on the independence of actions. However, in the CSM model, which is based on coincidences rather than on interleaving, independence of actions cannot be defined. Therefore a state space reduction based on identical temporal consequences, rather than on independence of actions, is proposed. The new reduction is not as effective as for interleaving systems, because all successors of a state (to a depth of two levels) must be obtained before the reduction may be applied. This leads to a reduction of the space required for representing a state space, but not of the time of state space construction. Yet substantial savings may occur in regular state spaces of CSM systems.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Permission Inference for Array Programs
Information about the memory locations accessed by a program is, for instance, required for program parallelisation and program verification. Existing inference techniques for this information provide only partial solutions for the important class of array-manipulating programs. In this paper, we present a static analysis that infers the memory footprint of an array program in terms of permission pre- and postconditions as used, for example, in separation logic. This formulation allows our analysis to handle concurrent programs and produces specifications that can be used by verification tools. Our analysis expresses the permissions required by a loop via maximum expressions over the individual loop iterations. These maximum expressions are then solved by a novel maximum elimination algorithm, in the spirit of quantifier elimination. Our approach is sound and is implemented; an evaluation on existing benchmarks for memory safety of array programs demonstrates accurate results, even for programs with complex access patterns and nested loops.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Generating Query Suggestions to Support Task-Based Search
We address the problem of generating query suggestions to support users in completing their underlying tasks (which motivated them to search in the first place). Given an initial query, these query suggestions should provide a coverage of possible subtasks the user might be looking for. We propose a probabilistic modeling framework that obtains keyphrases from multiple sources and generates query suggestions from these keyphrases. Using the test suites of the TREC Tasks track, we evaluate and analyze each component of our model.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Application of Spin-Exchange Relaxation-Free Magnetometry to the Cosmic Axion Spin Precession Experiment
The Cosmic Axion Spin Precession Experiment (CASPEr) seeks to measure oscillating torques on nuclear spins caused by axion or axion-like-particle (ALP) dark matter via nuclear magnetic resonance (NMR) techniques. A sample spin-polarized along a leading magnetic field experiences a resonance when the Larmor frequency matches the axion/ALP Compton frequency, generating precessing transverse nuclear magnetization. Here we demonstrate a Spin-Exchange Relaxation-Free (SERF) magnetometer with sensitivity $\approx 1~{\rm fT/\sqrt{Hz}}$ and an effective sensing volume of 0.1 $\rm{cm^3}$ that may be useful for NMR detection in CASPEr. A potential drawback of SERF-magnetometer-based NMR detection is the SERF's limited dynamic range. Use of a magnetic flux transformer to suppress the leading magnetic field is considered as a potential method to expand the SERF's dynamic range in order to probe higher axion/ALP Compton frequencies.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Symmetry and the Geometric Phase in Ultracold Hydrogen-Exchange Reactions
Quantum reactive scattering calculations are reported for the ultracold hydrogen-exchange reaction and its non-reactive atom-exchange isotopic counterparts, proceeding from excited rotational states. It is shown that while the geometric phase (GP) does not necessarily control the reaction to all final states, one can always find final states where it does. For the isotopic counterpart reactions these states can be used to make a measurement of the GP effect by separately measuring the even and odd symmetry contributions, which experimentally requires nuclear-spin final-state resolution. This follows from symmetry considerations that make the even and odd identical-particle exchange symmetry wavefunctions which include the GP locally equivalent to the opposite symmetry wavefunctions which do not. This equivalence reflects the important role discrete symmetries play in ultracold chemistry generally and highlights the key role ultracold reactions can play in understanding fundamental aspects of chemical reactivity.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Characterization and Photometric Performance of the Hyper Suprime-Cam Software Pipeline
The Subaru Strategic Program (SSP) is an ambitious multi-band survey using the Hyper Suprime-Cam (HSC) on the Subaru telescope. The Wide layer of the SSP is both wide and deep, reaching a detection limit of i~26.0 mag. At these depths, it is challenging to achieve accurate, unbiased, and consistent photometry across all five bands. The HSC data are reduced using a pipeline that builds on the prototype pipeline for the Large Synoptic Survey Telescope. We have developed a Python-based, flexible framework to inject synthetic galaxies into real HSC images, called SynPipe. Here we explain the design and implementation of SynPipe and generate a sample of synthetic galaxies to examine the photometric performance of the HSC pipeline. For stars, we achieve 1% photometric precision at i~19.0 mag and 6% precision at i~25.0 mag in the i-band. For synthetic galaxies with single-Sersic profiles, forced CModel photometry achieves 13% photometric precision at i~20.0 mag and 18% precision at i~25.0 mag in the i-band. We show that both forced PSF and CModel photometry yield unbiased color estimates that are robust to seeing conditions. We identify several caveats that apply to the version of the HSC pipeline used for the first public HSC data release (DR1) that need to be taken into consideration. First, the degree to which an object is blended with other objects impacts the overall photometric performance. This is especially true for point sources. Highly blended objects tend to have larger photometric uncertainties, systematically underestimated fluxes, and slightly biased colors. Second, >20% of stars at 22.5< i < 25.0 mag can be misclassified as extended objects. Third, the current CModel algorithm tends to strongly underestimate the half-light radius and ellipticity of galaxies with i>21.5 mag.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Information Geometry Approach to Parameter Estimation in Hidden Markov Models
We consider the estimation of a hidden Markovian process by using information geometry with respect to transition matrices. We consider the case when we use only the histogram of $k$-memory data. Firstly, we focus on a partial observation model with a Markovian process and show that the asymptotic estimation error of this model is given as the inverse of the projective Fisher information of transition matrices. Next, we apply this result to the estimation of hidden Markovian processes. We carefully discuss the equivalence problem for hidden Markovian processes on the tangent space. Then, we propose a novel method to estimate hidden Markovian processes.
Labels: cs=0, phy=0, math=1, stat=1, q-bio=0, q-fin=0
Parallel transport in principal 2-bundles
A nice differential-geometric framework for (non-abelian) higher gauge theory is provided by principal 2-bundles, i.e. categorified principal bundles. Their total spaces are Lie groupoids, local trivializations are kinds of Morita equivalences, and connections are Lie-2-algebra-valued 1-forms. In this article, we construct explicitly the parallel transport of a connection on a principal 2-bundle. Parallel transport along a path is a Morita equivalence between the fibres over the end points, and parallel transport along a surface is an intertwiner between Morita equivalences. We prove that our constructions fit into the general axiomatic framework for categorified parallel transport and surface holonomy.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Gotta Learn Fast: A New Benchmark for Generalization in RL
In this report, we present a new reinforcement learning (RL) benchmark based on the Sonic the Hedgehog (TM) video game franchise. This benchmark is intended to measure the performance of transfer learning and few-shot learning algorithms in the RL domain. We also present and evaluate some baseline algorithms on the new benchmark.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Generative Adversarial Networks recover features in astrophysical images of galaxies beyond the deconvolution limit
Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon-Nyquist sampling theorem. Here we train a generative adversarial network (GAN) on a sample of $4,550$ images of nearby galaxies at $0.01<z<0.02$ from the Sloan Digital Sky Survey and conduct $10\times$ cross validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance which far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low-signal-to-noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Sky Telescope (LSST) and the Hubble and James Webb space telescopes.
Labels: cs=0, phy=1, math=0, stat=1, q-bio=0, q-fin=0
Trends in European flood risk over the past 150 years
Flood risk changes in time and is influenced by both natural and socio-economic trends and interactions. In Europe, previous studies of historical flood losses corrected for demographic and economic growth ("normalized") have been limited in temporal and spatial extent, leading to an incomplete representation of trends in losses over time. In this study we utilize a gridded reconstruction of flood exposure in 37 European countries and a new database of damaging floods since 1870. Our results indicate that since 1870 there has been an increase in annually inundated area and number of persons affected, contrasted by a substantial decrease in flood fatalities, after correcting for change in flood exposure. For more recent decades we also found a considerable decline in financial losses per year. We estimate, however, that there is large underreporting of smaller floods beyond the most recent years, and show that underreporting has a substantial impact on observed trends.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Kinetic Trans-assembly of DNA Nanostructures
The central dogma of molecular biology is the principal framework for understanding how nucleic acid information is propagated and used by living systems to create complex biomolecules. Here, by integrating the structural and dynamic paradigms of DNA nanotechnology, we present a rationally designed synthetic platform which functions in an analogous manner to create complex DNA nanostructures. Starting from one type of DNA nanostructure, DNA strand displacement circuits were designed to interact and pass along the information encoded in the initial structure to mediate the self-assembly of a different type of structure, the final output structure depending on the type of circuit triggered. Using this concept of a DNA structure "trans-assembling" a different DNA structure through non-local strand displacement circuitry, four different schemes were implemented. Specifically, 1D ladder and 2D double-crossover (DX) lattices were designed to kinetically trigger DNA circuits to activate polymerization of either ring structures or another type of DX lattice under enzyme-free, isothermal conditions. In each scheme, the desired multilayer reaction pathway was activated, among multiple possible pathways, ultimately leading to the downstream self-assembly of the correct output structure.
Labels: cs=0, phy=0, math=0, stat=0, q-bio=1, q-fin=0
Synthesis and analysis in total variation regularization
We generalize the bridge between analysis and synthesis estimators by Elad, Milanfar and Rubinstein (2007) to rank deficient cases. This is a starting point for the study of the connection between analysis and synthesis for total variation regularized estimators. In particular, the case of first order total variation regularized estimators over general graphs and their synthesis form are studied. We give a definition of the discrete graph derivative operator based on the notion of line graph and provide examples of the synthesis form of $k^{\text{th}}$ order total variation regularized estimators over a range of graphs.
Labels: cs=0, phy=0, math=1, stat=1, q-bio=0, q-fin=0
The Leray transform: factorization, dual $CR$ structures and model hypersurfaces in $\mathbb{C}\mathbb{P}^2$
We compute the exact norms of the Leray transforms for a family $\mathcal{S}_{\beta}$ of unbounded hypersurfaces in two complex dimensions. The $\mathcal{S}_{\beta}$ generalize the Heisenberg group, and provide local projective approximations to any smooth, strongly $\mathbb{C}$-convex hypersurface $\mathcal{S}$ to two orders of tangency. This work is then examined in the context of projective dual $CR$-structures and the corresponding pair of canonical dual Hardy spaces associated to $\mathcal{S}_{\beta}$, leading to a universal description of the Leray transform and a factorization of the transform through orthogonal projection onto the conjugate dual Hardy space.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
A Fast Quantum-safe Asymmetric Cryptosystem Using Extra Superincreasing Sequences
This paper gives the definitions of an extra superincreasing sequence and an anomalous subset sum, and proposes a fast quantum-safe asymmetric cryptosystem called JUOAN2. The new cryptosystem is based on an additive multivariate permutation problem (AMPP) and an anomalous subset sum problem (ASSP), which parallel a multivariate polynomial problem and a shortest vector problem respectively, and is composed of a key generator, an encryption algorithm, and a decryption algorithm. The authors analyze the security of the new cryptosystem against the Shamir minima accumulation point attack and the LLL lattice basis reduction attack, and prove it to be semantically secure (namely IND-CPA) on the assumption that AMPP and ASSP have no subexponential time solutions. In particular, the analysis shows that the new cryptosystem has the potential to be resistant to quantum computing attacks, and is especially suitable for secret communication between two mobile terminals in maneuvering field operations in any weather. Finally, an example illustrating the correctness of the new cryptosystem is given.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Knowledge Transfer for Melanoma Screening with Deep Learning
Knowledge transfer impacts the performance of deep learning -- the state of the art for image classification tasks, including automated melanoma screening. Deep learning's greed for large amounts of training data poses a challenge for medical tasks, which we can alleviate by recycling knowledge from models trained on different tasks, in a scheme called transfer learning. Although much of the best art on automated melanoma screening employs some form of transfer learning, a systematic evaluation was missing. Here we investigate the presence of transfer, from which task the transfer is sourced, and the application of fine tuning (i.e., retraining of the deep learning model after transfer). We also test the impact of picking deeper (and more expensive) models. Our results favor deeper models, pre-trained over ImageNet, with fine-tuning, reaching an AUC of 80.7% and 84.5% for the two skin-lesion datasets evaluated.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Large odd order character sums and improvements of the Pólya-Vinogradov inequality
For a primitive Dirichlet character $\chi$ modulo $q$, we define $M(\chi)=\max_{t } |\sum_{n \leq t} \chi(n)|$. In this paper, we study this quantity for characters of a fixed odd order $g\geq 3$. Our main result provides a further improvement of the classical Pólya-Vinogradov inequality in this case. More specifically, we show that for any such character $\chi$ we have $$M(\chi)\ll_{\varepsilon} \sqrt{q}(\log q)^{1-\delta_g}(\log\log q)^{-1/4+\varepsilon},$$ where $\delta_g := 1-\frac{g}{\pi}\sin(\pi/g)$. This improves upon the works of Granville and Soundararajan and of Goldmakher. Furthermore, assuming the Generalized Riemann hypothesis (GRH) we prove that $$ M(\chi) \ll \sqrt{q} \left(\log_2 q\right)^{1-\delta_g} \left(\log_3 q\right)^{-\frac{1}{4}}\left(\log_4 q\right)^{O(1)}, $$ where $\log_j$ is the $j$-th iterated logarithm. We also show unconditionally that this bound is best possible (up to a power of $\log_4 q$). One of the key ingredients in the proof of the upper bounds is a new Halász-type inequality for logarithmic mean values of completely multiplicative functions, which might be of independent interest.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Estimation under group actions: recovering orbits from invariants
Motivated by geometric problems in signal processing, computer vision, and structural biology, we study a class of orbit recovery problems where we observe very noisy copies of an unknown signal, each acted upon by a random element of some group (such as Z/p or SO(3)). The goal is to recover the orbit of the signal under the group action in the high-noise regime. This generalizes problems of interest such as multi-reference alignment (MRA) and the reconstruction problem in cryo-electron microscopy (cryo-EM). We obtain matching lower and upper bounds on the sample complexity of these problems in high generality, showing that the statistical difficulty is intricately determined by the invariant theory of the underlying symmetry group. In particular, we determine that for cryo-EM with noise variance $\sigma^2$ and uniform viewing directions, the number of samples required scales as $\sigma^6$. We match this bound with a novel algorithm for ab initio reconstruction in cryo-EM, based on invariant features of degree at most 3. We further discuss how to recover multiple molecular structures from heterogeneous cryo-EM samples.
Labels: cs=1, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Crystal field excitations from $\mathrm{Yb^{3+}}$ ions at defective sites in highly stuffed $\rm Yb_2Ti_2O_7$
The pyrochlore magnet $\rm Yb_2Ti_2O_7$ has been proposed as a quantum spin ice candidate, a spin liquid state expected to display emergent quantum electrodynamics with gauge photons among its elementary excitations. However, $\rm Yb_2Ti_2O_7$'s ground state is known to be very sensitive to its precise stoichiometry. Powder samples, produced by solid state synthesis at relatively low temperatures, tend to be stoichiometric, while single crystals grown from the melt tend to display weak "stuffing" wherein $\mathrm{\sim 2\%}$ of the $\mathrm{Yb^{3+}}$, normally at the $A$ site of the $A_2B_2O_7$ pyrochlore structure, reside as well at the $B$ site. In such samples $\mathrm{Yb^{3+}}$ ions should exist in defective environments at low levels, and be subjected to crystalline electric fields (CEFs) very different from those at the stoichiometric $A$ sites. New neutron scattering measurements of $\mathrm{Yb^{3+}}$ in four compositions of $\rm Yb_{2+x}Ti_{2-x}O_{7-y}$, show the spectroscopic signatures for these defective $\mathrm{Yb^{3+}}$ ions and explicitly demonstrate that the spin anisotropy of the $\mathrm{Yb^{3+}}$ moment changes from XY-like for stoichiometric $\mathrm{Yb^{3+}}$, to Ising-like for "stuffed" $B$ site $\mathrm{Yb^{3+}}$, or for $A$ site $\mathrm{Yb^{3+}}$ in the presence of an oxygen vacancy.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
HOUDINI: Lifelong Learning as Program Synthesis
We present a neurosymbolic framework for the lifelong learning of algorithmic tasks that mix perception and procedural reasoning. Reusing high-level concepts across domains and learning complex procedures are key challenges in lifelong learning. We show that a program synthesis approach that combines gradient descent with combinatorial search over programs can be a more effective response to these challenges than purely neural methods. Our framework, called HOUDINI, represents neural networks as strongly typed, differentiable functional programs that use symbolic higher-order combinators to compose a library of neural functions. Our learning algorithm consists of: (1) a symbolic program synthesizer that performs a type-directed search over parameterized programs, and decides on the library functions to reuse, and the architectures to combine them, while learning a sequence of tasks; and (2) a neural module that trains these programs using stochastic gradient descent. We evaluate HOUDINI on three benchmarks that combine perception with the algorithmic tasks of counting, summing, and shortest-path computation. Our experiments show that HOUDINI transfers high-level concepts more effectively than traditional transfer learning and progressive neural networks, and that the typed representation of networks significantly accelerates the search.
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Detecting Adversarial Examples via Key-based Network
Though deep neural networks have achieved state-of-the-art performance in visual classification, recent studies have shown that they are all vulnerable to the attack of adversarial examples. Small and often imperceptible perturbations to the input images are sufficient to fool the most powerful deep neural networks. Various defense methods have been proposed to address this issue. However, they either require knowledge of the process for generating adversarial examples, or are not robust against new attacks specifically designed to penetrate the existing defense. In this work, we introduce the key-based network, a new detection-based defense mechanism that distinguishes adversarial examples from normal ones based on error-correcting output codes: the binary code vectors produced by multiple binary classifiers applied to randomly chosen label-sets serve as signatures to match normal images and reject adversarial examples. In contrast to existing defense methods, the proposed method does not require knowledge of the process for generating adversarial examples and can be applied to defend against different types of attacks. For the practical black-box and gray-box scenarios, where the attacker does not know the encoding scheme, we show empirically that the key-based network can effectively detect adversarial examples generated by several state-of-the-art attacks.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
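A hedged sketch of signature matching with error-correcting output codes as described above (the codebook, the threshold `tau`, and the thresholding of classifier outputs are ours; the paper's exact matching rule may differ):

```python
import numpy as np

def ecoc_detect(bit_probs, codebook, tau=2):
    """Compare the signature produced by B binary classifiers against the
    nearest class codeword; flag inputs whose Hamming distance exceeds tau
    as adversarial. bit_probs: (B,) classifier outputs in [0, 1];
    codebook: (C, B) matrix of class codewords in {0, 1}."""
    sig = (bit_probs > 0.5).astype(int)
    dists = np.abs(codebook - sig).sum(axis=1)   # Hamming distance per class
    cls = int(dists.argmin())
    if dists[cls] > tau:                         # no codeword is close enough
        return "adversarial", None
    return "clean", cls
```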
Guessing Attacks on Distributed-Storage Systems
The secrecy of a distributed-storage system for passwords is studied. The encoder, Alice, observes a length-n password and describes it using two hints, which she stores in different locations. The legitimate receiver, Bob, observes both hints. In one scenario the requirement is that the expected number of guesses it takes Bob to guess the password approach one as n tends to infinity, and in the other that the expected size of the shortest list that Bob must form to guarantee that it contain the password approach one. The eavesdropper, Eve, sees only one of the hints. Assuming that Alice cannot control which hints Eve observes, the largest normalized (by n) exponent that can be guaranteed for the expected number of guesses it takes Eve to guess the password is characterized for each scenario. Key to the proof are new results on Arikan's guessing and Bunte and Lapidoth's task-encoding problem; in particular, the paper establishes a close relation between the two problems. A rate-distortion version of the model is also discussed, as is a generalization that allows for Alice to produce {\delta} (not necessarily two) hints, for Bob to observe {\nu} (not necessarily two) of the hints, and for Eve to observe {\eta} (not necessarily one) of the hints. The generalized model is robust against {\delta} - {\nu} disk failures.
Labels: cs=1, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Numerical analysis of nonlocal fracture models in Hölder space
In this work, we calculate the convergence rate of the finite difference approximation for a class of nonlocal fracture models. We consider two-point force interactions characterized by a double well potential. We show the existence of an evolving displacement field in Hölder space with Hölder exponent $\gamma \in (0,1]$. The rate of convergence of the finite difference approximation depends on the factor $C_s h^\gamma/\epsilon^2$, where $\epsilon$ gives the length scale of nonlocal interaction, $h$ is the discretization length, and $C_s$ is the maximum of the Hölder norm of the solution and its second derivatives during the evolution. It is shown that the rate of convergence holds for both the forward Euler scheme as well as general single step implicit schemes. A stability result is established for the semi-discrete approximation. The Hölder continuous evolutions are seen to converge to a brittle fracture evolution in the limit of vanishing nonlocality.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Iterative Collaborative Filtering for Sparse Matrix Estimation
The sparse matrix estimation problem consists of estimating the distribution of an $n\times n$ matrix $Y$, from a sparsely observed single instance of this matrix where the entries of $Y$ are independent random variables. This captures a wide array of problems; special instances include matrix completion in the context of recommendation systems, graphon estimation, and community detection in (mixed membership) stochastic block models. Inspired by classical collaborative filtering for recommendation systems, we propose a novel iterative, collaborative filtering-style algorithm for matrix estimation in this generic setting. We show that the mean squared error (MSE) of our estimator converges to $0$ at the rate of $O(d^2 (pn)^{-2/5})$ as long as $\omega(d^5 n)$ random entries from a total of $n^2$ entries of $Y$ are observed (uniformly sampled), $\mathbb{E}[Y]$ has rank $d$, and the entries of $Y$ have bounded support. The maximum squared error across all entries converges to $0$ with high probability as long as we observe a little more, $\Omega(d^5 n \ln^2(n))$ entries. Our results are the best known sample complexity results in this generality.
Labels: cs=0, phy=0, math=1, stat=1, q-bio=0, q-fin=0
Parameter Estimation in Finite Mixture Models by Regularized Optimal Transport: A Unified Framework for Hard and Soft Clustering
In this short paper, we formulate parameter estimation for finite mixture models in the context of discrete optimal transportation with convex regularization. The proposed framework unifies hard and soft clustering methods for general mixture models. It also generalizes the celebrated $k$-means and expectation-maximization algorithms in relation to associated Bregman divergences when applied to exponential family mixture models.
1
0
0
1
0
0
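To make the hard/soft unification in the entry above concrete, here is a minimal NumPy sketch (the names `soft_assign` and `fit_mixture` are ours, not the paper's): the regularization strength `eps` interpolates between Lloyd's $k$-means (`eps = 0`, one-hot assignments) and EM-style responsibilities for an isotropic Gaussian mixture (`eps > 0`). This illustrates the unification only, not the paper's algorithm.

```python
import numpy as np

def soft_assign(X, centers, eps):
    # Squared-distance cost between every point and every center.
    C = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    if eps == 0.0:
        # Hard (k-means-style) assignment: one-hot rows.
        P = np.zeros_like(C)
        P[np.arange(len(X)), C.argmin(axis=1)] = 1.0
        return P
    # Entropic regularization: softmax responsibilities, matching EM
    # for an isotropic Gaussian mixture whose variance scales with eps.
    logits = -C / eps
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)

def fit_mixture(X, k, eps, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        P = soft_assign(X, centers, eps)
        centers = (P.T @ X) / (P.sum(axis=0)[:, None] + 1e-12)
    return centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(4, 0.5, (100, 2))])
print(fit_mixture(X, k=2, eps=0.0))  # hard limit: Lloyd's k-means
print(fit_mixture(X, k=2, eps=0.5))  # soft: EM-style responsibilities
```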
Robust parameter determination in epidemic models with analytical descriptions of uncertainties
Compartmental equations are primary tools in disease spreading studies. Their predictions are accurate for large populations but disagree with empirical and simulated data for finite populations, where uncertainties become a relevant factor. Starting from the agent-based approach, we investigate the role of uncertainties and autocorrelation functions in the SIS epidemic model, including their relationship with epidemiological variables. We find new differential equations that take uncertainties into account. The findings provide improved predictions for the SIS model and can offer new insights into emerging diseases.
0
0
0
0
1
0
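A minimal stochastic, well-mixed SIS simulation of the kind underlying such agent-based studies is sketched below; the parameter values and function name are hypothetical, and the paper's improved differential equations are not reproduced. Repeating runs exposes the finite-population fluctuations that deterministic compartmental equations miss.

```python
import numpy as np

def sis_run(N, beta, mu, i0, steps, rng):
    # Well-mixed stochastic SIS: each step, every infected agent recovers
    # with probability mu; each susceptible agent is infected with
    # probability 1 - (1 - beta/N)**I (independent contacts with all I).
    I = i0
    traj = [I / N]
    for _ in range(steps):
        p_inf = 1.0 - (1.0 - beta / N) ** I
        new_inf = rng.binomial(N - I, p_inf)
        recovered = rng.binomial(I, mu)
        I = I + new_inf - recovered
        traj.append(I / N)
    return np.array(traj)

rng = np.random.default_rng(1)
runs = np.stack([sis_run(N=200, beta=0.3, mu=0.1, i0=10, steps=300, rng=rng)
                 for _ in range(500)])
print("mean final prevalence:", runs[:, -1].mean())
print("std (finite-size fluctuation):", runs[:, -1].std())
```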
Direct observation of domain wall surface tension by deflating or inflating a magnetic bubble
The surface energy of a magnetic Domain Wall (DW) strongly affects its static and dynamic behaviours. However, this effect was seldom directly observed and many related phenomena have not been well understood. Moreover, a reliable method to quantify the DW surface energy is still missing. Here, we report a series of experiments in which the DW surface energy becomes a dominant parameter. We observed that a semicircular magnetic domain bubble could spontaneously collapse under the Laplace pressure induced by DW surface energy. We further demonstrated that the surface energy could lead to a geometrically induced pinning when the DW propagates in a Hall cross or from a nanowire into a nucleation pad. Based on these observations, we developed two methods to quantify the DW surface energy, which could be very helpful to estimate intrinsic parameters such as Dzyaloshinskii-Moriya Interactions (DMI) or exchange stiffness in magnetic ultra-thin films.
0
1
0
0
0
0
Unified Halo-Independent Formalism From Convex Hulls for Direct Dark Matter Searches
Using the Fenchel-Eggleston theorem for convex hulls (an extension of the Caratheodory theorem), we prove that any likelihood can be maximized by either (1) a dark matter speed distribution $F(v)$ in Earth's frame or (2) a Galactic velocity distribution $f^{\rm gal}(\vec{u})$, consisting of a sum of delta functions. The former case applies only to time-averaged rate measurements and the maximum number of delta functions is $({\mathcal N}-1)$, where ${\mathcal N}$ is the total number of data entries. The second case applies to any harmonic expansion coefficient of the time-dependent rate and the maximum number of terms is ${\mathcal N}$. Using time-averaged rates, the aforementioned form of $F(v)$ results in a piecewise constant unmodulated halo function $\tilde\eta^0_{BF}(v_{\rm min})$ (which is an integral of the speed distribution) with at most $({\mathcal N}-1)$ downward steps. The authors had previously proven this result for likelihoods comprised of at least one extended likelihood, and found the best-fit halo function to be unique. This uniqueness, however, cannot be guaranteed in the more general analysis applied to arbitrary likelihoods. Thus we introduce a method for determining whether there exists a unique best-fit halo function, and provide a procedure for constructing either a pointwise confidence band, if the best-fit halo function is unique, or a degeneracy band, if it is not. Using measurements of modulation amplitudes, the aforementioned form of $f^{\rm gal}(\vec{u})$, which is a sum of Galactic streams, yields a periodic time-dependent halo function $\tilde\eta_{BF}(v_{\rm min}, t)$ which at any fixed time is a piecewise constant function of $v_{\rm min}$ with at most ${\mathcal N}$ downward steps. In this case, we explain how to construct pointwise confidence and degeneracy bands from the time-averaged halo function. Finally, we show that requiring an isotropic ...
0
1
0
0
0
0
Finite size effects for spiking neural networks with spatially dependent coupling
We study finite-size fluctuations in a network of spiking deterministic neurons coupled with non-uniform synaptic coupling. We generalize a previously developed theory of finite size effects for uniform globally coupled neurons. In the uniform case, mean field theory is well defined by averaging over the network as the number of neurons in the network goes to infinity. However, for nonuniform coupling it is no longer possible to average over the entire network if we are interested in fluctuations at a particular location within the network. We show that if the coupling function approaches a continuous function in the infinite system size limit then an average over a local neighborhood can be defined such that mean field theory is well defined for a spatially dependent field. We then derive a perturbation expansion in the inverse system size around the mean field limit for the covariance of the input to a neuron (synaptic drive) and firing rate fluctuations due to dynamical deterministic finite-size effects.
0
0
0
0
1
0
Shape and fission instabilities of ferrofluids in non-uniform magnetic fields
We study static distributions of ferrofluid submitted to non-uniform magnetic fields. We show how the normal-field instability is modified in the presence of a weak magnetic field gradient. Then we consider a ferrofluid droplet and show how the gradient affects its shape. A rich phase-transition phenomenology is found. We also investigate the creation of droplets by successive splits when a magnet is vertically approached from below and derive theoretical expressions which are solved numerically to obtain the number of droplets and their aspect ratio as functions of the field configuration. A quantitative comparison is performed with previous experimental results, as well as with our own experiments, and yields good agreement with the theoretical modeling.
0
1
0
0
0
0
Encrypted accelerated least squares regression
Information that is stored in an encrypted format is, by definition, usually not amenable to statistical analysis or machine learning methods. In this paper we present detailed analysis of coordinate and accelerated gradient descent algorithms which are capable of fitting least squares and penalised ridge regression models, using data encrypted under a fully homomorphic encryption scheme. Gradient descent is shown to dominate in terms of encrypted computational speed, and theoretical results are proven to give parameter bounds which ensure correctness of decryption. The characteristics of encrypted computation are empirically shown to favour a non-standard acceleration technique. This demonstrates the possibility of approximating conventional statistical regression methods using encrypted data without compromising privacy.
1
0
0
1
0
0
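The constraint driving that analysis is that fully homomorphic schemes support only additions and multiplications, with no data-dependent branching or comparisons. The plaintext sketch below mimics that arithmetic for ridge regression via gradient descent with a fixed iteration count; it is our illustration, not the paper's implementation, and it omits both the non-standard acceleration technique and the encryption itself.

```python
import numpy as np

def ridge_gd(X, y, lam, eta, iters):
    # Only additions and multiplications are used, and the iteration
    # count is fixed in advance: encrypted values cannot be compared,
    # so no data-dependent stopping rule is possible.
    n, d = X.shape
    XtX = X.T @ X / n   # sufficient statistics, computable once
    Xty = X.T @ y / n
    beta = np.zeros(d)
    for _ in range(iters):
        grad = XtX @ beta - Xty + lam * beta
        beta = beta - eta * grad
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta_true + 0.1 * rng.normal(size=100)
print(ridge_gd(X, y, lam=0.1, eta=0.1, iters=200))
```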
Unified Model of Chaotic Inflation and Dynamical Supersymmetry Breaking
The large hierarchy between the Planck scale and the weak scale can be explained by the dynamical breaking of supersymmetry in strongly coupled gauge theories. Similarly, the hierarchy between the Planck scale and the energy scale of inflation may also originate from strong dynamics, which dynamically generate the inflaton potential. We present a model of the hidden sector which unifies these two ideas, i.e., in which the scales of inflation and supersymmetry breaking are provided by the dynamics of the same gauge group. The resultant inflation model is chaotic inflation with a fractional power-law potential in accord with the upper bound on the tensor-to-scalar ratio. The supersymmetry breaking scale can be much smaller than the inflation scale, so that the solution to the large hierarchy problem of the weak scale remains intact. As an intrinsic feature of our model, we find that the sgoldstino, which might disturb the inflationary dynamics, is automatically stabilized during inflation by dynamically generated corrections in the strongly coupled sector. This renders our model a field-theoretical realization of what is sometimes referred to as sgoldstino-less inflation.
0
1
0
0
0
0
Tuplemax Loss for Language Identification
In many language identification scenarios, the user specifies a small set of languages he or she can speak, rather than drawing from a large set of all possible languages. We want to model such prior knowledge into the way we train our neural networks, by replacing the commonly used softmax loss function with a novel loss function named tuplemax loss. In fact, in a typical language identification system launched in North America, about 95% of users speak no more than two languages. Using the tuplemax loss, our system achieved a 2.33% error rate, which is a relative 39.4% improvement over the 3.85% error rate of the standard softmax loss method.
1
0
0
0
0
0
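One plausible reading of the tuplemax idea, sketched below under our own assumptions (the paper's exact formula may differ), is a pairwise logistic loss between the target language and each other language in the user's declared tuple, ignoring all languages outside the tuple.

```python
import numpy as np

def tuplemax_loss(scores, target, tuple_idx):
    # scores: logits over all supported languages.
    # tuple_idx: the small set of languages this user can speak
    # (it includes the target); languages outside it are ignored.
    s_y = scores[target]
    others = [k for k in tuple_idx if k != target]
    # Average pairwise logistic loss against each non-target language.
    return float(np.mean([np.log1p(np.exp(scores[k] - s_y)) for k in others]))

scores = np.array([2.0, 1.5, -0.3, 0.8])  # logits for 4 languages
print(tuplemax_loss(scores, target=0, tuple_idx=[0, 1]))  # bilingual user
```

A softmax loss, by contrast, would also penalize the scores of languages the user never speaks, which is exactly the mismatch the tuplemax formulation removes.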
Sparse Data Driven Mesh Deformation
Example-based mesh deformation methods are powerful tools for realistic shape editing. However, existing techniques typically combine all the example deformation modes, which can lead to overfitting, i.e. using an overly complicated model to explain the user-specified deformation. This leads to implausible or unstable deformation results, including unexpected global changes outside the region of interest. To address this fundamental limitation, we propose a sparse blending method that automatically selects a smaller number of deformation modes to compactly describe the desired deformation. This, along with a suitably chosen deformation basis that includes spatially localized deformation modes, leads to significant advantages, including more meaningful, reliable, and efficient deformations, because fewer, localized deformation modes are applied. To cope with large rotations, we develop a simple but effective representation based on polar decomposition of deformation gradients, which resolves the ambiguity of large global rotations using an as-consistent-as-possible global optimization. This simple representation has a closed-form solution for derivatives, making it efficient for sparse localized representation and thus ensuring interactive performance. Experimental results show that our method outperforms state-of-the-art data-driven mesh deformation methods, in both quality of results and efficiency.
1
0
0
0
0
0
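The polar decomposition of a deformation gradient mentioned above can be computed from the SVD; the sketch below is the standard construction (not the paper's code), with a determinant fix so the rotation factor is a proper rotation rather than a reflection.

```python
import numpy as np

def polar_decompose(F):
    # Split F into a rotation R and a symmetric stretch S with F = R @ S.
    U, sigma, Vt = np.linalg.svd(F)
    R = U @ Vt
    if np.linalg.det(R) < 0.0:
        # Flip the last column of U (and the matching singular value)
        # so that det(R) = +1 and R @ S still reproduces F exactly.
        U[:, -1] *= -1.0
        sigma[-1] *= -1.0
        R = U @ Vt
    S = Vt.T @ np.diag(sigma) @ Vt
    return R, S

F = np.array([[1.2, 0.3, 0.0],
              [-0.1, 0.9, 0.2],
              [0.0, 0.1, 1.1]])
R, S = polar_decompose(F)
print(np.allclose(R @ S, F), np.isclose(np.linalg.det(R), 1.0))
```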
Short Presburger arithmetic is hard
We study the computational complexity of short sentences in Presburger arithmetic (Short-PA). Here by "short" we mean sentences with a bounded number of variables, quantifiers, inequalities and Boolean operations; the input consists only of the integer coefficients involved in the linear inequalities. We prove that satisfiability of Short-PA sentences with $m+2$ alternating quantifiers is $\Sigma_{P}^m$-complete or $\Pi_{P}^m$-complete, when the first quantifier is $\exists$ or $\forall$, respectively. Counting versions and restricted systems are also analyzed. Further applications are given to the hardness of two natural problems in integer optimization.
1
0
1
0
0
0
The Bias of the Log Power Spectrum for Discrete Surveys
A primary goal of galaxy surveys is to tighten constraints on cosmological parameters, and the power spectrum $P(k)$ is the standard means of doing so. However, at translinear scales $P(k)$ is blind to much of these surveys' information---information which the log density power spectrum recovers. For discrete fields (such as the galaxy density), $A^*$ denotes the statistic analogous to the log density: $A^*$ is a "sufficient statistic" in that its power spectrum (and mean) capture virtually all of a discrete survey's information. However, the power spectrum of $A^*$ is biased with respect to the corresponding log spectrum for continuous fields, and to use $P_{A^*}(k)$ to constrain the values of cosmological parameters, we require some means of predicting this bias. Here we present a prescription for doing so; for Euclid-like surveys (with cubical cells 16$h^{-1}$ Mpc across) our bias prescription's error is less than 3 per cent. This prediction will facilitate optimal utilization of the information in future galaxy surveys.
0
1
0
0
0
0
Consistent nonparametric change point detection combining CUSUM and marked empirical processes
A weakly dependent time series regression model with multivariate covariates and univariate observations is considered, for which we develop a procedure to detect whether the nonparametric conditional mean function is stable in time against change point alternatives. Our proposal is based on a modified CUSUM type test procedure, which uses a sequential marked empirical process of residuals. We show weak convergence of the considered process to a centered Gaussian process under the null hypothesis of no change in the mean function and a stationarity assumption. This requires some sophisticated arguments for sequential empirical processes of weakly dependent variables. As a consequence we obtain convergence of Kolmogorov-Smirnov and Cramér-von Mises type test statistics. The proposed procedure has a very simple limiting distribution and nice consistency properties, features that related tests lack. We moreover suggest a bootstrap version of the procedure and discuss its applicability in the case of unstable variances.
0
0
1
1
0
0
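A toy version of the statistic can be sketched as follows, assuming a univariate covariate and using a Nadaraya-Watson estimate as a stand-in for the paper's nonparametric mean: residuals enter a sequential marked empirical process, and a Kolmogorov-Smirnov-type statistic takes the supremum over time and over the marks.

```python
import numpy as np

def cusum_marked(X, Y, bandwidth=0.5):
    # Nadaraya-Watson residuals, then the sequential marked process
    # T(s, x) = n^{-1/2} * sum_{t <= ns} e_t * 1{X_t <= x},
    # evaluated at all observed time points s and marks x = X_j.
    n = len(Y)
    K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / bandwidth) ** 2)
    resid = Y - (K @ Y) / K.sum(axis=1)
    ind = X[None, :] <= X[:, None]              # ind[j, t] = 1{X_t <= X_j}
    partial = np.cumsum(resid[None, :] * ind, axis=1) / np.sqrt(n)
    return np.abs(partial).max()                # KS-type supremum

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 300)
Y = np.sin(2 * X) + 0.2 * rng.normal(size=300)
Y[150:] += 0.8 * (X[150:] > 0)                  # mean change halfway through
print("KS-type statistic:", cusum_marked(X, Y))
```

Critical values would come from the Gaussian limit or, as the entry suggests, from a bootstrap; neither is reproduced here.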
Nonlinear electric field effect on perpendicular magnetic anisotropy in Fe/MgO interfaces
The electric field effect on magnetic anisotropy was studied in an ultrathin Fe(001) monocrystalline layer sandwiched between Cr buffer and MgO tunnel barrier layers, mainly through post-annealing temperature and measurement temperature dependences. A large coefficient of the electric field effect of more than 200 fJ/Vm was observed in the negative range of electric field, as well as an areal energy density of perpendicular magnetic anisotropy (PMA) of around 600 $\mu$J/m$^2$. More interestingly, nonlinear behavior, giving rise to a local minimum around +100 mV/nm, was observed in the electric field dependence of magnetic anisotropy, being independent of the post-annealing and measurement temperatures. The insensitivity to both the interface conditions and the temperature of the system suggests that the nonlinear behavior is attributed to an intrinsic origin such as an inherent electronic structure in the Fe/MgO interface. The present study can contribute to the progress in theoretical studies, such as ab initio calculations, on the mechanism of the electric field effect on PMA.
0
1
0
0
0
0
Local Symmetry and Global Structure in Adaptive Voter Models
"Coevolving" or "adaptive" voter models (AVMs) are natural systems for modeling the emergence of mesoscopic structure from local networked processes driven by conflict and homophily. Because of this, many methods for approximating the long-run behavior of AVMs have been proposed over the last decade. However, most such methods are either restricted in scope, expensive in computation, or inaccurate in predicting important statistics. In this work, we develop a novel, second-order moment closure approximation method for studying the equilibrium mesoscopic structure of AVMs and apply it to binary-state rewire-to-random and rewire-to-same model variants with random state-switching. This framework exploits an asymmetry in voting events that enables us to derive analytic approximations for the fast-timescale dynamics. The resulting numerical approximations enable the computation of key properties of the model behavior, such as the location of the fragmentation transition and the equilibrium active edge density, across the entire range of state densities. Numerically, they are nearly exact for the rewire-to-random model, and competitive with other current approaches for the rewire-to-same model. We conclude with suggestions for model refinement and extensions to more complex models.
1
0
0
0
0
0
A Weighted Model Confidence Set: Applications to Local and Mixture Model Confidence Sets
This article provides a weighted model confidence set for settings where the underlying model has been misspecified and some part of the support of the random variable $X$ conveys important information about the underlying true model. Applications of such a weighted model confidence set to local and mixture model confidence sets are given. Two simulation studies are conducted to show the practical application of our findings.
0
0
0
1
0
0
Poverty Mapping Using Convolutional Neural Networks Trained on High and Medium Resolution Satellite Images, With an Application in Mexico
Mapping the spatial distribution of poverty in developing countries remains an important and costly challenge. These "poverty maps" are key inputs for poverty targeting, public goods provision, political accountability, and impact evaluation, that are all the more important given the geographic dispersion of the remaining bottom billion severely poor individuals. In this paper we train Convolutional Neural Networks (CNNs) to estimate poverty directly from high and medium resolution satellite images. We use both Planet and Digital Globe imagery with spatial resolutions of 3-5 sq. m. and 50 sq. cm. respectively, covering all 2 million sq. km. of Mexico. Benchmark poverty estimates come from the 2014 MCS-ENIGH combined with the 2015 Intercensus and are used to estimate poverty rates for 2,456 Mexican municipalities. CNNs are trained using the 896 municipalities in the 2014 MCS-ENIGH. We experiment with several architectures (GoogleNet, VGG) and use GoogleNet as a final architecture where weights are fine-tuned from ImageNet. We find that 1) the best models, which incorporate satellite-estimated land use as a predictor, explain approximately 57% of the variation in poverty in a validation sample of 10 percent of MCS-ENIGH municipalities; 2) Across all MCS-ENIGH municipalities explanatory power reduces to 44% in a CNN prediction and landcover model; 3) Predicted poverty from the CNN predictions alone explains 47% of the variation in poverty in the validation sample, and 37% over all MCS-ENIGH municipalities; 4) In urban areas we see slight improvements from using Digital Globe versus Planet imagery, which explain 61% and 54% of poverty variation respectively. We conclude that CNNs can be trained end-to-end on satellite imagery to estimate poverty, although there is much work to be done to understand how the training process influences out of sample validation.
1
0
0
1
0
0
Network of sensitive magnetometers for urban studies
The magnetic signature of an urban environment is investigated using a geographically distributed network of fluxgate magnetometers deployed in and around Berkeley, California. The system hardware and software are described and results from initial operation of the network are reported. The sensors sample the vector magnetic field with a 4 kHz resolution and are sensitive to fluctuations below 0.1 $\textrm{nT}/\sqrt{\textrm{Hz}}$. Data from separate stations are synchronized to around $\pm100$ $\mu{s}$ using GPS and computer system clocks. Data from all sensors are automatically uploaded to a central server. Anomalous events, such as lightning strikes, have been observed. A wavelet analysis is used to study observations over a wide range of temporal scales up to daily variations that show strong differences between weekend and weekdays. The Bay Area Rapid Transit (BART) is identified as the most dominant signal from these observations and a superposed epoch analysis is used to study and extract the BART signal. Initial results of the correlation between sensors are also presented.
0
1
0
0
0
0
Cooperative Hierarchical Dirichlet Processes: Superposition vs. Maximization
The cooperative hierarchical structure is a common and significant data structure observed in, or adopted by, many research areas, such as: text mining (author-paper-word) and multi-label classification (label-instance-feature). Renowned Bayesian approaches for cooperative hierarchical structure modeling are mostly based on topic models. However, these approaches suffer from a serious issue in that the number of hidden topics/factors needs to be fixed in advance and an inappropriate number may lead to overfitting or underfitting. One elegant way to resolve this issue is Bayesian nonparametric learning, but existing work in this area still cannot be applied to cooperative hierarchical structure modeling. In this paper, we propose a cooperative hierarchical Dirichlet process (CHDP) to fill this gap. Each node in a cooperative hierarchical structure is assigned a Dirichlet process to model its weights on the infinite hidden factors/topics. Together with measure inheritance from hierarchical Dirichlet process, two kinds of measure cooperation, i.e., superposition and maximization, are defined to capture the many-to-many relationships in the cooperative hierarchical structure. Furthermore, two constructive representations for CHDP, i.e., stick-breaking and international restaurant process, are designed to facilitate the model inference. Experiments on synthetic and real-world data with cooperative hierarchical structures demonstrate the properties and the ability of CHDP for cooperative hierarchical structure modeling and its potential for practical application scenarios.
1
0
0
1
0
0
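As a loose illustration of the ingredients (not the paper's construction, which couples measures through inherited base measures), the sketch below draws truncated stick-breaking weights for two Dirichlet processes over a shared atom set and combines them by the two cooperation operations, superposition and maximization.

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    # Truncated stick-breaking weights of a Dirichlet process: each atom
    # takes a Beta(1, alpha) fraction of the remaining stick.
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining

rng = np.random.default_rng(0)
w1 = stick_breaking(alpha=2.0, n_atoms=20, rng=rng)
w2 = stick_breaking(alpha=2.0, n_atoms=20, rng=rng)

superposition = 0.5 * (w1 + w2)       # mixture of the two measures
maximization = np.maximum(w1, w2)     # atom-wise maximum ...
maximization /= maximization.sum()    # ... renormalized to a measure
print(superposition.sum(), maximization.sum())
```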
The extended law of star formation: the combined role of gas and stars
We present a model for the origin of the extended law of star formation in which the surface density of star formation ($\Sigma_{\rm SFR}$) depends not only on the local surface density of the gas ($\Sigma_{g}$), but also on the stellar surface density ($\Sigma_{*}$), the velocity dispersion of the stars, and on the scaling laws of turbulence in the gas. We compare our model with the spiral, face-on galaxy NGC 628 and show that the dependence of the star formation rate on the entire set of physical quantities for both gas and stars can help explain not only the observed general trends in the $\Sigma_{g}-\Sigma_{\rm SFR}$ and $\Sigma_{*}-\Sigma_{\rm SFR}$ relations, but also, and equally importantly, the scatter in these relations at any value of $\Sigma_{g}$ and $\Sigma_{*}$. Our results point to the crucial role played by existing stars, along with the gaseous component, in setting the conditions for large-scale gravitational instabilities and star formation in galactic disks.
0
1
0
0
0
0
Local and global similarity of holomorphic matrices
R. Guralnick (Linear Algebra Appl. 99, 85-96, 1988) proved that two holomorphic matrices on a noncompact connected Riemann surface, which are locally holomorphically similar, are globally holomorphically similar. We generalize this to (possibly, non-smooth) one-dimensional Stein spaces. For Stein spaces of arbitrary dimension, we prove that global $\mathcal C^\infty$ similarity implies global holomorphic similarity, whereas global continuous similarity is not sufficient.
0
0
1
0
0
0
WHInter: A Working set algorithm for High-dimensional sparse second order Interaction models
Learning sparse linear models with two-way interactions is desirable in many application domains such as genomics. l1-regularised linear models are popular to estimate sparse models, yet standard implementations fail to address specifically the quadratic explosion of candidate two-way interactions in high dimensions, and typically do not scale to genetic data with hundreds of thousands of features. Here we present WHInter, a working set algorithm to solve large l1-regularised problems with two-way interactions for binary design matrices. The novelty of WHInter stems from a new bound that efficiently identifies working sets without scanning all features, and from fast computations inspired by solutions to the maximum inner product search problem. We apply WHInter to simulated and real genetic data and show that it is more scalable and two orders of magnitude faster than the state of the art.
0
0
0
1
1
0
On the Limitation of Convolutional Neural Networks in Recognizing Negative Images
Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance on a variety of computer vision tasks, particularly visual classification problems, where new algorithms are reported to achieve or even surpass human performance. In this paper, we examine whether CNNs are capable of learning the semantics of training data. To this end, we evaluate CNNs on negative images, since they share the same structure and semantics as regular images and humans can classify them correctly. Our experimental results indicate that when training on regular images and testing on negative images, the model accuracy is significantly lower than when it is tested on regular images. This leads us to the conjecture that current training methods do not effectively train models to generalize the concepts. We then introduce the notion of semantic adversarial examples - transformed inputs that semantically represent the same objects, but the model does not classify them correctly - and present negative images as one class of such inputs.
1
0
0
1
0
0
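The negative-image transformation at the heart of this evaluation is trivially reproducible: for uint8 pixels it is `255 - x`, an involution that preserves object shapes and semantics. Evaluating any trained classifier on `to_negative(x_test)` versus `x_test` reproduces the paper's comparison; the classifier itself is not sketched here.

```python
import numpy as np

def to_negative(images):
    # Pixel-wise negation for uint8 images in [0, 255]; the result shares
    # the structure and semantics of the original image.
    return (255 - images).astype(np.uint8)

x = np.random.default_rng(0).integers(0, 256, size=(2, 28, 28), dtype=np.uint8)
neg = to_negative(x)
assert (to_negative(neg) == x).all()  # negation is an involution
```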
$\aleph_1$ and the modal $\mu$-calculus
For a regular cardinal $\kappa$, a formula of the modal $\mu$-calculus is $\kappa$-continuous in a variable x if, on every model, its interpretation as a unary function of x is monotone and preserves unions of $\kappa$-directed sets. We define the fragment $C_{\aleph_1}(x)$ of the modal $\mu$-calculus and prove that all the formulas in this fragment are $\aleph_1$-continuous. For each formula $\phi(x)$ of the modal $\mu$-calculus, we construct a formula $\psi(x) \in C_{\aleph_1 }(x)$ such that $\phi(x)$ is $\kappa$-continuous, for some $\kappa$, if and only if $\phi(x)$ is equivalent to $\psi(x)$. Consequently, we prove that (i) the problem whether a formula is $\kappa$-continuous for some $\kappa$ is decidable, (ii) up to equivalence, there are only two fragments determined by continuity at some regular cardinal: the fragment $C_{\aleph_0}(x)$ studied by Fontaine and the fragment $C_{\aleph_1}(x)$. We apply our considerations to the problem of characterizing closure ordinals of formulas of the modal $\mu$-calculus. An ordinal $\alpha$ is the closure ordinal of a formula $\phi(x)$ if its interpretation on every model converges to its least fixed-point in at most $\alpha$ steps and if there is a model where the convergence occurs exactly in $\alpha$ steps. We prove that $\omega_1$, the least uncountable ordinal, is such a closure ordinal. Moreover we prove that closure ordinals are closed under ordinal sum. Thus, any formal expression built from 0, 1, $\omega$, $\omega_1$ by using the binary operator symbol + gives rise to a closure ordinal.
1
0
1
0
0
0
Optimal Service Elasticity in Large-Scale Distributed Systems
A fundamental challenge in large-scale cloud networks and data centers is to achieve highly efficient server utilization and limit energy consumption, while providing excellent user-perceived performance in the presence of uncertain and time-varying demand patterns. Auto-scaling provides a popular paradigm for automatically adjusting service capacity in response to demand while meeting performance targets, and queue-driven auto-scaling techniques have been widely investigated in the literature. In typical data center architectures and cloud environments, however, no centralized queue is maintained, and load balancing algorithms immediately distribute incoming tasks among parallel queues. In these distributed settings with vast numbers of servers, centralized queue-driven auto-scaling techniques involve a substantial communication overhead and major implementation burden, or may not even be viable at all. Motivated by the above issues, we propose a joint auto-scaling and load balancing scheme which does not require any global queue length information or explicit knowledge of system parameters, and yet provides provably near-optimal service elasticity. We establish the fluid-level dynamics for the proposed scheme in a regime where the total traffic volume and nominal service capacity grow large in proportion. The fluid-limit results show that the proposed scheme achieves asymptotic optimality in terms of user-perceived delay performance as well as energy consumption. Specifically, we prove that both the waiting time of tasks and the relative energy portion consumed by idle servers vanish in the limit. At the same time, the proposed scheme operates in a distributed fashion and involves only constant communication overhead per task, thus ensuring scalability in massive data center operations.
1
0
1
0
0
0
Variational Monte Carlo study of spin dynamics in underdoped cuprates
The hour-glass-like dispersion of spin excitations is a common feature of underdoped cuprates. It was qualitatively explained by the random phase approximation based on various ordered states with some phenomenological parameters; however, its origin remains elusive. Here, we present a numerical study of spin dynamics in the $t$-$J$ model using the variational Monte Carlo method. This parameter-free method satisfies the no-double-occupancy constraint of the model and thus provides a better evaluation of the spin dynamics with respect to various mean-field trial states. We conclude that the lower branch of the hour-glass dispersion is a collective mode and the upper branch is more likely a consequence of the stripe state than of the other candidates.
0
1
0
0
0
0
High brightness electron beam for radiation therapy: A new approach
I propose to use a high-brightness electron beam with 1 to 100 MeV energy as a tool to combat tumors or cancerous tissues deep inside the body. The method is to deliver the electron beam directly to the tumor site via a small tube connected to a high-brightness electron-beam accelerator of the kind commonly available around the world. Here I give a basic scheme of the principle; I believe the other issues people raise can be solved readily by those interested in the problem.
0
1
0
0
0
0
Parabolic equations with divergence-free drift in space $L_{t}^{l}L_{x}^{q}$
In this paper we study the fundamental solution $\varGamma(t,x;\tau,\xi)$ of the parabolic operator $L_{t}=\partial_{t}-\Delta+b(t,x)\cdot\nabla$, where for every $t$, $b(t,\cdot)$ is a divergence-free vector field, and we consider the case that $b$ belongs to the Lebesgue space $L^{l}\left(0,T;L^{q}\left(\mathbb{R}^{n}\right)\right)$. The regularity of weak solutions to the parabolic equation $L_{t}u=0$ depends critically on the value of the parabolic exponent $\gamma=\frac{2}{l}+\frac{n}{q}$. Without the divergence-free condition on $b$, the regularity of weak solutions has been established when $\gamma\leq1$, and the heat kernel estimate has been obtained as well, except for the case that $l=\infty,q=n$. The regularity of weak solutions was deemed not true for the critical case $L^{\infty}\left(0,T;L^{n}\left(\mathbb{R}^{n}\right)\right)$ for a general $b$, while it is true for the divergence-free case, and a written proof can be deduced from the results in [Semenov, 2006]. One of the results obtained in the present paper establishes the Aronson-type estimate for critical and supercritical cases and for vector fields $b$ which are divergence-free. We will prove the best possible lower and upper bounds for the fundamental solution one can derive under the current approach. The significance of the divergence-free condition has entered the study of parabolic equations only rather recently, mainly due to the discovery of compensated compactness. The interest in such parabolic equations comes from their connections with Leray's weak solutions of the Navier-Stokes equations and the Taylor diffusion associated with a vector field where the heat operator $L_{t}$ appears naturally.
0
0
1
0
0
0
Translating Terminological Expressions in Knowledge Bases with Neural Machine Translation
Our work presented in this paper focuses on the translation of terminological expressions represented in semantically structured resources, like ontologies or knowledge graphs. The challenge of translating ontology labels or terminological expressions represented in knowledge bases lies in the highly specific vocabulary and the lack of contextual information, which can guide a machine translation system to translate ambiguous words into the targeted domain. Due to these challenges, we evaluate the translation quality of domain-specific expressions in the medical and financial domain with statistical (SMT) as well as with neural machine translation (NMT) methods and experiment with domain adaptation of the translation models with terminological expressions only. Furthermore, we perform experiments on the injection of external terminological expressions into the translation systems. Through these experiments, we observed a significant advantage in domain adaptation for the domain-specific resource in the medical and financial domain and the benefit of subword models over word-based NMT models for terminology translation.
1
0
0
0
0
0
Detecting Arbitrary Attacks Using Continuous Secured Side Information in Wireless Networks
This paper focuses on Byzantine attack detection for Gaussian two-hop one-way relay networks, where an amplify-and-forward relay may conduct Byzantine attacks by forwarding altered symbols to the destination. For facilitating attack detection, we utilize the openness of the wireless medium to make the destination observe some secured signals that are not attacked. Then, a detection scheme is developed for the destination by using its secured observations to statistically check other observations from the relay. On the other hand, note that the Gaussian channel is continuous, which allows the possible Byzantine attacks to be conducted within continuous alphabet(s). Existing work on discrete channels is not applicable for investigating the performance of the proposed scheme. The main contribution of this paper is to prove that if and only if the wireless relay network satisfies a non-manipulable channel condition, the proposed detection scheme achieves asymptotically errorless performance against arbitrary attacks that allow the stochastic distributions of altered symbols to vary arbitrarily and depend on each other. No pre-shared secret or secret transmission is needed for the detection. Furthermore, we also prove that the relay network is non-manipulable as long as all channel coefficients are non-zero, which is not an essential restriction for many practical systems.
1
0
0
0
0
0
Phase Transitions in the Pooled Data Problem
In this paper, we study the pooled data problem of identifying the labels associated with a large collection of items, based on a sequence of pooled tests revealing the counts of each label within the pool. In the noiseless setting, we identify an exact asymptotic threshold on the required number of tests with optimal decoding, and prove a phase transition between complete success and complete failure. In addition, we present a novel noisy variation of the problem, and provide an information-theoretic framework for characterizing the required number of tests for general random noise models. Our results reveal that noise can make the problem considerably more difficult, with strict increases in the scaling laws even at low noise levels. Finally, we demonstrate similar behavior in an approximate recovery setting, where a given number of errors is allowed in the decoded labels.
1
0
0
1
0
0
Graph Convolutional Networks for Classification with a Structured Label Space
It is usual practice to ignore any structural information underlying classes in multi-class classification. In this paper, we propose a graph convolutional network (GCN) augmented neural network classifier to exploit a known, underlying graph structure of labels. The proposed approach resembles an (approximate) inference procedure in, for instance, a conditional random field (CRF). We evaluate the proposed approach on document classification and object recognition and report both accuracies and graph-theoretic metrics that correspond to the consistency of the model's prediction. The experimental results reveal that the proposed model outperforms a baseline method which ignores the graph structures of a label space in terms of graph-theoretic metrics.
1
0
0
1
0
0
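A single graph-convolution layer in the Kipf-Welling style, applied here to a hypothetical label graph (a chain of four classes), can be sketched as follows; how the paper wires such layers into the classifier and combines them with the base network is not reproduced.

```python
import numpy as np

def gcn_layer(A, H, W):
    # One GCN propagation step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).
    # Here A is the *label* graph, so each row of H embeds one class.
    A_hat = A + np.eye(len(A))                    # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

# Toy label graph over 4 classes arranged in a chain 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))   # one embedding per label
W = rng.normal(size=(8, 8))   # trainable layer weights
print(gcn_layer(A, H, W).shape)  # (4, 8)
```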
Decoupling of graphene from Ni(111) via oxygen intercalation
The combination of the surface science techniques (STM, XPS, ARPES) and density-functional theory calculations was used to study the decoupling of graphene from Ni(111) by oxygen intercalation. The formation of the antiferromagnetic (AFM) NiO layer at the interface between graphene and ferromagnetic (FM) Ni is found, where graphene protects the underlying AFM/FM sandwich system. It is found that graphene is fully decoupled in this system and strongly $p$-doped via charge transfer with a position of the Dirac point of $(0.69\pm0.02)$ eV above the Fermi level. Our theoretical analysis confirms all experimental findings, addressing also the interface properties between graphene and AFM NiO.
0
1
0
0
0
0
Model Risk Measurement under Wasserstein Distance
The paper proposes a new approach to model risk measurement based on the Wasserstein distance between two probability measures. It formulates the theoretical motivation resulting from the interpretation of a fictitious adversary in robust risk management. The proposed approach accounts for all alternative models and incorporates the economic reality of the fictitious adversary. It provides practically feasible results that overcome the restriction and the integrability issue imposed by the nominal model. The Wasserstein approach suits all types of model risk problems, ranging from the single-asset hedging risk problem to the multi-asset allocation problem. The robust capital allocation line, accounting for correlation risk, is not achievable with other non-parametric approaches.
0
0
0
0
0
1
Fast kNN mode seeking clustering applied to active learning
A significantly faster algorithm is presented for the original kNN mode seeking procedure. It has the advantages over the well-known mean shift algorithm that it is feasible in high-dimensional vector spaces and results in unique, well-defined modes. Moreover, without any additional computational effort it may yield a multi-scale hierarchy of clusterings. The time complexity is just O(n^1.5); resulting computing times range from seconds for 10^4 objects to minutes for 10^5 objects and to less than an hour for 10^6 objects. The space complexity is just O(n). The procedure is well suited for finding large sets of small clusters and is thereby a candidate to analyze thousands of clusters in millions of objects. The kNN mode seeking procedure can be used for active learning by assigning the clusters to the class of the modal objects of the clusters. Its feasibility is shown by some examples with up to 1.5 million handwritten digits. The obtained classification results based on the clusterings are compared with those obtained by the nearest neighbor rule and the support vector classifier based on the same labeled objects for training. It can be concluded that using the clustering structure for classification can be significantly better than using the trained classifiers. A drawback of using the clustering for classification, however, is that no classifier is obtained that may be used for out-of-sample objects.
1
0
0
1
0
0
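The core of kNN mode seeking is easy to state: estimate each point's density from its distance to its k-th neighbour, let every point point at the densest point among its own k neighbours, and follow pointers to a fixed point (a mode); the mode index is the cluster label. The quadratic-distance sketch below illustrates the idea only; the paper's contribution is precisely avoiding this O(n^2) step to reach O(n^1.5).

```python
import numpy as np

def knn_mode_seeking(X, k):
    # Brute-force distances (fine for small n; not the fast algorithm).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nn = np.argsort(D, axis=1)[:, :k]                  # self is included
    density = 1.0 / (np.sort(D, axis=1)[:, k - 1] + 1e-12)
    # Each point points at the densest point among its k neighbours.
    pointer = nn[np.arange(len(X)), density[nn].argmax(axis=1)]
    # Pointer jumping until every chain reaches its mode.
    while True:
        nxt = pointer[pointer]
        if (nxt == pointer).all():
            return pointer                             # label = mode index
        pointer = nxt

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels = knn_mode_seeking(X, k=10)
print("clusters found:", len(np.unique(labels)))
```

Varying k directly yields the multi-scale hierarchy mentioned above: larger k merges modes, smaller k splits them.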
Towards a Flow- and Path-Sensitive Information Flow Analysis: Technical Report
This paper investigates a flow- and path-sensitive static information flow analysis. Compared with security type systems with fixed labels, it has been shown that flow-sensitive type systems accept more secure programs. We show that an information flow analysis with fixed labels can be both flow- and path-sensitive. The novel analysis has two major components: 1) a general-purpose program transformation that removes false dataflow dependencies in a program that confuse a fixed-label type system, and 2) a fixed-label type system that allows security types to depend on path conditions. We formally prove that the proposed analysis enforces a rigorous security property: noninterference. Moreover, we show that the analysis is strictly more precise than a classic flow-sensitive type system, and it allows sound control of information flow in the presence of mutable variables without resorting to run-time mechanisms.
1
0
0
0
0
0
Topic supervised non-negative matrix factorization
Topic models have been extensively used to organize and interpret the contents of large, unstructured corpora of text documents. Although topic models often perform well on traditional training vs. test set evaluations, it is often the case that the results of a topic model do not align with human interpretation. This interpretability fallacy is largely due to the unsupervised nature of topic models, which prohibits any user guidance on the results of a model. In this paper, we introduce a semi-supervised method called topic supervised non-negative matrix factorization (TS-NMF) that enables the user to provide labeled example documents to promote the discovery of more meaningful semantic structure of a corpus. In this way, the results of TS-NMF better match the intuition and desired labeling of the user. The core of TS-NMF relies on solving a non-convex optimization problem for which we derive an iterative algorithm that is shown to be monotonic and convergent to a local optimum. We demonstrate the practical utility of TS-NMF on the Reuters and PubMed corpora, and find that TS-NMF is especially useful for conceptual or broad topics, where topic key terms are not well understood. Although identifying an optimal latent structure for the data is not a primary objective of the proposed approach, we find that TS-NMF achieves higher weighted Jaccard similarity scores than the contemporary methods, (unsupervised) NMF and latent Dirichlet allocation, at supervision rates as low as 10% to 20%.
1
0
0
1
0
0
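A minimal sketch of the supervision mechanism, under our own simplifying assumptions (plain Lee-Seung multiplicative updates rather than the paper's solver): the document-topic factor is masked so that labeled documents can only load on their permitted topics, while unlabeled documents remain free.

```python
import numpy as np

def ts_nmf(X, L, k, iters=200, eps=1e-9, seed=0):
    # X: nonnegative docs-by-terms matrix; L: docs-by-topics mask with
    # L[d, t] = 1 if document d may load on topic t (all-ones row for
    # an unlabeled document). Factorize X ~ W @ H with W masked by L.
    n, m = X.shape
    rng = np.random.default_rng(seed)
    W = rng.random((n, k)) * L
    H = rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        W *= L                      # re-impose the supervision mask
    return W, H

X = np.random.default_rng(1).random((6, 12))
L = np.ones((6, 3))
L[0] = [1, 0, 0]                    # doc 0 labeled with topic 0
L[1] = [0, 1, 0]                    # doc 1 labeled with topic 1
W, H = ts_nmf(X, L, k=3)
print(np.round(W, 2))               # masked entries stay exactly zero
```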
Low-temperature lattice effects in the spin-liquid candidate $\kappa$-(BEDT-TTF)$_2$Cu$_2$(CN)$_3$
The quasi-two-dimensional organic charge-transfer salt $\kappa$-(BEDT-TTF)$_2$Cu$_2$(CN)$_3$ is one of the prime candidates for a quantum spin-liquid due to the strong spin frustration of its anisotropic triangular lattice in combination with its proximity to the Mott transition. Despite intensive investigations of the material's low-temperature properties, several important questions remain to be answered. Particularly puzzling are the 6\,K anomaly and the enigmatic effects observed in magnetic fields. Here we report on low-temperature measurements of lattice effects which were shown to be particularly strongly pronounced in this material (R. S. Manna \emph{et al.}, Phys. Rev. Lett. \textbf{104}, 016403 (2010)). A special focus of our study lies on sample-to-sample variations of these effects and their implications for the interpretation of experimental data. By investigating overall nine single crystals from two different batches, we can state that there are considerable differences in the size of the second-order phase transition anomaly around 6\,K, varying within a factor of 3. In addition, we find field-induced anomalies giving rise to pronounced features in the sample length for two out of these nine crystals for temperatures $T <$ 9 K. We tentatively assign the latter effects to $B$-induced magnetic clusters suspected to nucleate around crystal imperfections. These $B$-induced effects are absent for the crystals where the 6\,K anomaly is most strongly pronounced. The large lattice effects observed at 6\,K are consistent with proposed pairing instabilities of fermionic excitations breaking the lattice symmetry. The strong sample-to-sample variation in the size of the phase transition anomaly suggests that the conversion of the fermions to bosons at the instability is only partial and to some extent influenced by not yet identified sample-specific parameters.
0
1
0
0
0
0
ECO-AMLP: A Decision Support System using an Enhanced Class Outlier with Automatic Multilayer Perceptron for Diabetes Prediction
With advanced data analytical techniques, efforts for more accurate decision support systems for disease prediction are on the rise. Surveys by the World Health Organization (WHO) indicate a great increase in the number of diabetic patients and related deaths each year. Early diagnosis of diabetes is a major concern among researchers and practitioners. The paper presents an application of Automatic Multilayer Perceptron, which is combined with an outlier detection method, Enhanced Class Outlier Detection using a distance-based algorithm, to create a prediction framework named Enhanced Class Outlier with Automatic Multilayer Perceptron (ECO-AMLP). A series of experiments are performed on the publicly available Pima Indian Diabetes Dataset to compare ECO-AMLP with other individual classifiers as well as ensemble-based methods. The outlier technique used in our framework gave better results as compared to other pre-processing and classification techniques. Finally, the results are compared with other state-of-the-art methods reported in the literature for diabetes prediction on PIDD; the achieved accuracy of 88.7% beats all other reported studies.
1
0
0
0
0
0
Local approximation of non-holomorphic discs in almost complex manifolds
We provide a local approximation result of non-holomorphic discs with small d-bar by pseudoholomorphic ones. As an application, we provide a certain gluing construction.
0
0
1
0
0
0
A Tutorial on Deep Learning for Music Information Retrieval
Following their success in Computer Vision and other areas, deep learning techniques have recently become widely adopted in Music Information Retrieval (MIR) research. However, the majority of works aim to adopt and assess methods that have been shown to be effective in other domains, while there is still a great need for more original research focusing on music primarily and utilising musical knowledge and insight. The goal of this paper is to boost the interest of beginners by providing a comprehensive tutorial and reducing the barriers to entry into deep learning for MIR. We lay out the basic principles and review prominent works in this hard-to-navigate field. We then outline the network structures that have been successful in MIR problems and facilitate the selection of building blocks for the problems at hand. Finally, guidelines for new tasks and some advanced topics in deep learning are discussed to stimulate new research in this fascinating field.
1
0
0
0
0
0
Modeling and control of modern wind turbine systems: An introduction
This chapter provides an introduction to the modeling and control of power generation from wind turbine systems. In modeling, the focus is on the electrical components: electrical machine (e.g. permanent-magnet synchronous generators), back-to-back converter (consisting of machine-side and grid-side converter sharing a common DC-link), mains filters and ideal (balanced) power grid. The aerodynamics and the torque generation of the wind turbine are explained in simplified terms using a so-called power coefficient. The overall control system is considered. In particular, the phase-locked loop system for grid-side voltage orientation, the nonlinear speed control system for the generator (and turbine), and the non-minimum phase DC-link voltage control system are discussed in detail, based on a brief derivation of the underlying machine-side and grid-side current control systems. With the help of the power balance of the wind turbine, the operation management and the control of the power flow are explained. Concluding simulation results illustrate the overall system behavior of a controlled wind turbine with a permanent-magnet synchronous generator.
1
0
0
0
0
0
Linear algebraic analogues of the graph isomorphism problem and the Erdős-Rényi model
A classical difficult isomorphism testing problem is to test isomorphism of p-groups of class 2 and exponent p in time polynomial in the group order. It is known that this problem can be reduced to solving the alternating matrix space isometry problem over a finite field in time polynomial in the underlying vector space size. We propose an avenue of attack for the latter problem by viewing it as a linear algebraic analogue of the graph isomorphism problem. This viewpoint leads us to explore the possibility of transferring techniques for graph isomorphism to this long-believed bottleneck case of group isomorphism. In the 1970s, Babai, Erdős, and Selkow presented the first average-case efficient graph isomorphism testing algorithm (SIAM J Computing, 1980). Inspired by that algorithm, we devise an average-case efficient algorithm for the alternating matrix space isometry problem over a key range of parameters, in a random model of alternating matrix spaces in the vein of the Erdős-Rényi model of random graphs. For this, we develop a linear algebraic analogue of the classical individualisation technique, a technique belonging to a set of combinatorial techniques that has been critical for the progress on the worst-case time complexity for graph isomorphism, but was missing in the group isomorphism context. As a consequence of the main algorithm, we establish a weaker linear algebraic analogue of Erdős and Rényi's classical result that most graphs have the trivial automorphism group. We finally show that Luks' dynamic programming technique for graph isomorphism (STOC 1999) can be adapted to slightly improve the worst-case time complexity of the alternating matrix space isometry problem in a certain range of parameters.
1
0
1
0
0
0
A Family of Metrics for Clustering Algorithms
We give the motivation for scoring clustering algorithms and a metric $M : A \rightarrow \mathbb{N}$ from the set of clustering algorithms to the natural numbers, which we realize as \begin{equation} M(A) = \sum_i \alpha_i |f_i - \beta_i|^{w_i} \end{equation} where $\alpha_i, \beta_i, w_i$ are parameters used for scoring the feature $f_i$, which is computed empirically. We give a method by which one can score features such as stability, noise sensitivity, etc., and derive the necessary parameters. We conclude by giving a sample set of scores.
1
0
0
0
0
0
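The metric itself is a one-liner; the sketch below uses hypothetical feature values and parameters, and ignores the rounding to the natural numbers implied by $M : A \rightarrow \mathbb{N}$.

```python
def metric_score(features, params):
    # features: measured feature values f_i of a clustering algorithm.
    # params:   one (alpha_i, beta_i, w_i) triple per feature.
    # Score M(A) = sum_i alpha_i * |f_i - beta_i| ** w_i; lower scores
    # mean the algorithm sits closer to the target values beta_i.
    return sum(a * abs(f - b) ** w for f, (a, b, w) in zip(features, params))

features = [0.92, 0.15]                   # e.g. stability, noise sensitivity
params = [(10.0, 1.0, 2), (5.0, 0.0, 1)]  # (alpha, beta, w) per feature
print(metric_score(features, params))
```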
A numerical scheme for an improved Green-Naghdi model in the Camassa-Holm regime for the propagation of internal waves
In this paper we introduce a new reformulation of the Green-Naghdi model in the Camassa-Holm regime for the propagation of internal waves over a flat topography derived by Duchêne, Israwi and Talhouk. These new Green-Naghdi systems are adapted to improve the frequency dispersion of the original model; they share the same order of precision as the standard one but have an appropriate structure which makes them much more suitable for numerical resolution. We develop a second order splitting scheme where the hyperbolic part of the system is treated with a high-order finite volume scheme and the dispersive part is treated with a finite difference approach. Numerical simulations are then performed to validate the model and the numerical methods.
0
0
1
0
0
0
General Bayesian Updating and the Loss-Likelihood Bootstrap
In this paper we revisit the weighted likelihood bootstrap, a method that generates samples from an approximate Bayesian posterior of a parametric model. We show that the same method can be derived, without approximation, under a Bayesian nonparametric model with the parameter of interest defined as minimising an expected negative log-likelihood under an unknown sampling distribution. This interpretation enables us to extend the weighted likelihood bootstrap to posterior sampling for parameters minimising an expected loss. We call this method the loss-likelihood bootstrap. We make a connection between this and general Bayesian updating, which is a way of updating prior belief distributions without needing to construct a global probability model, yet requires the calibration of two forms of loss function. The loss-likelihood bootstrap is used to calibrate the general Bayesian posterior by matching asymptotic Fisher information. We demonstrate the methodology on a number of examples.
0
0
0
1
0
0
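A minimal sketch of the loss-likelihood bootstrap, assuming SciPy is available: each posterior draw minimizes a Dirichlet-reweighted empirical loss, here a squared-error loss for a location parameter. The calibration against Fisher information described above is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def loss_likelihood_bootstrap(x, loss, theta0, n_draws=200, seed=0):
    # One draw = the minimizer of a randomly re-weighted empirical loss,
    # with weights ~ Dirichlet(1, ..., 1) over the n observations.
    rng = np.random.default_rng(seed)
    n, draws = len(x), []
    for _ in range(n_draws):
        w = rng.dirichlet(np.ones(n))
        res = minimize(lambda th: np.sum(w * loss(x, th)), theta0)
        draws.append(res.x)
    return np.array(draws)

x = np.random.default_rng(1).normal(2.0, 1.0, size=100)
sq_loss = lambda x, th: (x - th) ** 2     # location parameter under L2 loss
post = loss_likelihood_bootstrap(x, sq_loss, theta0=np.array([0.0]))
print(post.mean(), post.std())            # approx. posterior mean and spread
```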
Efficient Estimation of Generalization Error and Bias-Variance Components of Ensembles
For many applications, an ensemble of base classifiers is an effective solution. The tuning of its parameters (number of classes, amount of data on which each classifier is to be trained, etc.) requires G, the generalization error of a given ensemble. The efficient estimation of G is the focus of this paper. The key idea is to approximate the variance of the class scores/probabilities of the base classifiers over the randomness imposed by the training subset by a normal/beta distribution at each point x in the input feature space. We estimate the parameters of the distribution using a small set of randomly chosen base classifiers and use those parameters to give efficient estimation schemes for G. We give empirical evidence for the quality of the various estimators. We also demonstrate their usefulness in making design choices such as the number of classifiers in the ensemble and the size of a subset of data used for training that is needed to achieve a certain value of generalization error. Our approach also has great potential for designing distributed ensemble classifiers.
1
0
0
1
0
0
Self-consistent calculation of the flux-flow conductivity in diffusive superconductors
In the framework of Keldysh-Usadel kinetic theory, we study the temperature dependence of the flux-flow conductivity (FFC) in diffusive superconductors. By using self-consistent vortex solutions we find the exact values of the dimensionless parameters that determine the diffusion-controlled FFC both in the low-temperature limit and close to the critical temperature. Taking into account the electron-phonon scattering we study the transition between flux-flow regimes controlled either by the diffusion or by the inelastic relaxation of non-equilibrium quasiparticles. We demonstrate that the inelastic electron-phonon relaxation leads to a strong suppression of the FFC as compared to previous estimates, making it possible to obtain numerical agreement with experimental results.
0
1
0
0
0
0