Dataset schema (one record per paper; each record is a title, an abstract, and six binary subject labels):

- title: string (7 to 239 characters)
- abstract: string (7 to 2.76k characters)
- cs: int64 (0 or 1)
- phy: int64 (0 or 1)
- math: int64 (0 or 1)
- stat: int64 (0 or 1)
- quantitative biology: int64 (0 or 1)
- quantitative finance: int64 (0 or 1)
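For orientation, here is a minimal loading sketch for records with this schema. The JSON-lines layout and the file name `arxiv_multilabel.jsonl` are assumptions for illustration, not part of the original dump:

```python
import json

# Label columns in schema order; the names match the dump above.
LABELS = ["cs", "phy", "math", "stat",
          "quantitative biology", "quantitative finance"]

def load_records(path):
    """Yield (title, abstract, labels) triples from a JSON-lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            labels = {name: int(row[name]) for name in LABELS}
            yield row["title"], row["abstract"], labels

if __name__ == "__main__":
    # Hypothetical file name; replace with the actual dump location.
    for title, abstract, labels in load_records("arxiv_multilabel.jsonl"):
        active = [name for name, flag in labels.items() if flag]
        print(f"{title[:60]!r} -> {active}")
```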
Yangian Symmetry and Integrability of Planar N=4 Super-Yang-Mills Theory
In this letter we establish Yangian symmetry of planar N=4 super-Yang-Mills theory. We prove that the classical equations of motion of the model close onto themselves under the action of Yangian generators. Moreover we propose an off-shell extension of our statement which is equivalent to the invariance of the action and prove that it is exactly satisfied. We assert that our relationship serves as a criterion for integrability in planar gauge theories by explicitly checking that it applies to integrable ABJM theory but not to non-integrable N=1 super-Yang-Mills theory.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
DaMaSCUS: The Impact of Underground Scatterings on Direct Detection of Light Dark Matter
Conventional dark matter direct detection experiments set stringent constraints on dark matter by looking for elastic scattering events between dark matter particles and nuclei in underground detectors. However, these constraints weaken significantly in the sub-GeV mass region, simply because light dark matter does not have enough energy to trigger detectors regardless of the dark matter-nucleon scattering cross section. Even if future experiments lower their energy thresholds, they will still be blind to the parameter space where dark matter particles interact with nuclei so strongly that, by the time they reach the underground detector, they have lost enough energy to be unable to cause a signal above the experimental threshold. Therefore, if dark matter is in the sub-GeV region and strongly interacting, possible underground scatterings of dark matter with terrestrial nuclei must be taken into account, because they significantly affect the recoil spectra and event rates, regardless of whether the experiment probes DM via DM-nucleus or DM-electron interactions. To quantify this effect we present the publicly available Dark Matter Simulation Code for Underground Scatterings (DaMaSCUS), a Monte Carlo simulator of DM trajectories through the Earth that takes underground scatterings into account. Our simulation allows the precise calculation of the density and velocity distribution of dark matter at any detector of given depth and location on Earth. The simulation can also provide the accurate recoil spectrum in underground detectors as well as the phase and amplitude of the diurnal modulation caused by this shadowing effect of the Earth, ultimately relating the modulations expected in different detectors, which is important for deciding conclusively whether a diurnal modulation is due to dark matter or an irrelevant background.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Allocation strategies for high fidelity models in the multifidelity regime
We propose a novel approach to allocating resources for expensive simulations of high fidelity models when used in a multifidelity framework. Allocation decisions that distribute computational resources across several simulation models become extremely important in situations where only a small number of expensive high fidelity simulations can be run. We identify this allocation decision as a problem in optimal subset selection, and subsequently regularize this problem so that solutions can be computed. Our regularized formulation yields a type of group lasso problem that has been studied in the literature to accomplish subset selection. Our numerical results compare performance of algorithms that solve the group lasso problem for algorithmic allocation against a variety of other strategies, including those based on classical linear algebraic pivoting routines and those derived from more modern machine learning-based methods. We demonstrate on well known synthetic problems and more difficult real-world simulations that this group lasso solution to the relaxed optimal subset selection problem performs better than the alternatives.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
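Since this abstract reduces the allocation decision to a group lasso problem, a generic sketch of the standard proximal-gradient update for group lasso may help fix ideas. This is the textbook block soft-thresholding step in a toy setup, not necessarily the authors' solver:

```python
import numpy as np

def block_soft_threshold(v, lam):
    """Prox of lam * ||.||_2 applied to one coefficient group."""
    norm = np.linalg.norm(v)
    if norm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / norm) * v

def prox_grad_step(x, A, b, groups, lam, step):
    """One proximal-gradient iteration for
    min_x 0.5 * ||A x - b||^2 + lam * sum_g ||x_g||_2."""
    z = x - step * (A.T @ (A @ x - b))   # gradient step on the smooth part
    out = z.copy()
    for g in groups:                     # g: index array for one group
        out[g] = block_soft_threshold(z[g], step * lam)
    return out
```

Groups whose block norm falls below the shrinkage level are zeroed out wholesale, which is exactly the subset-selection behaviour the relaxation is after.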
Simplicial Closure and higher-order link prediction
Networks provide a powerful formalism for modeling complex systems by using a model of pairwise interactions. But much of the structure within these systems involves interactions that take place among more than two nodes at once; for example, communication within a group rather than person-to-person, collaboration among a team rather than a pair of coauthors, or biological interaction between a set of molecules rather than just two. Such higher-order interactions are ubiquitous, but their empirical study has received limited attention, and little is known about possible organizational principles of such structures. Here we study the temporal evolution of 19 datasets with explicit accounting for higher-order interactions. We show that there is a rich variety of structure in our datasets, but that datasets from the same system types have consistent patterns of higher-order structure. Furthermore, we find that tie strength and edge density are competing positive indicators of higher-order organization, and these trends are consistent across interactions involving differing numbers of nodes. To systematically further the study of theories for such higher-order structures, we propose higher-order link prediction as a benchmark problem to assess models and algorithms that predict higher-order structure. We find fundamental differences from traditional pairwise link prediction, with a greater role for local rather than long-range information in predicting the appearance of new interactions.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Basic concepts and tools for the Toki Pona minimal and constructed language: description of the language and main issues; analysis of the vocabulary; text synthesis and syntax highlighting; Wordnet synsets
A minimal constructed language (conlang) is useful for experiments and comfortable for making tools. The Toki Pona (TP) conlang is minimal both in its vocabulary (with only 14 letters and 124 lemmas) and in its roughly 10 syntax rules. The language is useful because it is a used and somewhat established minimal conlang with at least hundreds of fluent speakers. This article exposes current concepts and resources for TP, and makes available Python (and Vim) scripted routines for the analysis of the language, synthesis of texts, syntax highlighting schemes, and the achievement of a preliminary TP Wordnet. The focus is on the analysis of the basic vocabulary, as corpus analyses were found. The synthesis is based on sentence templates, relates to context by keeping track of used words, and renders larger texts by using a fixed number of phonemes (e.g. for poems) and numbers of sentences, words and letters (e.g. for paragraphs). Syntax highlighting reflects morphosyntactic classes given in the official dictionary, and different solutions are described and implemented in the well-established Vim text editor. The tentative TP Wordnet is made available in three patterns of relations between synsets and word lemmas. In summary, this text holds potentially novel conceptualizations about, and tools and results in analyzing, synthesizing and syntax highlighting the TP language.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Spectral Methods for Nonparametric Models
Nonparametric models are versatile, albeit computationally expensive, tools for modeling mixture models. In this paper, we introduce spectral methods for the two most popular nonparametric models: the Indian Buffet Process (IBP) and the Hierarchical Dirichlet Process (HDP). We show that using spectral methods for the inference of nonparametric models is computationally and statistically efficient. In particular, we derive the lower-order moments of the IBP and the HDP, propose spectral algorithms for both models, and provide reconstruction guarantees for the algorithms. For the HDP, we further show that applying hierarchical models to datasets with hierarchical structure, which can be solved with the generalized spectral HDP, produces better solutions than flat models in terms of likelihood performance.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Setting the threshold for high throughput detectors: A mathematical approach for ensembles of dynamic, heterogeneous, probabilistic anomaly detectors
Anomaly detection (AD) has garnered ample attention in security research, as such algorithms complement existing signature-based methods but promise detection of never-before-seen attacks. Cyber operations manage a high volume of heterogeneous log data; hence, AD in such operations involves multiple (e.g., per IP, per data type) ensembles of detectors modeling heterogeneous characteristics (e.g., rate, size, type), often with adaptive online models producing alerts in near real time. Because of high data volume, setting the threshold for each detector in such a system is an essential yet underdeveloped configuration issue that, if slightly mistuned, can leave the system useless, either producing a myriad of alerts and flooding downstream systems, or giving none. In this work, we build on the foundations of Ferragut et al. to provide a set of rigorous results for understanding the relationship between threshold values and alert quantities, and we propose an algorithm for setting the threshold in practice. Specifically, we give an algorithm for setting the threshold of multiple, heterogeneous, possibly dynamic detectors completely a priori, in principle. Indeed, if the underlying distribution of the incoming data is known (or closely estimated), the algorithm provides provably manageable thresholds. If the distribution is unknown (e.g., has changed over time), our analysis reveals how the model distribution differs from the actual distribution, indicating that a period of model refitting is necessary. We provide empirical experiments showing the efficacy of the capability by regulating the alert rate of a system with $\approx$2,500 adaptive detectors scoring over 1.5M events in 5 hours. Further, we demonstrate the alternative case on the real network data and detection framework of Harshaw et al., showing how the inability to regulate alerts indicates that the detection model is a bad fit to the data.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
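As a toy illustration of the central idea in this abstract (not the algorithm of Ferragut et al.): once a detector's score distribution is known or well estimated, a target alert rate maps directly to a quantile threshold. The lognormal stand-in below is an assumption for the sketch:

```python
import numpy as np

def threshold_for_alert_rate(scores, target_rate):
    """Pick the threshold whose exceedance probability equals target_rate,
    using the empirical distribution of historical anomaly scores."""
    return np.quantile(scores, 1.0 - target_rate)

rng = np.random.default_rng(0)
scores = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # stand-in model
t = threshold_for_alert_rate(scores, target_rate=1e-3)
print(f"threshold={t:.3f}, observed rate={np.mean(scores > t):.2e}")
```

If the live exceedance rate drifts away from the target, the model distribution no longer matches the data, which is the refitting signal the abstract describes.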
Photonic Loschmidt echo in binary waveguide lattices
Time reversal is one of the most intriguing yet elusive wave phenomena, of major interest in different areas of classical and quantum physics. Time reversal requires, in principle, flipping the sign of the Hamiltonian of the system, leading to a revival of the initial state (Loschmidt echo). Here it is shown that a Loschmidt echo of photons can be observed in an optical setting without resorting to reversal of the Hamiltonian. We consider photonic propagation in a binary waveguide lattice and show that, by exchanging the two sublattices after some propagation distance, a Loschmidt echo can be observed. Examples of Loschmidt echoes for single-photon and NOON states are given in one- and two-dimensional waveguide lattices.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Proof Reduction of Fair Stuttering Refinement of Asynchronous Systems and Applications
We present a series of definitions and theorems demonstrating how to reduce the requirements for proving system refinements ensuring containment of fair stuttering runs. A primary result of the work is the ability to reduce the requisite proofs on runs of a system of interacting state machines to a set of definitions and checks on single steps of a small number of state machines corresponding to the intuitive notions of freedom from starvation and deadlock. We further refine the definitions to afford an efficient explicit-state checking procedure in certain finite state cases. We demonstrate the proof reduction on versions of the Bakery Algorithm.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Steady Galactic Dynamos and Observational Consequences I: Halo Magnetic Fields
We study the global consequences of the steady, axially symmetric, mean-field dynamo in the halos of spiral galaxies. We use the classical theory but add the possibility of using the velocity field components as parameters in addition to the helicity and diffusivity. The analysis is based on the simplest version of the theory and uses scale-invariant solutions. The velocity field (subject to restrictions) is a scale-invariant field in a `pattern' frame, in place of a full dynamical theory. The `pattern' frame of reference may either be the systemic frame or some rigidly rotating spiral pattern frame. One type of solution for the magnetic field yields off-axis, spirally wound, magnetic field lines. These predict sign changes in the Faraday screen rotation measure in every quadrant of the halo of an edge-on galaxy. Such rotation measure oscillations have been observed in the CHANG-ES survey.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Efficient Decision Trees for Multi-class Support Vector Machines Using Entropy and Generalization Error Estimation
We propose new methods for Support Vector Machines (SVMs) using a tree architecture for multi-class classification. In each node of the tree, we select an appropriate binary classifier using entropy and generalization error estimation, then group the examples into positive and negative classes based on the selected classifier and train a new classifier for use in the classification phase. The proposed methods can work in time complexity between O(log2 N) and O(N), where N is the number of classes. We compared the performance of our proposed methods to the traditional techniques on the UCI machine learning repository using 10-fold cross-validation. The experimental results show that our proposed methods are very useful for problems that need fast classification time or problems with a large number of classes, as the proposed methods run much faster than the traditional techniques but still provide comparable accuracy.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Semiclassical measures on hyperbolic surfaces have full support
We show that each limiting semiclassical measure obtained from a sequence of eigenfunctions of the Laplacian on a compact hyperbolic surface is supported on the entire cosphere bundle. The key new ingredient for the proof is the fractal uncertainty principle, first formulated in [arXiv:1504.06589] and proved for porous sets in [arXiv:1612.09040].
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Assessment of algorithms for computing moist available potential energy
Atmospheric moist available potential energy (MAPE) has been traditionally defined as the potential energy of a moist atmosphere relative to that of the adiabatically sorted reference state defining a global potential energy minimum. Finding such a reference state was recently shown to be a linear assignment problem, and therefore exactly solvable. However, this is computationally extremely expensive, so there has been much interest in developing heuristic methods for computing MAPE in practice. Comparisons of the accuracy of such approximate algorithms have so far been limited to a small number of test cases; this work provides an assessment of the algorithms' performance across a wide range of atmospheric soundings, in two different locations. We determine that the divide-and-conquer algorithm is the best suited to practical application, but suffers from the previously overlooked shortcoming that it can produce a reference state with higher potential energy than the actual state, resulting in a negative value of MAPE. Additionally, we show that it is possible to construct an algorithm exploiting a theoretical expression linking MAPE to Convective Available Potential Energy (CAPE) previously derived by Kerry Emanuel. This approach has a similar accuracy to existing approximate sorting algorithms, whilst providing greater insight into the physical source of MAPE. In light of these results, we discuss how to make progress towards constructing a satisfactory moist APE theory for the atmosphere. We also outline a method for vectorising the adiabatic lifting of moist air parcels, which increases the computational efficiency of algorithms for calculating MAPE, and could be used for other applications such as convection schemes.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Zero Knowledge Sumcheck and its Applications
Many seminal results in Interactive Proofs (IPs) use algebraic techniques based on low-degree polynomials, the study of which is pervasive in theoretical computer science. Unfortunately, known methods for endowing such proofs with zero knowledge guarantees do not retain this rich algebraic structure. In this work, we develop algebraic techniques for obtaining zero knowledge variants of proof protocols in a way that leverages and preserves their algebraic structure. Our constructions achieve unconditional (perfect) zero knowledge in the Interactive Probabilistically Checkable Proof (IPCP) model of Kalai and Raz [KR08] (the prover first sends a PCP oracle, then the prover and verifier engage in an Interactive Proof in which the verifier may query the PCP). Our main result is a zero knowledge variant of the sumcheck protocol [LFKN92] in the IPCP model. The sumcheck protocol is a key building block in many IPs, including the protocol for polynomial-space computation due to Shamir [Sha92], and the protocol for parallel computation due to Goldwasser, Kalai, and Rothblum [GKR15]. A core component of our result is an algebraic commitment scheme, whose hiding property is guaranteed by algebraic query complexity lower bounds [AW09,JKRS09]. This commitment scheme can then be used to considerably strengthen our previous work [BCFGRS16] that gives a sumcheck protocol with much weaker zero knowledge guarantees, itself using algebraic techniques based on algorithms for polynomial identity testing [RS05,BW04]. We demonstrate the applicability of our techniques by deriving zero knowledge variants of well-known protocols based on algebraic techniques, including the protocols of Shamir and of Goldwasser, Kalai, and Rothblum, as well as the protocol of Babai, Fortnow, and Lund [BFL91].
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
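For context, a toy sketch of the plain (non-zero-knowledge) sumcheck interaction that this abstract builds on, restricted to multilinear polynomials over a small prime field so each round message is just two field elements; the honest prover simply brute-forces the partial sums:

```python
import itertools
import random

P = 2**31 - 1  # prime modulus for a small field

def sumcheck(g, n, claimed_sum, rng=random.Random(0)):
    """Verify claimed_sum == sum of a multilinear g over {0,1}^n (mod P).
    The honest prover brute-forces each round's partial sums."""
    claim, fixed = claimed_sum % P, []
    for i in range(n):
        # Prover: h(X) = sum of g over the remaining boolean variables.
        # For multilinear g, h is linear, so h(0) and h(1) determine it.
        h = [sum(g(*fixed, x, *tail) for tail in
                 itertools.product((0, 1), repeat=n - i - 1)) % P
             for x in (0, 1)]
        if (h[0] + h[1]) % P != claim:   # verifier's round consistency check
            return False
        r = rng.randrange(P)             # verifier's random challenge
        claim = (h[0] + r * (h[1] - h[0])) % P  # h(r) by linearity
        fixed.append(r)
    return claim == g(*fixed) % P        # final single query to g

g = lambda x, y, z: (3*x*y + 5*z + x*z) % P  # a multilinear example
true_sum = sum(g(*b) for b in itertools.product((0, 1), repeat=3)) % P
print(sumcheck(g, 3, true_sum), sumcheck(g, 3, true_sum + 1))  # True False
```

The zero-knowledge variant in the paper hides the prover's partial sums behind an algebraic commitment; the round structure above is what it preserves.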
Exact diagonalization and cluster mean-field study of triangular-lattice XXZ antiferromagnets near saturation
Quantum magnetic phases near the magnetic saturation of triangular-lattice antiferromagnets with XXZ anisotropy have been attracting renewed interest since it has been suggested that a nontrivial coplanar phase, called the $\pi$-coplanar or $\Psi$ phase, could be stabilized by quantum effects in a certain range of anisotropy parameter $J/J_z$ besides the well-known 0-coplanar (known also as $V$) and umbrella phases. Recently, Sellmann $et$ $al$. [Phys. Rev. B {\bf 91}, 081104(R) (2015)] claimed that the $\pi$-coplanar phase is absent for $S=1/2$ from an exact-diagonalization analysis in the sector of the Hilbert space with only three down-spins (three magnons). We first reconsider and improve this analysis by taking into account several low-lying eigenvalues and the associated eigenstates as a function of $J/J_z$ and by sensibly increasing the system sizes (up to 1296 spins). A careful identification analysis shows that the lowest eigenstate is a chirally antisymmetric combination of finite-size umbrella states for $J/J_z\gtrsim 2.218$ while it corresponds to a coplanar phase for $J/J_z\lesssim 2.218$. However, we demonstrate that the distinction between 0-coplanar and $\pi$-coplanar phases in the latter region is fundamentally impossible from the symmetry-preserving finite-size calculations with fixed magnon number. Therefore, we also perform a cluster mean-field plus scaling analysis for small spins $S\leq 3/2$. The obtained results, together with the previous large-$S$ analysis, indicate that the $\pi$-coplanar phase exists for any $S$ except for the classical limit ($S\rightarrow \infty$) and the existence range in $J/J_z$ is largest in the most quantum case of $S=1/2$.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Perturbed Proximal Descent to Escape Saddle Points for Non-convex and Non-smooth Objective Functions
We consider the problem of finding local minimizers in non-convex and non-smooth optimization. Under the assumption of strict saddle points, positive results have been derived for first-order methods. We present the first known results for the non-smooth case, which requires different analysis and a different algorithm.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Don't Panic! Better, Fewer, Syntax Errors for LR Parsers
Syntax errors are generally easy to fix for humans, but not for parsers, in general, and LR parsers, in particular. Traditional 'panic mode' error recovery, though easy to implement and applicable to any grammar, often leads to a cascading chain of errors that drown out the original. More advanced error recovery techniques suffer less from this problem but have seen little practical use because their typical performance was seen as poor, their worst case unbounded, and the repairs they reported arbitrary. In this paper we show two generic error recovery algorithms that fix all three problems. First, our algorithms are the first to report the complete set of possible repair sequences for a given location, allowing programmers to select the one that best fits their intention. Second, on a corpus of 200,000 real-world syntactically invalid Java programs, we show that our best performing algorithm is able to repair 98.71% of files within a cut-off of 0.5s. Furthermore, we are also able to use the complete set of repair sequences to reduce the cascading error problem even further than previous approaches. Our best performing algorithm reports 442,252.0 error locations in the corpus to the user, while the panic mode algorithm reports 980,848.0 error locations: in other words, our algorithms reduce the cascading error problem by well over half.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Predicting Tomorrow's Headline using Today's Twitter Deliberations
Predicting the popularity of news articles is a challenging task. Existing literature has mostly focused on article contents and polarity to predict popularity. However, existing research has not considered users' preference towards a particular article. Understanding users' preference is an important aspect of predicting the popularity of news articles. Hence, we consider social media data, from the Twitter platform, to address this research gap. In our proposed model, we consider users' involvement as well as users' reactions towards an article to predict the popularity of the article. In short, we are predicting tomorrow's headline by probing today's Twitter discussion. We consider 300 political news articles from the New York Post, and our proposed approach outperforms other baseline models.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
On a problem of Bharanedhar and Ponnusamy involving planar harmonic mappings
In this paper, we give a negative answer to a problem presented by Bharanedhar and Ponnusamy (Rocky Mountain J. Math. 44: 753--777, 2014) concerning univalency of a class of harmonic mappings. More precisely, we show that for all values of the involved parameter, this class contains a non-univalent function. Moreover, several results on a new subclass of close-to-convex harmonic mappings, which is motivated by work of Ponnusamy and Sairam Kaliraj (Mediterr. J. Math. 12: 647--665, 2015), are obtained.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Projecting UK Mortality using Bayesian Generalised Additive Models
Forecasts of mortality provide vital information about future populations, with implications for pension and health-care policy as well as for decisions made by private companies about life insurance and annuity pricing. Stochastic mortality forecasts allow the uncertainty in mortality predictions to be taken into consideration when making policy decisions and setting product prices. Longer lifespans imply that forecasts of mortality at ages 90 and above will become more important in such calculations. This paper presents a Bayesian approach to the forecasting of mortality that jointly estimates a Generalised Additive Model (GAM) for mortality for the majority of the age-range and a parametric model for older ages where the data are sparser. The GAM allows smooth components to be estimated for age, cohort and age-specific improvement rates, together with a non-smoothed period effect. Forecasts for the United Kingdom are produced using data from the Human Mortality Database spanning the period 1961-2013. A metric that approximates predictive accuracy under Leave-One-Out cross-validation is used to estimate weights for the `stacking' of forecasts with different points of transition between the GAM and parametric elements. Mortality for males and females are estimated separately at first, but a joint model allows the asymptotic limit of mortality at old ages to be shared between sexes, and furthermore provides for forecasts accounting for correlations in period innovations. The joint and single sex model forecasts estimated using data from 1961-2003 are compared against observed data from 2004-2013 to facilitate model assessment.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
The Case for Meta-Cognitive Machine Learning: On Model Entropy and Concept Formation in Deep Learning
Machine learning is usually defined in behaviourist terms, where external validation is the primary mechanism of learning. In this paper, I argue for a more holistic interpretation in which finding more probable, efficient and abstract representations is as central to learning as performance. In other words, machine learning should be extended with strategies to reason over its own learning process, leading to so-called meta-cognitive machine learning. As such, the de facto definition of machine learning should be reformulated in these intrinsically multi-objective terms, taking into account not only the task performance but also internal learning objectives. To this end, we suggest defining a "model entropy function" that quantifies the efficiency of the internal learning processes. It is conjectured that the minimization of this model entropy leads to concept formation. Besides philosophical aspects, some initial illustrations are included to support the claims.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Bulk viscosity model for near-equilibrium acoustic wave attenuation
Acoustic wave attenuation due to vibrational and rotational molecular relaxation, under simplifying assumptions of near-thermodynamic equilibrium and absence of molecular dissociations, can be accounted for by specifying a bulk viscosity coefficient $\mu_B$. In this paper, we propose a simple frequency-dependent bulk viscosity model which, under such assumptions, accurately captures wave attenuation rates from infrasonic to ultrasonic frequencies in Navier--Stokes and lattice Boltzmann simulations. The proposed model can be extended to any gas mixture for which molecular relaxation timescales and attenuation measurements are available. The performance of the model is assessed for air by varying the base temperature, pressure, relative humidity $h_r$, and acoustic frequency. Since the vibrational relaxation timescales of oxygen and nitrogen are a function of humidity, for certain frequencies an intermediate value of $h_r$ can be found which maximizes $\mu_B$. The contribution to bulk viscosity due to rotational relaxation is verified to be a function of temperature, confirming recent findings in the literature. While $\mu_B$ decreases with higher frequencies, its effects on wave attenuation become more significant, as shown via a dimensionless analysis. The proposed bulk viscosity model is designed for frequency-domain linear acoustic formulations but is also extensible to time-domain simulations of narrow-band frequency content flows.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Lagrangian fluctuation-dissipation relation for scalar turbulence, III. Turbulent Rayleigh-Bénard convection
A Lagrangian fluctuation-dissipation relation has been derived in a previous work to describe the dissipation rate of advected scalars, both passive and active, in wall-bounded flows. We apply this relation here to develop a Lagrangian description of thermal dissipation in turbulent Rayleigh-Bénard convection in a right-cylindrical cell of arbitrary cross-section, with either imposed temperature difference or imposed heat-flux at the top and bottom walls. We obtain an exact relation between the steady-state thermal dissipation rate and the time for passive tracer particles released at the top or bottom wall to mix to their final uniform value near those walls. We show that an "ultimate regime" with the Nusselt-number scaling predicted by Spiegel (1971) or, with a log-correction, by Kraichnan (1962) will occur at high Rayleigh numbers, unless this near-wall mixing time is asymptotically much longer than the free-fall time, or almost the large-scale circulation time. We suggest a new criterion for an ultimate regime in terms of transition to turbulence of a thermal "mixing zone", which is much wider than the standard thermal boundary layer. Kraichnan-Spiegel scaling may, however, not hold if the intensity and volume of thermal plumes decrease sufficiently rapidly with increasing Rayleigh number. To help resolve this issue, we suggest a program to measure the near-wall mixing time, which we argue is accessible both by laboratory experiment and by numerical simulation.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Model Selection for Explosive Models
This paper examines the limit properties of information criteria (such as AIC, BIC, HQIC) for distinguishing between the unit root model and the various kinds of explosive models. The explosive models include the local-to-unit-root model, the mildly explosive model and the regular explosive model. Initial conditions with different order of magnitude are considered. Both the OLS estimator and the indirect inference estimator are studied. It is found that BIC and HQIC, but not AIC, consistently select the unit root model when data come from the unit root model. When data come from the local-to-unit-root model, both BIC and HQIC select the wrong model with probability approaching 1 while AIC has a positive probability of selecting the right model in the limit. When data come from the regular explosive model or from the mildly explosive model in the form of $1+n^{\alpha }/n$ with $\alpha \in (0,1)$, all three information criteria consistently select the true model. Indirect inference estimation can increase or decrease the probability for information criteria to select the right model asymptotically relative to OLS, depending on the information criteria and the true model. Simulation results confirm our asymptotic results in finite sample.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Deep Energy Estimator Networks
Density estimation is a fundamental problem in statistical learning. This problem is especially challenging for complex high-dimensional data due to the curse of dimensionality. A promising solution to this problem is given here in an inference-free hierarchical framework that is built on score matching. We revisit the Bayesian interpretation of the score function and the Parzen score matching, and construct a multilayer perceptron with a scalable objective for learning the energy (i.e. the unnormalized log-density), which is then optimized with stochastic gradient descent. In addition, the resulting deep energy estimator network (DEEN) is designed as products of experts. We present the utility of DEEN in learning the energy, the score function, and in single-step denoising experiments for synthetic and high-dimensional data. We also diagnose stability problems in the direct estimation of the score function that had been observed for denoising autoencoders.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A Scale Free Algorithm for Stochastic Bandits with Bounded Kurtosis
Existing strategies for finite-armed stochastic bandits mostly depend on a parameter of scale that must be known in advance. Sometimes this is in the form of a bound on the payoffs, or the knowledge of a variance or subgaussian parameter. The notable exceptions are the analysis of Gaussian bandits with unknown mean and variance by Cowan and Katehakis [2015] and of uniform distributions with unknown support [Cowan and Katehakis, 2015]. The results derived in these specialised cases are generalised here to the non-parametric setup, where the learner knows only a bound on the kurtosis of the noise, which is a scale free measure of the extremity of outliers.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
An Asynchronous Parallel Approach to Sparse Recovery
Asynchronous parallel computing and sparse recovery are two areas that have received recent interest. Asynchronous algorithms are often studied to solve optimization problems where the cost function takes the form $\sum_{i=1}^M f_i(x)$, with a common assumption that each $f_i$ is sparse; that is, each $f_i$ acts only on a small number of components of $x\in\mathbb{R}^n$. Sparse recovery problems, such as compressed sensing, can be formulated as optimization problems; however, the cost functions $f_i$ are dense with respect to the components of $x$, and instead the signal $x$ is assumed to be sparse, meaning that it has only $s$ non-zeros where $s\ll n$. Here we address how one may use an asynchronous parallel architecture when the cost functions $f_i$ are not sparse in $x$, but rather the signal $x$ is sparse. We propose an asynchronous parallel approach to sparse recovery via a stochastic greedy algorithm, where multiple processors asynchronously update a vector in shared memory containing information on the estimated signal support. We include numerical simulations that illustrate the potential benefits of our proposed asynchronous method.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Cross-Entropy Loss and Low-Rank Features Have Responsibility for Adversarial Examples
State-of-the-art neural networks are vulnerable to adversarial examples; they can easily misclassify inputs that are imperceptibly different than their training and test data. In this work, we establish that the use of cross-entropy loss function and the low-rank features of the training data have responsibility for the existence of these inputs. Based on this observation, we suggest that addressing adversarial examples requires rethinking the use of cross-entropy loss function and looking for an alternative that is more suited for minimization with low-rank features. In this direction, we present a training scheme called differential training, which uses a loss function defined on the differences between the features of points from opposite classes. We show that differential training can ensure a large margin between the decision boundary of the neural network and the points in the training dataset. This larger margin increases the amount of perturbation needed to flip the prediction of the classifier and makes it harder to find an adversarial example with small perturbations. We test differential training on a binary classification task with CIFAR-10 dataset and demonstrate that it radically reduces the ratio of images for which an adversarial example could be found -- not only in the training dataset, but in the test dataset as well.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Parametrizing modified gravity for cosmological surveys
One of the challenges in testing gravity with cosmology is the vast freedom opened when extending General Relativity. For linear perturbations, one solution consists in using the Effective Field Theory of Dark Energy (EFT of DE). Even then, the theory space is described in terms of a handful of free functions of time. This needs to be reduced to a finite number of parameters to be practical for cosmological surveys. We explore in this article how well simple parametrizations, with a small number of parameters, can fit observables computed from complex theories. Imposing the stability of linear perturbations appreciably reduces the theory space we explore. We find that observables are not extremely sensitive to short time-scale variations and that simple, smooth parametrizations are usually sufficient to describe this theory space. Using the Bayesian Information Criterion, we find that using two parameters for each function (an amplitude and a power law index) is preferred over complex models for 86% of our theory space.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Inductive Freeness of Ziegler's Canonical Multiderivations for Reflection Arrangements
Let $A$ be a free hyperplane arrangement. In 1989, Ziegler showed that the restriction $A''$ of $A$ to any hyperplane endowed with the natural multiplicity is then a free multiarrangement. We initiate a study of the stronger freeness property of inductive freeness for these canonical free multiarrangements and investigate them for the underlying class of reflection arrangements. More precisely, let $A = A(W)$ be the reflection arrangement of a complex reflection group $W$. By work of Terao, each such reflection arrangement is free. Thus so is Ziegler's canonical multiplicity on the restriction $A''$ of $A$ to a hyperplane. We show that the latter is inductively free as a multiarrangement if and only if $A''$ itself is inductively free.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Toward III-V/Si co-integration by controlling biatomic steps on hydrogenated Si(001)
The integration of III-V on silicon is still a hot topic, as it will open up a way to co-integrate Si CMOS logic with photonic devices. To reach this aim, several hurdles must be overcome, in particular the generation of antiphase boundaries (APBs) at the III-V/Si(001) interface. Density functional theory (DFT) has been used to demonstrate the existence of double-layer steps on nominal Si(001) which form during annealing under a proper hydrogen chemical potential. This phenomenon could be explained by the formation of dimer vacancy lines, which could be responsible for the preferential and selective etching of one type of step, leading to the creation of the double-step surface. To check this hypothesis, different experiments have been carried out in an industrial 300 mm MOCVD reactor in which the total pressure during the annealing step of the Si(001) surface was varied. Under optimized conditions, an APB-free GaAs layer was grown on a nominal Si(001) surface, paving the way for III-V integration on an industrial silicon platform.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Proceedings XVI Jornadas sobre Programación y Lenguajes
This volume contains a selection of the papers presented at the XVI Jornadas sobre Programación y Lenguajes (PROLE 2016), held at Salamanca, Spain, during September 14th-15th, 2016. Previous editions of the workshop were held in Santander (2015), Cádiz (2014), Madrid (2013), Almería (2012), A Coruña (2011), València (2010), San Sebastián (2009), Gijón (2008), Zaragoza (2007), Sitges (2006), Granada (2005), Málaga (2004), Alicante (2003), El Escorial (2002), and Almagro (2001). Programming languages provide a conceptual framework which is necessary for the development, analysis, optimization and understanding of programs and programming tasks. The aim of the PROLE series of conferences (PROLE stems from PROgramación y LEnguajes) is to serve as a meeting point for Spanish research groups which develop their work in the area of programming and programming languages. The organization of this series of events aims at fostering the exchange of ideas, experiences and results among these groups. Promoting further collaboration is also one of its main goals.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The proximal point algorithm in geodesic spaces with curvature bounded above
We investigate the asymptotic behavior of sequences generated by the proximal point algorithm for convex functions in complete geodesic spaces with curvature bounded above. Using the notion of resolvents of such functions, which was recently introduced by the authors, we show the existence of minimizers of convex functions under the boundedness assumptions on such sequences as well as the convergence of such sequences to minimizers of given functions.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Spatio-temporal canards in neural field equations
Canards are special solutions to ordinary differential equations that follow invariant repelling slow manifolds for long time intervals. In realistic biophysical single cell models, canards are responsible for several complex neural rhythms observed experimentally, but their existence and role in spatially-extended systems is largely unexplored. We describe a novel type of coherent structure in which a spatial pattern displays temporal canard behaviour. Using interfacial dynamics and geometric singular perturbation theory, we classify spatio-temporal canards and give conditions for the existence of folded-saddle and folded-node canards. We find that spatio-temporal canards are robust to changes in the synaptic connectivity and firing rate. The theory correctly predicts the existence of spatio-temporal canards with octahedral symmetries in a neural field model posed on the unit sphere.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Counterfactuals, indicative conditionals, and negation under uncertainty: Are there cross-cultural differences?
In this paper we study selected argument forms involving counterfactuals and indicative conditionals under uncertainty. We selected argument forms to explore whether people with an Eastern cultural background reason differently about conditionals compared to Westerners, because of the differences in the location of negations. In a 2x2 between-participants design, 63 Japanese university students were allocated to four groups, crossing indicative conditionals and counterfactuals, and each presented in two random task orders. The data show close agreement between the responses of Easterners and Westerners. The modal responses provide strong support for the hypothesis that conditional probability is the best predictor for counterfactuals and indicative conditionals. Finally, the grand majority of the responses are probabilistically coherent, which endorses the psychological plausibility of choosing coherence-based probability logic as a rationality framework for psychological reasoning research.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Efficient Convolutional Network Learning using Parametric Log based Dual-Tree Wavelet ScatterNet
We propose a DTCWT ScatterNet Convolutional Neural Network (DTSCNN) formed by replacing the first few layers of a CNN network with a parametric log based DTCWT ScatterNet. The ScatterNet extracts edge based invariant representations that are used by the later layers of the CNN to learn high-level features. This improves the training of the network as the later layers can learn more complex patterns from the start of learning because the edge representations are already present. The efficient learning of the DTSCNN network is demonstrated on CIFAR-10 and Caltech-101 datasets. The generic nature of the ScatterNet front-end is shown by an equivalent performance to pre-trained CNN front-ends. A comparison with the state-of-the-art on CIFAR-10 and Caltech-101 datasets is also presented.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
The $r$th moment of the divisor function: an elementary approach
Let $\tau(n)$ be the number of divisors of $n$. We give an elementary proof of the fact that $$ \sum_{n\le x} \tau(n)^r =xC_{r} (\log x)^{2^r-1}+O(x(\log x)^{2^r-2}), $$ for any integer $r\ge 2$. Here, $$ C_{r}=\frac{1}{(2^r-1)!} \prod_{p\ge 2}\left( \left(1-\frac{1}{p}\right)^{2^r} \left(\sum_{\alpha\ge 0} \frac{(\alpha+1)^r}{p^{\alpha}}\right)\right). $$
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
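For $r=2$ the Euler factor in $C_r$ collapses: $(1-\frac{1}{p})^{4}\sum_{\alpha\ge 0}\frac{(\alpha+1)^{2}}{p^{\alpha}}=(1-\frac{1}{p})(1+\frac{1}{p})=1-\frac{1}{p^{2}}$, so the product equals $1/\zeta(2)=6/\pi^{2}$ and $C_{2}=\frac{1}{3!}\cdot\frac{6}{\pi^{2}}=1/\pi^{2}$. A small numerical sanity check of the leading term (a sketch only; the ratio approaches 1 slowly because the error term carries lower powers of $\log x$):

```python
import math

def tau_counts(x):
    """tau(n) for 1 <= n <= x via a simple divisor sieve."""
    t = [0] * (x + 1)
    for d in range(1, x + 1):
        for m in range(d, x + 1, d):
            t[m] += 1
    return t

x = 10**6
t = tau_counts(x)
lhs = sum(v * v for v in t[1:])              # sum_{n <= x} tau(n)^2
rhs = x * math.log(x) ** 3 / math.pi ** 2    # leading term with C_2 = 1/pi^2
print(f"lhs={lhs} rhs={rhs:.0f} ratio={lhs / rhs:.3f}")
```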
Liu-type Shrinkage Estimations in Linear Models
In this study, we present the preliminary test, Stein-type and positive part Liu estimators in the linear models when the parameter vector $\boldsymbol{\beta}$ is partitioned into two parts, namely, the main effects $\boldsymbol{\beta}_1$ and the nuisance effects $\boldsymbol{\beta}_2$ such that $\boldsymbol{\beta}=\left(\boldsymbol{\beta}_1, \boldsymbol{\beta}_2 \right)$. We consider the case that a priori known or suspected set of the explanatory variables do not contribute to predict the response so that a sub-model may be enough for this purpose. Thus, the main interest is to estimate $\boldsymbol{\beta}_1$ when $\boldsymbol{\beta}_2$ is close to zero. Therefore, we conduct a Monte Carlo simulation study to evaluate the relative efficiency of the suggested estimators, where we demonstrate the superiority of the proposed estimators.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds
We present an efficient coresets-based neural network compression algorithm that provably sparsifies the parameters of a trained fully-connected neural network in a manner that approximately preserves the network's output. Our approach is based on an importance sampling scheme that judiciously defines a sampling distribution over the neural network parameters, and as a result, retains parameters of high importance while discarding redundant ones. We leverage a novel, empirical notion of sensitivity and extend traditional coreset constructions to the application of compressing parameters. Our theoretical analysis establishes guarantees on the size and accuracy of the resulting compressed neural network and gives rise to new generalization bounds that may provide novel insights on the generalization properties of neural networks. We demonstrate the practical effectiveness of our algorithm on a variety of neural network configurations and real-world data sets.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Active Galactic Nuclei: what's in a name?
Active Galactic Nuclei (AGN) are energetic astrophysical sources powered by accretion onto supermassive black holes in galaxies, and present unique observational signatures that cover the full electromagnetic spectrum over more than twenty orders of magnitude in frequency. The rich phenomenology of AGN has resulted in a large number of different "flavours" in the literature that now comprise a complex and confusing AGN "zoo". It is increasingly clear that these classifications are only partially related to intrinsic differences between AGN, and primarily reflect variations in a relatively small number of astrophysical parameters as well as the method by which each class of AGN is selected. Taken together, observations in different electromagnetic bands as well as variations over time provide complementary windows on the physics of different sub-structures in the AGN. In this review, we present an overview of AGN multi-wavelength properties with the aim of painting their "big picture" through observations in each electromagnetic band from radio to gamma-rays as well as AGN variability. We address what we can learn from each observational method, the impact of selection effects, the physics behind the emission at each wavelength, and the potential for future studies. To conclude we use these observations to piece together the basic architecture of AGN, discuss our current understanding of unification models, and highlight some open questions that present opportunities for future observational and theoretical progress.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Fixed points of Legendre-Fenchel type transforms
A recent result characterizes the fully order reversing operators acting on the class of lower semicontinuous proper convex functions in a real Banach space as certain linear deformations of the Legendre-Fenchel transform. Motivated by the Hilbert space version of this result and by the well-known result saying that this convex conjugation transform has a unique fixed point (namely, the normalized energy function), we investigate the fixed point equation in which the involved operator is fully order reversing and acts on the above-mentioned class of functions. It turns out that this nonlinear equation is very sensitive to the involved parameters and can have no solution, a unique solution, or several (possibly infinitely many) ones. Our analysis yields a few by-products, such as results related to positive definite operators, and to functional equations and inclusions involving monotone operators.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
High-Dimensional Materials and Process Optimization using Data-driven Experimental Design with Well-Calibrated Uncertainty Estimates
The optimization of composition and processing to obtain materials that exhibit desirable characteristics has historically relied on a combination of scientist intuition, trial and error, and luck. We propose a methodology that can accelerate this process by fitting data-driven models to experimental data as it is collected to suggest which experiment should be performed next. This methodology can guide the scientist to test the most promising candidates earlier, and can supplement scientific intuition and knowledge with data-driven insights. A key strength of the proposed framework is that it scales to high-dimensional parameter spaces, as are typical in materials discovery applications. Importantly, the data-driven models incorporate uncertainty analysis, so that new experiments are proposed based on a combination of exploring high-uncertainty candidates and exploiting high-performing regions of parameter space. Over four materials science test cases, our methodology led to the optimal candidate being found with three times fewer required measurements than random guessing on average.
Labels: cs=0, phy=1, math=0, stat=1, quantitative biology=0, quantitative finance=0
Two Posets of Noncrossing Partitions Coming From Undesired Parking Spaces
Consider the noncrossing set partitions of an $n$-element set which either do not contain the block $\{n-1,n\}$, or which do not contain the singleton block $\{n\}$ whenever $1$ and $n-1$ are in the same block. In this article we study the subposet of the noncrossing partition lattice induced by these elements, and show that it is a supersolvable lattice, and therefore lexicographically shellable. We give a combinatorial model for the NBB bases of this lattice and derive an explicit formula for the value of its Möbius function between least and greatest element. This work is motivated by a recent article by M. Bruce, M. Dougherty, M. Hlavacek, R. Kudo, and I. Nicolas, in which they introduce a subposet of the noncrossing partition lattice that is determined by parking functions with certain forbidden entries. In particular, they conjecture that the resulting poset always has a contractible order complex. We prove this conjecture by embedding their poset into ours, and showing that it inherits the lexicographic shellability.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Semi-supervised model-based clustering with controlled clusters leakage
In this paper, we focus on finding clusters in partially categorized data sets. We propose a semi-supervised version of Gaussian mixture model, called C3L, which retrieves natural subgroups of given categories. In contrast to other semi-supervised models, C3L is parametrized by user-defined leakage level, which controls maximal inconsistency between initial categorization and resulting clustering. Our method can be implemented as a module in practical expert systems to detect clusters, which combine expert knowledge with true distribution of data. Moreover, it can be used for improving the results of less flexible clustering techniques, such as projection pursuit clustering. The paper presents extensive theoretical analysis of the model and fast algorithm for its efficient optimization. Experimental results show that C3L finds high quality clustering model, which can be applied in discovering meaningful groups in partially classified data.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Local White Matter Architecture Defines Functional Brain Dynamics
Large bundles of myelinated axons, called white matter, anatomically connect disparate brain regions together and compose the structural core of the human connectome. We recently proposed a method of measuring the local integrity along the length of each white matter fascicle, termed the local connectome. If communication efficiency is fundamentally constrained by the integrity along the entire length of a white matter bundle, then variability in the functional dynamics of brain networks should be associated with variability in the local connectome. We test this prediction using two statistical approaches that are capable of handling the high dimensionality of data. First, by performing statistical inference on distance-based correlations, we show that similarity in the local connectome between individuals is significantly correlated with similarity in their patterns of functional connectivity. Second, by employing variable selection using sparse canonical correlation analysis and cross-validation, we show that segments of the local connectome are predictive of certain patterns of functional brain dynamics. These results are consistent with the hypothesis that structural variability along axon bundles constrains communication between disparate brain regions.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=1, quantitative finance=0
An upper bound on tricolored ordered sum-free sets
We present a strengthening of the lemma on the lower bound of the slice rank by Tao (2016) motivated by the Croot-Lev-Pach-Ellenberg-Gijswijt bound on cap sets (2017, 2017). The Croot-Lev-Pach-Ellenberg-Gijswijt method and the lemma of Tao are based on the fact that the rank of a diagonal matrix is equal to the number of non-zero diagonal entries. Our lemma is based on the rank of upper-triangular matrices. This stronger lemma allows us to prove the following extension of the Ellenberg-Gijswijt result (2017). A tricolored ordered sum-free set in $\mathbb F_p^n$ is a collection $\{(a_i,b_i,c_i):i=1,2,\ldots,m\}$ of ordered triples in $(\mathbb F_p^n )^3$ such that $a_i+b_i+c_i=0$ and if $a_i+b_j+c_k=0$, then $i\le j\le k$. By using the new lemma, we present an upper bound on the size of a tricolored ordered sum-free set in $\mathbb F_p^n$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The effect of the spatial domain in FANOVA models with ARH(1) error term
Functional Analysis of Variance (FANOVA) from Hilbert-valued correlated data with spatial rectangular or circular supports is analyzed, when Dirichlet conditions are assumed on the boundary. Specifically, a Hilbert-valued fixed effect model with error term defined from an Autoregressive Hilbertian process of order one (ARH(1) process) is considered, extending the formulation given in Ruiz-Medina (2016). A new statistical test is also derived to contrast the significance of the functional fixed effect parameters. The Dirichlet conditions established at the boundary affect the dependence range of the correlated error term, while the rate of convergence to zero of the eigenvalues of the covariance kernels, characterizing the Gaussian functional error components, directly affects the stability of the generalized least-squares parameter estimation problem. A simulation study and a real-data application related to fMRI analysis are undertaken to illustrate the performance of the parameter estimator and the statistical test derived.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Audio to Body Dynamics
We present a method that takes as input an audio recording of violin or piano playing and outputs a video of skeleton predictions, which are further used to animate an avatar. The key idea is to create an animation of an avatar that moves its hands similarly to how a pianist or violinist would, just from audio. Aiming for fully detailed, correct arm and finger motion is the goal; however, it is not clear whether body movement can be predicted from music at all. In this paper, we present the first result showing that natural body dynamics can indeed be predicted. We built an LSTM network that is trained on violin and piano recital videos uploaded to the Internet. The predicted points are applied onto a rigged avatar to create the animation.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Detecting causal associations in large nonlinear time series datasets
Identifying causal relationships from observational time series data is a key problem in disciplines such as climate science or neuroscience, where experiments are often not possible. Data-driven causal inference is challenging since datasets are often high-dimensional and nonlinear with limited sample sizes. Here we introduce a novel method that flexibly combines linear or nonlinear conditional independence tests with a causal discovery algorithm that allows us to reconstruct causal networks from large-scale time series datasets. We validate the method on a well-established climatic teleconnection connecting the tropical Pacific with extra-tropical temperatures, and on large-scale synthetic datasets mimicking the typical properties of real data. The experiments demonstrate that our method outperforms alternative techniques in detection power from small to large-scale datasets and opens up entirely new possibilities to discover causal networks from time series across a range of research fields.
Labels: cs=0, phy=1, math=0, stat=1, quantitative biology=0, quantitative finance=0
Controlling a remotely located Robot using Hand Gestures in real time: A DSP implementation
Telepresence is a necessity of the present time, as we cannot reach everywhere, and it is also useful for saving human lives in dangerous places. A robot that can be controlled from a distant location can solve these problems. Control could be via communication waves or networking methods, and it should be smooth and in real time so that the robot can act effectively on every minor signal. This paper discusses a method to control a robot over the network from a distant location. The robot was controlled by hand gestures which were captured by a live camera. A DSP board, the TMS320DM642EVM, was used to implement image pre-processing and to speed up the whole system. PCA was used for gesture classification, and robot actuation was done according to predefined procedures. Classification information was sent over the network in the experiment. This method is robust and could be used to control any kind of robot over a distance.
1
0
0
0
0
0
Hybrid Machine Learning Approach to Popularity Prediction of Newly Released Contents for Online Video Streaming Service
In the industry of video content providers such as VOD and IPTV, predicting the popularity of video contents in advance is critical not only from a marketing perspective but also from a network optimization perspective. By predicting in advance whether a content item will be successful, the content file, which is large, can be deployed to the appropriate serving server, leading to network cost optimization. Many previous studies have addressed this through view count prediction. However, those studies make predictions based on historical view count data from users; in that setting the content has already been published and deployed on a service server. Such approaches can efficiently redeploy content that is already published, but cannot be used for content that has not yet been published. To address this problem, this research proposes a hybrid machine learning approach to a classification model for popularity prediction of newly released, not-yet-published video content. We create a new variable based on content related to the specific item and divide the entire dataset by content characteristics. The prediction is then performed using XGBoost- and deep-neural-net-based models according to the data characteristics of each cluster. Our model uses content metadata for prediction, so we apply categorical embedding techniques to handle the sparsity of categorical variables and let the deep neural net learn them efficiently. We also use the FTRL-proximal algorithm to handle the view-count volatility of video content. We achieve overall better performance than the previous standalone methods on a dataset from one of the top streaming service companies.
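A minimal sketch of the metadata-only prediction idea, with deliberate stand-ins: scikit-learn's gradient boosting replaces XGBoost, an ordinal encoder replaces learned categorical embeddings, and the column names and synthetic target are invented for illustration.

```python
# Sketch of a metadata-only popularity classifier (toy columns, stand-in components).
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost

rng = np.random.default_rng(0)
df = pd.DataFrame({                        # toy metadata in place of a real catalog
    "genre":   rng.choice(["drama", "comedy", "doc"], 1000),
    "channel": rng.choice(list("ABCDE"), 1000),
    "runtime": rng.integers(20, 120, 1000),
})
df["hit"] = (df["runtime"] > 70).astype(int)   # synthetic popularity label

X = df[["genre", "channel", "runtime"]].copy()
X[["genre", "channel"]] = OrdinalEncoder().fit_transform(X[["genre", "channel"]])
Xtr, Xte, ytr, yte = train_test_split(X, df["hit"], test_size=0.2, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```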
1
0
0
1
0
0
Vaught's Two-Cardinal Theorem and Quasi-Minimality in Continuous Logic
We prove the following continuous analogue of Vaught's Two-Cardinal Theorem: if for some $\kappa>\lambda\geq \aleph_0$, a continuous theory $T$ has a model with density character $\kappa$ which has a definable subset of density character $\lambda$, then $T$ has a model with density character $\aleph_1$ which has a separable definable subset. We also show that if $T$ is $\omega$-stable and has a model of density character $\aleph_1$ with a separable definable set, then for any uncountable $\kappa$ we can find a model of $T$ with density character $\kappa$ which has a separable definable subset. In order to prove this, we develop an approximate notion of quasi-minimality for the continuous setting. We apply these results to show a continuous version of the forward direction of the Baldwin-Lachlan characterization of uncountable categoricity: if a continuous theory $T$ is uncountably categorical, then $T$ is $\omega$-stable and has no Vaughtian pairs.
0
0
1
0
0
0
Spectral Calibration of the Fluorescence Telescopes of the Pierre Auger Observatory
We present a novel method to measure precisely the relative spectral response of the fluorescence telescopes of the Pierre Auger Observatory. We used a portable light source based on a xenon flasher and a monochromator to measure the relative spectral efficiencies of eight telescopes in steps of 5 nm from 280 nm to 440 nm. Each point in a scan had a bandwidth of approximately 2 nm FWHM out of the monochromator. Different sets of telescopes in the observatory have different optical components, and the eight telescopes measured represent two each of the four combinations of components represented in the observatory. We made an end-to-end measurement of the response from different combinations of optical components, and the monochromator setup allowed for more precise and complete measurements than our previous multi-wavelength calibrations. We find an overall uncertainty in the calibration of the spectral response of most of the telescopes of 1.5% for all wavelengths; the six oldest telescopes have larger overall uncertainties of about 2.2%. We also report changes in physics measurables due to the change in calibration, which are generally small.
0
1
0
0
0
0
Two properties of Müntz spaces
We show that Müntz spaces, as subspaces of $C[0,1]$, contain asymptotically isometric copies of $c_0$ and that their dual spaces are octahedral.
0
0
1
0
0
0
Rational approximations to the zeta function
This article describes a sequence of rational functions which converges locally uniformly to the zeta function. The numerators (and denominators) of these rational functions can be expressed as characteristic polynomials of matrices that are on the face of it very simple. As a consequence, the Riemann hypothesis can be restated as what looks like a rather conventional spectral problem but which is related to the one found by Connes in his analysis of the zeta function. However the point here is that the rational approximations look to be susceptible of quantitative estimation.
0
0
1
0
0
0
The COS-Halos Survey: Metallicities in the Low-Redshift Circumgalactic Medium
We analyze new far-ultraviolet spectra of 13 quasars from the z~0.2 COS-Halos survey that cover the HI Lyman limit of 14 circumgalactic medium (CGM) systems. These data yield precise estimates, or more constraining limits than previous COS-Halos measurements, on the HI column densities NHI. We then apply a Markov chain Monte Carlo approach to 32 systems from COS-Halos to estimate the metallicity of the cool (T~10^4K) CGM gas that gives rise to low-ionization state metal lines, under the assumption of photoionization equilibrium with the extragalactic UV background. The principal results are: (1) the CGM of field L* galaxies exhibits a declining HI surface density with impact parameter Rperp (at >99.5% confidence); (2) the transmission of ionizing radiation through CGM gas alone is 70+/-7%; (3) the metallicity distribution function of the cool CGM is unimodal with a median of 1/3 Z_Sun and a 95% interval from ~1/50 Z_Sun to over 3x solar. The incidence of metal-poor (<1/100 Z_Sun) gas is low, implying any such gas discovered along quasar sightlines is typically unrelated to L* galaxies; (4) we find an unexpected increase in gas metallicity with declining NHI (at >99.9% confidence) and, therefore, also with increasing Rperp. The high metallicity at large radii implies early enrichment; (5) a non-parametric estimate of the cool CGM gas mass is M_CGM_cool = (9.2 +/- 4.3) x 10^10 Msun, which together with new mass estimates for the hot CGM may resolve the galactic missing-baryons problem. Future analyses of halo gas should focus on the underlying astrophysics governing the CGM, rather than processes that simply expel the medium from the halo.
0
1
0
0
0
0
On Quaternionic Tori and their Moduli Spaces
Quaternionic tori are defined as quotients of the skew field $\mathbb{H}$ of quaternions by rank-4 lattices. Using slice regular functions, these tori are endowed with natural structures of quaternionic manifolds (in fact quaternionic curves), and a fundamental region in a $12$-dimensional real subspace is then constructed to classify them up to biregular diffeomorphisms. The points of the moduli space correspond to suitable \emph{special} bases of rank-4 lattices, which are studied with respect to the action of the group $GL(4, \mathbb{Z})$, and up to biregular diffeomorphisms. All tori with a non-trivial group of biregular automorphisms - and all possible groups of their biregular automorphisms - are then identified, and recognized to correspond to five different subsets of boundary points of the moduli space.
0
0
1
0
0
0
GIER: A Danish computer from 1961 with a role in the modern revolution of astronomy
A Danish computer, GIER, from 1961 played a vital role in the development of a new method for astrometric measurement. This method, photon counting astrometry, ultimately led to two satellites with a significant role in the modern revolution of astronomy. A GIER was installed at the Hamburg Observatory in 1964 where it was used to implement the entirely new method for the measurement of stellar positions by means of a meridian circle, then the fundamental instrument of astrometry. An expedition to Perth in Western Australia with the instrument and the computer was a success. This method was also implemented in space in the first ever astrometric satellite Hipparcos launched by ESA in 1989. The Hipparcos results published in 1997 revolutionized astrometry with an impact in all branches of astronomy from the solar system and stellar structure to cosmic distances and the dynamics of the Milky Way. In turn, the results paved the way for a successor, the one million times more powerful Gaia astrometry satellite launched by ESA in 2013. Preparations for a Gaia successor in twenty years are making progress.
0
1
0
0
0
0
Deep Bayesian Active Learning for Natural Language Processing: Results of a Large-Scale Empirical Study
Several recent papers investigate Active Learning (AL) for mitigating the data dependence of deep learning for natural language processing. However, the applicability of AL to real-world problems remains an open question. While in supervised learning, practitioners can try many different methods, evaluating each against a validation set before selecting a model, AL affords no such luxury. Over the course of one AL run, an agent annotates its dataset, exhausting its labeling budget. Thus, given a new task, an active learner has no opportunity to compare models and acquisition functions. This paper provides a large-scale empirical study of deep active learning, addressing multiple tasks and, for each, multiple datasets, multiple models, and a full suite of acquisition functions. We find that across all settings, Bayesian active learning by disagreement, using uncertainty estimates provided either by Dropout or Bayes-by-Backprop, significantly improves over i.i.d. baselines and usually outperforms classic uncertainty sampling.
0
0
0
1
0
0
Analogy and duality between random channel coding and lossy source coding
Here we write in a unified fashion (using "R(P, Q, D)") the random coding exponents in channel coding and lossy source coding. We derive their explicit forms and show that, for a given random codebook distribution Q, the channel decoding error exponent can be viewed as an encoding success exponent in lossy source coding, and the channel correct-decoding exponent can be viewed as an encoding failure exponent in lossy source coding. We then extend the channel exponents to arbitrary D, which corresponds for D > 0 to erasure decoding and for D < 0 to list decoding. For comparison, we also derive the exact random coding exponent for Forney's optimum tradeoff decoder.
1
0
0
0
0
0
Towards Bursting Filter Bubble via Contextual Risks and Uncertainties
A rising topic in computational journalism is how to enhance the diversity of news served to subscribers so as to foster exploration behavior in news reading. Despite the success of preference learning in personalized news recommendation, its over-exploitation causes a filter bubble that isolates readers from opposing viewpoints and hurts the long-term user experience through a lack of serendipity. Since news providers cannot recommend either opposing or diversified opinions if the unpopularity of those articles is confidently predicted, they can only bet on articles whose click-through-rate forecasts involve high variability (risks) or high estimation errors (uncertainties). We propose a novel Bayesian model of uncertainty-aware scoring and ranking for news articles. The Bayesian binary classifier models the probability of success (defined as a news click) as a Beta-distributed random variable conditional on a vector of context (user features, article features, and other contextual features). The posterior of the contextual coefficients can be computed efficiently using a low-rank version of Laplace's method via thin Singular Value Decomposition. Efficiency in the personalized targeting of exceptional articles, which are chosen by each subscriber in the test period, is evaluated on real-world news datasets. The proposed estimator slightly outperformed existing training and scoring algorithms in terms of efficiency in identifying successful outliers.
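The low-rank Laplace idea can be sketched for a plain Bayesian logistic regression (a stand-in for the paper's Beta-distributed click model): find the MAP weights, then approximate the posterior covariance through a thin SVD of the reweighted design matrix. The rank, prior precision, and optimization schedule below are all assumptions.

```python
# Sketch: Laplace approximation with a thin-SVD (low-rank) posterior covariance.
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 2000, 50, 10                      # samples, features, kept SVD rank (assumed)
X = rng.normal(size=(n, p))
w_true = rng.normal(size=p)
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

a = 1.0                                     # prior precision (assumed)
w = np.zeros(p)
for _ in range(200):                        # crude MAP search by gradient ascent
    mu = 1 / (1 + np.exp(-X @ w))
    w += 0.01 * (X.T @ (y - mu) / n - a * w / n)

# Laplace Hessian: H = a*I + X' S X with S = diag(mu(1-mu)); thin SVD of sqrt(S) X.
mu = 1 / (1 + np.exp(-X @ w))
Xs = X * np.sqrt(mu * (1 - mu))[:, None]
_, d, Vt = np.linalg.svd(Xs, full_matrices=False)
V, d = Vt[:k].T, d[:k]                      # keep only the top-k directions

def posterior_var(x):
    # H^{-1} = (1/a) I + V diag(1/(a + d^2) - 1/a) V'   (exact on the kept subspace)
    proj = V.T @ x
    return x @ x / a + proj @ ((1 / (a + d**2) - 1 / a) * proj)

score = X[0] @ w                            # posterior-mean score for one "article"
risk = posterior_var(X[0])                  # the uncertainty used for ranking
print(score, risk)
```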
0
0
0
1
0
0
Nonlinear Loewy Factorizable Algebraic ODEs and Hayman's Conjecture
In this paper, we introduce certain $n$-th order nonlinear Loewy factorizable algebraic ordinary differential equations for the first time and study the growth of their meromorphic solutions in terms of the Nevanlinna characteristic function. It is shown that for generic cases all their meromorphic solutions are elliptic functions or their degenerations, and hence their orders of growth are at most two. Moreover, for the second order factorizable algebraic ODEs, all their meromorphic solutions (except in one case) are found explicitly. This allows us to show that a conjecture proposed by Hayman in 1996 holds for these second order ODEs.
0
0
1
0
0
0
Grounding Symbols in Multi-Modal Instructions
As robots begin to cohabit with humans in semi-structured environments, the need arises to understand instructions involving rich variability---for instance, learning to ground symbols in the physical world. Realistically, this task must cope with small datasets consisting of a particular user's contextual assignment of meaning to terms. We present a method for processing a raw stream of cross-modal input---i.e., linguistic instructions, visual perception of a scene and a concurrent trace of 3D eye tracking fixations---to produce the segmentation of objects with a corresponding association to high-level concepts. To test our framework we present experiments in a table-top object manipulation scenario. Our results show that our model learns the user's notion of colour and shape from a small number of physical demonstrations, generalising to identifying physical referents for novel combinations of the words.
1
0
0
0
0
0
Morphometric analysis in gamma-ray astronomy using Minkowski functionals: II. Joint structure quantification
We pursue a novel morphometric analysis to detect sources in very-high-energy gamma-ray counts maps by structural deviations from the background noise. Because the Minkowski functionals from integral geometry quantify the shape of the counts map itself, the morphometric analysis includes unbiased structure information without prior knowledge about the source. Their distribution provides access to intricate geometric information about the background. We combine techniques from stochastic geometry and statistical physics to determine the joint distribution of all Minkowski functionals. We achieve an accurate characterization of the background structure for large scan windows (with up to $15\times15$ pixels), where the number of microstates varies over up to 64 orders of magnitude. Moreover, in a detailed simulation study, we confirm the statistical significance of features in the background noise and discuss how to correct for trial effects. We also present a local correction of detector effects that can considerably enhance the sensitivity of the analysis. In the third paper of this series, we will use the here derived refined structure characterization for a more sensitive data analysis that can detect formerly undetected sources.
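For intuition, the three 2D Minkowski functionals of a thresholded counts map (area, perimeter, Euler characteristic) can be computed by simple pixel, edge, and vertex counting, as in the sketch below; the Poisson map and threshold are toy stand-ins, and none of the joint-distribution machinery of the paper is reproduced here.

```python
# Sketch: 2D Minkowski functionals of a binary excursion set by direct counting.
import numpy as np

def minkowski_2d(B):
    B = np.pad(B.astype(bool), 1)                     # zero-pad the mask
    area = B.sum()                                    # number of foreground pixels
    # perimeter: unit edges with exactly one foreground side
    perim = (B[:, 1:] ^ B[:, :-1]).sum() + (B[1:, :] ^ B[:-1, :]).sum()
    # Euler characteristic of the union of closed pixels: chi = V - E + F
    V = (B[1:, 1:] | B[1:, :-1] | B[:-1, 1:] | B[:-1, :-1]).sum()
    E = (B[:, 1:] | B[:, :-1]).sum() + (B[1:, :] | B[:-1, :]).sum()
    chi = int(V) - int(E) + int(area)
    return int(area), int(perim), chi

counts = np.random.default_rng(2).poisson(1.0, size=(15, 15))  # toy counts map
print(minkowski_2d(counts >= 2))                      # functionals of the excursion set
```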
0
1
0
0
0
0
An Event-based Fast Movement Detection Algorithm for a Positioning Robot Using POWERLINK Communication
This work develops a tracking system based on an event-based camera. A bio-inspired filtering algorithm that reduces noise and transmitted data while keeping the main features of the scene is implemented in an FPGA, which also serves as a network node. The POWERLINK (IEC 61158) industrial network is used to connect the FPGA to a controller driving a self-developed two-axis servo-controlled robot. The FPGA includes the network protocol so that the event-based camera can be integrated like any other existing network node. The inverse kinematics for the robot are included in the controller. In addition, another network node is used to control pneumatic valves blowing the ball at different speeds and trajectories. To complete the system and provide a comparison, a traditional frame-based camera is also connected to the controller. The imaging data for the tracking system are obtained from either the event-based or the frame-based camera. Results show that the robot can accurately follow the ball using fast image recognition, with the intrinsic advantages of the event-based system (size, price, power). This work shows how new equipment and algorithms can be efficiently integrated in an industrial system, merging commercial industrial equipment with new devices so that new technologies can rapidly enter the industrial field.
1
0
0
0
0
0
$\textsf{S}^3T$: An Efficient Score-Statistic for Spatio-Temporal Surveillance
We present an efficient score statistic, called the $\textsf{S}^3 \textsf{T}$ statistic, to detect the emergence of a spatially and temporally correlated signal from either fixed-sample or sequential data. The signal may cause a mean shift and/or a change in the covariance structure. The score statistic can capture both spatial and temporal structures of the change and hence is particularly powerful in detecting weak signals; it is also computationally efficient. Our main theoretical contributions are accurate analytical approximations of the false alarm rate of the detection procedures, which can be used to calibrate the threshold analytically. Numerical experiments on simulated and real data demonstrate the good performance of our procedure for solar flare detection and water quality monitoring.
0
0
1
1
0
0
Relaxing Integrity Requirements for Attack-Resilient Cyber-Physical Systems
The increase in network connectivity has also resulted in several high-profile attacks on cyber-physical systems. An attacker that manages to access a local network could remotely affect control performance by tampering with sensor measurements delivered to the controller. Recent results have shown that with network-based attacks, such as Man-in-the-Middle attacks, the attacker can introduce an unbounded state estimation error if measurements from a suitable subset of sensors contain false data when delivered to the controller. While these attacks can be addressed with the standard cryptographic tools that ensure data integrity, their continuous use would introduce significant communication and computation overhead. Consequently, we study the effects of intermittent data integrity guarantees on system performance under stealthy attacks. We consider linear estimators equipped with a general type of residual-based intrusion detectors (including $\chi^2$ and SPRT detectors), and show that even when integrity of sensor measurements is enforced only intermittently, the attack impact is significantly limited; specifically, the state estimation error is bounded or the attacker cannot remain stealthy. Furthermore, we present methods to: (1) evaluate the effects of any given integrity enforcement policy in terms of reachable state-estimation errors for any type of stealthy attacks, and (2) design an enforcement policy that provides the desired estimation error guarantees under attack. Finally, on three automotive case studies we show that even with less than 10% of authenticated messages we can ensure satisfactory control performance in the presence of attacks.
1
0
1
0
0
0
Cross-Lingual Cross-Platform Rumor Verification Pivoting on Multimedia Content
With the increasing popularity of smart devices, rumors with multimedia content have become more and more common on social networks. Multimedia information usually makes rumors look more convincing. Therefore, finding an automatic approach to verify rumors with multimedia content is a pressing task. Previous rumor verification research only utilizes multimedia as input features. We propose not to use the multimedia content itself but to find external information on other news platforms pivoting on it. We introduce a new feature set, cross-lingual cross-platform features, that leverages the semantic similarity between the rumors and the external information. When implemented, machine learning methods utilizing such features achieved state-of-the-art rumor verification results.
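A minimal sketch of the pivoting idea: score the semantic similarity between a rumor and external articles and use summary statistics as features. TF-IDF cosine similarity serves here as a monolingual stand-in for the paper's cross-lingual representations, and the texts are invented.

```python
# Sketch: similarity features between a rumor and external articles (toy texts).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

rumor = "celebrity spotted riding a shark near the pier"
external = [
    "local news: no shark sightings reported near the pier this week",
    "viral photo of celebrity and shark confirmed to be edited",
]

vec = TfidfVectorizer().fit([rumor] + external)
sims = cosine_similarity(vec.transform([rumor]), vec.transform(external))[0]
features = {"max_sim": sims.max(), "mean_sim": sims.mean()}  # fed to a classifier
print(features)
```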
1
0
0
0
0
0
Convergence of the Kähler-Ricci iteration
The Ricci iteration is a discrete analogue of the Ricci flow. According to Perelman, the Ricci flow converges to a Kähler-Einstein metric whenever one exists, and it has been conjectured that the Ricci iteration should behave similarly. This article confirms this conjecture. As a special case, this gives a new method of uniformization of the Riemann sphere.
0
0
1
0
0
0
Joint estimation of genetic and parent-of-origin effects using RNA-seq data from human
RNA sequencing allows one to study allelic imbalance of gene expression, which may be due to genetic factors or genomic imprinting. It is desirable to model both genetic and parent-of-origin effects simultaneously to avoid confounding and to improve the power to detect either effect. In studies of experimental crosses, separation of genetic and parent-of-origin effects can be achieved by studying reciprocal crosses of two inbred strains. In contrast, this task is much more challenging for an outbred population such as a human population. To address this challenge, we propose a new framework combining experimental strategies and novel statistical methods. Specifically, we propose to collect genotype data from family trios as well as RNA-seq data from the children of the family trios. We have developed a new statistical method to estimate both genetic and parent-of-origin effects from such data sets. We demonstrate this approach by studying 30 trios of HapMap samples. Our results support some previous findings of imprinted genes and also recover new candidate imprinted genes.
0
0
0
1
0
0
Consistency Analysis for Massively Inconsistent Datasets in Bound-to-Bound Data Collaboration
Bound-to-Bound Data Collaboration (B2BDC) provides a natural framework for addressing both forward and inverse uncertainty quantification problems. In this approach, QOI (quantity of interest) models are constrained by related experimental observations with interval uncertainty. A collection of such models and observations is termed a dataset and carves out a feasible region in the parameter space. If a dataset has a nonempty feasible set, it is said to be consistent. In real-world applications, it is often the case that collections of experiments and observations are inconsistent. Revealing the source of this inconsistency, i.e., identifying which models and/or observations are problematic, is essential before a dataset can be used for prediction. To address this issue, we introduce a constraint relaxation-based approach, entitled the vector consistency measure, for investigating datasets with numerous sources of inconsistency. The benefits of this vector consistency measure over a previous method of consistency analysis are demonstrated in two realistic gas combustion examples.
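The constraint-relaxation idea can be sketched with linear surrogate models: introduce one nonnegative relaxation per observation interval and minimize their sum subject to feasibility, so nonzero relaxations flag the problematic constraints. The models, parameter box, and intervals below are toy assumptions (B2BDC itself works with quadratic surrogates).

```python
# Sketch of a vector consistency measure for linear surrogate models:
# minimally relax each observation interval so the dataset becomes feasible.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # rows: a_i of M_i(x) = a_i . x
lo = np.array([0.8, 0.8, 0.0])                        # observation lower bounds
hi = np.array([1.0, 1.0, 0.5])                        # upper bounds (jointly infeasible)

m, p = A.shape
# decision vars z = [x (p entries), delta (m entries)]; minimize sum(delta)
c = np.concatenate([np.zeros(p), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)],                     #  a_i.x - delta_i <=  hi_i
                 [-A, -np.eye(m)]])                   # -a_i.x - delta_i <= -lo_i
b_ub = np.concatenate([hi, -lo])
bounds = [(-1, 1)] * p + [(0, None)] * m              # parameter box; delta >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("relaxations per observation:", res.x[p:])      # nonzero entries flag culprits
```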
1
0
1
0
0
0
Hydra: a C++11 framework for data analysis in massively parallel platforms
Hydra is a header-only, templated and C++11-compliant framework designed to perform the typical bottleneck calculations found in common HEP data analyses on massively parallel platforms. The framework is implemented on top of the C++11 Standard Library and a variadic version of the Thrust library and is designed to run on Linux systems, using OpenMP, CUDA and TBB enabled devices. This contribution summarizes the main features of Hydra. A basic description of the overall design, functionality and user interface is provided, along with some code examples and measurements of performance.
1
1
0
0
0
0
Analysis of dropout learning regarded as ensemble learning
Deep learning is the state of the art in fields such as visual object recognition and speech recognition. This learning uses a large number of layers and a huge number of units and connections, so overfitting is a serious problem. To avoid this problem, dropout learning has been proposed. Dropout learning neglects some inputs and hidden units during the learning process with probability p; the neglected inputs and hidden units are then combined with the learned network to express the final output. We find that the process of combining the neglected hidden units with the learned network can be regarded as ensemble learning, so we analyze dropout learning from this point of view.
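For a single linear layer the ensemble view is exact in expectation, which a few lines of numpy can verify: averaging forward passes over random dropout masks converges to the deterministic pass with weights scaled by the keep probability (with nonlinearities this becomes an approximation). The layer sizes are arbitrary.

```python
# Sketch: dropout as an implicit ensemble. Each mask defines one sub-network;
# averaging many masked forward passes approaches the weight-scaled pass.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=20)                  # one input vector
W = rng.normal(size=(10, 20))            # weights of a toy linear layer
p = 0.5                                  # probability of dropping a unit

# Monte Carlo over dropout masks = averaging an ensemble of sub-networks
samples = [W @ (x * (rng.random(20) >= p)) for _ in range(20000)]
mc_mean = np.mean(samples, axis=0)

# Standard test-time rule: keep all units, scale inputs by the keep probability
scaled = W @ (x * (1 - p))

print(np.max(np.abs(mc_mean - scaled)))  # small: scaled pass ~ ensemble mean
```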
1
0
0
1
0
0
Spatially-resolved Brillouin spectroscopy reveals biomechanical changes in early ectatic corneal disease and post-crosslinking in vivo
Mounting evidence connects the biomechanical properties of tissues to the development of eye diseases such as keratoconus, a common disease in which the cornea thins and bulges into a conical shape. However, measuring biomechanical changes in vivo with sufficient sensitivity for disease detection has proved challenging. Here, we present a first large-scale study (~200 subjects, including normal and keratoconus patients) using Brillouin light-scattering microscopy to measure longitudinal modulus in corneal tissues with high sensitivity and spatial resolution. Our results in vivo provide evidence of biomechanical inhomogeneity at the onset of keratoconus and suggest that biomechanical asymmetry between the left and right eyes may presage disease development. We additionally measure the stiffening effect of corneal crosslinking treatment in vivo for the first time. Our results demonstrate the promise of Brillouin microscopy for diagnosis and treatment of keratoconus, and potentially other diseases.
0
0
0
0
1
0
Combining Symbolic Execution and Model Checking to Verify MPI Programs
Message Passing Interface (MPI) is the standard paradigm of programming in high performance computing. MPI programming takes significant effort, and is error-prone. Thus, effective tools for analyzing MPI programs are much needed. On the other hand, analyzing MPI programs itself is challenging because of non-determinism caused by program inputs and non-deterministic operations. Existing approaches for analyzing MPI programs either do not handle inputs or fail to support programs with mixed blocking and non-blocking operations. This paper presents MPI symbolic verifier (MPI-SV), the first symbolic-execution-based tool for verifying MPI programs with both blocking and non-blocking operations. To ensure soundness, we propose a blocking-driven matching algorithm to safely handle non-deterministic operations, and a method to soundly and completely model the equivalent behavior of a program execution path. The models of MPI program paths are generated on-the-fly during symbolic execution, and verified w.r.t. the expected properties by model checking. To improve scalability, MPI-SV uses the results of model checking to prune redundant paths. We have implemented MPI-SV and evaluated it on the verification of deadlock freedom for 108 real-world MPI tasks. The pure symbolic execution based technique can successfully verify 61 out of the 108 tasks (56%) within one hour, while in comparison, MPI-SV can verify 94 tasks (87%), a 31% improvement. On average, MPI-SV also achieves 7.25X speedup on verifying deadlock freedom and 2.64X speedup on finding deadlocks. These experimental results are promising, and demonstrate MPI-SV's effectiveness and efficiency.
1
0
0
0
0
0
An Improved Modified Cholesky Decomposition Method for Inverse Covariance Matrix Estimation
The modified Cholesky decomposition is commonly used for inverse covariance matrix estimation given a specified order of random variables. However, the order of variables is often not available or cannot be pre-determined. Hence, we propose a novel estimator to address the variable order issue in the modified Cholesky decomposition for estimating the sparse inverse covariance matrix. The key idea is to effectively combine a set of estimates obtained from multiple permutations of variable orders, and to efficiently encourage the sparse structure of the resultant estimate by applying a thresholding technique to the combined Cholesky factor matrix. Consistency of the proposed estimator is established under some weak regularity conditions. Simulation studies show the superior performance of the proposed method in comparison with several existing approaches. We also apply the proposed method to linear discriminant analysis for classification in real-data examples.
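A minimal sketch of the estimator's mechanics, under simplifying assumptions: for each random variable order, build Omega = T' D^{-1} T by regressing each variable on its predecessors, average the estimates mapped back to the original order, then sparsify. For brevity the hard threshold below is applied to the averaged Omega, whereas the paper thresholds the combined Cholesky factor.

```python
# Sketch: permutation-averaged modified Cholesky inverse covariance estimate.
import numpy as np

def mod_chol_omega(X):
    n, p = X.shape
    T, d = np.eye(p), np.empty(p)
    d[0] = X[:, 0].var()
    for j in range(1, p):
        Z = X[:, :j]
        phi = np.linalg.lstsq(Z, X[:, j], rcond=None)[0]   # regress on predecessors
        T[j, :j] = -phi
        d[j] = (X[:, j] - Z @ phi).var()                   # innovation variance
    return T.T @ np.diag(1 / d) @ T                        # Omega = T' D^{-1} T

rng = np.random.default_rng(3)
p, n = 8, 400
X = rng.normal(size=(n, p)); X[:, 1] += 0.9 * X[:, 0]      # one strong partial link

omegas = []
for _ in range(30):                                        # multiple variable orders
    perm = rng.permutation(p)
    inv = np.argsort(perm)
    O = mod_chol_omega(X[:, perm])
    omegas.append(O[np.ix_(inv, inv)])                     # map back to original order
avg = np.mean(omegas, axis=0)
avg[np.abs(avg) < 0.1] = 0.0                               # thresholding for sparsity
print(np.round(avg, 2))
```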
0
0
0
1
0
0
Self-exciting Point Processes: Infections and Implementations
This is a comment on Reinhart's "Review of Self-Exciting Spatio-Temporal Point Processes and Their Applications" (arXiv:1708.02647v1). I contribute some experiences from modelling the spread of infectious diseases. Furthermore, I try to complement the review with regard to the availability of software for the described models, which I think is essential in "paving the way for new uses".
0
0
0
1
0
0
Magnetism and charge density waves in RNiC$_2$ (R = Ce, Pr, Nd)
We have compared the magnetic, transport, galvanomagnetic and specific heat properties of CeNiC$_2$, PrNiC$_2$ and NdNiC$_2$ to study the interplay between charge density waves and magnetism in these compounds. The negative magnetoresistance in NdNiC$_2$ is discussed in terms of the partial destruction of charge density waves and an irreversible phase transition stabilized by the field induced ferromagnetic transformation is reported. For PrNiC$_2$ we demonstrate that the magnetic field initially weakens the CDW state, due to the Zeeman splitting of conduction bands. However, the Fermi surface nesting is enhanced at a temperature related to the magnetic anomaly.
0
1
0
0
0
0
Conjoined constraints on modified gravity from the expansion history and cosmic growth
In this paper we present conjoined constraints on several cosmological models from the expansion history $H(z)$ and cosmic growth $f\sigma_8(z)$. The models we study include the CPL $w_0w_a$ parametrization, the Holographic Dark Energy (HDE) model, the Time varying vacuum ($\Lambda_t$CDM) model, the Dvali, Gabadadze and Porrati (DGP) and Finsler-Randers (FRDE) models, a power law $f(T)$ model and finally the Hu-Sawicki $f(R)$ model. In all cases we perform a simultaneous fit to the SnIa, CMB, BAO, $H(z)$ and growth data, while also following the conjoined visualization of $H(z)$ and $f\sigma_8(z)$ as in Linder (2017). Furthermore, we introduce the Figure of Merit (FoM) in the $H(z)-f\sigma_8(z)$ parameter space as a way to constrain models that jointly fit both probes well. We use both the latest $H(z)$ and $f\sigma_8(z)$ data, but also LSST-like mocks with $1\%$ measurements and we find that the conjoined method of constraining the expansion history and cosmic growth simultaneously is able not only to place stringent constraints on these parameters but also to provide an easy visual way to discriminate cosmological models. Finally, we confirm the existence of a tension between the growth rate and Planck CMB data and we find that the FoM in the conjoined parameter space of $H(z)-f\sigma_8(z)$ can be used to discriminate between the $\Lambda$CDM model and certain classes of modified gravity models, namely the DGP and $f(T)$.
0
1
0
0
0
0
Coaxial collisions of a vortex ring and a sphere in an inviscid incompressible fluid
The dynamics of a circular thin vortex ring and a sphere moving along the symmetry axis of the ring in an inviscid incompressible fluid is studied on the basis of Euler's equations of motion. The equations of motion for position and radius of the vortex ring and those for position and velocity of the sphere are coupled by hydrodynamic interactions. The equations are cast in Hamiltonian form, from which it is seen that total energy and momentum are conserved. The four Hamiltonian equations of motion are solved numerically for a variety of initial conditions.
0
1
0
0
0
0
Cosmic viscosity as a remedy for tension between PLANCK and LSS data
Measurements of $\sigma_8$ from large scale structure observations show a discordance with the extrapolated $\sigma_8$ from Planck CMB parameters using $\Lambda$CDM cosmology. A similar discordance is found in the values of $H_0$ and $\Omega_m$. In this paper, we show that the presence of viscosity in cold dark matter, shear or bulk or a combination of both, can remove the above mentioned conflicts simultaneously. This indicates that the data from the Planck CMB observation and different LSS observations prefer a small but non-zero amount of viscosity in the cold dark matter fluid.
0
1
0
0
0
0
A probabilistic approach to emission-line galaxy classification
We invoke a Gaussian mixture model (GMM) to jointly analyse two traditional emission-line classification schemes of galaxy ionization sources: the Baldwin-Phillips-Terlevich (BPT) and $\rm W_{H\alpha}$ vs. [NII]/H$\alpha$ (WHAN) diagrams, using spectroscopic data from the Sloan Digital Sky Survey Data Release 7 and SEAGal/STARLIGHT datasets. We apply a GMM to empirically define classes of galaxies in a three-dimensional space spanned by the $\log$ [OIII]/H$\beta$, $\log$ [NII]/H$\alpha$, and $\log$ EW(H${\alpha}$) optical parameters. The best-fit GMM based on several statistical criteria suggests a solution around four Gaussian components (GCs), which can explain up to 97 per cent of the data variance. Using elements of information theory, we compare each GC to its respective astronomical counterpart. GC1 and GC4 are associated with star-forming galaxies, suggesting the need to define a new starburst subgroup. GC2 is associated with BPT's Active Galactic Nuclei (AGN) class and WHAN's weak AGN class. GC3 is associated with BPT's composite class and WHAN's strong AGN class. Conversely, there is no statistical evidence -- based on four GCs -- for the existence of a Seyfert/LINER dichotomy in our sample. Notwithstanding, the inclusion of an additional GC5 unravels it. GC5 appears to be associated with the LINER and passive galaxies in the BPT and WHAN diagrams, respectively. Subtleties aside, we demonstrate the potential of our methodology to recover/unravel different objects inside the wilderness of astronomical datasets, without losing the ability to convey physically interpretable results. The probabilistic classifications from the GMM analysis are publicly available within the COINtoolbox (this https URL\_Catalogue/).
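The model-selection step can be sketched with scikit-learn: fit GMMs with an increasing number of components in the 3D line-ratio space and choose by an information criterion. The two synthetic blobs below merely stand in for the SDSS/STARLIGHT measurements, and BIC stands in for the paper's fuller set of criteria.

```python
# Sketch: GMM fits in a 3-D line-ratio space with BIC model selection (toy data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# toy stand-ins for (log [OIII]/Hb, log [NII]/Ha, log EW(Ha))
sf  = rng.normal([-0.3, -0.5, 1.5], 0.15, size=(500, 3))   # "star-forming" blob
agn = rng.normal([ 0.5,  0.1, 0.8], 0.20, size=(300, 3))   # "AGN-like" blob
X = np.vstack([sf, agn])

models = [GaussianMixture(k, random_state=0).fit(X) for k in range(1, 7)]
bics = [m.bic(X) for m in models]
best = models[int(np.argmin(bics))]
print("components chosen by BIC:", best.n_components)
print("class fractions:", np.bincount(best.predict(X)) / len(X))
```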
0
1
0
1
0
0
Quantum repeaters with individual rare-earth ions at telecommunication wavelengths
We present a quantum repeater scheme that is based on individual erbium and europium ions. Erbium ions are attractive because they emit photons at telecommunication wavelength, while europium ions offer exceptional spin coherence for long-term storage. Entanglement between distant erbium ions is created by photon detection. The photon emission rate of each erbium ion is enhanced by a microcavity with high Purcell factor, as has recently been demonstrated. Entanglement is then transferred to nearby europium ions for storage. Gate operations between nearby ions are performed using dynamically controlled electric-dipole coupling. These gate operations allow entanglement swapping to be employed in order to extend the distance over which entanglement is distributed. The deterministic character of the gate operations allows improved entanglement distribution rates in comparison to atomic ensemble-based protocols. We also propose an approach that utilizes multiplexing in order to enhance the entanglement distribution rate.
0
1
0
0
0
0
Questions and dependency in intuitionistic logic
In recent years, the logic of questions and dependencies has been investigated in the closely related frameworks of inquisitive logic and dependence logic. These investigations have assumed classical logic as the background logic of statements, and added formulas expressing questions and dependencies to this classical core. In this paper, we broaden the scope of these investigations by studying questions and dependency in the context of intuitionistic logic. We propose an intuitionistic team semantics, where teams are embedded within intuitionistic Kripke models. The associated logic is a conservative extension of intuitionistic logic with questions and dependence formulas. We establish a number of results about this logic, including a normal form result, a completeness result, and translations to classical inquisitive logic and modal dependence logic.
1
0
1
0
0
0
Translations: generalizing relative expressiveness between logics
There is a strong demand for precise means of comparing logics in terms of expressiveness, both from theoretical and from application areas. The aim of this paper is to propose a sufficiently general and reasonable formal criterion for expressiveness, so as to apply not only to model-theoretic logics, but also to Tarskian and proof-theoretic logics. For model-theoretic logics there is a standard framework of relative expressiveness, based on the capacity of characterizing structures, and a straightforward formal criterion issuing from it. The problem is that it only allows the comparison of logics defined within the same class of models. The urge for a broader framework of expressiveness is not new. Nevertheless, the enterprise is complex and a reasonable model-theoretic formal criterion is still wanting. Recently two criteria appeared in this wider framework, one from García-Matos & Väänänen and the other from L. Kuijer. We argue that they are not adequate. Their limitations are analyzed, and we propose to move to an even broader framework lacking model-theoretic notions, which we call "translational expressiveness". There is already a criterion in this latter framework by Mossakowski et al.; however, it turned out to be too lax. We propose some adequacy criteria for expressiveness, and a formal criterion of translational expressiveness complying with them is given.
1
0
1
0
0
0
Learning Aided Optimization for Energy Harvesting Devices with Outdated State Information
This paper considers utility optimal power control for energy harvesting wireless devices with a finite capacity battery. The distribution information of the underlying wireless environment and harvestable energy is unknown and only outdated system state information is known at the device controller. This scenario shares similarity with Lyapunov opportunistic optimization and online learning but is different from both. By a novel combination of Zinkevich's online gradient learning technique and the drift-plus-penalty technique from Lyapunov opportunistic optimization, this paper proposes a learning-aided algorithm that achieves utility within $O(\epsilon)$ of the optimal, for any desired $\epsilon>0$, by using a battery with an $O(1/\epsilon)$ capacity. The proposed algorithm has low complexity and makes power investment decisions based on system history, without requiring knowledge of the system state or its probability distribution.
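A toy sketch of the drift-plus-penalty flavor of such a controller, with outdated state: each slot maximizes V times a utility estimate (computed from the previous slot's channel gain) plus a virtual-queue term that keeps the battery near a reference level. The log utility, exponential gains, harvesting model, and grid search are all assumptions, not the paper's algorithm.

```python
# Sketch: drift-plus-penalty style power control using outdated channel state.
import numpy as np

rng = np.random.default_rng(5)
T, p_max, V = 5000, 1.0, 20.0
battery, cap = 50.0, 100.0
theta = cap / 2                              # battery reference level (assumed)
g_prev = 1.0                                 # outdated channel gain
total_u = 0.0

for t in range(T):
    Q = battery - theta                      # virtual queue around the reference
    # choose p in [0, min(p_max, battery)] maximizing V*log(1 + g_prev*p) + Q*p
    grid = np.linspace(0.0, min(p_max, battery), 200)
    p = grid[np.argmax(V * np.log1p(g_prev * grid) + Q * grid)]
    g = rng.exponential(1.0)                 # true gain, revealed after the decision
    total_u += np.log1p(g * p)               # realized utility
    e = rng.uniform(0.0, 0.6)                # harvested energy this slot
    battery = min(cap, battery - p + e)      # finite-capacity battery update
    g_prev = g

print("average utility per slot:", total_u / T)
```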
1
0
0
0
0
0
Absence of long range order in the frustrated magnet SrDy$_2$O$_4$ due to trapped defects from a dimensionality crossover
Magnetic frustration and low dimensionality can prevent long range magnetic order and lead to exotic correlated ground states. SrDy$_2$O$_4$ consists of magnetic Dy$^{3+}$ ions forming magnetically frustrated zig-zag chains along the c-axis and shows no long range order to temperatures as low as $T=60$ mK. We carried out neutron scattering and AC magnetic susceptibility measurements using powder and single crystals of SrDy$_2$O$_4$. Diffuse neutron scattering indicates strong one-dimensional (1D) magnetic correlations along the chain direction that can be qualitatively accounted for by the axial next-nearest neighbour Ising (ANNNI) model with nearest-neighbor and next-nearest-neighbor exchange $J_1=0.3$ meV and $J_2=0.2$ meV, respectively. Three-dimensional (3D) correlations become important below $T^*\approx0.7$ K. At $T=60$ mK, the short range correlations are characterized by a putative propagation vector $\textbf{k}_{1/2}=(0,\frac{1}{2},\frac{1}{2})$. We argue that the absence of long range order arises from the presence of slowly decaying 1D domain walls that are trapped due to 3D correlations. This stabilizes a low-temperature phase without long range magnetic order, but with well-ordered chain segments separated by slowly-moving domain walls.
0
1
0
0
0
0
A class of multi-resolution approximations for large spatial datasets
Gaussian processes are popular and flexible models for spatial, temporal, and functional data, but they are computationally infeasible for large datasets. We discuss Gaussian-process approximations that use basis functions at multiple resolutions to achieve fast inference and that can (approximately) represent any spatial covariance structure. We consider two special cases of this multi-resolution-approximation framework, a taper version and a domain-partitioning (block) version. We describe theoretical properties and inference procedures, and study the computational complexity of the methods. Numerical comparisons and an application to satellite data are also provided.
0
0
0
1
0
0
Origin of Operating Voltage Increase in InGaN-based Light-emitting Diodes under High Injection: Phase Space Filling Effect on Forward Voltage Characteristics
As an attempt to further elucidate the operating voltage increase in InGaN-based light-emitting diodes (LEDs), the radiative and nonradiative current components are separately analyzed in combination with the Shockley diode equation. Through these analyses, we show that the increase in operating voltage is caused by the phase space filling effect under high injection. We also show that the classical Shockley diode equation is insufficient to comprehensively explain the I-V curve of LED devices, since the transport and recombination characteristics of the respective current components are fundamentally different. Hence, we propose a modified Shockley equation suitable for modern LED devices. Our analysis gives new insight into the cause of the wall-plug-efficiency drop influenced by such factors as the efficiency droop and the high operating voltage in InGaN LEDs.
0
1
0
0
0
0
Emergence of Selective Invariance in Hierarchical Feed Forward Networks
Many theories have emerged which investigate how invariance is generated in hierarchical networks through simple schemes such as max and mean pooling. The restriction to max/mean pooling in theoretical and empirical studies has diverted attention away from a more general way of generating invariance to nuisance transformations. We conjecture that hierarchically building selective invariance (i.e. carefully choosing the range of the transformation to be invariant to at each layer of a hierarchical network) is important for pattern recognition. We utilize a novel pooling layer called adaptive pooling to find linear pooling weights within networks. These networks with the learnt pooling weights have performances on object categorization tasks that are comparable to max/mean pooling networks. Interestingly, adaptive pooling can converge to mean pooling (when initialized with random pooling weights), find more general linear pooling schemes or even decide not to pool at all. We illustrate the general notion of selective invariance through object categorization experiments on large-scale datasets such as SVHN and ILSVRC 2012.
1
0
0
0
0
0
Virtual retraction and Howson's theorem in pro-$p$ groups
We show that for every finitely generated closed subgroup $K$ of a non-solvable Demushkin group $G$, there exists an open subgroup $U$ of $G$ containing $K$, and a continuous homomorphism $\tau \colon U \to K$ satisfying $\tau(k) = k$ for every $k \in K$. We prove that the intersection of a pair of finitely generated closed subgroups of a Demushkin group is finitely generated (giving an explicit bound on the number of generators). Furthermore, we show that these properties of Demushkin groups are preserved under free pro-$p$ products, and deduce that Howson's theorem holds for the Sylow subgroups of the absolute Galois group of a number field. Finally, we confirm two conjectures of Ribes, thus classifying the finitely generated pro-$p$ M. Hall groups.
0
0
1
0
0
0
Convolutional Dictionary Learning: A Comparative Review and New Algorithms
Convolutional sparse representations are a form of sparse representation with a dictionary that has a structure that is equivalent to convolution with a set of linear filters. While effective algorithms have recently been developed for the convolutional sparse coding problem, the corresponding dictionary learning problem is substantially more challenging. Furthermore, although a number of different approaches have been proposed, the absence of thorough comparisons between them makes it difficult to determine which of them represents the current state of the art. The present work both addresses this deficiency and proposes some new approaches that outperform existing ones in certain contexts. A thorough set of performance comparisons indicates a very wide range of performance differences among the existing and proposed methods, and clearly identifies those that are the most effective.
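The convolutional sparse coding subproblem that underlies the dictionary learning discussed above can be sketched with a plain ISTA loop in 1D: gradient steps through correlation with each filter followed by soft thresholding. The filters, signal, step size, and boundary handling are illustrative simplifications.

```python
# Sketch: ISTA for 1-D convolutional sparse coding (toy filters and signal).
import numpy as np

rng = np.random.default_rng(6)
sig_len, flt_len, n_flt, lam = 256, 9, 4, 0.1
D = rng.normal(size=(n_flt, flt_len))
D /= np.linalg.norm(D, axis=1, keepdims=True)           # unit-norm filters
spikes = np.where(rng.random(sig_len) < 0.02, 1.0, 0.0)
s = np.convolve(spikes, D[0], mode="same")              # signal from filter 0

X = np.zeros((n_flt, sig_len))                          # coefficient maps
L = 10.0                                                # assumed Lipschitz bound
for _ in range(100):                                    # ISTA iterations
    recon = sum(np.convolve(X[m], D[m], mode="same") for m in range(n_flt))
    resid = recon - s
    for m in range(n_flt):
        # gradient of the data term w.r.t. map m: correlation with the filter
        grad = np.convolve(resid, D[m][::-1], mode="same")
        Xm = X[m] - grad / L
        X[m] = np.sign(Xm) * np.maximum(np.abs(Xm) - lam / L, 0.0)  # soft threshold

recon = sum(np.convolve(X[m], D[m], mode="same") for m in range(n_flt))
print("sparsity:", float((np.abs(X) > 1e-8).mean()),
      "residual:", float(np.linalg.norm(recon - s)))
```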
1
0
0
1
0
0
Replication Ethics
Suppose some future technology enables the same consciously experienced human life to be repeated, identically or nearly so, N times, in series or in parallel. Is this roughly N times as valuable as enabling the same life once, because each life has value and values are additive? Or is it of roughly equal value as enabling the life once, because only one life is enabled, albeit in a physically unusual way? Does it matter whether the lives are contemporaneous or successive? We argue that these questions highlight a hitherto neglected facet of population ethics that may become relevant in the not necessarily far distant future.
1
1
0
0
0
0
PSYM-WIDE: a survey for large-separation planetary-mass companions to late spectral type members of young moving groups
We present the results of a direct-imaging survey for very large separation ($>$100 au), companions around 95 nearby young K5-L5 stars and brown dwarfs. They are high-likelihood candidates or confirmed members of the young ($\lessapprox$150 Myr) $\beta$ Pictoris and AB Doradus moving groups (ABDMG) and the TW Hya, Tucana-Horologium, Columba, Carina, and Argus associations. Images in $i'$ and $z'$ filters were obtained with the Gemini Multi-Object Spectrograph (GMOS) on Gemini South to search for companions down to an apparent magnitude of $z'\sim$22-24 at separations $\gtrapprox$20" from the targets and in the remainder of the wide 5.5' $\times$ 5.5' GMOS field of view. This allowed us to probe the most distant region where planetary-mass companions could be gravitationally bound to the targets. This region was left largely unstudied by past high-contrast imaging surveys, which probed much closer-in separations. This survey led to the discovery of a planetary-mass (9-13 $\,M_{\rm{Jup}}$) companion at 2000 au from the M3V star GU Psc, a highly probable member of ABDMG. No other substellar companions were identified. These results allowed us to constrain the frequency of distant planetary-mass companions (5-13 $\,M_{\rm{Jup}}$) to 0.84$_{-0.66}^{+6.73}$% (95% confidence) at semimajor axes between 500 and 5000 au around young K5-L5 stars and brown dwarfs. This is consistent with other studies suggesting that gravitationally bound planetary-mass companions at wide separations from low-mass stars are relatively rare.
0
1
0
0
0
0
Boolean dimension and tree-width
The dimension is a key measure of complexity of partially ordered sets. Small dimension allows succinct encoding. Indeed if $P$ has dimension $d$, then to know whether $x \leq y$ in $P$ it is enough to check whether $x\leq y$ in each of the $d$ linear extensions of a witnessing realizer. Focusing on the encoding aspect Nešetřil and Pudlák defined a more expressive version of dimension. A poset $P$ has boolean dimension at most $d$ if it is possible to decide whether $x \leq y$ in $P$ by looking at the relative position of $x$ and $y$ in only $d$ permutations of the elements of $P$. We prove that posets with cover graphs of bounded tree-width have bounded boolean dimension. This stays in contrast with the fact that there are posets with cover graphs of tree-width three and arbitrarily large dimension. This result might be a step towards a resolution of the long-standing open problem: Do planar posets have bounded boolean dimension?
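The encoding idea is easy to make concrete. The sketch below decodes the (strict) order relation of a 4-element fence from d = 2 permutations using the AND of the "x before y" bits, which is exactly ordinary dimension; boolean dimension generalizes this by allowing an arbitrary boolean function of those bits. The poset and realizer are worked out by hand.

```python
# Decode a poset's strict order from d = 2 permutations (distinct elements only).
def before(perm, x, y):
    return perm.index(x) < perm.index(y)

# 4-element fence: a < b, c < b, c < d; all other distinct pairs incomparable.
realizer = [["a", "c", "b", "d"], ["c", "d", "a", "b"]]   # two linear extensions

def less(x, y):                  # ordinary-dimension decoding: the function is AND
    return all(before(L, x, y) for L in realizer)

assert less("a", "b") and less("c", "b") and less("c", "d")
assert not less("a", "d") and not less("d", "a")          # incomparable pair
assert not less("b", "a")                                 # comparabilities directed
print("fence order recovered from 2 permutations")
```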
1
0
0
0
0
0
Drawing Planar Graphs with Few Geometric Primitives
We define the \emph{visual complexity} of a plane graph drawing to be the number of basic geometric objects needed to represent all its edges. In particular, one object may represent multiple edges (e.g., one needs only one line segment to draw a path with an arbitrary number of edges). Let $n$ denote the number of vertices of a graph. We show that trees can be drawn with $3n/4$ straight-line segments on a polynomial grid, and with $n/2$ straight-line segments on a quasi-polynomial grid. Further, we present an algorithm for drawing planar 3-trees with $(8n-17)/3$ segments on an $O(n)\times O(n^2)$ grid. This algorithm can also be used with a small modification to draw maximal outerplanar graphs with $3n/2$ edges on an $O(n)\times O(n^2)$ grid. We also study the problem of drawing maximal planar graphs with circular arcs and provide an algorithm to draw such graphs using only $(5n - 11)/3$ arcs. This is significantly smaller than the lower bound of $2n$ for line segments for a nontrivial graph class.
1
0
0
0
0
0
On the nature of the candidate T-Tauri star V501 Aurigae
We report new multi-colour photometry and high-resolution spectroscopic observations of the long-period variable V501 Aur, previously considered to be a weak-lined T-Tauri star belonging to the Taurus-Auriga star-forming region. The spectroscopic observations reveal that V501 Aur is a single-lined spectroscopic binary system with a 68.8-day orbital period, a slightly eccentric orbit (e ~ 0.03), and a systemic velocity discrepant from the mean of Taurus-Auriga. The photometry shows quasi-periodic variations on a different, ~55-day timescale that we attribute to rotational modulation by spots. No eclipses are seen. The visible object is a rapidly rotating (vsini ~ 25 km/s) early K star, which along with the rotation period implies it must be large (R > 26.3 Rsun), as suggested also by spectroscopic estimates indicating a low surface gravity. The parallax from the Gaia mission and other independent estimates imply a distance much greater than that of the Taurus-Auriga region, consistent with the giant interpretation. Taken together with a re-evaluation of the LiI~$\lambda$6707 and H$\alpha$ lines, this evidence shows that V501 Aur is not a T-Tauri star, but is instead a field binary with a giant primary far behind the Taurus-Auriga star-forming region. The large mass function from the spectroscopic orbit and a comparison with stellar evolution models suggest the secondary may be an early-type main-sequence star.
0
1
0
0
0
0
3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks
The success of various applications including robotics, digital content creation, and visualization demand a structured and abstract representation of the 3D world from limited sensor data. Inspired by the nature of human perception of 3D shapes as a collection of simple parts, we explore such an abstract shape representation based on primitives. Given a single depth image of an object, we present 3D-PRNN, a generative recurrent neural network that synthesizes multiple plausible shapes composed of a set of primitives. Our generative model encodes symmetry characteristics of common man-made objects, preserves long-range structural coherence, and describes objects of varying complexity with a compact representation. We also propose a method based on Gaussian Fields to generate a large scale dataset of primitive-based shape representations to train our network. We evaluate our approach on a wide range of examples and show that it outperforms nearest-neighbor based shape retrieval methods and is on-par with voxel-based generative models while using a significantly reduced parameter space.
1
0
0
1
0
0
Counting the number of distinct distances of elements in valued field extensions
The defect of valued field extensions is a major obstacle in open problems in resolution of singularities and in the model theory of valued fields, whenever positive characteristic is involved. We continue the detailed study of defect extensions through the tool of distances, which measure how well an element in an immediate extension can be approximated by elements from the base field. We show that in several situations the number of essentially distinct distances in fixed extensions, or even just over a fixed base field, is finite, and we compute upper bounds. We apply this to the special case of valued function fields over perfect base fields. This provides important information used in forthcoming research on relative resolution problems.
0
0
1
0
0
0
On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses
Neural networks are known to be vulnerable to adversarial examples. In this note, we evaluate the two white-box defenses that appeared at CVPR 2018 and find they are ineffective: when applying existing techniques, we can reduce the accuracy of the defended models to 0%.
0
0
0
1
0
0