Dataset columns: text (string, lengths 57 to 2.88k); labels (sequence of length 6).
Title: Limiting Behaviour of the Teichmüller Harmonic Map Flow, Abstract: In this paper we study the Teichmüller harmonic map flow as introduced by Rupflin and Topping [15]. It evolves pairs of maps and metrics $(u,g)$ into branched minimal immersions, or equivalently into weakly conformal harmonic maps, where $u$ maps from a fixed closed surface $M$ with metric $g$ to a general target manifold $N$. It arises naturally as a gradient flow for the Dirichlet energy functional viewed as acting on equivalence classes of such pairs, obtained from the invariance under diffeomorphisms and conformal changes of the domain metric. In the construction of a suitable inner product for the gradient flow a choice of relative weight of the map tangent directions and metric tangent directions is made, which manifests itself in the appearance of a coupling constant $\eta$ in the flow equations. We study limits of the flow as $\eta$ approaches 0, corresponding to slowing down the evolution of the metric. We first show that given a smooth harmonic map flow on a fixed time interval, the Teichmüller harmonic map flows starting at the same initial data converge uniformly to the underlying harmonic map flow when $\eta \downarrow 0$. Next we consider a rescaling of time, which increases the speed of the map evolution while evolving the metric at a constant rate. We show that under appropriate topological assumptions, in the limit the rescaled flows converge to a unique flow through harmonic maps with the metric evolving in the direction of the real part of the Hopf differential.
[ 0, 0, 1, 0, 0, 0 ]
Title: Cartesian Fibrations and Representability, Abstract: In higher category theory, we use fibrations to model presheaves. In this paper we introduce a new method to build such fibrations. Concretely, for suitable reflective subcategories of simplicial spaces, we build fibrations that model presheaves valued in that subcategory. Using this we can build Cartesian fibrations, but we can also model presheaves valued in Segal spaces. Additionally, using this new approach, we define representable Cartesian fibrations, generalizing representable presheaves valued in spaces, and show they have similar properties.
[ 0, 0, 1, 0, 0, 0 ]
Title: Signal coupling to embedded pitch adapters in silicon sensors, Abstract: We have examined the effects of embedded pitch adapters on signal formation in n-substrate silicon microstrip sensors with data from beam tests and simulation. According to simulation, the presence of the pitch adapter metal layer changes the electric field inside the sensor, resulting in slowed signal formation on the nearby strips and a pick-up effect on the pitch adapter. This can result in an inefficiency to detect particles passing through the pitch adapter region. All these effects have been observed in the beam test data.
[ 0, 1, 0, 0, 0, 0 ]
Title: Modular System for Shelves and Coasts (MOSSCO v1.0) - a flexible and multi-component framework for coupled coastal ocean ecosystem modelling, Abstract: Shelf and coastal sea processes extend from the atmosphere through the water column and into the sea bed. These processes are driven by physical, chemical, and biological interactions at local scales, and they are influenced by transport and cross strong spatial gradients. The linkages between domains and many different processes are not adequately described in current model systems. Their limited integration level in part reflects a lack of modularity and flexibility; this shortcoming hinders the exchange of data and model components and has historically imposed the supremacy of specific physical driver models. We here present the Modular System for Shelves and Coasts (MOSSCO, this http URL), a novel domain and process coupling system tailored, but not limited, to the coupling challenges of, and applications in, the coastal ocean. MOSSCO builds on the existing coupling technology Earth System Modeling Framework and on the Framework for Aquatic Biogeochemical Models, thereby creating a unique level of modularity in both domain and process coupling; the new framework adds rich metadata, flexible scheduling, configurations that allow several tens of models to be coupled, and tested setups for coastal coupled applications. In that way, MOSSCO addresses the technology needs of a growing marine coastal Earth System community that encompasses very different disciplines, numerical tools, and research questions.
[ 0, 1, 0, 0, 0, 0 ]
Title: Radar, without tears, Abstract: A brief introduction to radar: principles, Doppler effect, antennas, waveforms, power budget - and future radars. [13 pages]
[ 0, 1, 0, 0, 0, 0 ]
Title: A Multi-Stage Algorithm for Acoustic Physical Model Parameters Estimation, Abstract: One of the challenges in computational acoustics is the identification of models that can simulate and predict the physical behavior of a system generating an acoustic signal. Whenever such models are used for commercial applications, an additional constraint is the time-to-market, making automation of the sound design process desirable. In previous works, a computational sound design approach has been proposed for the parameter estimation problem involving timbre matching by deep learning, which was applied to the synthesis of pipe organ tones. In this work we refine previous results by introducing the former approach in a multi-stage algorithm that also adds heuristics and a stochastic optimization method operating on objective cost functions based on psychoacoustics. The optimization method is shown to be able to refine the first estimate given by the deep learning approach and to substantially improve the objective metrics, with the additional benefit of reducing the sound design process time. Subjective listening tests are also conducted to gather additional insights into the results.
[ 1, 0, 0, 1, 0, 0 ]
Title: On convergence rate of stochastic proximal point algorithm without strong convexity, smoothness or bounded gradients, Abstract: Significant parts of the recent learning literature on stochastic optimization algorithms focused on the theoretical and practical behaviour of stochastic first order schemes under different convexity properties. Due to its simplicity, the traditional method of choice for most supervised machine learning problems is the stochastic gradient descent (SGD) method. Many iteration improvements and accelerations have been added to the pure SGD in order to boost its convergence in various (strong) convexity settings. However, the Lipschitz gradient continuity or bounded gradients assumptions are an essential requirement for most existing stochastic first-order schemes. In this paper, novel convergence results are presented for the stochastic proximal point algorithm in different settings. In particular, without any strong convexity, smoothness or bounded gradients assumptions, we show that a slightly modified quadratic growth assumption is sufficient to guarantee an $\mathcal{O}\left(\frac{1}{k}\right)$ convergence rate for the stochastic proximal point algorithm, in terms of the distance to the optimal set. Furthermore, linear convergence is obtained in the interpolation setting, when the optimal set of the expected cost is included in the optimal sets of each functional component.
[ 1, 0, 0, 1, 0, 0 ]
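To illustrate the stochastic proximal point iteration from the abstract above, here is a minimal sketch applied to a least-squares objective, where each proximal subproblem has a closed-form solution. The diminishing step-size schedule and the toy data are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def spp_least_squares(A, b, steps=200, alpha=1.0, seed=0):
    """Stochastic proximal point for f(x) = (1/2n) * sum_i (a_i^T x - b_i)^2.
    Each iteration solves x_{k+1} = argmin_x 0.5*(a_i^T x - b_i)^2 + ||x - x_k||^2 / (2*t_k)
    exactly; for a single quadratic component this prox step has the closed form used below."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for k in range(1, steps + 1):
        i = rng.integers(n)                        # sample one component uniformly
        a, bi = A[i], b[i]
        t = alpha / k                              # assumed diminishing step size
        residual = a @ x - bi
        x = x - t * residual / (1.0 + t * (a @ a)) * a
    return x

A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
print(spp_least_squares(A, b))                     # approaches the least-squares solution [1, 1]
```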
Title: Extended periodic links and HOMFLYPT polynomial, Abstract: Extended strongly periodic links have been introduced by Przytycki and Sokolov as a symmetric surgery presentation of three-manifolds on which the finite cyclic group acts without fixed points. The purpose of this paper is to prove that the symmetry of these links is reflected by the first coefficients of the HOMFLYPT polynomial.
[ 0, 0, 1, 0, 0, 0 ]
Title: Exploring Cross-Domain Data Dependencies for Smart Homes to Improve Energy Efficiency, Abstract: Over the past decade, the idea of smart homes has been conceived as a potential solution to counter energy crises or to at least mitigate its intensive destructive consequences in the residential building sector.
[ 1, 0, 0, 0, 0, 0 ]
Title: Poseidon: An Efficient Communication Architecture for Distributed Deep Learning on GPU Clusters, Abstract: Deep learning models can take weeks to train on a single GPU-equipped machine, necessitating scaling out DL training to a GPU-cluster. However, current distributed DL implementations can scale poorly due to substantial parameter synchronization over the network, because the high throughput of GPUs allows more data batches to be processed per unit time than CPUs, leading to more frequent network synchronization. We present Poseidon, an efficient communication architecture for distributed DL on GPUs. Poseidon exploits the layered model structures in DL programs to overlap communication and computation, reducing bursty network communication. Moreover, Poseidon uses a hybrid communication scheme that optimizes the number of bytes required to synchronize each layer, according to layer properties and the number of machines. We show that Poseidon is applicable to different DL frameworks by plugging Poseidon into Caffe and TensorFlow. We show that Poseidon enables Caffe and TensorFlow to achieve 15.5x speed-up on 16 single-GPU machines, even with limited bandwidth (10GbE) and the challenging VGG19-22K network for image classification. Moreover, Poseidon-enabled TensorFlow achieves 31.5x speed-up with 32 single-GPU machines on Inception-V3, a 50% improvement over the open-source TensorFlow (20x speed-up).
[ 1, 0, 0, 1, 0, 0 ]
Title: A closed formula for illiquid corporate bonds and an application to the European market, Abstract: We deduce a simple closed formula for illiquid corporate coupon bond prices when liquid bonds with similar characteristics (e.g. maturity) are present in the market for the same issuer. The key model parameter is the time-to-liquidate a position, i.e. the time that an experienced bond trader takes to liquidate a given position on a corporate coupon bond. The option approach we propose for pricing bonds' illiquidity is reminiscent of the celebrated work of Longstaff (1995) on the non-marketability of some non-dividend-paying shares in IPOs. This approach describes a quite common situation in the fixed income market: it is rather usual to find issuers that, besides liquid benchmark bonds, issue some other bonds that either are placed to a small number of investors in private placements or have a limited issue size. The model considers interest rate and credit spread term structures and their dynamics. We show that illiquid bonds present an additional liquidity spread that depends on the time-to-liquidate aside from credit and interest rate parameters. We provide a detailed application for two issuers in the European market.
[ 0, 0, 0, 0, 0, 1 ]
Title: Supervising Unsupervised Learning with Evolutionary Algorithm in Deep Neural Network, Abstract: A method to control the results of gradient-descent unsupervised learning in a deep neural network by using an evolutionary algorithm is proposed. To process crossover of unsupervisedly trained models, the algorithm evaluates the pointwise fitness of individual nodes in the neural network. Labeled training data is randomly sampled and the breeding process selects nodes by calculating the degree of their consistency on different sets of sampled data. This method supervises unsupervised training by an evolutionary process. We also introduce a modified Restricted Boltzmann Machine which contains a repulsive force among nodes in a neural network; this helps to isolate network nodes from each other and to avoid accidental degeneration of nodes by the evolutionary process. These new methods are applied to a document classification problem and yield better accuracy than a traditional fully supervised classifier implemented with a linear regression algorithm.
[ 0, 0, 0, 1, 0, 0 ]
Title: Inductive Representation Learning in Large Attributed Graphs, Abstract: Graphs (networks) are ubiquitous and allow us to model entities (nodes) and the dependencies (edges) between them. Learning a useful feature representation from graph data lies at the heart and success of many machine learning tasks such as classification, anomaly detection, link prediction, among many others. Many existing techniques use random walks as a basis for learning features or estimating the parameters of a graph model for a downstream prediction task. Examples include recent node embedding methods such as DeepWalk, node2vec, as well as graph-based deep learning algorithms. However, the simple random walk used by these methods is fundamentally tied to the identity of the node. This has three main disadvantages. First, these approaches are inherently transductive and do not generalize to unseen nodes and other graphs. Second, they are not space-efficient as a feature vector is learned for each node which is impractical for large graphs. Third, most of these approaches lack support for attributed graphs. To make these methods more generally applicable, we propose a framework for inductive network representation learning based on the notion of attributed random walk that is not tied to node identity and is instead based on learning a function $\Phi : \mathbf{x} \rightarrow w$ that maps a node attribute vector $\mathbf{x}$ to a type $w$. This framework serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many other previous methods that leverage traditional random walks.
[ 1, 0, 0, 1, 0, 0 ]
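To make the idea of an attributed random walk in the abstract above concrete, here is a minimal sketch in which a hand-crafted binning function plays the role of the mapping $\Phi$ from attribute vectors to types (the paper learns this function; the binning rule, the toy graph and the attribute values are assumptions for illustration).

```python
import random

def attribute_type(x, thresholds=(0.5,)):
    """Hypothetical Phi: map a node attribute vector x to a discrete type w
    by per-feature binning against fixed thresholds."""
    return tuple(sum(v > t for t in thresholds) for v in x)

def attributed_random_walk(adj, attrs, start, length, rng=None):
    """Uniform random walk that records types Phi(x_v) instead of node identities,
    so the resulting sequences are not tied to specific nodes or a specific graph."""
    rng = rng or random.Random(0)
    v, walk = start, []
    for _ in range(length):
        walk.append(attribute_type(attrs[v]))
        v = rng.choice(adj[v])
    return walk

# toy attributed graph: adjacency lists and a 2-dimensional attribute vector per node
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
attrs = {0: [0.9, 0.1], 1: [0.8, 0.2], 2: [0.2, 0.7], 3: [0.1, 0.9]}
print(attributed_random_walk(adj, attrs, start=0, length=5))
```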
Title: High-resolution Spectroscopy and Spectropolarimetry of Selected Delta Scuti Pulsating Variables, Abstract: The combination of photometry, spectroscopy and spectropolarimetry of chemically peculiar stars often aims to study complex physical phenomena such as stellar pulsation, chemical inhomogeneity, magnetic fields and their interplay with the stellar atmosphere and circumstellar environment. The prime objective of the present study is to determine the atmospheric parameters of a set of Am stars in order to understand their evolutionary status. Atmospheric abundances and basic parameters are determined using a full-spectrum fitting technique, by comparing the high-resolution spectra to synthetic spectra. To assess the evolutionary status, we derive the effective temperature and luminosity with different methods and compare them with the literature. The location of these stars in the H-R diagram demonstrates that all the sample stars have evolved from the Zero-Age Main Sequence towards the Terminal-Age Main Sequence and occupy the region of the $\delta$ Sct instability strip. The abundance analysis shows that light elements, e.g. Ca and Sc, are underabundant while iron peak elements such as Ba, Ce etc. are overabundant; these chemical properties are typical of Am stars. The results obtained from the spectropolarimetric analysis show that the longitudinal magnetic fields in all the studied stars are negligible, which gives further support to their Am class of peculiarity.
[ 0, 1, 0, 0, 0, 0 ]
Title: Tight Analysis for the 3-Majority Consensus Dynamics, Abstract: We present a tight analysis for the well-studied randomized 3-majority dynamics of stabilizing consensus, hence answering the main open question of Becchetti et al. [SODA'16]. Consider a distributed system of $n$ nodes, each initially holding an opinion in $\{1, 2, \ldots, k\}$. The system should converge to a setting where all (non-corrupted) nodes hold the same opinion. This consensus opinion should be \emph{valid}, meaning that it should be among the initially supported opinions, and the (fast) convergence should happen even in the presence of a malicious adversary who can corrupt a bounded number of nodes per round and in particular modify their opinions. A well-studied distributed algorithm for this problem is the 3-majority dynamics, which works as follows: per round, each node gathers three opinions --- say its own and those of two other nodes sampled at random --- and then it sets its opinion equal to the majority of this set; ties are broken arbitrarily, e.g., towards the node's own opinion. Becchetti et al. [SODA'16] showed that the 3-majority dynamics converges to consensus in $O((k^2\sqrt{\log n} + k\log n)(k+\log n))$ rounds, even in the presence of a limited adversary. We prove that, even with a stronger adversary, the convergence happens within $O(k\log n)$ rounds. This bound is known to be optimal.
[ 1, 0, 0, 0, 0, 0 ]
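The update rule described in the 3-majority abstract above translates directly into code. The sketch below simulates synchronous rounds on a toy population (population size, opinion distribution and the synchronous schedule are assumptions for illustration; the adversary is omitted).

```python
import random
from collections import Counter

def three_majority_round(opinions, rng):
    """One round: each node looks at its own opinion plus those of two uniformly
    random nodes and adopts the majority; a three-way tie is broken towards its own."""
    n = len(opinions)
    new = []
    for i in range(n):
        sample = [opinions[i], opinions[rng.randrange(n)], opinions[rng.randrange(n)]]
        value, freq = Counter(sample).most_common(1)[0]
        new.append(value if freq >= 2 else opinions[i])
    return new

rng = random.Random(0)
opinions = [1] * 60 + [2] * 30 + [3] * 10
for _ in range(30):
    opinions = three_majority_round(opinions, rng)
print(Counter(opinions))    # typically collapses to a single initially supported opinion
```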
Title: Superlinear scaling in the urban system of England and Wales. A comparison with US cities, Abstract: According to the theory of urban scaling, urban indicators scale with city size in a predictable fashion. In particular, indicators of social and economic productivity are expected to have a superlinear relation. This behavior was verified for many urban systems, but recent findings suggest that this pattern may not be valid for England and Wales (E&W), where income has a linear relation with city size. This finding raises the question of whether the cities of E&W exhibit any superlinear relation with respect to quantities such as the level of education and occupational groups. In this paper, we evaluate the scaling of educational and occupational groups of E&W to see if we can detect superlinear relations in the number of educated and better-paid persons. As E&W may be unique in its linear scaling of income, we complement our analysis by comparing it to the urban system of the United States (US), a country for which superlinear scaling of income has already been demonstrated. To make the two urban systems comparable, we define the urban systems of both countries using the same method and test the sensitivity of our results to changes in the boundaries of cities. We find that cities of E&W exhibit patterns of superlinear scaling with respect to education and certain categories of better-paid occupations. However, the tendency of such groups to have superlinear scaling seems to be more consistent in the US. We show that while the educational and occupational distributions of US cities can partly explain the superlinear scaling of earnings, the distribution leads to a linear scaling of earnings in E&W.
[ 0, 1, 0, 0, 0, 0 ]
Title: Noncommutative hyperbolic metrics, Abstract: We characterize certain noncommutative domains in terms of noncommutative holomorphic equivalence via a pseudometric that we define in purely algebraic terms. We prove some properties of this pseudometric and provide an application to free probability.
[ 0, 0, 1, 0, 0, 0 ]
Title: Basin stability for chimera states, Abstract: Chimera states, namely complex spatiotemporal patterns that consist of coexisting domains of spatially coherent and incoherent dynamics, are investigated in a network of coupled identical oscillators. These intriguing spatiotemporal patterns were first reported in nonlocally coupled phase oscillators, and it was shown that such mixed type behavior occurs only for specific initial conditions in nonlocally and globally coupled networks. The influence of initial conditions on chimera states has remained a fundamental problem since their discovery. In this report, we investigate the robustness of chimera states together with incoherent and coherent states in dependence on the initial conditions. For this, we use the basin stability method, which is related to the volume of the basin of attraction, and we consider nonlocally and globally coupled time-delayed Mackey-Glass oscillators as an example. Previously, it was shown that the existence of chimera states can be characterized by the mean phase velocity and a statistical measure, such as the strength of incoherence, by using well prepared initial conditions. Here we show further how the coexistence of different dynamical states can be identified and quantified by means of the basin stability measure over a wide range of the parameter space.
[ 0, 1, 0, 0, 0, 0 ]
Title: On the Usage of Databases of Educational Materials in Macedonian Education, Abstract: Technologies have become an important part of our lives. The steps for introducing ICTs in education vary from country to country. The Republic of Macedonia has invested heavily in the installation of hardware and software in education and in teacher training. This research aimed to determine the state of usage of databases of digital educational materials and to define recommendations for future improvements. Teachers from urban schools were interviewed with a questionnaire. The findings are several: only part of the interviewed teachers had experience with databases of educational materials; all teachers still need capacity-building activities focusing specifically on the use and benefits of databases of educational materials; capacity-building materials should preferably be in the Macedonian language; and technical support and upgrading of software and materials should be performed on a regular basis. Most of the findings can be applied at both national and international level - with all this implemented, the application of ICT in education will have a much bigger positive impact.
[ 1, 0, 0, 0, 0, 0 ]
Title: Deep Neural Linear Bandits: Overcoming Catastrophic Forgetting through Likelihood Matching, Abstract: We study the neural-linear bandit model for solving sequential decision-making problems with high dimensional side information. Neural-linear bandits leverage the representation power of deep neural networks and combine it with efficient exploration mechanisms, designed for linear contextual bandits, on top of the last hidden layer. Since the representation is being optimized during learning, information regarding exploration with "old" features is lost. Here, we propose the first limited memory neural-linear bandit that is resilient to this phenomenon, which we term catastrophic forgetting. We evaluate our method on a variety of real-world data sets, including regression, classification, and sentiment analysis, and observe that our algorithm is resilient to catastrophic forgetting and achieves superior performance.
[ 1, 0, 0, 1, 0, 0 ]
Title: The Stretch to Stray on Time: Resonant Length of Random Walks in a Transient, Abstract: First-passage times in random walks have a vast number of diverse applications in physics, chemistry, biology, and finance. In general, environmental conditions for a stochastic process are not constant on the time scale of the average first-passage time, or control might be applied to reduce noise. We investigate moments of the first-passage time distribution under a transient describing relaxation of environmental conditions. We solve the Laplace-transformed (generalized) master equation analytically using a novel method that is applicable to general state schemes. The first-passage time from one end to the other of a linear chain of states is our application for the solutions. The dependence of its average on the relaxation rate obeys a power law for slow transients. The exponent $\nu$ depends on the chain length $N$ like $\nu=-N/(N+1)$ to leading order. Slow transients substantially reduce the noise of first-passage times expressed as the coefficient of variation (CV), even if the average first-passage time is much longer than the transient. The CV has a pronounced minimum for some lengths, which we call resonant lengths. These results also suggest a simple and efficient noise control strategy, and are closely related to the timing of repetitive excitations, coherence resonance and information transmission by noisy excitable systems. A resonant number of steps from the inhibited state to the excitation threshold and slow recovery from negative feedback provide optimal timing noise reduction and information transmission.
[ 0, 0, 0, 0, 1, 1 ]
Title: Classical Music Clustering Based on Acoustic Features, Abstract: In this paper we cluster 330 classical music pieces collected from the MusicNet database based on their musical note sequences. We use shingling and chord trajectory matrices to create a signature for each music piece and perform spectral clustering to find the clusters. Based on different resolutions, the output clusters distinctly indicate compositions from different classical music eras and different composing styles of the musicians.
[ 1, 0, 0, 0, 0, 0 ]
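As a rough sketch of the pipeline in the music clustering abstract above, the snippet below turns symbolic note sequences into shingle (n-gram) count signatures and applies spectral clustering on their cosine similarities; the toy sequences, the shingle length and the number of clusters are assumptions, and the chord trajectory matrices of the paper are omitted.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import SpectralClustering

def shingles(notes, k=3):
    """k-shingles (overlapping n-grams) of a symbolic note sequence."""
    return [" ".join(map(str, notes[i:i + k])) for i in range(len(notes) - k + 1)]

# hypothetical note sequences (e.g. MIDI pitch numbers) standing in for MusicNet pieces
note_sequences = [
    [60, 62, 64, 65, 67, 69, 71, 72],
    [60, 62, 64, 65, 67, 69, 71, 72, 74],
    [55, 58, 62, 55, 58, 62, 67, 70],
    [55, 58, 62, 55, 58, 62, 67, 70, 74],
]
docs = [shingles(seq) for seq in note_sequences]
X = CountVectorizer(analyzer=lambda d: d).fit_transform(docs)   # shingle count signatures
A = cosine_similarity(X)                                        # affinity between pieces
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
print(labels)
```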
Title: Many-Body-Localization: Strong Disorder perturbative approach for the Local Integrals of Motion, Abstract: For random quantum spin models, the strong disorder perturbative expansion of the Local Integrals of Motion (LIOMs) around the real-spin operators is revisited. The emphasis is on the links with other properties of the Many-Body-Localized phase, in particular the memory in the dynamics of the local magnetizations and the statistics of matrix elements of local operators in the eigenstate basis. Finally, this approach is applied to analyze the Many-Body-Localization transition in a toy model studied previously from the point of view of the entanglement entropy.
[ 0, 1, 0, 0, 0, 0 ]
Title: Semi-supervised Learning for Discrete Choice Models, Abstract: We introduce a semi-supervised discrete choice model to calibrate discrete choice models when relatively few requests have both choice sets and stated preferences but the majority only have the choice sets. Two classic semi-supervised learning algorithms, the expectation maximization algorithm and the cluster-and-label algorithm, have been adapted to our choice modeling problem setting. We also develop two new algorithms based on the cluster-and-label algorithm. The new algorithms use the Bayesian Information Criterion to evaluate a clustering setting to automatically adjust the number of clusters. Two computational studies including a hotel booking case and a large-scale airline itinerary shopping case are presented to evaluate the prediction accuracy and computational effort of the proposed algorithms. Algorithmic recommendations are rendered under various scenarios.
[ 0, 0, 0, 1, 0, 0 ]
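The BIC-driven choice of the number of clusters mentioned in the abstract above can be illustrated with a generic Gaussian mixture on toy feature vectors (scikit-learn's GaussianMixture and the synthetic data are stand-ins; the paper applies the criterion to choice-set data within its cluster-and-label algorithms).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)),      # two well-separated toy clusters
               rng.normal(5, 1, (120, 2))])

best_k, best_bic = None, np.inf
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bic = gm.bic(X)                             # BIC = -2 log-likelihood + (#params) log n
    if bic < best_bic:
        best_k, best_bic = k, bic
print(best_k)                                   # 2 for this toy data
```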
Title: An Analytic Criterion for Turbulent Disruption of Planetary Resonances, Abstract: Mean motion commensurabilities in multi-planet systems are an expected outcome of protoplanetary disk-driven migration, and their relative dearth in the observational data presents an important challenge to current models of planet formation and dynamical evolution. One natural mechanism that can lead to the dissolution of commensurabilities is stochastic orbital forcing, induced by turbulent density fluctuations within the nebula. While this process is qualitatively promising, the conditions under which mean motion resonances can be broken are not well understood. In this work, we derive a simple analytic criterion that elucidates the relationship among the physical parameters of the system, and find the conditions necessary to drive planets out of resonance. Subsequently, we confirm our findings with numerical integrations carried out in the perturbative regime, as well as direct N-body simulations. Our calculations suggest that turbulent resonance disruption depends most sensitively on the planet-star mass ratio. Specifically, for a disk with properties comparable to the early solar nebula with $\alpha=0.01$, only planet pairs with cumulative mass ratios smaller than $(m_1+m_2)/M\lesssim10^{-5}\sim3M_{\oplus}/M_{\odot}$ are susceptible to breaking resonance at semi-major axis of order $a\sim0.1\,$AU. Although turbulence can sometimes compromise resonant pairs, an additional mechanism (such as suppression of resonance capture probability through disk eccentricity) is required to adequately explain the largely non-resonant orbital architectures of extrasolar planetary systems.
[ 0, 1, 1, 0, 0, 0 ]
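The numerical equivalence quoted in the abstract above, $(m_1+m_2)/M\lesssim10^{-5}\sim3M_{\oplus}/M_{\odot}$, can be checked directly with standard mass values:

```python
M_earth = 5.972e24   # kg
M_sun   = 1.989e30   # kg
print(3 * M_earth / M_sun)   # ~9.0e-6, i.e. of order 1e-5, consistent with the quoted threshold
```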
Title: Automatic sequences and generalised polynomials, Abstract: We conjecture that bounded generalised polynomial functions cannot be generated by finite automata, except for the trivial case when they are ultimately periodic. Using methods from ergodic theory, we are able to partially resolve this conjecture, proving that any hypothetical counterexample is periodic away from a very sparse and structured set. In particular, we show that for a polynomial $p(n)$ with at least one irrational coefficient (except for the constant one) and integer $m\geq 2$, the sequence $\lfloor p(n) \rfloor \bmod{m}$ is never automatic. We also prove that the conjecture is equivalent to the claim that the set of powers of an integer $k\geq 2$ is not given by a generalised polynomial.
[ 1, 0, 1, 0, 0, 0 ]
Title: Existence of travelling waves and high activation energy limits for a one-dimensional thermo-diffusive lean spray flame model, Abstract: We provide a mathematical analysis of a thermo-diffusive combustion model of lean spray flames, for which we prove the existence of travelling waves. In the high activation energy singular limit we show the existence of two distinct combustion regimes with a sharp transition -- the diffusion limited regime and the vaporisation controlled regime. The latter is specific to spray flames with slow enough vaporisation. We give a complete characterisation of these regimes, including explicit velocities, profiles, and an upper estimate of the size of the internal combustion layer. Our model is on the one hand simple enough to allow for explicit asymptotic limits and on the other hand rich enough to capture some particular aspects of spray combustion. Finally, we briefly discuss the cases where the vaporisation is infinitely fast, or where the spray is polydisperse.
[ 0, 0, 1, 0, 0, 0 ]
Title: Understanding Organizational Approach towards End User Privacy, Abstract: End user privacy is a critical concern for all organizations that collect, process and store user data as a part of their business. Privacy-concerned users, regulatory bodies and privacy experts continuously demand that organizations provide users with privacy protection. Current research lacks an understanding of the organizational characteristics that affect an organization's motivation towards user privacy. This has resulted in a "one solution fits all" approach, which is incapable of providing sustainable solutions for organizational issues related to user privacy. In this work, we have empirically investigated 40 diverse organizations on their motivations and approaches towards user privacy. Resources such as newspaper articles, privacy policies and internal privacy reports that display information about organizational motivations and approaches towards user privacy were used in the study. We observed that organizations have two primary motivations to provide end users with privacy: voluntarily driven inherent motivation and risk-driven compliance motivation. Building on these findings, we developed a taxonomy of organizational privacy approaches and further explored the taxonomy through a limited set of exclusive interviews. With this work, we encourage authorities and scholars to understand the organizational characteristics that define an organization's approach towards privacy, in order to effectively communicate regulations that enforce and encourage organizations to consider privacy within their business practices.
[ 1, 0, 0, 0, 0, 0 ]
Title: Baryonic impact on the dark matter orbital properties of Milky Way-sized haloes, Abstract: We study the orbital properties of dark matter haloes by combining a spectral method and cosmological simulations of Milky Way-sized galaxies. We compare the dynamics and orbits of individual dark matter particles from both hydrodynamic and $N$-body simulations, and find that the fraction of box, tube and resonant orbits of the dark matter halo decreases significantly due to the effects of baryons. In particular, the central region of the dark matter halo in the hydrodynamic simulation is dominated by regular, short-axis tube orbits, in contrast to the chaotic, box and thin orbits dominant in the $N$-body run. This leads to a more spherical dark matter halo in the hydrodynamic run compared to a prolate one as commonly seen in the $N$-body simulations. Furthermore, by using a kernel based density estimator, we compare the coarse-grained phase-space densities of dark matter haloes in both simulations and find that it is lower by $\sim0.5$ dex in the hydrodynamic run due to changes in the angular momentum distribution, which indicates that the baryonic process that affects the dark matter is irreversible. Our results imply that baryons play an important role in determining the shape, kinematics and phase-space density of dark matter haloes in galaxies.
[ 0, 1, 0, 0, 0, 0 ]
Title: Extreme Value Analysis Without the Largest Values: What Can Be Done?, Abstract: In this paper we are concerned with the analysis of heavy-tailed data when a portion of the extreme values is unavailable. This research was motivated by an analysis of the degree distributions in a large social network. The degree distributions of such networks tend to have power law behavior in the tails. We focus on the Hill estimator, which plays a starring role in heavy-tailed modeling. The Hill estimator for this data exhibited a smooth and increasing "sample path" as a function of the number of upper order statistics used in constructing the estimator. This behavior became more apparent as we artificially removed more of the upper order statistics. Building on this observation we introduce a new version of the Hill estimator. It is a function of the number of the upper order statistics used in the estimation, but also depends on the number of unavailable extreme values. We establish functional convergence of the normalized Hill estimator to a Gaussian process. An estimation procedure is developed based on the limit theory to estimate the number of missing extremes and extreme value parameters including the tail index and the bias of Hill's estimator. We illustrate how this approach works in both simulations and real data examples.
[ 0, 0, 1, 0, 0, 0 ]
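For reference, the classical Hill estimator that the abstract above builds on can be written in a few lines (the Pareto test data and the choice of k are assumptions; the paper's new estimator additionally depends on the number of unavailable extremes and is not reproduced here).

```python
import numpy as np

def hill_estimator(sample, k):
    """Classical Hill estimator of the tail parameter gamma = 1/alpha,
    based on the k largest order statistics."""
    x = np.sort(sample)[::-1]                    # descending order statistics
    return np.mean(np.log(x[:k] / x[k]))

rng = np.random.default_rng(1)
pareto = rng.pareto(a=2.0, size=10_000) + 1.0    # Pareto sample with tail index alpha = 2
gamma = hill_estimator(pareto, k=500)
print(gamma, 1.0 / gamma)                        # gamma close to 0.5, alpha close to 2
```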
Title: Reciprocal space engineering with hyperuniform gold metasurfaces, Abstract: Hyperuniform geometries feature correlated disordered topologies which follow from a tailored k-space design. Here we study gold plasmonic hyperuniform metasurfaces and we report evidence of the effectiveness of k-space engineering on both light scattering and light emission experiments. The metasurfaces possess interesting directional emission properties which are revealed by momentum spectroscopy as diffraction and fluorescence emission rings at size-specific k-vectors. The opening of these rotational-symmetric patterns scales with the hyperuniform correlation length parameter as predicted via the spectral function method.
[ 0, 1, 0, 0, 0, 0 ]
Title: Spectral Properties of Continuum Fibonacci Schrödinger Operators, Abstract: We study continuum Schrödinger operators on the real line whose potentials are comprised of two compactly supported square-integrable functions concatenated according to an element of the Fibonacci substitution subshift over two letters. We show that the Hausdorff dimension of the spectrum tends to one in the small-coupling and high-energy regimes, regardless of the shape of the potential pieces.
[ 0, 0, 1, 0, 0, 0 ]
Title: Species tree inference from genomic sequences using the log-det distance, Abstract: The log-det distance between two aligned DNA sequences was introduced as a tool for statistically consistent inference of a gene tree under simple non-mixture models of sequence evolution. Here we prove that the log-det distance, coupled with a distance-based tree construction method, also permits consistent inference of species trees under mixture models appropriate to aligned genomic-scale sequence data. Data may include sites from many genetic loci, which evolved on different gene trees due to incomplete lineage sorting on an ultrametric species tree, with different time-reversible substitution processes. The simplicity and speed of distance-based inference suggest that log-det based methods should serve as benchmarks for judging more elaborate and computationally-intensive species tree inference methods.
[ 0, 0, 0, 0, 1, 0 ]
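As background for the abstract above, a minimal sketch of one common form of the log-det (paralinear) distance between two aligned sequences follows; published variants differ in constant factors and normalization, so this is an illustrative convention rather than the paper's exact definition.

```python
import numpy as np

BASES = "ACGT"

def log_det_distance(seq1, seq2):
    """Log-det style distance from the joint base-pattern frequency matrix F
    of two aligned sequences, with a marginal-frequency correction term."""
    F = np.zeros((4, 4))
    for a, b in zip(seq1, seq2):
        if a in BASES and b in BASES:            # skip gaps and ambiguity codes
            F[BASES.index(a), BASES.index(b)] += 1
    F /= F.sum()
    fx, fy = F.sum(axis=1), F.sum(axis=0)        # marginal base frequencies
    return -0.25 * (np.log(np.linalg.det(F))
                    - 0.5 * (np.log(fx).sum() + np.log(fy).sum()))

print(log_det_distance("ACGTACGTACGTAAGG", "ACGTACGTTCGTAAGC"))
```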
Title: Structure preserving schemes for nonlinear Fokker-Planck equations and applications, Abstract: In this paper we focus on the construction of numerical schemes for nonlinear Fokker-Planck equations that preserve structural properties, like non-negativity of the solution, entropy dissipation and large time behavior. The methods developed here are second-order accurate, they do not require any restriction on the mesh size and are capable of capturing the asymptotic steady states with arbitrary accuracy. These properties are essential for a correct description of the underlying physical problem. Applications of the schemes to several nonlinear Fokker-Planck equations with nonlocal terms describing emerging collective behavior in socio-economic and life sciences are presented.
[ 0, 1, 1, 0, 0, 0 ]
Title: Long-Term Inertial Navigation Aided by Dynamics of Flow Field Features, Abstract: A current-aided inertial navigation framework is proposed for small autonomous underwater vehicles in long-duration operations (> 1 hour), where neither frequent surfacing nor consistent bottom-tracking are available. We instantiate this concept through mid-depth, underwater navigation. This strategy mitigates dead-reckoning uncertainty of a traditional inertial navigation system by comparing the estimate of local, ambient flow velocity with preloaded ocean current maps. The proposed navigation system is implemented through a marginalized particle filter where the vehicle's states are sequentially tracked along with sensor bias and local turbulence that is not resolved by general flow prediction. The performance of the proposed approach is first analyzed through Monte Carlo simulations in two artificial background flow fields, resembling real-world ocean circulation patterns, superposed with smaller-scale, turbulent components with Kolmogorov energy spectrum. The current-aided navigation scheme significantly improves the dead-reckoning performance of the vehicle even when unresolved, small-scale flow perturbations are present. For a 6-hour navigation with an automotive-grade inertial navigation system, the current-aided navigation scheme results in positioning estimates with under 3% uncertainty per distance traveled (UDT) in a turbulent, double-gyre flow field, and under 7.3% UDT in a turbulent, meandering jet flow field. Further evaluation with field test data and actual ocean simulation analysis demonstrates consistent performance for a 6-hour mission, positioning result with under 25% UDT for a 24-hour navigation when provided direct heading measurements, and terminal positioning estimate with 16% UDT at the cost of increased uncertainty at an early stage of the navigation.
[ 1, 0, 0, 0, 0, 0 ]
Title: Explaining Parochialism: A Causal Account for Political Polarization in Changing Economic Environments, Abstract: Political and social polarization are a significant cause of conflict and poor governance in many societies, thus understanding their causes is of considerable importance. Here we demonstrate that shifts in socialization strategy similar to political polarization and/or identity politics could be a constructive response to periods of apparent economic decline. We start from the observation that economies, like ecologies, are seldom at equilibrium. Rather, they often suffer both negative and positive shocks. We show that even where, in an expanding economy, interacting with diverse out-groups can afford benefits through innovation and exploration, if that economy contracts, a strategy of seeking homogeneous groups can be important to maintaining individual solvency. This is true even where the expected value of out-group interactions exceeds that of in-group interactions. Our account unifies what were previously seen as conflicting explanations: identity threat versus economic anxiety. Our model indicates that in periods of extreme deprivation, cooperation with diversity again becomes the best (in fact, the only viable) strategy. However, our model also shows that while polarization may increase gradually in response to shifts in the economy, a gradual decrease of polarization may not be an available strategy; thus returning to previous levels of cooperation may require structural change.
[ 0, 0, 0, 0, 1, 1 ]
Title: A Secular Resonant Origin for the Loneliness of Hot Jupiters, Abstract: Despite decades of inquiry, the origin of giant planets residing within a few tenths of an astronomical unit from their host stars remains unclear. Traditionally, these objects are thought to have formed further out before subsequently migrating inwards. However, the necessity of migration has been recently called into question with the emergence of in-situ formation models of close-in giant planets. Observational characterization of the transiting sub-sample of close-in giants has revealed that "warm" Jupiters, possessing orbital periods longer than roughly 10 days, more often possess close-in, co-transiting planetary companions than shorter-period "hot" Jupiters, which are usually lonely. This finding has previously been interpreted as evidence that smooth, early migration or in situ formation gave rise to warm Jupiter-hosting systems, whereas more violent, post-disk migration pathways sculpted hot Jupiter-hosting systems. In this work, we demonstrate that both classes of planet may arise via early migration or in-situ conglomeration, but that the enhanced loneliness of hot Jupiters arises due to a secular resonant interaction with the stellar quadrupole moment. Such an interaction tilts the orbits of exterior, lower mass planets, removing them from transit surveys where the hot Jupiter is detected. Warm Jupiter-hosting systems, in contrast, retain their coplanarity due to the weaker influence of the host star's quadrupolar potential relative to planet-disk interactions. In this way, hot Jupiters and warm Jupiters are placed within a unified theoretical framework that may be readily validated or falsified using data from upcoming missions such as TESS.
[ 0, 1, 0, 0, 0, 0 ]
Title: An Incremental Slicing Method for Functional Programs, Abstract: Several applications of slicing require a program to be sliced with respect to more than one slicing criterion. Program specialization, parallelization and cohesion measurement are examples of such applications. These applications can benefit from an incremental static slicing method in which a significant extent of the computations for slicing with respect to one criterion could be reused for another. In this paper, we consider the problem of incremental slicing of functional programs. We first present a non-incremental version of the slicing algorithm which performs a polyvariant analysis of functions. Since polyvariant analyses tend to be costly, we compute a compact context-independent summary of each function and then use this summary at the call sites of the function. The construction of the function summary is non-trivial and helps in the development of the incremental version. The incremental method, on the other hand, consists of a one-time pre-computation step that uses the non-incremental version to slice the program with respect to a fixed default slicing criterion and processes the results further to a canonical form. Presented with an actual slicing criterion, the incremental step involves a low-cost computation that uses the results of the pre-computation to obtain the slice. We have implemented a prototype of the slicer for a pure subset of Scheme, with pairs and lists as the only algebraic data types. Our experiments show that the incremental step of the slicer runs orders of magnitude faster than the non-incremental version. We have also proved the correctness of our incremental algorithm with respect to the non-incremental version.
[ 1, 0, 0, 0, 0, 0 ]
Title: A study of ancient Khmer ephemerides, Abstract: We study ancient Khmer ephemerides described in 1910 by the French engineer Faraut, in order to determine whether they rely on observations carried out in Cambodia. These ephemerides were found to be of Indian origin and have been adapted for another longitude, most likely in Burma. A method for estimating the date and place where the ephemerides were developed or adapted is described and applied.
[ 0, 1, 1, 0, 0, 0 ]
Title: Auto Deep Compression by Reinforcement Learning Based Actor-Critic Structure, Abstract: Model compression is an effective technique for deploying neural network models on devices with limited computation and low power. However, conventional compression techniques rely on hand-crafted features [2,3,12] and require experts to explore a large design space trading off model size, speed, and accuracy, which usually yields sub-optimal results and is time-consuming. This paper analyzes automatic deep compression (ADC) and the strength of reinforcement learning for sample-efficient exploration of the design space, improving the compression quality of the model. State-of-the-art model compression results are obtained without any human effort, in a completely automated way. With a 4-fold reduction in FLOPs, the accuracy is 2.8% higher than that of the manually compressed model for VGG-16 on ImageNet.
[ 0, 0, 0, 1, 0, 0 ]
Title: MUFASA: The assembly of the red sequence, Abstract: We examine the growth and evolution of quenched galaxies in the Mufasa cosmological hydrodynamic simulations that include an evolving halo mass-based quenching prescription, with galaxy colours computed accounting for line-of-sight extinction to individual star particles. Mufasa reproduces the observed present-day red sequence reasonably well, including its slope, amplitude, and scatter. In Mufasa, the red sequence slope is driven entirely by the steep stellar mass-stellar metallicity relation, which independently agrees with observations. High-mass star-forming galaxies blend smoothly onto the red sequence, indicating the lack of a well-defined green valley at $M_*>10^{10.5}\,M_\odot$. The most massive galaxies quench the earliest and then grow very little in mass via dry merging; they attain their high masses at earlier epochs when cold inflows more effectively penetrate hot halos. To higher redshifts, the red sequence becomes increasingly contaminated with massive dusty star-forming galaxies; UVJ selection subtly but effectively separates these populations. We then examine the evolution of the mass functions of central and satellite galaxies split into passive and star-forming via UVJ. Massive quenched systems show good agreement with observations out to z~2, despite not including a rapid early quenching mode associated with mergers. However, low-mass quenched galaxies are far too numerous at z<1 in Mufasa, indicating that Mufasa strongly over-quenches satellites. A challenge for hydrodynamic simulations is to devise a quenching model that produces enough early massive quenched galaxies and keeps them quenched to z=0, while not being so strong as to over-quench satellites; Mufasa's current scheme fails at the latter.
[ 0, 1, 0, 0, 0, 0 ]
Title: Deep Learning for Accelerated Reliability Analysis of Infrastructure Networks, Abstract: Natural disasters can have catastrophic impacts on the functionality of infrastructure systems and cause severe physical and socio-economic losses. Given budget constraints, it is crucial to optimize decisions regarding mitigation, preparedness, response, and recovery practices for these systems. This requires accurate and efficient means to evaluate the infrastructure system reliability. While numerous research efforts have addressed and quantified the impact of natural disasters on infrastructure systems, typically using the Monte Carlo approach, they still suffer from high computational cost and, thus, are of limited applicability to large systems. This paper presents a deep learning framework for accelerating infrastructure system reliability analysis. In particular, two distinct deep neural network surrogates are constructed and studied: (1) A classifier surrogate which speeds up the connectivity determination of networks, and (2) An end-to-end surrogate that replaces a number of components such as roadway status realization, connectivity determination, and connectivity averaging. The proposed approach is applied to a simulation-based study of the two-terminal connectivity of a California transportation network subject to extreme probabilistic earthquake events. Numerical results highlight the effectiveness of the proposed approach in accelerating the transportation system two-terminal reliability analysis with extremely high prediction accuracy.
[ 1, 0, 0, 1, 0, 0 ]
Title: IMLS-SLAM: scan-to-model matching based on 3D data, Abstract: The Simultaneous Localization And Mapping (SLAM) problem has been well studied in the robotics community, especially using mono, stereo cameras or depth sensors. 3D depth sensors, such as Velodyne LiDAR, have proved in the last 10 years to be very useful to perceive the environment in autonomous driving, but few methods exist that directly use these 3D data for odometry. We present a new low-drift SLAM algorithm based only on 3D LiDAR data. Our method relies on a scan-to-model matching framework. We first have a specific sampling strategy based on the LiDAR scans. We then define our model as the previous localized LiDAR sweeps and use the Implicit Moving Least Squares (IMLS) surface representation. We show experiments with the Velodyne HDL32 with only 0.40% drift over a 4 km acquisition without any loop closure (i.e., 16 m drift after 4 km). We tested our solution on the KITTI benchmark with a Velodyne HDL64 and ranked among the best methods (against mono, stereo and LiDAR methods) with a global drift of only 0.69%.
[ 1, 0, 0, 0, 0, 0 ]
Title: On a minimal counterexample to Brauer's $k(B)$-conjecture, Abstract: We study Brauer's long-standing $k(B)$-conjecture on the number of characters in $p$-blocks for finite quasi-simple groups and show that their blocks do not occur as a minimal counterexample for $p\ge5$ nor in the case of abelian defect. For $p=3$ we obtain that the principal 3-blocks do not provide minimal counterexamples. We also determine the precise number of irreducible characters in unipotent blocks of classical groups for odd primes.
[ 0, 0, 1, 0, 0, 0 ]
Title: Separator Reconnection at Earth's Dayside Magnetopause: MMS Observations Compared to Global Simulations, Abstract: We compare a global high resolution resistive magnetohydrodynamics (MHD) simulation of Earth's magnetosphere with observations from the Magnetospheric Multiscale (MMS) constellation for a southward IMF magnetopause crossing during October 16, 2015 that was previously identified as an electron diffusion region (EDR) event. The simulation predicts a complex time-dependent magnetic topology consisting of multiple separators and flux ropes. Despite the topological complexity, the predicted distance between MMS and the primary separator is less than 0.5 Earth radii. These results suggest that global magnetic topology, rather than local magnetic geometry alone, determines the location of the electron diffusion region at the dayside magnetopause.
[ 0, 1, 0, 0, 0, 0 ]
Title: Semi-Supervised Overlapping Community Finding based on Label Propagation with Pairwise Constraints, Abstract: Algorithms for detecting communities in complex networks are generally unsupervised, relying solely on the structure of the network. However, these methods can often fail to uncover meaningful groupings that reflect the underlying communities in the data, particularly when those structures are highly overlapping. One way to improve the usefulness of these algorithms is by incorporating additional background information, which can be used as a source of constraints to direct the community detection process. In this work, we explore the potential of semi-supervised strategies to improve algorithms for finding overlapping communities in networks. Specifically, we propose a new method, based on label propagation, for finding communities using a limited number of pairwise constraints. Evaluations on synthetic and real-world datasets demonstrate the potential of this approach for uncovering meaningful community structures in cases where each node can potentially belong to more than one community.
[ 1, 0, 0, 0, 0, 0 ]
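To give a flavour of constraint-aware label propagation as described in the abstract above, the sketch below propagates hard (non-overlapping) labels over a toy graph while enforcing must-link pairs; the overlapping memberships and cannot-link handling of the paper's actual method are omitted, and the toy graph, seeds and constraints are assumptions.

```python
import random
from collections import Counter

def constrained_label_propagation(adj, seeds, must_link, iters=20, seed=0):
    """Simple label propagation: each node repeatedly adopts the most frequent
    label among its neighbours; seed labels stay fixed and must-link pairs are
    forced to share a label after each update."""
    rng = random.Random(seed)
    labels = {n: seeds.get(n, n) for n in adj}        # unlabeled nodes start in their own community
    for _ in range(iters):
        order = list(adj)
        rng.shuffle(order)
        for n in order:
            if n in seeds:
                continue
            labels[n] = Counter(labels[m] for m in adj[n]).most_common(1)[0][0]
            for u, v in must_link:                    # enforce pairwise constraints
                if n == u and v not in seeds:
                    labels[v] = labels[n]
                if n == v and u not in seeds:
                    labels[u] = labels[n]
    return labels

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(constrained_label_propagation(adj, seeds={0: "A", 5: "B"}, must_link=[(2, 1)]))
```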
Title: Probabilistic Line Searches for Stochastic Optimization, Abstract: In deterministic optimization, line searches are a standard tool ensuring stability and efficiency. Where only stochastic gradients are available, no direct equivalent has so far been formulated, because uncertain gradients do not allow for a strict sequence of decisions collapsing the search space. We construct a probabilistic line search by combining the structure of existing deterministic methods with notions from Bayesian optimization. Our method retains a Gaussian process surrogate of the univariate optimization objective, and uses a probabilistic belief over the Wolfe conditions to monitor the descent. The algorithm has very low computational cost, and no user-controlled parameters. Experiments show that it effectively removes the need to define a learning rate for stochastic gradient descent.
[ 1, 0, 0, 1, 0, 0 ]
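For context on the abstract above, the classical deterministic Wolfe conditions that the probabilistic line search relaxes can be checked as follows (the toy quadratic and the constants c1, c2 are standard textbook choices, not taken from the paper; the paper's Gaussian-process belief over these conditions is not reproduced here).

```python
import numpy as np

def wolfe_conditions(f, grad, x, d, t, c1=1e-4, c2=0.9):
    """Deterministic Wolfe conditions for a candidate step size t along
    a descent direction d: sufficient decrease plus a curvature condition."""
    g0 = grad(x) @ d
    sufficient_decrease = f(x + t * d) <= f(x) + c1 * t * g0
    curvature = grad(x + t * d) @ d >= c2 * g0
    return sufficient_decrease and curvature

# toy quadratic f(x) = 0.5 * ||x||^2, searched along the negative gradient
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x = np.array([3.0, -1.0])
d = -grad(x)
print(wolfe_conditions(f, grad, x, d, t=1.0))   # True: t = 1 is the exact minimizer along d
```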
Title: Supersymmetry in Closed Chains of Coupled Majorana Modes, Abstract: We consider a closed chain of an even number of Majorana zero modes with nearest-neighbour couplings that generically differ from site to site, so that there is no crystal symmetry. Instead, we demonstrate the possibility of an emergent supersymmetry (SUSY), which is accompanied by gapless fermionic excitations. In particular, the condition can be easily satisfied by tuning only one coupling, regardless of how many other couplings there are. Such a system can be realized by four Majorana modes on two parallel Majorana nanowires with their ends connected by Josephson junctions and their bodies connected by an external superconducting ring. By tuning the Josephson couplings with a magnetic flux $\Phi$ through the ring, we get the gapless excitations at $\Phi_{SUSY}=\pm f\Phi_0$ with $\Phi_0= hc/2e$, which is signaled by a zero-bias conductance peak in the tunneling conductance. We find this $f$ to be generally a fractional number that oscillates with increasing Zeeman fields parallel to the nanowires, which provides a unique experimental signature for the existence of Majorana modes.
[ 0, 1, 0, 0, 0, 0 ]
Title: Renormalized Hennings Invariants and 2+1-TQFTs, Abstract: We construct non-semisimple $2+1$-TQFTs yielding mapping class group representations in Lyubashenko's spaces. In order to do this, we first generalize Beliakova, Blanchet and Geer's logarithmic Hennings invariants based on quantum $\mathfrak{sl}_2$ to the setting of finite-dimensional non-degenerate unimodular ribbon Hopf algebras. The tools used for this construction are a Hennings-augmented Reshetikhin-Turaev functor and modified traces. When the Hopf algebra is factorizable, we further show that the universal construction of Blanchet, Habegger, Masbaum and Vogel produces a $2+1$-TQFT on a not completely rigid monoidal subcategory of cobordisms.
[ 0, 0, 1, 0, 0, 0 ]
Title: When to Invest in Security? Empirical Evidence and a Game-Theoretic Approach for Time-Based Security, Abstract: Games of timing aim to determine the optimal defense against a strategic attacker who has the technical capability to breach a system in a stealthy fashion. Key questions arising are when the attack takes place, and when a defensive move should be initiated to reset the system resource to a known safe state. In our work, we study a more complex scenario called Time-Based Security in which we combine three main notions: protection time, detection time, and reaction time. Protection time represents the amount of time the attacker needs to execute the attack successfully. In other words, protection time represents the inherent resilience of the system against an attack. Detection time is the required time for the defender to detect that the system is compromised. Reaction time is the required time for the defender to reset the defense mechanisms in order to recreate a safe system state. In the first part of the paper, we study the VERIS Community Database (VCDB) and screen other data sources to provide insights into the actual timing of security incidents and responses. While we are able to derive distributions for some of the factors regarding the timing of security breaches, we assess the state-of-the-art regarding the collection of timing-related data as insufficient. In the second part of the paper, we propose a two-player game which captures the outlined Time-Based Security scenario in which both players move according to a periodic strategy. We carefully develop the resulting payoff functions, and provide theorems and numerical results to help the defender to calculate the best time to reset the defense mechanism by considering protection time, detection time, and reaction time.
[ 1, 0, 0, 0, 0, 0 ]
Title: Converting Cascade-Correlation Neural Nets into Probabilistic Generative Models, Abstract: Humans are not only adept in recognizing what class an input instance belongs to (i.e., classification task), but perhaps more remarkably, they can imagine (i.e., generate) plausible instances of a desired class with ease, when prompted. Inspired by this, we propose a framework which allows transforming Cascade-Correlation Neural Networks (CCNNs) into probabilistic generative models, thereby enabling CCNNs to generate samples from a category of interest. CCNNs are a well-known class of deterministic, discriminative NNs, which autonomously construct their topology, and have been successful in giving accounts for a variety of psychological phenomena. Our proposed framework is based on a Markov Chain Monte Carlo (MCMC) method, called the Metropolis-adjusted Langevin algorithm, which capitalizes on the gradient information of the target distribution to direct its explorations towards regions of high probability, thereby achieving good mixing properties. Through extensive simulations, we demonstrate the efficacy of our proposed framework.
[ 1, 0, 0, 1, 0, 0 ]
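The abstract above relies on the Metropolis-adjusted Langevin algorithm (MALA) to turn a discriminative network into a sampler. As a rough illustration of that MCMC ingredient only, here is a minimal, generic MALA sketch in Python; the `log_prob` and `grad_log_prob` callables are placeholders for whatever class-conditional density a trained CCNN would induce, and nothing here reproduces the paper's actual framework.

```python
import numpy as np

def mala_sample(log_prob, grad_log_prob, x0, step=1e-2, n_steps=5000, rng=None):
    """Minimal Metropolis-adjusted Langevin algorithm (MALA) sketch.

    log_prob(x)      -> unnormalised log density at x
    grad_log_prob(x) -> its gradient
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        # Langevin proposal: gradient drift plus Gaussian noise.
        mean_x = x + 0.5 * step * grad_log_prob(x)
        prop = mean_x + np.sqrt(step) * rng.standard_normal(x.shape)
        # Metropolis-Hastings correction with the asymmetric proposal densities.
        mean_p = prop + 0.5 * step * grad_log_prob(prop)
        log_q_xp = -np.sum((x - mean_p) ** 2) / (2 * step)      # q(x | prop)
        log_q_px = -np.sum((prop - mean_x) ** 2) / (2 * step)   # q(prop | x)
        log_alpha = log_prob(prop) - log_prob(x) + log_q_xp - log_q_px
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples.append(x.copy())
    return np.array(samples)

# Toy usage: sample from a 2-D standard Gaussian as a stand-in target density.
if __name__ == "__main__":
    lp = lambda x: -0.5 * np.sum(x ** 2)
    glp = lambda x: -x
    draws = mala_sample(lp, glp, x0=np.zeros(2))
    print(draws.mean(axis=0), draws.std(axis=0))
```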
Title: DaMaSCUS: The Impact of Underground Scatterings on Direct Detection of Light Dark Matter, Abstract: Conventional dark matter direct detection experiments set stringent constraints on dark matter by looking for elastic scattering events between dark matter particles and nuclei in underground detectors. However these constraints weaken significantly in the sub-GeV mass region, simply because light dark matter does not have enough energy to trigger detectors regardless of the dark matter-nucleon scattering cross section. Even if future experiments lower their energy thresholds, they will still be blind to parameter space where dark matter particles interact with nuclei strongly enough that they lose enough energy and become unable to cause a signal above the experimental threshold by the time they reach the underground detector. Therefore in case dark matter is in the sub-GeV region and strongly interacting, possible underground scatterings of dark matter with terrestrial nuclei must be taken into account because they affect significantly the recoil spectra and event rates, regardless of whether the experiment probes DM via DM-nucleus or DM-electron interaction. To quantify this effect we present the publicly available Dark Matter Simulation Code for Underground Scatterings (DaMaSCUS), a Monte Carlo simulator of DM trajectories through the Earth taking underground scatterings into account. Our simulation allows the precise calculation of the density and velocity distribution of dark matter at any detector of given depth and location on Earth. The simulation can also provide the accurate recoil spectrum in underground detectors as well as the phase and amplitude of the diurnal modulation caused by this shadowing effect of the Earth, ultimately relating the modulations expected in different detectors, which is important to decisively conclude if a diurnal modulation is due to dark matter or an irrelevant background.
[ 0, 1, 0, 0, 0, 0 ]
Title: Basic concepts and tools for the Toki Pona minimal and constructed language: description of the language and main issues; analysis of the vocabulary; text synthesis and syntax highlighting; Wordnet synsets, Abstract: A minimal constructed language (conlang) is useful for experiments and comfortable for making tools. The Toki Pona (TP) conlang is minimal both in the vocabulary (with only 14 letters and 124 lemmas) and in the (about) 10 syntax rules. The language is useful for being a used and somewhat established minimal conlang with at least hundreds of fluent speakers. This article exposes current concepts and resources for TP, and makes available Python (and Vim) scripted routines for the analysis of the language, synthesis of texts, syntax highlighting schemes, and the achievement of a preliminary TP Wordnet. Focus is on the analysis of the basic vocabulary, as corpus analyses were found. The synthesis is based on sentence templates, relates to context by keeping track of used words, and renders larger texts by using a fixed number of phonemes (e.g. for poems) and number of sentences, words and letters (e.g. for paragraphs). Syntax highlighting reflects morphosyntactic classes given in the official dictionary and different solutions are described and implemented in the well-established Vim text editor. The tentative TP Wordnet is made available in three patterns of relations between synsets and word lemmas. In summary, this text holds potentially novel conceptualizations about, and tools and results in analyzing, synthesizing and syntax highlighting the TP language.
[ 1, 0, 0, 0, 0, 0 ]
Title: Spectral Methods for Nonparametric Models, Abstract: Nonparametric models are versatile, albeit computationally expensive, tools for modeling mixture models. In this paper, we introduce spectral methods for the two most popular nonparametric models: the Indian Buffet Process (IBP) and the Hierarchical Dirichlet Process (HDP). We show that using spectral methods for the inference of nonparametric models is computationally and statistically efficient. In particular, we derive the lower-order moments of the IBP and the HDP, propose spectral algorithms for both models, and provide reconstruction guarantees for the algorithms. For the HDP, we further show that applying hierarchical models to datasets with hierarchical structure, which can be solved with the generalized spectral HDP, produces better solutions than flat models in terms of likelihood performance.
[ 1, 0, 0, 1, 0, 0 ]
Title: Steady Galactic Dynamos and Observational Consequences I: Halo Magnetic Fields, Abstract: We study the global consequences in the halos of spiral galaxies of the steady, axially symmetric, mean field dynamo. We use the classical theory but add the possibility of using the velocity field components as parameters in addition to the helicity and diffusivity. The analysis is based on the simplest version of the theory and uses scale-invariant solutions. The velocity field (subject to restrictions) is a scale invariant field in a `pattern' frame, in place of a full dynamical theory. The `pattern frame' of reference may either be the systemic frame or some rigidly rotating spiral pattern frame. One type of solution for the magnetic field yields off-axis, spirally wound, magnetic field lines. These predict sign changes in the Faraday screen rotation measure in every quadrant of the halo of an edge-on galaxy. Such rotation measure oscillations have been observed in the CHANG-ES survey.
[ 0, 1, 0, 0, 0, 0 ]
Title: Efficient Decision Trees for Multi-class Support Vector Machines Using Entropy and Generalization Error Estimation, Abstract: We propose new methods for Support Vector Machines (SVMs) using a tree architecture for multi-class classification. In each node of the tree, we select an appropriate binary classifier using entropy and generalization error estimation, then group the examples into positive and negative classes based on the selected classifier and train a new classifier for use in the classification phase. The proposed methods can work in time complexity between $O(\log_2 N)$ and $O(N)$, where $N$ is the number of classes. We compared the performance of our proposed methods to the traditional techniques on the UCI machine learning repository using 10-fold cross-validation. The experimental results show that our proposed methods are very useful for problems that need fast classification time or problems with a large number of classes, as the proposed methods run much faster than the traditional techniques but still provide comparable accuracy.
[ 1, 0, 0, 1, 0, 0 ]
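To make the tree construction above more concrete, the sketch below builds a binary-SVM tree in Python. The greedy rule that balances sample counts between the two class groups is a stand-in assumption, not the paper's entropy and generalization-error criterion, and the RBF-kernel SVC is likewise just a convenient choice.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

class Node:
    """One tree node: a binary SVM separating two groups of classes."""
    def __init__(self, classes):
        self.classes = list(classes)
        self.svm = None
        self.left = None
        self.right = None

def build_tree(X, y, classes):
    node = Node(classes)
    if len(classes) == 1:
        return node                      # leaf: a single class remains
    # Hypothetical grouping rule: greedily assign classes so the two groups
    # have balanced sample counts, a crude stand-in for the paper's entropy /
    # generalization-error criterion for choosing the split.
    counts = sorted(((c, int(np.sum(y == c))) for c in classes),
                    key=lambda t: -t[1])
    left_c, right_c, n_left, n_right = [], [], 0, 0
    for c, n in counts:
        if n_left <= n_right:
            left_c.append(c); n_left += n
        else:
            right_c.append(c); n_right += n
    is_left = np.isin(y, left_c)
    node.svm = SVC(kernel="rbf", gamma="scale").fit(X, is_left.astype(int))
    node.left = build_tree(X[is_left], y[is_left], left_c)
    node.right = build_tree(X[~is_left], y[~is_left], right_c)
    return node

def predict_one(node, x):
    # Walk down the tree: each SVM decides which group the example belongs to.
    while node.svm is not None:
        go_left = node.svm.predict(x.reshape(1, -1))[0] == 1
        node = node.left if go_left else node.right
    return node.classes[0]

# Toy usage on a 5-class synthetic problem.
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=5, random_state=0)
root = build_tree(X, y, classes=np.unique(y))
print("prediction for first example:", predict_one(root, X[0]), "true:", y[0])
```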
Title: A Zero Knowledge Sumcheck and its Applications, Abstract: Many seminal results in Interactive Proofs (IPs) use algebraic techniques based on low-degree polynomials, the study of which is pervasive in theoretical computer science. Unfortunately, known methods for endowing such proofs with zero knowledge guarantees do not retain this rich algebraic structure. In this work, we develop algebraic techniques for obtaining zero knowledge variants of proof protocols in a way that leverages and preserves their algebraic structure. Our constructions achieve unconditional (perfect) zero knowledge in the Interactive Probabilistically Checkable Proof (IPCP) model of Kalai and Raz [KR08] (the prover first sends a PCP oracle, then the prover and verifier engage in an Interactive Proof in which the verifier may query the PCP). Our main result is a zero knowledge variant of the sumcheck protocol [LFKN92] in the IPCP model. The sumcheck protocol is a key building block in many IPs, including the protocol for polynomial-space computation due to Shamir [Sha92], and the protocol for parallel computation due to Goldwasser, Kalai, and Rothblum [GKR15]. A core component of our result is an algebraic commitment scheme, whose hiding property is guaranteed by algebraic query complexity lower bounds [AW09,JKRS09]. This commitment scheme can then be used to considerably strengthen our previous work [BCFGRS16] that gives a sumcheck protocol with much weaker zero knowledge guarantees, itself using algebraic techniques based on algorithms for polynomial identity testing [RS05,BW04]. We demonstrate the applicability of our techniques by deriving zero knowledge variants of well-known protocols based on algebraic techniques, including the protocols of Shamir and of Goldwasser, Kalai, and Rothblum, as well as the protocol of Babai, Fortnow, and Lund [BFL91].
[ 1, 0, 0, 0, 0, 0 ]
Title: Exact diagonalization and cluster mean-field study of triangular-lattice XXZ antiferromagnets near saturation, Abstract: Quantum magnetic phases near the magnetic saturation of triangular-lattice antiferromagnets with XXZ anisotropy have been attracting renewed interest since it has been suggested that a nontrivial coplanar phase, called the $\pi$-coplanar or $\Psi$ phase, could be stabilized by quantum effects in a certain range of anisotropy parameter $J/J_z$ besides the well-known 0-coplanar (known also as $V$) and umbrella phases. Recently, Sellmann $et$ $al$. [Phys. Rev. B {\bf 91}, 081104(R) (2015)] claimed that the $\pi$-coplanar phase is absent for $S=1/2$ from an exact-diagonalization analysis in the sector of the Hilbert space with only three down-spins (three magnons). We first reconsider and improve this analysis by taking into account several low-lying eigenvalues and the associated eigenstates as a function of $J/J_z$ and by sensibly increasing the system sizes (up to 1296 spins). A careful identification analysis shows that the lowest eigenstate is a chirally antisymmetric combination of finite-size umbrella states for $J/J_z\gtrsim 2.218$ while it corresponds to a coplanar phase for $J/J_z\lesssim 2.218$. However, we demonstrate that the distinction between 0-coplanar and $\pi$-coplanar phases in the latter region is fundamentally impossible from the symmetry-preserving finite-size calculations with fixed magnon number. Therefore, we also perform a cluster mean-field plus scaling analysis for small spins $S\leq 3/2$. The obtained results, together with the previous large-$S$ analysis, indicate that the $\pi$-coplanar phase exists for any $S$ except for the classical limit ($S\rightarrow \infty$) and the existence range in $J/J_z$ is largest in the most quantum case of $S=1/2$.
[ 0, 1, 0, 0, 0, 0 ]
Title: Predicting Tomorrow's Headline using Today's Twitter Deliberations, Abstract: Predicting the popularity of a news article is a challenging task. Existing literature has mostly focused on article contents and polarity to predict popularity. However, existing research has not considered the users' preference towards a particular article. Understanding users' preference is an important aspect of predicting the popularity of news articles. Hence, we consider social media data, from the Twitter platform, to address this research gap. In our proposed model, we have considered the users' involvement as well as the users' reaction towards an article to predict the popularity of the article. In short, we are predicting tomorrow's headline by probing today's Twitter discussion. We have considered 300 political news articles from the New York Post, and our proposed approach has outperformed other baseline models.
[ 1, 0, 0, 0, 0, 0 ]
Title: On a problem of Bharanedhar and Ponnusamy involving planar harmonic mappings, Abstract: In this paper, we give a negative answer to a problem presented by Bharanedhar and Ponnusamy (Rocky Mountain J. Math. 44: 753--777, 2014) concerning univalency of a class of harmonic mappings. More precisely, we show that for all values of the involved parameter, this class contains a non-univalent function. Moreover, several results on a new subclass of close-to-convex harmonic mappings, which is motivated by work of Ponnusamy and Sairam Kaliraj (Mediterr. J. Math. 12: 647--665, 2015), are obtained.
[ 0, 0, 1, 0, 0, 0 ]
Title: Projecting UK Mortality using Bayesian Generalised Additive Models, Abstract: Forecasts of mortality provide vital information about future populations, with implications for pension and health-care policy as well as for decisions made by private companies about life insurance and annuity pricing. Stochastic mortality forecasts allow the uncertainty in mortality predictions to be taken into consideration when making policy decisions and setting product prices. Longer lifespans imply that forecasts of mortality at ages 90 and above will become more important in such calculations. This paper presents a Bayesian approach to the forecasting of mortality that jointly estimates a Generalised Additive Model (GAM) for mortality for the majority of the age-range and a parametric model for older ages where the data are sparser. The GAM allows smooth components to be estimated for age, cohort and age-specific improvement rates, together with a non-smoothed period effect. Forecasts for the United Kingdom are produced using data from the Human Mortality Database spanning the period 1961-2013. A metric that approximates predictive accuracy under Leave-One-Out cross-validation is used to estimate weights for the `stacking' of forecasts with different points of transition between the GAM and parametric elements. Mortality for males and females are estimated separately at first, but a joint model allows the asymptotic limit of mortality at old ages to be shared between sexes, and furthermore provides for forecasts accounting for correlations in period innovations. The joint and single sex model forecasts estimated using data from 1961-2003 are compared against observed data from 2004-2013 to facilitate model assessment.
[ 0, 0, 0, 1, 0, 0 ]
Title: The Case for Meta-Cognitive Machine Learning: On Model Entropy and Concept Formation in Deep Learning, Abstract: Machine learning is usually defined in behaviourist terms, where external validation is the primary mechanism of learning. In this paper, I argue for a more holistic interpretation in which finding more probable, efficient and abstract representations is as central to learning as performance. In other words, machine learning should be extended with strategies to reason over its own learning process, leading to so-called meta-cognitive machine learning. As such, the de facto definition of machine learning should be reformulated in these intrinsically multi-objective terms, taking into account not only the task performance but also internal learning objectives. To this end, we suggest a "model entropy function" to be defined that quantifies the efficiency of the internal learning processes. It is conjectured that the minimization of this model entropy leads to concept formation. Besides philosophical aspects, some initial illustrations are included to support the claims.
[ 1, 0, 0, 1, 0, 0 ]
Title: A Lagrangian fluctuation-dissipation relation for scalar turbulence, III. Turbulent Rayleigh-Bénard convection, Abstract: A Lagrangian fluctuation-dissipation relation has been derived in a previous work to describe the dissipation rate of advected scalars, both passive and active, in wall-bounded flows. We apply this relation here to develop a Lagrangian description of thermal dissipation in turbulent Rayleigh-Bénard convection in a right-cylindrical cell of arbitrary cross-section, with either imposed temperature difference or imposed heat-flux at the top and bottom walls. We obtain an exact relation between the steady-state thermal dissipation rate and the time for passive tracer particles released at the top or bottom wall to mix to their final uniform value near those walls. We show that an "ultimate regime" with the Nusselt-number scaling predicted by Spiegel (1971) or, with a log-correction, by Kraichnan (1962) will occur at high Rayleigh numbers, unless this near-wall mixing time is asymptotically much longer than the free-fall time, or almost the large-scale circulation time. We suggest a new criterion for an ultimate regime in terms of transition to turbulence of a thermal "mixing zone", which is much wider than the standard thermal boundary layer. Kraichnan-Spiegel scaling may, however, not hold if the intensity and volume of thermal plumes decrease sufficiently rapidly with increasing Rayleigh number. To help resolve this issue, we suggest a program to measure the near-wall mixing time, which we argue is accessible both by laboratory experiment and by numerical simulation.
[ 0, 1, 0, 0, 0, 0 ]
Title: Model Selection for Explosive Models, Abstract: This paper examines the limit properties of information criteria (such as AIC, BIC, HQIC) for distinguishing between the unit root model and the various kinds of explosive models. The explosive models include the local-to-unit-root model, the mildly explosive model and the regular explosive model. Initial conditions with different order of magnitude are considered. Both the OLS estimator and the indirect inference estimator are studied. It is found that BIC and HQIC, but not AIC, consistently select the unit root model when data come from the unit root model. When data come from the local-to-unit-root model, both BIC and HQIC select the wrong model with probability approaching 1 while AIC has a positive probability of selecting the right model in the limit. When data come from the regular explosive model or from the mildly explosive model in the form of $1+n^{\alpha }/n$ with $\alpha \in (0,1)$, all three information criteria consistently select the true model. Indirect inference estimation can increase or decrease the probability for information criteria to select the right model asymptotically relative to OLS, depending on the information criteria and the true model. Simulation results confirm our asymptotic results in finite sample.
[ 0, 0, 1, 1, 0, 0 ]
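For readers who want to see how such information criteria behave in practice, the following toy Python sketch fits an AR(1) by OLS and compares AIC, BIC and HQIC against the imposed unit-root model. It uses a conditional Gaussian likelihood and ignores the paper's indirect-inference estimator and local-to-unit-root asymptotics; it illustrates the model-selection mechanics only.

```python
import numpy as np

def ar1_criteria(y):
    """Compare a fitted AR(1) with the imposed unit-root model via AIC/BIC/HQIC.

    Minimal illustration: Gaussian likelihood conditional on y[0], counting
    one autoregressive parameter for the AR(1) model and none for unit root.
    """
    y0, y1 = y[:-1], y[1:]
    n = len(y1)
    rho_hat = np.dot(y0, y1) / np.dot(y0, y0)        # OLS estimate of rho

    def loglik(resid):
        s2 = np.mean(resid ** 2)                     # ML variance estimate
        return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

    results = {}
    for name, resid, k in [("unit root", y1 - y0, 0),
                           ("AR(1)", y1 - rho_hat * y0, 1)]:
        ll = loglik(resid)
        results[name] = {"AIC":  -2 * ll + 2 * k,
                         "BIC":  -2 * ll + k * np.log(n),
                         "HQIC": -2 * ll + 2 * k * np.log(np.log(n))}
    return rho_hat, results

# Toy usage: data generated from a mildly explosive root (rho = 1.02).
rng = np.random.default_rng(1)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 1.02 * y[t - 1] + rng.standard_normal()
print(ar1_criteria(y))
```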
Title: A Scale Free Algorithm for Stochastic Bandits with Bounded Kurtosis, Abstract: Existing strategies for finite-armed stochastic bandits mostly depend on a parameter of scale that must be known in advance. Sometimes this is in the form of a bound on the payoffs, or the knowledge of a variance or subgaussian parameter. The notable exceptions are the analysis of Gaussian bandits with unknown mean and variance by Cowan and Katehakis [2015] and of uniform distributions with unknown support [Cowan and Katehakis, 2015]. The results derived in these specialised cases are generalised here to the non-parametric setup, where the learner knows only a bound on the kurtosis of the noise, which is a scale free measure of the extremity of outliers.
[ 0, 0, 0, 1, 0, 0 ]
Title: Toward III-V/Si co-integration by controlling biatomic steps on hydrogenated Si(001), Abstract: The integration of III-V on silicon is still a hot topic as it will open up a way to co-integrate Si CMOS logic with photonic devices. To reach this aim, several hurdles must be overcome, most notably the generation of antiphase boundaries (APBs) at the III-V/Si(001) interface. Density functional theory (DFT) has been used to demonstrate the existence of double-layer steps on nominal Si(001) which are formed during annealing under a proper hydrogen chemical potential. This phenomenon could be explained by the formation of dimer vacancy lines which could be responsible for the preferential and selective etching of one type of step, leading to the creation of a double-step surface. To check this hypothesis, different experiments have been carried out in an industrial 300 mm MOCVD reactor where the total pressure during the annealing step of the Si(001) surface has been varied. Under optimized conditions, an APB-free GaAs layer was grown on a nominal Si(001) surface, paving the way for III-V integration on an industrial silicon platform.
[ 0, 1, 0, 0, 0, 0 ]
Title: Proceedings XVI Jornadas sobre Programación y Lenguajes, Abstract: This volume contains a selection of the papers presented at the XVI Jornadas sobre Programación y Lenguajes (PROLE 2016), held at Salamanca, Spain, during September 14th-15th, 2016. Previous editions of the workshop were held in Santander (2015), Cádiz (2014), Madrid (2013), Almería (2012), A Coruña (2011), València (2010), San Sebastián (2009), Gijón (2008), Zaragoza (2007), Sitges (2006), Granada (2005), Málaga (2004), Alicante (2003), El Escorial (2002), and Almagro (2001). Programming languages provide a conceptual framework which is necessary for the development, analysis, optimization and understanding of programs and programming tasks. The aim of the PROLE series of conferences (PROLE stems from PROgramación y LEnguajes) is to serve as a meeting point for Spanish research groups which develop their work in the area of programming and programming languages. The organization of this series of events aims at fostering the exchange of ideas, experiences and results among these groups. Promoting further collaboration is also one of its main goals.
[ 1, 0, 0, 0, 0, 0 ]
Title: The proximal point algorithm in geodesic spaces with curvature bounded above, Abstract: We investigate the asymptotic behavior of sequences generated by the proximal point algorithm for convex functions in complete geodesic spaces with curvature bounded above. Using the notion of resolvents of such functions, which was recently introduced by the authors, we show the existence of minimizers of convex functions under the boundedness assumptions on such sequences as well as the convergence of such sequences to minimizers of given functions.
[ 0, 0, 1, 0, 0, 0 ]
Title: Spatio-temporal canards in neural field equations, Abstract: Canards are special solutions to ordinary differential equations that follow invariant repelling slow manifolds for long time intervals. In realistic biophysical single cell models, canards are responsible for several complex neural rhythms observed experimentally, but their existence and role in spatially-extended systems is largely unexplored. We describe a novel type of coherent structure in which a spatial pattern displays temporal canard behaviour. Using interfacial dynamics and geometric singular perturbation theory, we classify spatio-temporal canards and give conditions for the existence of folded-saddle and folded-node canards. We find that spatio-temporal canards are robust to changes in the synaptic connectivity and firing rate. The theory correctly predicts the existence of spatio-temporal canards with octahedral symmetries in a neural field model posed on the unit sphere.
[ 0, 1, 1, 0, 0, 0 ]
Title: Efficient Convolutional Network Learning using Parametric Log based Dual-Tree Wavelet ScatterNet, Abstract: We propose a DTCWT ScatterNet Convolutional Neural Network (DTSCNN) formed by replacing the first few layers of a CNN network with a parametric log based DTCWT ScatterNet. The ScatterNet extracts edge based invariant representations that are used by the later layers of the CNN to learn high-level features. This improves the training of the network as the later layers can learn more complex patterns from the start of learning because the edge representations are already present. The efficient learning of the DTSCNN network is demonstrated on CIFAR-10 and Caltech-101 datasets. The generic nature of the ScatterNet front-end is shown by an equivalent performance to pre-trained CNN front-ends. A comparison with the state-of-the-art on CIFAR-10 and Caltech-101 datasets is also presented.
[ 1, 0, 0, 1, 0, 0 ]
Title: Fixed points of Legendre-Fenchel type transforms, Abstract: A recent result characterizes the fully order reversing operators acting on the class of lower semicontinuous proper convex functions in a real Banach space as certain linear deformations of the Legendre-Fenchel transform. Motivated by the Hilbert space version of this result and by the well-known result saying that this convex conjugation transform has a unique fixed point (namely, the normalized energy function), we investigate the fixed point equation in which the involved operator is fully order reversing and acts on the above-mentioned class of functions. It turns out that this nonlinear equation is very sensitive to the involved parameters and can have no solution, a unique solution, or several (possibly infinitely many) ones. Our analysis yields a few by-products, such as results related to positive definite operators, and to functional equations and inclusions involving monotone operators.
[ 0, 0, 1, 0, 0, 0 ]
Title: High-Dimensional Materials and Process Optimization using Data-driven Experimental Design with Well-Calibrated Uncertainty Estimates, Abstract: The optimization of composition and processing to obtain materials that exhibit desirable characteristics has historically relied on a combination of scientist intuition, trial and error, and luck. We propose a methodology that can accelerate this process by fitting data-driven models to experimental data as it is collected to suggest which experiment should be performed next. This methodology can guide the scientist to test the most promising candidates earlier, and can supplement scientific intuition and knowledge with data-driven insights. A key strength of the proposed framework is that it scales to high-dimensional parameter spaces, as are typical in materials discovery applications. Importantly, the data-driven models incorporate uncertainty analysis, so that new experiments are proposed based on a combination of exploring high-uncertainty candidates and exploiting high-performing regions of parameter space. Over four materials science test cases, our methodology led to the optimal candidate being found with three times fewer required measurements than random guessing on average.
[ 0, 1, 0, 1, 0, 0 ]
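The proposal above is in the spirit of sequential model-based (Bayesian) experimental design. A minimal sketch of one such loop, assuming a Gaussian-process surrogate and an expected-improvement acquisition rather than whatever specific models the authors use, could look as follows in Python.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best):
    """EI acquisition: trades off exploiting high predictions (mu) against
    exploring high-uncertainty candidates (sigma)."""
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def suggest_next(X_done, y_done, X_candidates):
    """Fit a GP to the experiments run so far and rank untried candidates."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_done, y_done)
    mu, sigma = gp.predict(X_candidates, return_std=True)
    ei = expected_improvement(mu, sigma, np.max(y_done))
    return X_candidates[np.argmax(ei)]

# Toy usage on a synthetic 3-parameter "composition/processing" space.
rng = np.random.default_rng(0)
X_all = rng.uniform(size=(200, 3))
f = lambda X: -np.sum((X - 0.3) ** 2, axis=1)   # hidden objective (placeholder)
X_done, y_done = X_all[:5], f(X_all[:5])        # first few measured candidates
print("next experiment to run:", suggest_next(X_done, y_done, X_all[5:]))
```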
Title: An upper bound on tricolored ordered sum-free sets, Abstract: We present a strengthening of the lemma on the lower bound of the slice rank by Tao (2016) motivated by the Croot-Lev-Pach-Ellenberg-Gijswijt bound on cap sets (2017, 2017). The Croot-Lev-Pach-Ellenberg-Gijswijt method and the lemma of Tao are based on the fact that the rank of a diagonal matrix is equal to the number of non-zero diagonal entries. Our lemma is based on the rank of upper-triangular matrices. This stronger lemma allows us to prove the following extension of the Ellenberg-Gijswijt result (2017). A tricolored ordered sum-free set in $\mathbb F_p^n$ is a collection $\{(a_i,b_i,c_i):i=1,2,\ldots,m\}$ of ordered triples in $(\mathbb F_p^n )^3$ such that $a_i+b_i+c_i=0$ and if $a_i+b_j+c_k=0$, then $i\le j\le k$. By using the new lemma, we present an upper bound on the size of a tricolored ordered sum-free set in $\mathbb F_p^n$.
[ 0, 0, 1, 0, 0, 0 ]
Title: The effect of the spatial domain in FANOVA models with ARH(1) error term, Abstract: Functional Analysis of Variance (FANOVA) from Hilbert-valued correlated data with spatial rectangular or circular supports is analyzed, when Dirichlet conditions are assumed on the boundary. Specifically, a Hilbert-valued fixed effect model with error term defined from an Autoregressive Hilbertian process of order one (ARH(1) process) is considered, extending the formulation given in Ruiz-Medina (2016). A new statistical test is also derived to contrast the significance of the functional fixed effect parameters. The Dirichlet conditions established at the boundary affect the dependence range of the correlated error term, while the rate of convergence to zero of the eigenvalues of the covariance kernels, characterizing the Gaussian functional error components, directly affects the stability of the generalized least-squares parameter estimation problem. A simulation study and a real-data application related to fMRI analysis are undertaken to illustrate the performance of the parameter estimator and statistical test derived.
[ 0, 0, 1, 1, 0, 0 ]
Title: Controlling a remotely located Robot using Hand Gestures in real time: A DSP implementation, Abstract: Telepresence is a present-day necessity, since we cannot reach everywhere ourselves, and it is also useful for saving human lives in dangerous places. A robot that can be controlled from a distant location can solve these problems; the control link could be wireless communication or networking methods. Control should also be smooth and in real time so that the robot can act effectively on every minor signal. This paper discusses a method to control a robot over the network from a distant location. The robot was controlled by hand gestures captured by a live camera. A DSP board, the TMS320DM642EVM, was used to implement image pre-processing and to speed up the whole system. PCA was used for gesture classification, and robot actuation was done according to predefined procedures. The classification information was sent over the network in the experiment. This method is robust and could be used to control any kind of robot over a distance.
[ 1, 0, 0, 0, 0, 0 ]
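As a rough sketch of the recognition pipeline described above, the Python snippet below compresses gesture frames with PCA and classifies them before mapping the result to a robot command. The k-nearest-neighbour classifier, the frame size and the command table are all illustrative assumptions; the paper's DSP implementation and network transport are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Placeholder data: rows are flattened, pre-processed hand-gesture frames and
# labels are gesture classes mapped to robot commands (hypothetical mapping).
rng = np.random.default_rng(0)
frames = rng.random((200, 64 * 64))          # 200 frames of 64x64 pixels
gestures = rng.integers(0, 4, size=200)      # 4 gesture classes
COMMANDS = {0: "FORWARD", 1: "BACKWARD", 2: "LEFT", 3: "RIGHT"}

# PCA compresses each frame to a low-dimensional "eigen-gesture" vector,
# and a simple classifier assigns it to a gesture class.
model = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=3))
model.fit(frames, gestures)

# Classify a new frame and emit the command that would be sent to the robot.
new_frame = rng.random((1, 64 * 64))
command = COMMANDS[int(model.predict(new_frame)[0])]
print("send over network:", command)
```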
Title: Hybrid Machine Learning Approach to Popularity Prediction of Newly Released Contents for Online Video Streaming Service, Abstract: In the industry of video content providers such as VOD and IPTV, predicting the popularity of video contents in advance is critical not only from a marketing perspective but also from a network optimization perspective. By predicting in advance whether a content item will be successful or not, the content file, which is large, can be efficiently deployed on the proper service-providing server, leading to network cost optimization. Many previous studies have approached this through view-count prediction. However, those studies make predictions based on historical view-count data from users; in that case, the content has already been published to users and deployed on a service server. Such approaches make it possible to efficiently deploy content that has already been published, but they cannot be used for content that has not yet been published. To address this problem, this research proposes a hybrid machine learning approach to a classification model for popularity prediction of newly released video content that has not yet been published. In this paper, we create a new variable based on the related content of a specific content item and divide the entire dataset by the characteristics of the contents. Next, prediction is performed using XGBoost- and deep-neural-net-based models according to the data characteristics of each cluster. Our model uses content metadata for prediction, so we use categorical embedding techniques to address the sparsity of categorical variables and make them learn efficiently in the deep neural net model. We also use the FTRL-proximal algorithm to address the view-count volatility of video content. We achieve overall better performance than the previous standalone methods on a dataset from one of the top streaming service companies.
[ 1, 0, 0, 1, 0, 0 ]
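The cluster-then-model idea above can be sketched in a few lines. In the Python snippet below, scikit-learn's gradient boosting and MLP classifiers act as lightweight stand-ins for the paper's XGBoost and deep-neural-net models, and the synthetic metadata, the binary "hit" label and the cluster-to-model rule are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic metadata for unreleased titles: columns might encode genre
# embeddings, cast popularity, statistics of "related content", etc. (assumed).
rng = np.random.default_rng(0)
X = rng.random((1000, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(1000) > 0.8).astype(int)

# 1) Partition the catalogue by content characteristics.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# 2) Per cluster, pick a model family: here smaller clusters get boosting and
#    larger ones get an MLP, standing in for the paper's XGBoost / deep nets.
models = {}
for c in range(3):
    idx = clusters == c
    clf = (GradientBoostingClassifier() if idx.sum() < 400
           else MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500))
    models[c] = clf.fit(X[idx], y[idx])
    print(f"cluster {c}: {type(clf).__name__}, "
          f"train acc = {models[c].score(X[idx], y[idx]):.2f}")
```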
Title: Spectral Calibration of the Fluorescence Telescopes of the Pierre Auger Observatory, Abstract: We present a novel method to measure precisely the relative spectral response of the fluorescence telescopes of the Pierre Auger Observatory. We used a portable light source based on a xenon flasher and a monochromator to measure the relative spectral efficiencies of eight telescopes in steps of 5 nm from 280 nm to 440 nm. Each point in a scan had approximately 2 nm FWHM out of the monochromator. Different sets of telescopes in the observatory have different optical components, and the eight telescopes measured represent two each of the four combinations of components represented in the observatory. We made an end-to-end measurement of the response from different combinations of optical components, and the monochromator setup allowed for more precise and complete measurements than our previous multi-wavelength calibrations. We find an overall uncertainty in the calibration of the spectral response of most of the telescopes of 1.5% for all wavelengths; the six oldest telescopes have larger overall uncertainties of about 2.2%. We also report changes in physics measurables due to the change in calibration, which are generally small.
[ 0, 1, 0, 0, 0, 0 ]
Title: Two properties of Müntz spaces, Abstract: We show that Müntz spaces, as subspaces of $C[0,1]$, contain asymptotically isometric copies of $c_0$ and that their dual spaces are octahedral.
[ 0, 0, 1, 0, 0, 0 ]
Title: On Quaternionic Tori and their Moduli Spaces, Abstract: Quaternionic tori are defined as quotients of the skew field $\mathbb{H}$ of quaternions by rank-4 lattices. Using slice regular functions, these tori are endowed with natural structures of quaternionic manifolds (in fact quaternionic curves), and a fundamental region in a $12$-dimensional real subspace is then constructed to classify them up to biregular diffeomorphisms. The points of the moduli space correspond to suitable \emph{special} bases of rank-4 lattices, which are studied with respect to the action of the group $GL(4, \mathbb{Z})$, and up to biregular diffeomorphisms. All tori with a non-trivial group of biregular automorphisms - and all possible groups of their biregular automorphisms - are then identified, and recognized to correspond to five different subsets of boundary points of the moduli space.
[ 0, 0, 1, 0, 0, 0 ]
Title: GIER: A Danish computer from 1961 with a role in the modern revolution of astronomy, Abstract: A Danish computer, GIER, from 1961 played a vital role in the development of a new method for astrometric measurement. This method, photon counting astrometry, ultimately led to two satellites with a significant role in the modern revolution of astronomy. A GIER was installed at the Hamburg Observatory in 1964 where it was used to implement the entirely new method for the measurement of stellar positions by means of a meridian circle, then the fundamental instrument of astrometry. An expedition to Perth in Western Australia with the instrument and the computer was a success. This method was also implemented in space in the first ever astrometric satellite Hipparcos launched by ESA in 1989. The Hipparcos results published in 1997 revolutionized astrometry with an impact in all branches of astronomy from the solar system and stellar structure to cosmic distances and the dynamics of the Milky Way. In turn, the results paved the way for a successor, the one million times more powerful Gaia astrometry satellite launched by ESA in 2013. Preparations for a Gaia successor in twenty years are making progress.
[ 0, 1, 0, 0, 0, 0 ]
Title: Deep Bayesian Active Learning for Natural Language Processing: Results of a Large-Scale Empirical Study, Abstract: Several recent papers investigate Active Learning (AL) for mitigating the data dependence of deep learning for natural language processing. However, the applicability of AL to real-world problems remains an open question. While in supervised learning, practitioners can try many different methods, evaluating each against a validation set before selecting a model, AL affords no such luxury. Over the course of one AL run, an agent annotates its dataset exhausting its labeling budget. Thus, given a new task, an active learner has no opportunity to compare models and acquisition functions. This paper provides a large scale empirical study of deep active learning, addressing multiple tasks and, for each, multiple datasets, multiple models, and a full suite of acquisition functions. We find that across all settings, Bayesian active learning by disagreement, using uncertainty estimates provided either by Dropout or Bayes-by-Backprop, significantly improves over i.i.d. baselines and usually outperforms classic uncertainty sampling.
[ 0, 0, 0, 1, 0, 0 ]
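For concreteness, the snippet below sketches the BALD acquisition with MC dropout in PyTorch: dropout is kept active at query time, several stochastic forward passes approximate the posterior, and the mutual information between predictions and model weights scores each unlabelled example. The toy classifier and pool are placeholders; this is not the paper's experimental code.

```python
import torch
import torch.nn.functional as F

def bald_scores(model, pool_x, n_samples=20):
    """Bayesian Active Learning by Disagreement (BALD) with MC dropout.

    Scores each unlabelled example by the mutual information between its
    predicted label and the (dropout-approximated) model posterior:
        I = H[ E_w p(y|x,w) ] - E_w H[ p(y|x,w) ].
    Assumes `model` contains dropout layers and outputs class logits.
    """
    model.train()                       # keep dropout active at query time
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(pool_x), dim=-1)
                             for _ in range(n_samples)])      # (T, N, C)
    mean_p = probs.mean(dim=0)
    entropy_of_mean = -(mean_p * torch.log(mean_p + 1e-12)).sum(-1)
    mean_entropy = -(probs * torch.log(probs + 1e-12)).sum(-1).mean(0)
    return entropy_of_mean - mean_entropy    # higher = more informative

# Toy usage with a small dropout classifier as a stand-in model.
model = torch.nn.Sequential(torch.nn.Linear(50, 64), torch.nn.ReLU(),
                            torch.nn.Dropout(0.5), torch.nn.Linear(64, 3))
pool = torch.randn(100, 50)
scores = bald_scores(model, pool)
query_idx = scores.topk(10).indices     # next examples to send for annotation
print("indices to annotate next:", query_idx.tolist())
```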
Title: Grounding Symbols in Multi-Modal Instructions, Abstract: As robots begin to cohabit with humans in semi-structured environments, the need arises to understand instructions involving rich variability---for instance, learning to ground symbols in the physical world. Realistically, this task must cope with small datasets consisting of a particular user's contextual assignment of meaning to terms. We present a method for processing a raw stream of cross-modal input---i.e., linguistic instructions, visual perception of a scene and a concurrent trace of 3D eye tracking fixations---to produce the segmentation of objects with a correspondent association to high-level concepts. To test our framework we present experiments in a table-top object manipulation scenario. Our results show our model learns the user's notion of colour and shape from a small number of physical demonstrations, generalising to identifying physical referents for novel combinations of the words.
[ 1, 0, 0, 0, 0, 0 ]
Title: Relaxing Integrity Requirements for Attack-Resilient Cyber-Physical Systems, Abstract: The increase in network connectivity has also resulted in several high-profile attacks on cyber-physical systems. An attacker that manages to access a local network could remotely affect control performance by tampering with sensor measurements delivered to the controller. Recent results have shown that with network-based attacks, such as Man-in-the-Middle attacks, the attacker can introduce an unbounded state estimation error if measurements from a suitable subset of sensors contain false data when delivered to the controller. While these attacks can be addressed with the standard cryptographic tools that ensure data integrity, their continuous use would introduce significant communication and computation overhead. Consequently, we study effects of intermittent data integrity guarantees on system performance under stealthy attacks. We consider linear estimators equipped with a general type of residual-based intrusion detectors (including $\chi^2$ and SPRT detectors), and show that even when integrity of sensor measurements is enforced only intermittently, the attack impact is significantly limited; specifically, the state estimation error is bounded or the attacker cannot remain stealthy. Furthermore, we present methods to: (1) evaluate the effects of any given integrity enforcement policy in terms of reachable state-estimation errors for any type of stealthy attacks, and (2) design an enforcement policy that provides the desired estimation error guarantees under attack. Finally, on three automotive case studies we show that even with less than 10% of authenticated messages we can ensure satisfiable control performance in the presence of attacks.
[ 1, 0, 1, 0, 0, 0 ]
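A minimal example of the kind of residual-based detector considered above is the classical chi-square test on estimator innovations, sketched below in Python under the assumption that the attack-free innovation covariance is known. The paper's reachability analysis and intermittent-authentication policies are beyond this sketch.

```python
import numpy as np
from scipy.stats import chi2

def chi2_detector(residuals, innovation_cov, alpha=0.01):
    """Residual-based chi-square intrusion detector (illustrative sketch).

    residuals      : (T, m) innovation sequence from a linear state estimator
    innovation_cov : (m, m) innovation covariance under attack-free operation
    Flags time steps whose normalised residual energy exceeds the chi-square
    threshold with false-alarm rate alpha.
    """
    S_inv = np.linalg.inv(innovation_cov)
    g = np.einsum("ti,ij,tj->t", residuals, S_inv, residuals)
    threshold = chi2.ppf(1.0 - alpha, df=residuals.shape[1])
    return g > threshold, g, threshold

# Toy usage: nominal residuals, then a constant bias injected on one sensor.
rng = np.random.default_rng(0)
S = np.diag([0.5, 0.8])
r = rng.multivariate_normal(np.zeros(2), S, size=200)
r[120:, 0] += 1.5                          # false data from step 120 onwards
alarms, g, thr = chi2_detector(r, S)
print(f"alarms after the attack starts: {int(alarms[120:].sum())} / 80")
```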
Title: Cross-Lingual Cross-Platform Rumor Verification Pivoting on Multimedia Content, Abstract: With the increasing popularity of smart devices, rumors with multimedia content become more and more common on social networks. The multimedia information usually makes rumors look more convincing. Therefore, finding an automatic approach to verify rumors with multimedia content is a pressing task. Previous rumor verification research only utilizes multimedia as input features. We propose not to use the multimedia content but to find external information in other news platforms pivoting on it. We introduce a new features set, cross-lingual cross-platform features that leverage the semantic similarity between the rumors and the external information. When implemented, machine learning methods utilizing such features achieved the state-of-the-art rumor verification results.
[ 1, 0, 0, 0, 0, 0 ]
Title: Convergence of the Kähler-Ricci iteration, Abstract: The Ricci iteration is a discrete analogue of the Ricci flow. According to Perelman, the Ricci flow converges to a Kähler-Einstein metric whenever one exists, and it has been conjectured that the Ricci iteration should behave similarly. This article confirms this conjecture. As a special case, this gives a new method of uniformization of the Riemann sphere.
[ 0, 0, 1, 0, 0, 0 ]
Title: Joint estimation of genetic and parent-of-origin effects using RNA-seq data from human, Abstract: RNA sequencing allows one to study allelic imbalance of gene expression, which may be due to genetic factors or genomic imprinting. It is desirable to model both genetic and parent-of-origin effects simultaneously to avoid confounding and to improve the power to detect either effect. In a study of an experimental cross, separation of genetic and parent-of-origin effects can be achieved by studying reciprocal crosses of two inbred strains. In contrast, this task is much more challenging for an outbred population such as the human population. To address this challenge, we propose a new framework to combine experimental strategies and novel statistical methods. Specifically, we propose to collect genotype data from family trios as well as RNA-seq data from the children of family trios. We have developed a new statistical method to estimate both genetic and parent-of-origin effects from such data sets. We demonstrated this approach by studying 30 trios of HapMap samples. Our results support some of the previous findings of imprinted genes and also recover new candidate imprinted genes.
[ 0, 0, 0, 1, 0, 0 ]
Title: Hydra: a C++11 framework for data analysis in massively parallel platforms, Abstract: Hydra is a header-only, templated and C++11-compliant framework designed to perform the typical bottleneck calculations found in common HEP data analyses on massively parallel platforms. The framework is implemented on top of the C++11 Standard Library and a variadic version of the Thrust library and is designed to run on Linux systems, using OpenMP, CUDA and TBB enabled devices. This contribution summarizes the main features of Hydra. A basic description of the overall design, functionality and user interface is provided, along with some code examples and measurements of performance.
[ 1, 1, 0, 0, 0, 0 ]
Title: Self-exciting Point Processes: Infections and Implementations, Abstract: This is a comment on Reinhart's "Review of Self-Exciting Spatio-Temporal Point Processes and Their Applications" (arXiv:1708.02647v1). I contribute some experiences from modelling the spread of infectious diseases. Furthermore, I try to complement the review with regard to the availability of software for the described models, which I think is essential in "paving the way for new uses".
[ 0, 0, 0, 1, 0, 0 ]
Title: Magnetism and charge density waves in RNiC$_2$ (R = Ce, Pr, Nd), Abstract: We have compared the magnetic, transport, galvanomagnetic and specific heat properties of CeNiC$_2$, PrNiC$_2$ and NdNiC$_2$ to study the interplay between charge density waves and magnetism in these compounds. The negative magnetoresistance in NdNiC$_2$ is discussed in terms of the partial destruction of charge density waves and an irreversible phase transition stabilized by the field induced ferromagnetic transformation is reported. For PrNiC$_2$ we demonstrate that the magnetic field initially weakens the CDW state, due to the Zeeman splitting of conduction bands. However, the Fermi surface nesting is enhanced at a temperature related to the magnetic anomaly.
[ 0, 1, 0, 0, 0, 0 ]
Title: Coaxial collisions of a vortex ring and a sphere in an inviscid incompressible fluid, Abstract: The dynamics of a circular thin vortex ring and a sphere moving along the symmetry axis of the ring in an inviscid incompressible fluid is studied on the basis of Euler's equations of motion. The equations of motion for position and radius of the vortex ring and those for position and velocity of the sphere are coupled by hydrodynamic interactions. The equations are cast in Hamiltonian form, from which it is seen that total energy and momentum are conserved. The four Hamiltonian equations of motion are solved numerically for a variety of initial conditions.
[ 0, 1, 0, 0, 0, 0 ]
Title: A probabilistic approach to emission-line galaxy classification, Abstract: We invoke a Gaussian mixture model (GMM) to jointly analyse two traditional emission-line classification schemes of galaxy ionization sources: the Baldwin-Phillips-Terlevich (BPT) and $\rm W_{H\alpha}$ vs. [NII]/H$\alpha$ (WHAN) diagrams, using spectroscopic data from the Sloan Digital Sky Survey Data Release 7 and SEAGal/STARLIGHT datasets. We apply a GMM to empirically define classes of galaxies in a three-dimensional space spanned by the $\log$ [OIII]/H$\beta$, $\log$ [NII]/H$\alpha$, and $\log$ EW(H${\alpha}$) optical parameters. The best-fit GMM based on several statistical criteria suggests a solution around four Gaussian components (GCs), which are capable of explaining up to 97 per cent of the data variance. Using elements of information theory, we compare each GC to its respective astronomical counterpart. GC1 and GC4 are associated with star-forming galaxies, suggesting the need to define a new starburst subgroup. GC2 is associated with BPT's Active Galactic Nuclei (AGN) class and WHAN's weak AGN class. GC3 is associated with BPT's composite class and WHAN's strong AGN class. Conversely, there is no statistical evidence -- based on four GCs -- for the existence of a Seyfert/LINER dichotomy in our sample. Notwithstanding, the inclusion of an additional GC5 unravels it. GC5 appears to be associated with the LINER and Passive galaxies on the BPT and WHAN diagrams, respectively. Subtleties aside, we demonstrate the potential of our methodology to recover/unravel different objects inside the wilderness of astronomical datasets, while retaining the ability to convey physically interpretable results. The probabilistic classifications from the GMM analysis are publicly available within the COINtoolbox (this https URL\_Catalogue/).
[ 0, 1, 0, 1, 0, 0 ]
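The core fitting step described above maps naturally onto off-the-shelf tooling. The sketch below fits Gaussian mixtures with scikit-learn and selects the number of components by BIC (one of several criteria the authors weigh); the synthetic three-dimensional points merely stand in for the $\log$ [OIII]/H$\beta$, $\log$ [NII]/H$\alpha$ and $\log$ EW(H$\alpha$) measurements drawn from SDSS/SEAGal-STARLIGHT in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_best_gmm(X, max_components=6, random_state=0):
    """Fit GMMs with 1..max_components Gaussian components and keep the one
    with the lowest BIC (one of several model-selection criteria)."""
    fits = [GaussianMixture(n_components=k, covariance_type="full",
                            random_state=random_state).fit(X)
            for k in range(1, max_components + 1)]
    bics = [m.bic(X) for m in fits]
    return fits[int(np.argmin(bics))], bics

# Toy usage on synthetic points standing in for the three measured quantities.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(loc, 0.2, size=(300, 3))
               for loc in ([-0.3, -0.5, 1.5], [0.5, 0.0, 0.5],
                           [0.1, -0.2, 1.0], [0.8, 0.3, -0.3])])
gmm, bics = fit_best_gmm(X)
print("BIC per k:", np.round(bics, 1))
print("chosen number of components:", gmm.n_components)
print("soft memberships of first object:", gmm.predict_proba(X[:1]).round(3))
```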
Title: Quantum repeaters with individual rare-earth ions at telecommunication wavelengths, Abstract: We present a quantum repeater scheme that is based on individual erbium and europium ions. Erbium ions are attractive because they emit photons at telecommunication wavelength, while europium ions offer exceptional spin coherence for long-term storage. Entanglement between distant erbium ions is created by photon detection. The photon emission rate of each erbium ion is enhanced by a microcavity with high Purcell factor, as has recently been demonstrated. Entanglement is then transferred to nearby europium ions for storage. Gate operations between nearby ions are performed using dynamically controlled electric-dipole coupling. These gate operations allow entanglement swapping to be employed in order to extend the distance over which entanglement is distributed. The deterministic character of the gate operations allows improved entanglement distribution rates in comparison to atomic ensemble-based protocols. We also propose an approach that utilizes multiplexing in order to enhance the entanglement distribution rate.
[ 0, 1, 0, 0, 0, 0 ]
Title: Questions and dependency in intuitionistic logic, Abstract: In recent years, the logic of questions and dependencies has been investigated in the closely related frameworks of inquisitive logic and dependence logic. These investigations have assumed classical logic as the background logic of statements, and added formulas expressing questions and dependencies to this classical core. In this paper, we broaden the scope of these investigations by studying questions and dependency in the context of intuitionistic logic. We propose an intuitionistic team semantics, where teams are embedded within intuitionistic Kripke models. The associated logic is a conservative extension of intuitionistic logic with questions and dependence formulas. We establish a number of results about this logic, including a normal form result, a completeness result, and translations to classical inquisitive logic and modal dependence logic.
[ 1, 0, 1, 0, 0, 0 ]
Title: Learning Aided Optimization for Energy Harvesting Devices with Outdated State Information, Abstract: This paper considers utility optimal power control for energy harvesting wireless devices with a finite capacity battery. The distribution information of the underlying wireless environment and harvestable energy is unknown and only outdated system state information is known at the device controller. This scenario shares similarity with Lyapunov opportunistic optimization and online learning but is different from both. By a novel combination of Zinkevich's online gradient learning technique and the drift-plus-penalty technique from Lyapunov opportunistic optimization, this paper proposes a learning-aided algorithm that achieves utility within $O(\epsilon)$ of the optimal, for any desired $\epsilon>0$, by using a battery with an $O(1/\epsilon)$ capacity. The proposed algorithm has low complexity and makes power investment decisions based on system history, without requiring knowledge of the system state or its probability distribution.
[ 1, 0, 0, 0, 0, 0 ]
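To give a feel for the drift-plus-penalty ingredient mentioned above, here is a generic Lyapunov-style power controller for an energy-harvesting node, sketched in Python. The perturbed battery queue, the $\log(1+hp)$ utility and the i.i.d. channel and harvesting processes are illustrative assumptions; the paper's learning-aided algorithm additionally runs a Zinkevich-style online-gradient step on outdated state information, which is omitted here.

```python
import numpy as np

def dpp_power_control(T=10_000, B_max=50.0, p_max=2.0, V=5.0, seed=0):
    """Generic drift-plus-penalty power controller for an energy-harvesting
    node (illustration only, not the paper's learning-aided algorithm).

    Each slot: maximise  V*log(1 + h*p) + Q*p  over 0 <= p <= min(p_max, B),
    where Q = B - theta is a shifted battery level acting as a virtual queue.
    """
    rng = np.random.default_rng(seed)
    theta = B_max / 2.0
    B, utility = B_max / 2.0, 0.0
    for _ in range(T):
        h = rng.exponential(1.0)          # channel gain (i.i.d. stand-in)
        e = rng.uniform(0.0, 1.0)         # harvested energy this slot
        Q = B - theta
        if Q >= 0:
            p = min(p_max, B)             # battery above threshold: spend freely
        else:
            # Concave objective: closed-form maximiser, clipped to feasibility.
            p = np.clip(V / (-Q) - 1.0 / h, 0.0, min(p_max, B))
        utility += np.log1p(h * p)
        B = min(B_max, B - p + e)         # battery dynamics with finite capacity
    return utility / T

print("time-average utility:", round(dpp_power_control(), 3))
```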
Title: Origin of Operating Voltage Increase in InGaN-based Light-emitting Diodes under High Injection: Phase Space Filling Effect on Forward Voltage Characteristics, Abstract: As an attempt to further elucidate the operating voltage increase in InGaN-based light-emitting diodes (LEDs), the radiative and nonradiative current components are separately analyzed in combination with the Shockley diode equation. Through the analyses, we have shown that the increase in operating voltage is caused by the phase space filling effect under high injection. We have also shown that the classical Shockley diode equation is insufficient to comprehensively explain the I-V curve of the LED devices since the transport and recombination characteristics of the respective current components are basically different. Hence, we have proposed a modified Shockley equation suitable for modern LED devices. Our analysis gives new insight into the cause of the wall-plug-efficiency drop influenced by such factors as the efficiency droop and the high operating voltage in InGaN LEDs.
[ 0, 1, 0, 0, 0, 0 ]
Title: Convolutional Dictionary Learning: A Comparative Review and New Algorithms, Abstract: Convolutional sparse representations are a form of sparse representation with a dictionary that has a structure that is equivalent to convolution with a set of linear filters. While effective algorithms have recently been developed for the convolutional sparse coding problem, the corresponding dictionary learning problem is substantially more challenging. Furthermore, although a number of different approaches have been proposed, the absence of thorough comparisons between them makes it difficult to determine which of them represents the current state of the art. The present work both addresses this deficiency and proposes some new approaches that outperform existing ones in certain contexts. A thorough set of performance comparisons indicates a very wide range of performance differences among the existing and proposed methods, and clearly identifies those that are the most effective.
[ 1, 0, 0, 1, 0, 0 ]
Title: Boolean dimension and tree-width, Abstract: The dimension is a key measure of complexity of partially ordered sets. Small dimension allows succinct encoding. Indeed, if $P$ has dimension $d$, then to know whether $x \leq y$ in $P$ it is enough to check whether $x\leq y$ in each of the $d$ linear extensions of a witnessing realizer. Focusing on the encoding aspect, Nešetřil and Pudlák defined a more expressive version of dimension. A poset $P$ has boolean dimension at most $d$ if it is possible to decide whether $x \leq y$ in $P$ by looking at the relative position of $x$ and $y$ in only $d$ permutations of the elements of $P$. We prove that posets with cover graphs of bounded tree-width have bounded boolean dimension. This stands in contrast to the fact that there are posets with cover graphs of tree-width three and arbitrarily large dimension. This result might be a step towards a resolution of the long-standing open problem: Do planar posets have bounded boolean dimension?
[ 1, 0, 0, 0, 0, 0 ]
Title: On the nature of the candidate T-Tauri star V501 Aurigae, Abstract: We report new multi-colour photometry and high-resolution spectroscopic observations of the long-period variable V501 Aur, previously considered to be a weak-lined T-Tauri star belonging to the Taurus-Auriga star-forming region. The spectroscopic observations reveal that V501 Aur is a single-lined spectroscopic binary system with a 68.8-day orbital period, a slightly eccentric orbit (e ~ 0.03), and a systemic velocity discrepant from the mean of Taurus-Auriga. The photometry shows quasi-periodic variations on a different, ~55-day timescale that we attribute to rotational modulation by spots. No eclipses are seen. The visible object is a rapidly rotating (vsini ~ 25 km/s) early K star, which along with the rotation period implies it must be large (R > 26.3 Rsun), as suggested also by spectroscopic estimates indicating a low surface gravity. The parallax from the Gaia mission and other independent estimates imply a distance much greater than that of the Taurus-Auriga region, consistent with the giant interpretation. Taken together with a re-evaluation of the LiI~$\lambda$6707 and H$\alpha$ lines, this evidence shows that V501 Aur is not a T-Tauri star, but is instead a field binary with a giant primary far behind the Taurus-Auriga star-forming region. The large mass function from the spectroscopic orbit and a comparison with stellar evolution models suggest the secondary may be an early-type main-sequence star.
[ 0, 1, 0, 0, 0, 0 ]
Title: Counting the number of distinct distances of elements in valued field extensions, Abstract: The defect of valued field extensions is a major obstacle in open problems in resolution of singularities and in the model theory of valued fields, whenever positive characteristic is involved. We continue the detailed study of defect extensions through the tool of distances, which measure how well an element in an immediate extension can be approximated by elements from the base field. We show that in several situations the number of essentially distinct distances in fixed extensions, or even just over a fixed base field, is finite, and we compute upper bounds. We apply this to the special case of valued functions fields over perfect base fields. This provides important information used in forthcoming research on relative resolution problems.
[ 0, 0, 1, 0, 0, 0 ]
Title: On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses, Abstract: Neural networks are known to be vulnerable to adversarial examples. In this note, we evaluate the two white-box defenses that appeared at CVPR 2018 and find they are ineffective: when applying existing techniques, we can reduce the accuracy of the defended models to 0%.
[ 0, 0, 0, 1, 0, 0 ]