Dataset schema (each record consists of a paper title, its abstract, and six binary subject labels):
- title: string, 7 to 239 characters
- abstract: string, 7 to 2.76k characters
- cs: int64, 0 or 1
- phy: int64, 0 or 1
- math: int64, 0 or 1
- stat: int64, 0 or 1
- quantitative biology: int64, 0 or 1
- quantitative finance: int64, 0 or 1
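The records below are flattened, so they are easier to handle programmatically than to read directly. As a minimal loading sketch, assuming the records are exported to a CSV file with exactly the columns listed above (the filename `arxiv_abstracts.csv` is a hypothetical placeholder, not part of the dataset description):

```python
import pandas as pd

# Hypothetical export of the records below; the filename is an assumption.
df = pd.read_csv("arxiv_abstracts.csv")

LABEL_COLUMNS = [
    "cs", "phy", "math", "stat",
    "quantitative biology", "quantitative finance",
]

# Text inputs: title and abstract concatenated per record.
texts = (df["title"] + ". " + df["abstract"]).tolist()

# Multi-label targets: an (n_records, 6) matrix of 0/1 indicators.
labels = df[LABEL_COLUMNS].astype(int).to_numpy()

# Per-category frequencies give a quick view of label balance.
print(df.shape, labels.sum(axis=0))
```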
On the Statistical Efficiency of Optimal Kernel Sum Classifiers
We propose a novel combination of optimization tools with learning theory bounds in order to analyze the sample complexity of optimal kernel sum classifiers. This contrasts with typical learning-theoretic results, which hold for all (potentially suboptimal) classifiers. Our work also justifies assumptions made in prior work on multiple kernel learning. As a byproduct of our analysis, we also provide a new form of Rademacher complexity for hypothesis classes containing only optimal classifiers.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Ultracold atoms in multiple-radiofrequency dressed adiabatic potentials
We present the first experimental demonstration of a multiple-radiofrequency dressed potential for the configurable magnetic confinement of ultracold atoms. We load cold $^{87}$Rb atoms into a double well potential with an adjustable barrier height, formed by three radiofrequencies applied to atoms in a static quadrupole magnetic field. Our multiple-radiofrequency approach gives precise control over the double well characteristics, including the depth of individual wells and the height of the barrier, and enables reliable transfer of atoms between the available trapping geometries. We have characterised the multiple-radiofrequency dressed system using radiofrequency spectroscopy, finding good agreement with the eigenvalues numerically calculated using Floquet theory. This method creates trapping potentials that can be reconfigured by changing the amplitudes, polarizations and frequencies of the applied dressing fields, and easily extended with additional dressing frequencies.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Distribution Matching in Variational Inference
We show that Variational Autoencoders consistently fail to learn marginal distributions in latent and visible space. We ask whether this is a consequence of matching conditional distributions, or a limitation of explicit model and posterior distributions. We explore alternatives provided by marginal distribution matching and implicit distributions through the use of Generative Adversarial Networks in variational inference. We perform a large-scale evaluation of several VAE-GAN hybrids and explore the implications of class probability estimation for learning distributions. We conclude that at present VAE-GAN hybrids have limited applicability: they are harder to scale, evaluate, and use for inference compared to VAEs; and they do not improve over the generation quality of GANs.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Henkin constructions of models with size continuum
We survey the technique of constructing customized models of size continuum in omega steps and illustrate the method by giving new proofs of mostly old results within this rubric. One new theorem, which is joint with Saharon Shelah, is that a pseudominimal theory has an atomic model of size continuum.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Orthogonal groups in characteristic 2 acting on polytopes of high rank
We show that for all integers $m\geq 2$, and all integers $k\geq 2$, the orthogonal groups $\mathrm{O}^{\pm}(2m,\mathbb{F}_{2^k})$ act on abstract regular polytopes of rank $2m$, and the symplectic groups $\mathrm{Sp}(2m,\mathbb{F}_{2^k})$ act on abstract regular polytopes of rank $2m+1$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Free LSD: Prior-Free Visual Landing Site Detection for Autonomous Planes
Full autonomy for fixed-wing unmanned aerial vehicles (UAVs) requires the capability to autonomously detect potential landing sites in unknown and unstructured terrain, allowing for self-governed mission completion or handling of emergency situations. In this work, we propose a perception system addressing this challenge by detecting landing sites based on their texture and geometric shape without using any prior knowledge about the environment. The proposed method considers hazards within the landing region such as terrain roughness and slope, surrounding obstacles that obscure the landing approach path, and the local wind field that is estimated by the on-board EKF. The latter enables applicability of the proposed method on small-scale autonomous planes without landing gear. A safe approach path is computed based on the UAV dynamics, expected state estimation and actuator uncertainty, and the on-board computed elevation map. The proposed framework has been successfully tested on photo-realistic synthetic datasets and in challenging real-world environments.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Semi-supervised and Active-learning Scenarios: Efficient Acoustic Model Refinement for a Low Resource Indian Language
We address the problem of efficient acoustic-model refinement (continuous retraining) using semi-supervised and active learning for a low-resource Indian language, wherein the low-resource constraints are i) a small labeled corpus from which to train a baseline `seed' acoustic model and ii) a large training corpus without orthographic labeling, from which data can be selected for manual labeling at low cost. The proposed semi-supervised learning decodes the unlabeled large training corpus using the seed model and, through various protocols, selects the decoded utterances with high reliability using confidence levels (which correlate with the WER of the decoded utterances) and iterative bootstrapping. The proposed active learning protocol uses a confidence-level-based metric to select decoded utterances from the large unlabeled corpus for further labeling. The semi-supervised learning protocols can offer a WER reduction, from a poorly trained seed model, by as much as 50% of the best WER reduction realizable from the seed model's WER if the large corpus were labeled and used for acoustic-model training. The active learning protocols allow only 60% of the entire training corpus to be manually labeled to reach the same performance as with the entire data.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Energy spectrum of cascade showers generated by cosmic ray muons in water
The spatial distribution of Cherenkov radiation from cascade showers generated by muons in water has been measured with the Cherenkov water calorimeter (CWC) NEVOD. This result allowed us to improve the techniques for treating cascade showers with unknown axes by means of CWC response analysis. The techniques for selecting events with high-energy cascade showers and reconstructing their parameters are discussed. Preliminary results of measurements of the spectrum of cascade showers in the energy range 100 GeV - 20 TeV generated by cosmic ray muons at large zenith angles, and their comparison with expectation, are presented.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The limit point of the pentagram map
The pentagram map is a discrete dynamical system defined on the space of polygons in the plane. In the first paper on the subject, R. Schwartz proved that the pentagram map produces from each convex polygon a sequence of successively smaller polygons that converges exponentially to a point. We investigate the limit point itself, giving an explicit description of its Cartesian coordinates as roots of certain degree three polynomials.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Reconstruction formulas for Photoacoustic Imaging in Attenuating Media
In this paper we study the problem of photoacoustic inversion in a weakly attenuating medium. We present explicit reconstruction formulas in such media and show that the inversion based on such formulas is moderately ill-posed. Moreover, we present a numerical algorithm for imaging and demonstrate in numerical experiments the feasibility of this approach.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Rank Determination for Low-Rank Data Completion
Recently, fundamental conditions on the sampling patterns have been obtained for finite completability of low-rank matrices or tensors given the corresponding ranks. In this paper, we consider the scenario where the rank is not given and we aim to approximate the unknown rank based on the location of sampled entries and some given completion. We consider a number of data models, including single-view matrix, multi-view matrix, CP tensor, tensor-train tensor and Tucker tensor. For each of these data models, we provide an upper bound on the rank when an arbitrary low-rank completion is given. We characterize these bounds both deterministically, i.e., with probability one given that the sampling pattern satisfies certain combinatorial properties, and probabilistically, i.e., with high probability given that the sampling probability is above some threshold. Moreover, for both single-view matrix and CP tensor, we are able to show that the obtained upper bound is exactly equal to the unknown rank if the lowest-rank completion is given. Furthermore, we provide numerical experiments for the case of single-view matrix, where we use nuclear norm minimization to find a low-rank completion of the sampled data and we observe that in most of the cases the proposed upper bound on the rank is equal to the true rank.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Network structure from rich but noisy data
Driven by growing interest in the sciences, industry, and among the broader public, a large number of empirical studies have been conducted in recent years of the structure of networks ranging from the internet and the world wide web to biological networks and social networks. The data produced by these experiments are often rich and multimodal, yet at the same time they may contain substantial measurement error. In practice, this means that the true network structure can differ greatly from naive estimates made from the raw data, and hence that conclusions drawn from those naive estimates may be significantly in error. In this paper we describe a technique that circumvents this problem and allows us to make optimal estimates of the true structure of networks in the presence of both richly textured data and significant measurement uncertainty. We give example applications to two different social networks, one derived from face-to-face interactions and one from self-reported friendships.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Algebraic Foundations of Proof Refinement
We contribute a general apparatus for dependent tactic-based proof refinement in the LCF tradition, in which the statements of subgoals may express a dependency on the proofs of other subgoals; this form of dependency is extremely useful and can serve as an algorithmic alternative to extensions of LCF based on non-local instantiation of schematic variables. Additionally, we introduce a novel behavioral distinction between refinement rules and tactics based on naturality. Our framework, called Dependent LCF, is already deployed in the nascent RedPRL proof assistant for computational cubical type theory.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Transfer Learning for Neural Semantic Parsing
The goal of semantic parsing is to map natural language to a machine interpretable meaning representation language (MRL). One of the constraints that limits full exploration of deep learning technologies for semantic parsing is the lack of sufficient annotated training data. In this paper, we propose using sequence-to-sequence in a multi-task setup for semantic parsing with a focus on transfer learning. We explore three multi-task architectures for sequence-to-sequence modeling and compare their performance with an independently trained model. Our experiments show that the multi-task setup aids transfer learning from an auxiliary task with large labeled data to a target task with smaller labeled data. We see absolute accuracy gains ranging from 1.0% to 4.4% in our in-house data set, and we also see good gains ranging from 2.5% to 7.0% on the ATIS semantic parsing tasks with syntactic and semantic auxiliary tasks.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Definable Valuations induced by multiplicative subgroups and NIP Fields
We study the algebraic implications of the non-independence property (NIP) and variants thereof (dp-minimality) on infinite fields, motivated by the conjecture that all such fields which are neither real closed nor separably closed admit a definable henselian valuation. Our results mainly focus on Hahn fields and build on Will Johnson's preprint "dp-minimal fields", arXiv: 1507.02745v1, July 2015.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Bridging the Gap Between Layout Pattern Sampling and Hotspot Detection via Batch Active Learning
Layout hotspot detection is one of the main steps in modern VLSI design. A typical hotspot detection flow is extremely time consuming due to the computationally expensive mask optimization and lithographic simulation. Recent studies try to facilitate the procedure with a reduced flow including feature extraction, training set generation and hotspot detection, where feature extraction methods and hotspot detection engines are deeply studied. However, the performance of hotspot detectors relies highly on the quality of reference layout libraries, which are costly to obtain and usually predetermined or randomly sampled in previous works. In this paper, we propose an active learning-based layout pattern sampling and hotspot detection flow, which simultaneously optimizes the machine learning model and the training set, aiming to achieve similar or better hotspot detection performance with a much smaller number of training instances. Experimental results show that our proposed method can significantly reduce lithography simulation overhead while attaining satisfactory detection accuracy on designs under both DUV and EUV lithography technologies.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
iCorr: Complex correlation method to detect origin of replication in prokaryotic and eukaryotic genomes
Computational prediction of origin of replication (ORI) has been of great interest in bioinformatics and several methods including GC Skew, Z curve, auto-correlation etc. have been explored in the past. In this paper, we have extended the auto-correlation method to predict ORI location with much higher resolution for prokaryotes. The proposed complex correlation method (iCorr) converts the genome sequence into a sequence of complex numbers by mapping the nucleotides to {+1,-1,+i,-i} instead of {+1,-1} used in the auto-correlation method (here, 'i' is square root of -1). Thus, the iCorr method uses information about the positions of all the four nucleotides unlike the earlier auto-correlation method which uses the positional information of only one nucleotide. Also, this earlier method required visual inspection of the obtained graphs to identify the location of origin of replication. The proposed iCorr method does away with this need and is able to identify the origin location simply by picking the peak in the iCorr graph. The iCorr method also works for a much smaller segment size compared to the earlier auto-correlation method, which can be very helpful in experimental validation of the computational predictions. We have also developed a variant of the iCorr method to predict ORI location in eukaryotes and have tested it with the experimentally known origin locations of S. cerevisiae with an average accuracy of 71.76%.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
On the Power Spectral Density Applied to the Analysis of Old Canvases
A routine task for art historians is painting diagnostics, such as dating or attribution. Signal processing of the X-ray image of a canvas provides useful information about its fabric. However, previous methods may fail when very old and deteriorated artworks or simply canvases of small size are studied. We present a new framework to analyze and further characterize the paintings from their radiographs. First, we start from a general analysis of lattices and provide new unifying results about the theoretical spectra of weaves. Then, we use these results to infer the main structure of the fabric, like the type of weave and the thread densities. We propose a practical estimation of these theoretical results from paintings with the averaged power spectral density (PSD), which provides a more robust tool. Furthermore, we found that the PSD provides a fingerprint that characterizes the whole canvas. We search and discuss some distinctive features we may find in that fingerprint. We apply these results to several masterpieces of the 17th and 18th centuries from the Museo Nacional del Prado to show that this approach yields accurate results in thread counting and is very useful for paintings comparison, even in situations where previous methods fail.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Simons' type formula for slant submanifolds of complex space form
In this paper, we study a slant submanifold of a complex space form. We also obtain an integral formula of Simons' type for a Kaehlerian slant submanifold in a complex space form and apply it to prove our main result.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Eco-evolutionary feedbacks - theoretical models and perspectives
1. Theoretical models pertaining to feedbacks between ecological and evolutionary processes are prevalent in multiple biological fields. An integrative overview is currently lacking, due to little crosstalk between the fields and the use of different methodological approaches. 2. Here we review a wide range of models of eco-evolutionary feedbacks and highlight their underlying assumptions. We discuss models where feedbacks occur both within and between hierarchical levels of ecosystems, including populations, communities, and abiotic environments, and consider feedbacks across spatial scales. 3. Identifying the commonalities among feedback models, and the underlying assumptions, helps us better understand the mechanistic basis of eco-evolutionary feedbacks. Eco-evolutionary feedbacks can be readily modelled by coupling demographic and evolutionary formalisms. We provide an overview of these approaches and suggest future integrative modelling avenues. 4. Our overview highlights that eco-evolutionary feedbacks have been incorporated in theoretical work for nearly a century. Yet, this work does not always include the notion of rapid evolution or concurrent ecological and evolutionary time scales. We discuss the importance of density- and frequency-dependent selection for feedbacks, as well as the importance of dispersal as a central linking trait between ecology and evolution in a spatial context.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Unusual behavior of cuprates explained by heterogeneous charge localization
The cuprate high-temperature superconductors are among the most intensively studied materials, yet essential questions regarding their principal phases and the transitions between them remain unanswered. Generally thought of as doped charge-transfer insulators, these complex lamellar oxides exhibit pseudogap, strange-metal, superconducting and Fermi-liquid behaviour with increasing hole-dopant concentration. Here we propose a simple inhomogeneous Mott-like (de)localization model wherein exactly one hole per copper-oxygen unit is gradually delocalized with increasing doping and temperature. The model is percolative in nature, with parameters that are experimentally constrained. It comprehensively captures pivotal unconventional experimental results, including the temperature and doping dependence of the pseudogap phenomenon, the strange-metal linear temperature dependence of the planar resistivity, and the doping dependence of the superfluid density. The success and simplicity of our model greatly demystify the cuprate phase diagram and point to a local superconducting pairing mechanism involving the (de)localized hole.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
An Agile Software Engineering Method to Design Blockchain Applications
Cryptocurrencies and their foundation technology, the Blockchain, are reshaping finance and economics, allowing a decentralized approach enabling trusted applications with no trusted counterpart. More recently, the Blockchain and the programs running on it, called Smart Contracts, are also finding more and more applications in all fields requiring trust and sound certifications. Some people have come to the point of saying that the "Blockchain revolution" can be compared to that of the Internet and the Web in their early days. As a result, all the software development revolving around the Blockchain technology is growing at a staggering rate. The feeling of many software engineers about such huge interest in Blockchain technologies is that of unruled and hurried software development, a sort of competition on a first-come-first-served basis which assures neither software quality nor that the basic concepts of software engineering are taken into account. This paper tries to cope with this issue, proposing a software development process to gather the requirements, analyze, design, develop, test and deploy Blockchain applications. The process is based on several Agile practices, such as User Stories and iterative and incremental development based on them. However, it also makes use of more formal notations, such as some UML diagrams describing the design of the system, with additions to represent specific concepts found in Blockchain development. The method is described in good detail, and an example is given to show how it works.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Optimizing Prediction Intervals by Tuning Random Forest via Meta-Validation
Recent studies have shown that tuning prediction models increases prediction accuracy and that Random Forest can be used to construct prediction intervals. However, to our best knowledge, no study has investigated the need to, and the manner in which one can, tune Random Forest for optimizing prediction intervals -- this paper aims to fill this gap. We explore a tuning approach that combines an effectively exhaustive search with a validation technique on a single Random Forest parameter. This paper investigates which, out of eight validation techniques, are beneficial for tuning, i.e., which automatically choose a Random Forest configuration constructing prediction intervals that are reliable and with a smaller width than the default configuration. Additionally, we present and validate three meta-validation techniques to determine which are beneficial, i.e., those which automatically chose a beneficial validation technique. This study uses data from our industrial partner (Keymind Inc.) and the Tukutuku Research Project, related to post-release defect prediction and Web application effort estimation, respectively. Results from our study indicate that: i) the default configuration is frequently unreliable, ii) most of the validation techniques, including previously successfully adopted ones such as 50/50 holdout and bootstrap, are counterproductive in most of the cases, and iii) the 75/25 holdout meta-validation technique is always beneficial; i.e., it avoids the likely counterproductive effects of validation techniques.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Explicit construction of RIP matrices is Ramsey-hard
Matrices $\Phi\in\mathbb{R}^{n\times p}$ satisfying the Restricted Isometry Property (RIP) are an important ingredient of compressive sensing methods. While it is known that random matrices satisfy the RIP with high probability even for $n=\log^{O(1)}p$, the explicit construction of such matrices has defied repeated efforts, and the best known approaches hit the so-called $\sqrt{n}$ sparsity bottleneck. The notable exception is the work by Bourgain et al. \cite{bourgain2011explicit} constructing an $n\times p$ RIP matrix with sparsity $s=\Theta(n^{{1\over 2}+\epsilon})$, but in the regime $n=\Omega(p^{1-\delta})$. In this short note we resolve this open question in a sense by showing that an explicit construction of a matrix satisfying the RIP in the regime $n=O(\log^2 p)$ and $s=\Theta(n^{1\over 2})$ implies an explicit construction of a three-colored Ramsey graph on $p$ nodes with clique sizes bounded by $O(\log^2 p)$ -- a question in extremal combinatorics which has been open for decades.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Rocket Launching: A Universal and Efficient Framework for Training Well-performing Light Net
Models applied to real-time response tasks, such as click-through rate (CTR) prediction, require high accuracy and strict response times. Therefore, top-performing deep models of high depth and complexity are not well suited for these applications given the limitations on inference time. In order to further improve the neural networks' performance under time and computational limitations, we propose an approach that exploits a cumbersome net to help train the lightweight net for prediction. We dub the whole process rocket launching, where the cumbersome booster net is used to guide the learning of the target light net throughout the whole training process. We analyze different loss functions aiming at pushing the light net to behave similarly to the booster net, and adopt the loss with the best performance in our experiments. We use a technique called gradient block to further improve the performance of the light net and booster net. Experiments on benchmark datasets and real-life industrial advertisement data show that our light model can achieve performance previously attainable only with more complex models.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
The CMS HGCAL detector for HL-LHC upgrade
The High Luminosity LHC (HL-LHC) will integrate 10 times more luminosity than the LHC, posing significant challenges for radiation tolerance and event pileup on detectors, especially for forward calorimetry, and hallmarks the issue for future colliders. As part of its HL-LHC upgrade program, the CMS collaboration is designing a High Granularity Calorimeter to replace the existing endcap calorimeters. It features unprecedented transverse and longitudinal segmentation for both electromagnetic (ECAL) and hadronic (HCAL) compartments. This will facilitate particle-flow calorimetry, where the fine structure of showers can be measured and used to enhance pileup rejection and particle identification, whilst still achieving good energy resolution. The ECAL and a large fraction of HCAL will be based on hexagonal silicon sensors of 0.5-1cm$^{2}$ cell size, with the remainder of the HCAL based on highly-segmented scintillators with SiPM readout. The intrinsic high-precision timing capabilities of the silicon sensors will add an extra dimension to event reconstruction, especially in terms of pileup rejection. An overview of the HGCAL project is presented, covering motivation, engineering design, readout and trigger concepts, and performance (simulated and from beam tests).
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Complete Classification of Generalized Santha-Vazirani Sources
Let $\mathcal{F}$ be a finite alphabet and $\mathcal{D}$ be a finite set of distributions over $\mathcal{F}$. A Generalized Santha-Vazirani (GSV) source of type $(\mathcal{F}, \mathcal{D})$, introduced by Beigi, Etesami and Gohari (ICALP 2015, SICOMP 2017), is a random sequence $(F_1, \dots, F_n)$ in $\mathcal{F}^n$, where $F_i$ is a sample from some distribution $d \in \mathcal{D}$ whose choice may depend on $F_1, \dots, F_{i-1}$. We show that all GSV source types $(\mathcal{F}, \mathcal{D})$ fall into one of three categories: (1) non-extractable; (2) extractable with error $n^{-\Theta(1)}$; (3) extractable with error $2^{-\Omega(n)}$. This rules out other error rates like $1/\log n$ or $2^{-\sqrt{n}}$. We provide essentially randomness-optimal extraction algorithms for extractable sources. Our algorithm for category (2) sources extracts with error $\varepsilon$ from $n = \mathrm{poly}(1/\varepsilon)$ samples in time linear in $n$. Our algorithm for category (3) sources extracts $m$ bits with error $\varepsilon$ from $n = O(m + \log 1/\varepsilon)$ samples in time $\min\{O(nm2^m),n^{O(\lvert\mathcal{F}\rvert)}\}$. We also give algorithms for classifying a GSV source type $(\mathcal{F}, \mathcal{D})$: Membership in category (1) can be decided in $\mathrm{NP}$, while membership in category (3) is polynomial-time decidable.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Hermann Hankel's "On the general theory of motion of fluids", an essay including an English translation of the complete Preisschrift from 1861
The present is a companion paper to "A contemporary look at Hermann Hankel's 1861 pioneering work on Lagrangian fluid dynamics" by Frisch, Grimberg and Villone (2017). Here we present the English translation of the 1861 prize manuscript from Göttingen University "Zur allgemeinen Theorie der Bewegung der Flüssigkeiten" (On the general theory of the motion of the fluids) of Hermann Hankel (1839-1873), which was originally submitted in Latin and then translated into German by the Author for publication. We also provide the English translation of two important reports on the manuscript, one written by Bernhard Riemann and the other by Wilhelm Eduard Weber, during the assessment process for the prize. Finally we give a short biography of Hermann Hankel with his complete bibliography.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Targeted Damage to Interdependent Networks
The giant mutually connected component (GMCC) of an interdependent or multiplex network collapses with a discontinuous hybrid transition under random damage to the network. If the nodes to be damaged are selected in a targeted way, the collapse of the GMCC may occur significantly sooner. Finding the minimal damage set which destroys the largest mutually connected component of a given interdependent network is a computationally prohibitive simultaneous optimization problem. We introduce a simple heuristic strategy -- Effective Multiplex Degree -- for targeted attack on interdependent networks that leverages the indirect damage inherent in multiplex networks to achieve a damage set smaller than that found by any other non computationally intensive algorithm. We show that the intuition from single layer networks that decycling (damage of the $2$-core) is the most effective way to destroy the giant component, does not carry over to interdependent networks, and in fact such approaches are worse than simply removing the highest degree nodes.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The limit of the Hermitian-Yang-Mills flow on reflexive sheaves
In this paper, we study the asymptotic behavior of the Hermitian-Yang-Mills flow on a reflexive sheaf. We prove that the limiting reflexive sheaf is isomorphic to the double dual of the graded sheaf associated to the Harder-Narasimhan-Seshadri filtration; this answers a question by Bando and Siu.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
High-accuracy phase-field models for brittle fracture based on a new family of degradation functions
Phase-field approaches to fracture based on energy minimization principles have been rapidly gaining popularity in recent years, and are particularly well-suited for simulating crack initiation and growth in complex fracture networks. In the phase-field framework, the surface energy associated with crack formation is calculated by evaluating a functional defined in terms of a scalar order parameter and its gradients, which in turn describe the fractures in a diffuse sense following a prescribed regularization length scale. Imposing stationarity of the total energy leads to a coupled system of partial differential equations, one enforcing stress equilibrium and another governing phase-field evolution. The two equations are coupled through an energy degradation function that models the loss of stiffness in the bulk material as it undergoes damage. In the present work, we introduce a new parametric family of degradation functions aimed at increasing the accuracy of phase-field models in predicting critical loads associated with crack nucleation as well as the propagation of existing fractures. An additional goal is the preservation of linear elastic response in the bulk material prior to fracture. Through the analysis of several numerical examples, we demonstrate the superiority of the proposed family of functions to the classical quadratic degradation function that is used most often in the literature.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Straggler Mitigation in Distributed Optimization Through Data Encoding
Slow running or straggler tasks can significantly reduce computation speed in distributed computation. Recently, coding-theory-inspired approaches have been applied to mitigate the effect of straggling, through embedding redundancy in certain linear computational steps of the optimization algorithm, thus completing the computation without waiting for the stragglers. In this paper, we propose an alternate approach where we embed the redundancy directly in the data itself, and allow the computation to proceed completely oblivious to encoding. We propose several encoding schemes, and demonstrate that popular batch algorithms, such as gradient descent and L-BFGS, applied in a coding-oblivious manner, deterministically achieve sample path linear convergence to an approximate solution of the original problem, using an arbitrarily varying subset of the nodes at each iteration. Moreover, this approximation can be controlled by the amount of redundancy and the number of nodes used in each iteration. We provide experimental results demonstrating the advantage of the approach over uncoded and data replication strategies.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Inference Trees: Adaptive Inference with Exploration
We introduce inference trees (ITs), a new class of inference methods that build on ideas from Monte Carlo tree search to perform adaptive sampling in a manner that balances exploration with exploitation, ensures consistency, and alleviates pathologies in existing adaptive methods. ITs adaptively sample from hierarchical partitions of the parameter space, while simultaneously learning these partitions in an online manner. This enables ITs to not only identify regions of high posterior mass, but also maintain uncertainty estimates to track regions where significant posterior mass may have been missed. ITs can be based on any inference method that provides a consistent estimate of the marginal likelihood. They are particularly effective when combined with sequential Monte Carlo, where they capture long-range dependencies and yield improvements beyond proposal adaptation alone.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Application of backpropagation neural networks to both stages of fingerprinting based WIPS
We propose a scheme to employ backpropagation neural networks (BPNNs) for both stages of fingerprinting-based indoor positioning using WLAN/WiFi signal strengths (FWIPS): radio map construction during the offline stage, and localization during the online stage. Given a training radio map (TRM), i.e., a set of coordinate vectors and associated WLAN/WiFi signal strengths of the available access points, a BPNN can be trained to output the expected signal strengths for any input position within the region of interest (BPNN-RM). This can be used to provide a continuous representation of the radio map and to filter, densify or decimate a discrete radio map. Correspondingly, the TRM can also be used to train another BPNN to output the expected position within the region of interest for any input vector of recorded signal strengths and thus carry out localization (BPNN-LA). Key aspects of the design of such artificial neural networks for a specific application are the selection of design parameters like the number of hidden layers and nodes within the network, and the training procedure. Summarizing extensive numerical simulations, based on real measurements in a testbed, we analyze the impact of these design choices on the performance of the BPNN and compare the results in particular to those obtained using the $k$ nearest neighbors ($k$NN) and weighted $k$ nearest neighbors approaches to FWIPS.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Bayesian Bootstraps for Massive Data
Recently, two scalable adaptations of the bootstrap have been proposed: the bag of little bootstraps (BLB; Kleiner et al., 2014) and the subsampled double bootstrap (SDB; Sengupta et al., 2016). In this paper, we introduce Bayesian bootstrap analogues to the BLB and SDB that have similar theoretical and computational properties, a strategy to perform lossless inference for a class of functionals of the Bayesian bootstrap, and briefly discuss extensions for Dirichlet Processes.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Show, Adapt and Tell: Adversarial Training of Cross-domain Image Captioner
Impressive image captioning results are achieved in domains with plenty of training image and sentence pairs (e.g., MSCOCO). However, transferring to a target domain with significant domain shifts but no paired training data (referred to as cross-domain image captioning) remains largely unexplored. We propose a novel adversarial training procedure to leverage unpaired data in the target domain. Two critic networks are introduced to guide the captioner, namely domain critic and multi-modal critic. The domain critic assesses whether the generated sentences are indistinguishable from sentences in the target domain. The multi-modal critic assesses whether an image and its generated sentence are a valid pair. During training, the critics and captioner act as adversaries -- captioner aims to generate indistinguishable sentences, whereas critics aim at distinguishing them. The assessment improves the captioner through policy gradient updates. During inference, we further propose a novel critic-based planning method to select high-quality sentences without additional supervision (e.g., tags). To evaluate, we use MSCOCO as the source domain and four other datasets (CUB-200-2011, Oxford-102, TGIF, and Flickr30k) as the target domains. Our method consistently performs well on all datasets. In particular, on CUB-200-2011, we achieve 21.8% CIDEr-D improvement after adaptation. Utilizing critics during inference further gives another 4.5% boost.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Faster Fuzzing: Reinitialization with Deep Neural Models
We improve the performance of the American Fuzzy Lop (AFL) fuzz testing framework by using Generative Adversarial Network (GAN) models to reinitialize the system with novel seed files. We assess performance based on the temporal rate at which we produce novel and unseen code paths. We compare this approach to seed file generation from a random draw of bytes observed in the training seed files. The code path lengths and variations were not sufficiently diverse to fully replace AFL input generation. However, augmenting native AFL with these additional code paths demonstrated improvements over AFL alone. Specifically, experiments showed the GAN was faster and more effective than the LSTM and out-performed a random augmentation strategy, as measured by the number of unique code paths discovered. GAN helps AFL discover 14.23% more code paths than the random strategy in the same amount of CPU time, finds 6.16% more unique code paths, and finds paths that are on average 13.84% longer. Using GAN shows promise as a reinitialization strategy for AFL to help the fuzzer exercise deep paths in software.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Contego: An Adaptive Framework for Integrating Security Tasks in Real-Time Systems
Embedded real-time systems (RTS) are pervasive. Many modern RTS are exposed to unknown security flaws, and threats to RTS are growing in both number and sophistication. However, until recently, cyber-security considerations were an afterthought in the design of such systems. Any security mechanisms integrated into RTS must (a) co-exist with the real-time tasks in the system and (b) operate without impacting the timing and safety constraints of the control logic. We introduce Contego, an approach to integrating security tasks into RTS without affecting temporal requirements. Contego is specifically designed for legacy systems, viz., the real-time control systems in which major alterations of the system parameters for constituent tasks are not always feasible. Contego combines the concept of opportunistic execution with hierarchical scheduling to maintain compatibility with legacy systems while still providing flexibility by allowing security tasks to operate in different modes. We also define a metric to measure the effectiveness of such integration. We evaluate Contego using synthetic workloads as well as with an implementation on a realistic embedded platform (an open-source ARM CPU running real-time Linux).
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Second Order Analysis for Joint Source-Channel Coding with Markovian Source
We derive the second-order rates of joint source-channel coding when the source obeys an irreducible and ergodic Markov process and the channel is discrete and memoryless; a previous study solved this problem only in a special case. We also compare the joint source-channel scheme with the separation scheme in the second-order regime, whereas a previous study made such a comparison only by numerical calculation. To obtain these two results, we introduce two new families of distributions, the switched Gaussian convolution distributions and the *-product distributions, which are defined by modifying the Gaussian distribution.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Interface currents and magnetization in singlet-triplet superconducting heterostructures: Role of chiral and helical domains
Chiral and helical domain walls are generic defects of topological spin-triplet superconductors. We study theoretically the magnetic and transport properties of superconducting singlet-triplet-singlet heterostructure as a function of the phase difference between the singlet leads in the presence of chiral and helical domains inside the spin-triplet region. The local inversion symmetry breaking at the singlet-triplet interface allows the emergence of a static phase-controlled magnetization, and generally yields both spin and charge currents flowing along the edges. The parity of the domain wall number affects the relative orientation of the interface moments and currents, while in some cases the domain walls themselves contribute to spin and charge transport. We demonstrate that singlet-triplet heterostructures are a generic prototype to generate and control non-dissipative spin and charge effects, putting them in a broader class of systems exhibiting spin-Hall, anomalous Hall effects and similar phenomena. Features of the electron transport and magnetic effects at the interfaces can be employed to assess the presence of domains in chiral/helical superconductors.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Implicit Weight Uncertainty in Neural Networks
Modern neural networks tend to be overconfident on unseen, noisy or incorrectly labelled data and do not produce meaningful uncertainty measures. Bayesian deep learning aims to address this shortcoming with variational approximations (such as Bayes by Backprop or Multiplicative Normalising Flows). However, current approaches have limitations regarding flexibility and scalability. We introduce Bayes by Hypernet (BbH), a new method of variational approximation that interprets hypernetworks as implicit distributions. It naturally uses neural networks to model arbitrarily complex distributions and scales to modern deep learning architectures. In our experiments, we demonstrate that our method achieves competitive accuracies and predictive uncertainties on MNIST and a CIFAR5 task, while being the most robust against adversarial attacks.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A systematic analysis of the XMM-Newton background: III. Impact of the magnetospheric environment
A detailed characterization of the particle induced background is fundamental for many of the scientific objectives of the Athena X-ray telescope, thus an adequate knowledge of the background that will be encountered by Athena is desirable. Current X-ray telescopes have shown that the intensity of the particle induced background can be highly variable. Different regions of the magnetosphere can have very different environmental conditions, which can, in principle, differently affect the particle induced background detected by the instruments. We present results concerning the influence of the magnetospheric environment on the background detected by EPIC instrument onboard XMM-Newton through the estimate of the variation of the in-Field-of-View background excess along the XMM-Newton orbit. An important contribution to the XMM background, which may affect the Athena background as well, comes from soft proton flares. Along with the flaring component a low-intensity component is also present. We find that both show modest variations in the different magnetozones and that the soft proton component shows a strong trend with the distance from Earth.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
DeepPermNet: Visual Permutation Learning
We present a principled approach to uncover the structure of visual data by solving a novel deep learning task coined visual permutation learning. The goal of this task is to find the permutation that recovers the structure of data from shuffled versions of it. In the case of natural images, this task boils down to recovering the original image from patches shuffled by an unknown permutation matrix. Unfortunately, permutation matrices are discrete, thereby posing difficulties for gradient-based methods. To this end, we resort to a continuous approximation of these matrices using doubly-stochastic matrices which we generate from standard CNN predictions using Sinkhorn iterations. Unrolling these iterations in a Sinkhorn network layer, we propose DeepPermNet, an end-to-end CNN model for this task. The utility of DeepPermNet is demonstrated on two challenging computer vision problems, namely, (i) relative attributes learning and (ii) self-supervised representation learning. Our results show state-of-the-art performance on the Public Figures and OSR benchmarks for (i) and on the classification and segmentation tasks on the PASCAL VOC dataset for (ii).
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
ADE String Chains and Mirror Symmetry
6d superconformal field theories (SCFTs) are the SCFTs in the highest possible dimension. They can be geometrically engineered in F-theory by compactifying on non-compact elliptic Calabi-Yau manifolds. In this paper we focus on the class of SCFTs whose base geometry is determined by $-2$ curves intersecting according to ADE Dynkin diagrams and derive the corresponding mirror Calabi-Yau manifold. The mirror geometry is uniquely determined in terms of the mirror curve which has also an interpretation in terms of the Seiberg-Witten curve of the four-dimensional theory arising from torus compactification. Adding the affine node of the ADE quiver to the base geometry, we connect to recent results on SYZ mirror symmetry for the $A$ case and provide a physical interpretation in terms of little string theory. Our results, however, go beyond this case as our construction naturally covers the $D$ and $E$ cases as well.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
(non)-automaticity of completely multiplicative sequences having negligible many non-trivial prime factors
In this article we consider the completely multiplicative sequences $(a_n)_{n \in \mathbf{N}}$ defined on a field $\mathbf{K}$ and satisfying $$\sum_{\substack{p \in \mathbf{P} \\ a_p \neq 1}}\frac{1}{p}<\infty,$$ where $\mathbf{P}$ is the set of prime numbers. We prove that if such sequences are automatic then they cannot have infinitely many prime numbers $p$ such that $a_{p}\neq 1$. Using this fact, we prove that if a completely multiplicative sequence $(a_n)_{n \in \mathbf{N}}$, vanishing or not, can be written in the form $a_n=b_n\chi_n$ such that $(b_n)_{n \in \mathbf{N}}$ is a non ultimately periodic, completely multiplicative automatic sequence satisfying the above condition, and $(\chi_n)_{n \in \mathbf{N}}$ is a Dirichlet character or a constant sequence, then there exists only one prime number $p$ such that $b_p \neq 1$ or $0$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Timing Aware Dummy Metal Fill Methodology
In this paper, we analyze the parasitic coupling capacitance introduced by dummy metal fill and its impact on timing. Based on this modeling, we propose two approaches to minimize the timing impact of dummy metal fill. The first approach applies more spacing between critical nets and metal fill, while the second approach leverages the shielding effect of reference nets. Experimental results show consistent improvement compared to the traditional metal fill method.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Asymptotic efficiency of the proportional compensation scheme for a large number of producers
We consider a manager, who allocates some fixed total payment amount between $N$ rational agents in order to maximize the aggregate production. The profit of the $i$-th agent is the difference between the compensation (reward) obtained from the manager and the production cost. We compare (i) the \emph{normative} compensation scheme, where the manager forces the agents to follow an optimal cooperative strategy; (ii) the \emph{linear piece rates} compensation scheme, where the manager announces an optimal reward per unit good; (iii) the \emph{proportional} compensation scheme, where an agent's reward is proportional to his contribution to the total output. Denoting the corresponding total production levels by $s^*$, $\hat s$ and $\overline s$ respectively, where the last one is related to the unique Nash equilibrium, we examine the limits of the prices of anarchy $\mathscr A_N=s^*/\overline s$, $\mathscr A_N'=\hat s/\overline s$ as $N\to\infty$. These limits are calculated for the cases of identical convex costs with power asymptotics at the origin, and for power costs, corresponding to the Cobb-Douglas and generalized CES production functions with decreasing returns to scale. Our results show that asymptotically no performance is lost in terms of $\mathscr A'_N$, and in terms of $\mathscr A_N$ the loss does not exceed $31\%$.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Non-equilibrium statistical mechanics of continuous attractors
Continuous attractors have been used to understand recent neuroscience experiments where persistent activity patterns encode internal representations of external attributes like head direction or spatial location. However, the conditions under which the emergent bump of neural activity in such networks can be manipulated by space and time-dependent external sensory or motor signals are not understood. Here, we find fundamental limits on how rapidly internal representations encoded along continuous attractors can be updated by an external signal. We apply these results to place cell networks to derive a velocity-dependent non-equilibrium memory capacity in neural networks.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Some results on the existence of t-all-or-nothing transforms over arbitrary alphabets
A $(t, s, v)$-all-or-nothing transform is a bijective mapping defined on $s$-tuples over an alphabet of size $v$, which satisfies the condition that the values of any $t$ input co-ordinates are completely undetermined, given only the values of any $s-t$ output co-ordinates. The main question we address in this paper is: for which choices of parameters does a $(t, s, v)$-all-or-nothing transform (AONT) exist? More specifically, if we fix $t$ and $v$, we want to determine the maximum integer $s$ such that a $(t, s, v)$-AONT exists. We mainly concentrate on the case $t=2$ for arbitrary values of $v$, where we obtain various necessary as well as sufficient conditions for existence of these objects. We consider both linear and general (linear or nonlinear) AONT. We also show some connections between AONT, orthogonal arrays and resilient functions.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Exhaustive Exploration of the Failure-oblivious Computing Search Space
High availability of software systems requires automated handling of crashes in the presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not yet been studied in depth, and there are very few studies that help understand why failure-oblivious techniques work. In order for failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we study, design and perform an experiment that analyzes the size and the diversity of the failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Theoretical Accuracy in Cosmological Growth Estimation
We elucidate the importance of the consistent treatment of gravity-model specific non-linearities when estimating the growth of cosmological structures from redshift space distortions (RSD). Within the context of standard perturbation theory (SPT), we compare the predictions of two theoretical templates with redshift space data from COLA (COmoving Lagrangian Acceleration) simulations in the normal branch of DGP gravity (nDGP) and General Relativity (GR). Using COLA for these comparisons is validated using a suite of full N-body simulations for the same theories. The two theoretical templates correspond to the standard general relativistic perturbation equations and those same equations modelled within nDGP. Gravitational clustering non-linear effects are accounted for by modelling the power spectrum up to one loop order and redshift space clustering anisotropy is modelled using the Taruya, Nishimichi and Saito (TNS) RSD model. Using this approach, we attempt to recover the simulation's fiducial logarithmic growth parameter $f$. By assigning the simulation data with errors representing an idealised survey with a volume of $10\mbox{Gpc}^3/h^3$, we find the GR template is unable to recover fiducial $f$ to within 1$\sigma$ at $z=1$ when we match the data up to $k_{\rm max}=0.195h$/Mpc. On the other hand, the DGP template recovers the fiducial value within $1\sigma$. Further, we conduct the same analysis for sets of mock data generated for generalised models of modified gravity using SPT, where again we analyse the GR template's ability to recover the fiducial value. We find that for models with enhanced gravitational non-linearity, the theoretical bias of the GR template becomes significant for stage IV surveys. Thus, we show that for the future large data volume galaxy surveys, the self-consistent modelling of non-GR gravity scenarios will be crucial in constraining theory parameters.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Model-Robust Counterfactual Prediction Method
We develop a novel method for counterfactual analysis based on observational data using prediction intervals for units under different exposures. Unlike methods that target heterogeneous or conditional average treatment effects of an exposure, the proposed approach aims to take into account the irreducible dispersions of counterfactual outcomes so as to quantify the relative impact of different exposures. The prediction intervals are constructed in a distribution-free and model-robust manner based on the conformal prediction approach. The computational obstacles to this approach are circumvented by leveraging properties of a tuning-free method that learns sparse additive predictor models for counterfactual outcomes. The method is illustrated using both real and synthetic data.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Exponentiated Generalized Pareto Distribution: Properties and applications towards Extreme Value Theory
The Generalized Pareto Distribution (GPD) plays a central role in modelling heavy tail phenomena in many applications. Applying the GPD to actual datasets, however, is a non-trivial task. One common way suggested in the literature to investigate the tail behaviour is to take the logarithm of the original dataset in order to reduce the sample variability. Inspired by this, we propose and study the Exponentiated Generalized Pareto Distribution (exGPD), which is created via a log-transform of the GPD variable. After introducing the exGPD we derive various distributional quantities, including the moment generating function and tail risk measures. As an application we also develop a plot as an alternative to the Hill plot to identify the tail index of heavy tailed datasets, based on moment matching for the exGPD. Various numerical analyses with both simulated and actual datasets show that the proposed plot works well.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Learning with Average Top-k Loss
In this work, we introduce the average top-$k$ (AT$_k$) loss as a new aggregate loss for supervised learning, which is the average over the $k$ largest individual losses over a training dataset. We show that the AT$_k$ loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the AT$_k$ loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of MAT$_k$ learning on the classification calibration of the AT$_k$ loss and the error bounds of AT$_k$-SVM. We demonstrate the applicability of minimum average top-$k$ learning for binary classification and regression using synthetic and real datasets.
1
0
0
1
0
0
Reflexive polytopes arising from perfect graphs
Reflexive polytopes form one of the distinguished classes of lattice polytopes. Reflexive polytopes which possess the integer decomposition property are of particular interest. In the present paper, by virtue of algebraic techniques on Gröbner bases, a new class of reflexive polytopes which possess the integer decomposition property and which arise from perfect graphs will be presented. Furthermore, the Ehrhart $\delta$-polynomials of these polytopes will be studied.
0
0
1
0
0
0
Meta Networks
Neural networks have been successfully applied in applications with a large amount of labeled data. However, the task of rapid generalization on new concepts with small training data while preserving performance on previously learned ones still presents a significant challenge to neural network models. In this work, we introduce a novel meta learning method, Meta Networks (MetaNet), that learns meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization. When evaluated on Omniglot and Mini-ImageNet benchmarks, our MetaNet models achieve near human-level performance and outperform the baseline approaches by up to 6% accuracy. We demonstrate several appealing properties of MetaNet relating to generalization and continual learning.
1
0
0
1
0
0
Variable selection for clustering with Gaussian mixture models: state of the art
Mixture models have become widely used in clustering, given the probabilistic framework on which they are based. However, for modern databases characterized by their large size, these models can behave disappointingly when specifying the model, which makes the selection of relevant variables essential for this type of clustering. After recalling the basics of model-based clustering, this article reviews variable selection methods for model-based clustering and presents opportunities for improving these methods.
1
0
0
1
0
0
Analysing Magnetism Using Scanning SQUID Microscopy
Scanning superconducting quantum interference device microscopy (SSM) is a scanning probe technique that images local magnetic flux, which allows for mapping of magnetic fields with high field and spatial accuracy. Many studies involving SSM have been published in recent decades, using SSM to make qualitative statements about magnetism. However, quantitative analysis using SSM has received less attention. In this work, we discuss several aspects of interpreting SSM images and methods to improve quantitative analysis. First, we analyse the spatial resolution and how it depends on several factors. Second, we discuss the analysis of SSM scans and the information obtained from the SSM data. Using simulations, we show how signals evolve as a function of changing scan height, SQUID loop size, magnetization strength and orientation. We also investigate two-dimensional autocorrelation analysis to extract information about the size, shape and symmetry of magnetic features. Finally, we provide an outlook on possible future applications and improvements.
0
1
0
0
0
0
Algorithms for solving optimization problems arising from deep neural net models: nonsmooth problems
Machine Learning models incorporating multiple layered learning networks have been seen to provide effective models for various classification problems. The resulting optimization problem to solve for the optimal vector minimizing the empirical risk is, however, highly nonconvex. This alone presents a challenge to the application and development of appropriate optimization algorithms for solving the problem. In addition, there are a number of interesting problems for which the objective function is nonsmooth and nonseparable. In this paper, we summarize the primary challenges involved, the state of the art, and present some numerical results on an interesting and representative class of problems.
0
0
0
1
0
0
On the essential self-adjointness of singular sub-Laplacians
We prove a general essential self-adjointness criterion for sub-Laplacians on complete sub-Riemannian manifolds, defined with respect to singular measures. As a consequence, we show that the intrinsic sub-Laplacian (i.e. defined w.r.t. Popp's measure) is essentially self-adjoint on the equiregular connected components of a sub-Riemannian manifold. This result holds under mild regularity assumptions on the singular region, and when the latter does not contain characteristic points.
0
0
1
0
0
0
Are Saddles Good Enough for Deep Learning?
Recent years have seen a growing interest in understanding deep neural networks from an optimization perspective. It is now understood that converging to low-cost local minima is sufficient for such models to become effective in practice. However, in this work, we propose a new hypothesis, based on recent theoretical findings and empirical studies, that deep neural network models actually converge to saddle points with high degeneracy. Our findings from this work are new, and can have a significant impact on the development of gradient-descent-based methods for training deep networks. We validate our hypothesis using an extensive experimental evaluation on standard datasets such as MNIST and CIFAR-10, and also show that recent efforts that attempt to escape saddles finally converge to saddles with high degeneracy, which we define as `good saddles'. We also verify Wigner's famous semicircle law in our experimental results.
1
0
0
1
0
0
Monotonicity and enclosure methods for the p-Laplace equation
We show that the convex hull of a monotone perturbation of a homogeneous background conductivity in the $p$-conductivity equation is determined by knowledge of the nonlinear Dirichlet-Neumann operator. We give two independent proofs, one of which is based on the monotonicity method and the other on the enclosure method. Our results are constructive and require no jump or smoothness properties on the conductivity perturbation or its support.
0
0
1
0
0
0
Tension and chemical efficiency of Myosin-II motors
Recent experiments demonstrate that molecular motors from the Myosin II family serve as cross-links inducing active tension in the cytoskeletal network. Here we revise the Brownian ratchet model, previously studied in the context of active transport along polymer tracks, in setups resembling a motor in a polymer network, also taking into account the effect of electrostatic changes in the motor heads. We explore important mechanical quantities and show that such a model is also capable of mechanosensing. Finally, we introduce a novel efficiency based on excess heat production by the chemical cycle which is directly related to the active tension the motor exerts. The chemical efficiencies differ considerably for motors with a different number of heads, while their mechanical properties remain qualitatively similar. For motors with a small number of heads, the chemical efficiency is maximal when they are frustrated, a trait that is not found in larger motors.
0
1
0
0
0
0
Token-based Function Computation with Memory
In distributed function computation, each node has an initial value and the goal is to compute a function of these values in a distributed manner. In this paper, we propose a novel token-based approach to compute a wide class of target functions, to which we refer as the "Token-based function Computation with Memory" (TCM) algorithm. In this approach, node values are attached to tokens that travel across the network. Each pair of travelling tokens coalesces when they meet, forming a token with a new value as a function of the original token values. In contrast to the Coalescing Random Walk (CRW) algorithm, where token movement is governed by random walk, the meeting of tokens in our scheme is accelerated by adopting a novel chasing mechanism. We prove that, compared to the CRW algorithm, the TCM algorithm results in a reduction of time complexity by a factor of at least $\sqrt{n/\log(n)}$ in Erdős–Rényi and complete graphs, and by a factor of $\log(n)/\log(\log(n))$ in torus networks. Simulation results show that there is at least a constant factor improvement in the message complexity of the TCM algorithm in all considered topologies. The robustness of the CRW and TCM algorithms in the presence of node failure is analyzed. We show that their robustness can be improved by running multiple instances of the algorithms in parallel.
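A toy simulation of the token-coalescence idea described above for a commutative, associative target function (max) on a complete graph; this follows plain CRW-style random-walk movement and does not implement the paper's chasing mechanism or its complexity guarantees.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 20
initial_values = rng.normal(size=n)            # hypothetical node values
positions = np.arange(n)                       # token i starts at node i
token_value = dict(enumerate(initial_values))  # alive tokens and their values

steps = 0
while len(token_value) > 1:
    steps += 1
    for t in token_value:
        positions[t] = rng.integers(n)         # move to a uniformly random node
    # Tokens landing on the same node coalesce into a single token.
    by_node = {}
    for t in token_value:
        by_node.setdefault(int(positions[t]), []).append(t)
    merged = {}
    for node, ts in by_node.items():
        merged[ts[0]] = max(token_value[t] for t in ts)  # f applied pairwise
    token_value = merged

(final_token, final_value), = token_value.items()
print(f"converged in {steps} steps: {final_value:.4f} vs max {initial_values.max():.4f}")
```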
1
0
0
1
0
0
Simple property of heterogeneous aspiration dynamics: Beyond weak selection
How individuals adapt their behavior in cultural evolution remains elusive. Theoretical studies have shown that the update rules chosen to model individual decision making can dramatically modify the evolutionary outcome of the population as a whole. This hints at the complexities of considering the personality of individuals in a population, where each one uses its own rule. Here, we investigate whether and how heterogeneity in the rules of behavior update alters the evolutionary outcome. We assume that individuals update behaviors by aspiration-based self-evaluation and they do so in their own ways. Under weak selection, we analytically reveal a simple property that holds for any two-strategy multi-player games in well-mixed populations and on regular graphs: the evolutionary outcome in a population with heterogeneous update rules is the weighted average of the outcomes in the corresponding homogeneous populations, and the associated weights are the frequencies of each update rule in the heterogeneous population. Beyond weak selection, we show that this property holds for public goods games. Our finding implies that heterogeneous aspiration dynamics is additive. This additivity greatly reduces the complexity induced by the underlying individual heterogeneity. Our work thus provides an efficient method to calculate evolutionary outcomes under heterogeneous update rules.
0
0
0
0
1
0
Warm dark matter and the ionization history of the Universe
In warm dark matter scenarios structure formation is suppressed on small scales with respect to the cold dark matter case, reducing the number of low-mass halos and the fraction of ionized gas at high redshifts and thus, delaying reionization. This has an impact on the ionization history of the Universe and measurements of the optical depth to reionization, of the evolution of the global fraction of ionized gas and of the thermal history of the intergalactic medium, can be used to set constraints on the mass of the dark matter particle. However, the suppression of the fraction of ionized medium in these scenarios can be partly compensated by varying other parameters, as the ionization efficiency or the minimum mass for which halos can host star-forming galaxies. Here we use different data sets regarding the ionization and thermal histories of the Universe and, taking into account the degeneracies from several astrophysical parameters, we obtain a lower bound on the mass of thermal warm dark matter candidates of $m_X > 1.3$ keV, or $m_s > 5.5$ keV for the case of sterile neutrinos non-resonantly produced in the early Universe, both at 90\% confidence level.
0
1
0
0
0
0
High quality factor manganese-doped aluminum lumped-element kinetic inductance detectors sensitive to frequencies below 100 GHz
Aluminum lumped-element kinetic inductance detectors (LEKIDs) sensitive to millimeter-wave photons have been shown to exhibit high quality factors, making them highly sensitive and multiplexable. The superconducting gap of aluminum limits aluminum LEKIDs to photon frequencies above 100 GHz. Manganese-doped aluminum (Al-Mn) has a tunable critical temperature and could therefore be an attractive material for LEKIDs sensitive to frequencies below 100 GHz if the internal quality factor remains sufficiently high when manganese is added to the film. To investigate, we measured some of the key properties of Al-Mn LEKIDs. A prototype eight-element LEKID array was fabricated using a 40 nm thick film of Al-Mn deposited on a 500 {\mu}m thick high-resistivity, float-zone silicon substrate. The manganese content was 900 ppm, the measured $T_c = 694\pm1$ mK, and the resonance frequencies were near 150 MHz. Using measurements of the forward scattering parameter $S_{21}$ at various bath temperatures between 65 and 250 mK, we determined that the Al-Mn LEKIDs we fabricated have internal quality factors greater than $2 \times 10^5$, which is high enough for millimeter-wave astrophysical observations. In the dark conditions under which these devices were measured, the fractional frequency noise spectrum shows a shallow slope that depends on bath temperature and probe tone amplitude, which could be two-level system noise. The anticipated white photon noise should dominate this level of low-frequency noise when the detectors are illuminated with millimeter-waves in future measurements. The LEKIDs responded to light pulses from a 1550 nm light-emitting diode, and we used these light pulses to determine that the quasiparticle lifetime is 60 {\mu}s.
0
1
0
0
0
0
Tetramer Bound States in Heteronuclear Systems
We calculate the universal spectrum of trimer and tetramer states in heteronuclear mixtures of ultracold atoms with different masses in the vicinity of the heavy-light dimer threshold. To extract the energies, we solve the three- and four-body problem for simple two- and three-body potentials tuned to the universal region using the Gaussian expansion method. We focus on the case of one light particle of mass $m$ and two or three heavy bosons of mass $M$ with resonant heavy-light interactions. We find that trimer and tetramer cross into the heavy-light dimer threshold at almost the same point and that as the mass ratio $M/m$ decreases, the distance between the thresholds for trimer and tetramer states becomes smaller. We also comment on the possibility of observing exotic three-body states consisting of a dimer and two atoms in this region and compare with previous work.
0
1
0
0
0
0
Dark Energy Cosmological Models with General forms of Scale Factor
In this paper, we have constructed dark energy models in an anisotropic Bianchi-V space-time and studied the role of anisotropy in the evolution of dark energy. We have considered an anisotropic dark energy fluid with different pressure gradients along different spatial directions. In order to obtain a deterministic solution, we have considered three general forms of the scale factor. The different forms of scale factors considered here produce time-varying deceleration parameters in all cases, which simulate the cosmic transition. The variable equation of state (EoS) parameter and the skewness parameters for all the models are obtained and analyzed. The physical properties of the models are also discussed.
0
1
0
0
0
0
Mutual Kernel Matrix Completion
With the huge influx of various data nowadays, extracting knowledge from them has become an interesting but tedious task among data scientists, particularly when the data come in heterogeneous form and have missing information. Many data completion techniques have been introduced, especially with the advent of kernel methods. However, among the many data completion techniques available in the literature, studies about mutually completing several incomplete kernel matrices have not been given much attention yet. In this paper, we present a new method, called the Mutual Kernel Matrix Completion (MKMC) algorithm, that tackles this problem of mutually inferring the missing entries of multiple kernel matrices by combining the notions of data fusion and kernel matrix completion, applied to biological data sets to be used for classification tasks. We first introduce an objective function that is minimized by exploiting the EM algorithm, which in turn results in an estimate of the missing entries of the kernel matrices involved. The completed kernel matrices are then combined to produce a model matrix that can be used to further improve the obtained estimates. An interesting result of our study is that the E-step and the M-step are given in closed form, which makes our algorithm efficient in terms of time and memory. After completion, the (completed) kernel matrices are then used to train an SVM classifier to test how well the relationships among the entries are preserved. Our empirical results show that the proposed algorithm bested the traditional completion techniques in preserving the relationships among the data points, and in accurately recovering the missing kernel matrix entries. Overall, MKMC offers a promising solution to the problem of mutual estimation of a number of relevant incomplete kernel matrices.
1
0
0
1
0
0
Quantum Klein Space and Superspace
We give an algebraic quantization, in the sense of quantum groups, of the complex Minkowski space, and we examine the real forms corresponding to the signatures $(3,1)$, $(2,2)$, $(4,0)$, constructing the corresponding quantum metrics and providing an explicit presentation of the quantized coordinate algebras. In particular, we focus on the Kleinian signature $(2,2)$. The quantizations of the complex and real spaces come together with a coaction of the quantizations of the respective symmetry groups. We also extend such quantizations to the $\mathcal{N}=1$ supersetting.
0
0
1
0
0
0
Bayesian Lasso Posterior Sampling via Parallelized Measure Transport
It is well known that the Lasso can be interpreted as a Bayesian posterior mode estimate with a Laplacian prior. Obtaining samples from the full posterior distribution, the Bayesian Lasso, confers major advantages in performance as compared to having only the Lasso point estimate. Traditionally, the Bayesian Lasso is implemented via Gibbs sampling methods which suffer from lack of scalability, unknown convergence rates, and generation of samples that are necessarily correlated. We provide a measure transport approach to generate i.i.d samples from the posterior by constructing a transport map that transforms a sample from the Laplacian prior into a sample from the posterior. We show how the construction of this transport map can be parallelized into modules that iteratively solve Lasso problems and perform closed-form linear algebra updates. With this posterior sampling method, we perform maximum likelihood estimation of the Lasso regularization parameter via the EM algorithm. We provide comparisons to traditional Gibbs samplers using the diabetes dataset of Efron et al. Lastly, we give an example implementation on a computing system that leverages parallelization, a graphics processing unit, whose execution time has much less dependence on dimension as compared to a standard implementation.
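For context only, a minimal sketch computing the Lasso point estimate (the posterior mode under the Laplacian prior) on the scikit-learn copy of the Efron et al. diabetes data; this is the baseline quantity the abstract refers to, not the paper's transport-map posterior sampler, and the regularization value is an arbitrary placeholder.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# Diabetes data of Efron et al., as shipped with scikit-learn.
X, y = load_diabetes(return_X_y=True)
X = StandardScaler().fit_transform(X)
y = y - y.mean()

# The Lasso point estimate corresponds to the posterior mode under a Laplacian
# prior; alpha plays the role of the regularization parameter (placeholder value).
lasso = Lasso(alpha=1.0).fit(X, y)
print("nonzero coefficients:", np.sum(lasso.coef_ != 0))
print(lasso.coef_)
```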
0
0
0
1
0
0
Endogeneous Dynamics of Intraday Liquidity
In this paper we investigate the endogenous information contained in four liquidity variables at a five-minute time scale on equity markets around the world: the traded volume, the bid-ask spread, the volatility and the volume at the first limits of the order book. In the spirit of Granger causality, we measure the level of information by the level of accuracy of linear autoregressive models. This empirical study is carried out on a dataset of more than 300 stocks from four different markets (US, UK, Japan and Hong Kong) over a period of more than five years. We discuss the performance obtained by autoregressive (AR) models on stationarized versions of the variables, focusing on explaining the observed differences between stocks. Since empirical studies are often conducted at this time scale, we believe it is of paramount importance to document endogenous dynamics in a simple framework with no addition of supplemental information. Our study can hence be used as a benchmark to identify exogenous effects. On the other hand, most optimal trading frameworks (like the celebrated Almgren and Chriss one) focus on computing an optimal trading speed at a frequency close to the one we consider. Such frameworks very often make i.i.d. assumptions on liquidity variables; this paper documents the autocorrelations emerging from real data, opening the door to new developments in optimal trading.
0
0
0
0
0
1
Medical Image Synthesis for Data Augmentation and Anonymization using Generative Adversarial Networks
Data diversity is critical to success when training deep learning models. Medical imaging data sets are often imbalanced as pathologic findings are generally rare, which introduces significant challenges when training deep learning models. In this work, we propose a method to generate synthetic abnormal MRI images with brain tumors by training a generative adversarial network using two publicly available data sets of brain MRI. We demonstrate two unique benefits that the synthetic images provide. First, we illustrate improved performance on tumor segmentation by leveraging the synthetic images as a form of data augmentation. Second, we demonstrate the value of generative models as an anonymization tool, achieving comparable tumor segmentation results when trained on the synthetic data versus when trained on real subject data. Together, these results offer a potential solution to two of the largest challenges facing machine learning in medical imaging, namely the small incidence of pathological findings, and the restrictions around sharing of patient data.
0
0
0
1
0
0
Adaptive Feature Representation for Visual Tracking
Robust feature representation plays a significant role in visual tracking. However, it remains a challenging issue, since many factors may affect the experimental performance. Existing methods that combine different features by weighting them equally with fixed weights can hardly solve these issues, due to the different statistical properties of the features across various scenarios and attributes. In this paper, by exploiting the internal relationship among these features, we develop a robust method to construct a more stable feature representation. More specifically, we utilize a co-training paradigm to formulate the intrinsic complementary information of the multi-feature template into an efficient correlation filter framework. We test our approach on challenging sequences with illumination variation, scale variation, deformation, etc. Experimental results demonstrate that the proposed method performs favorably against state-of-the-art methods.
1
0
0
0
0
0
An analysis of the SPARSEVA estimate for the finite sample data case
In this paper, we develop an upper bound for the SPARSEVA (SPARSe Estimation based on a VAlidation criterion) estimation error in a general scheme, i.e., when the cost function is strongly convex and the regularized norm is decomposable for a pair of subspaces. We show how this general bound can be applied to a sparse regression problem to obtain an upper bound for the traditional SPARSEVA problem. Numerical results are used to illustrate the effectiveness of the suggested bound.
0
0
1
1
0
0
The Remarkable Similarity of Massive Galaxy Clusters From z~0 to z~1.9
We present the results of a Chandra X-ray survey of the 8 most massive galaxy clusters at z>1.2 in the South Pole Telescope 2500 deg^2 survey. We combine this sample with previously-published Chandra observations of 49 massive X-ray-selected clusters at 0<z<0.1 and 90 SZ-selected clusters at 0.25<z<1.2 to constrain the evolution of the intracluster medium (ICM) over the past ~10 Gyr. We find that the bulk of the ICM has evolved self similarly over the full redshift range probed here, with the ICM density at r>0.2R500 scaling like E(z)^2. In the centers of clusters (r<0.1R500), we find significant deviations from self similarity (n_e ~ E(z)^{0.1+/-0.5}), consistent with no redshift dependence. When we isolate clusters with over-dense cores (i.e., cool cores), we find that the average over-density profile has not evolved with redshift -- that is, cool cores have not changed in size, density, or total mass over the past ~9-10 Gyr. We show that the evolving "cuspiness" of clusters in the X-ray, reported by several previous studies, can be understood in the context of a cool core with fixed properties embedded in a self similarly-evolving cluster. We find no measurable evolution in the X-ray morphology of massive clusters, seemingly in tension with the rapidly-rising (with redshift) rate of major mergers predicted by cosmological simulations. We show that these two results can be brought into agreement if we assume that the relaxation time after a merger is proportional to the crossing time, since the latter is proportional to H(z)^(-1).
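A minimal sketch of the self-similar density scaling invoked above, $n_e \propto E(z)^2$ with $E(z) = H(z)/H_0$, evaluated for a flat $\Lambda$CDM background; the cosmological parameters are illustrative and not necessarily those adopted in the paper.

```python
import numpy as np

def E(z, omega_m=0.3, omega_lambda=0.7):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for flat LCDM (illustrative parameters)."""
    return np.sqrt(omega_m * (1 + z) ** 3 + omega_lambda)

for z in (0.0, 0.5, 1.0, 1.9):
    # E(z)^2 is the factor by which the self-similar ICM density scales with redshift.
    print(f"z = {z:.1f}:  E(z)^2 = {E(z)**2:.2f}")
```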
0
1
0
0
0
0
Rigorous estimates for the relegation algorithm
We revisit the relegation algorithm by Deprit et al. (Celest. Mech. Dyn. Astron. 79:157-182, 2001) in the light of rigorous Nekhoroshev-like theory. This relatively recent algorithm is nowadays widely used for implementing closed-form analytic perturbation theories, as it generalises the classical Birkhoff normalisation algorithm. The algorithm, here briefly explained by means of Lie transformations, has so far been introduced and used in a formal way, i.e. without providing any rigorous convergence or asymptotic estimates. The overall aim of this paper is to find such quantitative estimates and to show how the results about stability over exponentially long times can be recovered in a simple and effective way, at least in the non-resonant case.
0
1
0
0
0
0
Linear Pentapods with a Simple Singularity Variety
There exists a bijection between the configuration space of a linear pentapod and all points $(u,v,w,p_x,p_y,p_z)\in\mathbb{R}^{6}$ located on the singular quadric $\Gamma: u^2+v^2+w^2=1$, where $(u,v,w)$ determines the orientation of the linear platform and $(p_x,p_y,p_z)$ its position. Then the set of all singular robot configurations is obtained by intersecting $\Gamma$ with a cubic hypersurface $\Sigma$ in $\mathbb{R}^{6}$, which is only quadratic in the orientation variables and position variables, respectively. This article investigates the restrictions to be imposed on the design of this mechanism in order to obtain a reduction in degree. In detail we study the cases where $\Sigma$ is (1) linear in position variables, (2) linear in orientation variables and (3) quadratic in total. The resulting designs of linear pentapods have the advantage of considerably simplified computation of singularity-free spheres in the configuration space. Finally we propose three kinematically redundant designs of linear pentapods with a simple singularity surface.
1
0
0
0
0
0
Neural Networks as Interacting Particle Systems: Asymptotic Convexity of the Loss Landscape and Universal Scaling of the Approximation Error
Neural networks, a central tool in machine learning, have demonstrated remarkable, high-fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high-dimensional functions, potentially of great use in computational and applied mathematics. That said, there are few rigorous results about the representation error and trainability of neural networks. Here we characterize both the error and the scaling of the error with the size of the network by reinterpreting the standard optimization algorithm used in machine learning applications, stochastic gradient descent, as the evolution of a particle system with interactions governed by a potential related to the objective or "loss" function used to train the network. We show that, when the number $n$ of parameters is large, the empirical distribution of the particles descends on a convex landscape towards a minimizer at a rate independent of $n$. We establish a Law of Large Numbers and a Central Limit Theorem for the empirical distribution, which together show that the approximation error of the network universally scales as $O(n^{-1})$. Remarkably, these properties do not depend on the dimensionality of the domain of the function that we seek to represent. Our analysis also quantifies the scale and nature of the noise introduced by stochastic gradient descent and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train a neural network to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in as high a dimension as $d=25$.
0
0
0
1
0
0
Adaptive Similar Triangles Method: a Stable Alternative to Sinkhorn's Algorithm for Regularized Optimal Transport
In this paper, we are motivated by two important applications: the entropy-regularized optimal transport problem and road or IP traffic demand matrix estimation by the entropy model. Both of them include solving a special type of optimization problem with linear equality constraints and an objective given as a sum of an entropy regularizer and a linear function. It is known that the state-of-the-art solvers for this problem, which are based on Sinkhorn's method (also known as RSA or the balancing method), can fail to work when the entropy-regularization parameter is small. We consider the above optimization problem as a particular instance of a general strongly convex optimization problem with linear constraints. We propose a new algorithm to solve this general class of problems. Our approach is based on the transition to the dual problem. First, we introduce a new accelerated gradient method with adaptive choice of the gradient's Lipschitz constant. Then, we apply this method to the dual problem and show how to reconstruct an approximate solution to the primal problem with a provable convergence rate. We prove the rate $O(1/k^2)$, $k$ being the iteration counter, both for the absolute value of the primal objective residual and the constraints infeasibility. Our method has a per-iteration complexity similar to that of Sinkhorn's method, but is faster and more stable numerically when the regularization parameter is small. We illustrate the advantage of our method by numerical experiments for the two mentioned applications. We show that there exists a threshold such that, when the regularization parameter is smaller than this threshold, our method outperforms Sinkhorn's method in terms of computation time.
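For reference, a minimal numpy sketch of the Sinkhorn balancing iterations that the proposed method is compared against, on a hypothetical small transport problem; note the Gibbs kernel $\exp(-C/\gamma)$ underflows as the regularization parameter $\gamma$ shrinks, which is the instability the paper targets. This is not an implementation of the paper's accelerated-gradient algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 5
a = np.full(n, 1.0 / n)            # source marginal
b = np.full(n, 1.0 / n)            # target marginal
C = rng.random((n, n))             # hypothetical cost matrix
gamma = 0.1                        # entropy-regularization parameter

K = np.exp(-C / gamma)             # Gibbs kernel; underflows when gamma is tiny
u = np.ones(n)
for _ in range(1000):              # Sinkhorn balancing iterations
    v = b / (K.T @ u)
    u = a / (K @ v)

P = np.diag(u) @ K @ np.diag(v)    # approximate optimal transport plan
print("row sums:", P.sum(axis=1))  # should match a
print("col sums:", P.sum(axis=0))  # should match b
```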
0
0
1
0
0
0
Images of Ideals under Derivations and $\mathcal E$-Derivations of Univariate Polynomial Algebras over a Field of Characteristic Zero
Let $K$ be a field of characteristic zero and $x$ a free variable. A $K$-$\mathcal E$-derivation of $K[x]$ is a $K$-linear map of the form $\operatorname{I}-\phi$ for some $K$-algebra endomorphism $\phi$ of $K[x]$, where $\operatorname{I}$ denotes the identity map of $K[x]$. In this paper we study the image of an ideal of $K[x]$ under some $K$-derivations and $K$-$\mathcal E$-derivations of $K[x]$. We show that the LFED conjecture proposed in [Z4] holds for all $K$-$\mathcal E$-derivations and all locally finite $K$-derivations of $K[x]$. We also show that the LNED conjecture proposed in [Z4] holds for all locally nilpotent $K$-derivations of $K[x]$, and also for all locally nilpotent $K$-$\mathcal E$-derivations of $K[x]$ and the ideals $uK[x]$ such that either $u=0$, or $\operatorname{deg}\, u\le 1$, or $u$ has at least one repeated root in the algebraic closure of $K$. As a by-product, the homogeneous Mathieu subspaces (Mathieu-Zhao spaces) of the univariate polynomial algebra over an arbitrary field have also been classified.
0
0
1
0
0
0
Maximum genus of the Jenga like configurations
We treat the boundary of the union of blocks in the Jenga game as a surface with a polyhedral structure and consider its genus. We generalize the game and determine the maximum genus of the generalized game.
0
0
1
0
0
0
A Decidable Very Expressive Description Logic for Databases (Extended Version)
We introduce $\mathcal{DLR}^+$, an extension of the n-ary propositionally closed description logic $\mathcal{DLR}$ to deal with attribute-labelled tuples (generalising the positional notation), projections of relations, and global and local objectification of relations, able to express inclusion, functional, key, and external uniqueness dependencies. The logic is equipped with both TBox and ABox axioms. We show how a simple syntactic restriction on the appearance of projections sharing common attributes in a $\mathcal{DLR}^+$ knowledge base makes reasoning in the language decidable with the same computational complexity as $\mathcal{DLR}$. The obtained $\mathcal{DLR}^\pm$ n-ary description logic is able to encode more thoroughly conceptual data models such as EER, UML, and ORM.
1
0
0
0
0
0
Cloud-based Deep Learning of Big EEG Data for Epileptic Seizure Prediction
Developing a Brain-Computer Interface (BCI) for seizure prediction can help epileptic patients have a better quality of life. However, there are many difficulties and challenges in developing such a system as a real-life support for patients. Because of the nonstationary nature of EEG signals, normal and seizure patterns vary across different patients. Thus, finding a group of manually extracted features for the prediction task is not practical. Moreover, when using implanted electrodes for brain recording, massive amounts of data are produced. This big data calls for safe storage and high computational resources for real-time processing. To address these challenges, a cloud-based BCI system for the analysis of this big EEG data is presented. First, a dimensionality-reduction technique is developed to increase classification accuracy as well as to decrease the communication bandwidth and computation time. Second, following a deep-learning approach, a stacked autoencoder is trained in two steps for unsupervised feature extraction and classification. Third, a cloud-computing solution is proposed for real-time analysis of big EEG data. The results on a benchmark clinical dataset illustrate the superiority of the proposed patient-specific BCI as an alternative method and its expected usefulness in real-life support of epilepsy patients.
1
0
0
1
0
0
Centrality measures for graphons: Accounting for uncertainty in networks
As relational datasets modeled as graphs keep increasing in size and their data-acquisition is permeated by uncertainty, graph-based analysis techniques can become computationally and conceptually challenging. In particular, node centrality measures rely on the assumption that the graph is perfectly known -- a premise not necessarily fulfilled for large, uncertain networks. Accordingly, centrality measures may fail to faithfully extract the importance of nodes in the presence of uncertainty. To mitigate these problems, we suggest a statistical approach based on graphon theory: we introduce formal definitions of centrality measures for graphons and establish their connections to classical graph centrality measures. A key advantage of this approach is that centrality measures defined at the modeling level of graphons are inherently robust to stochastic variations of specific graph realizations. Using the theory of linear integral operators, we define degree, eigenvector, Katz and PageRank centrality functions for graphons and establish concentration inequalities demonstrating that graphon centrality functions arise naturally as limits of their counterparts defined on sequences of graphs of increasing size. The same concentration inequalities also provide high-probability bounds between the graphon centrality functions and the centrality measures on any sampled graph, thereby establishing a measure of uncertainty of the measured centrality score.
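A minimal numerical sketch of the graphon degree centrality function, $d(x) = \int_0^1 W(x,y)\,dy$, for a simple illustrative graphon discretized on a grid; the choice $W(x,y) = xy$ is hypothetical and only meant to show the definition, not a graphon considered in the paper.

```python
import numpy as np

def W(x, y):
    """Illustrative graphon, W(x, y) = x * y (hypothetical choice)."""
    return x * y

# Discretize [0, 1] and approximate d(x) = integral of W(x, y) dy by a Riemann sum.
grid = np.linspace(0.0, 1.0, 1001)
dy = grid[1] - grid[0]
degree_centrality = np.array([np.sum(W(x, grid)) * dy for x in grid])

# For W(x, y) = x*y the exact answer is d(x) = x / 2.
print(degree_centrality[::250])   # approximately [0, 0.125, 0.25, 0.375, 0.5]
```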
1
0
1
1
0
0
A time-periodic mechanical analog of the quantum harmonic oscillator
We theoretically investigate the stability and linear oscillatory behavior of a naturally unstable particle whose potential energy is harmonically modulated. We find this fundamental dynamical system is analogous in time to a quantum harmonic oscillator. In a certain modulation limit, a.k.a. the Kapitza regime, the modulated oscillator can behave like an effective classic harmonic oscillator. But in the overlooked opposite limit, the stable modes of vibrations are quantized in the modulation parameter space. By analogy with the statistical interpretation of quantum physics, those modes can be characterized by the time-energy uncertainty relation of a quantum harmonic oscillator. Reducing the almost-periodic vibrational modes of the particle to their periodic eigenfunctions, one can transform the original equation of motion to a dimensionless Schrödinger stationary wave equation with a harmonic potential. This reduction process introduces two features reminiscent of the quantum realm: a wave-particle duality and a loss of causality that could legitimate a statistical interpretation of the computed eigenfunctions. These results shed new light on periodically time-varying linear dynamical systems and open an original path in the recently revived field of quantum mechanical analogs.
0
1
0
0
0
0
Anharmonicity and the isotope effect in superconducting lithium at high pressures: a first-principles approach
Recent experiments [Schaeffer 2015] have shown that lithium presents an extremely anomalous isotope effect in the 15-25 GPa pressure range. In this article we have calculated the anharmonic phonon dispersion of $\mathrm{^7Li}$ and $\mathrm{^6Li}$ under pressure, their superconducting transition temperatures, and the associated isotope effect. We have found a huge anharmonic renormalization of a transverse acoustic soft mode along $\Gamma$K in the fcc phase, the expected structure at the pressure range of interest. In fact, the anharmonic correction dynamically stabilizes the fcc phase above 25 GPa. However, we have not found any anomalous scaling of the superconducting temperature with the isotopic mass. Additionally, we have also analyzed whether the two lithium isotopes adopting different structures could explain the observed anomalous behavior. According to our enthalpy calculations including zero-point motion and anharmonicity it would not be possible in a stable regime.
0
1
0
0
0
0
Time-resolved polarimetry of the superluminous SN 2015bn with the Nordic Optical Telescope
We present imaging polarimetry of the superluminous supernova SN 2015bn, obtained over nine epochs between $-$20 and $+$46 days with the Nordic Optical Telescope. This was a nearby, slowly-evolving Type I superluminous supernova that has been studied extensively and for which two epochs of spectropolarimetry are also available. Based on field stars, we determine the interstellar polarisation in the Galaxy to be negligible. The polarisation of SN 2015bn shows a statistically significant increase during the last epochs, confirming previous findings. Our well-sampled imaging polarimetry series allows us to determine that this increase (from $\sim 0.54\%$ to $\gtrsim 1.10\%$) coincides in time with rapid changes that took place in the optical spectrum. We conclude that the supernova underwent a `phase transition' at around $+$20 days, when the photospheric emission shifted from an outer layer, dominated by natal C and O, to a more aspherical inner core, dominated by freshly nucleosynthesized material. This two-layered model might account for the characteristic appearance and properties of Type I superluminous supernovae.
0
1
0
0
0
0
Deep Bayesian Active Learning with Image Data
Even though active learning forms an important pillar of machine learning, deep learning tools are not prevalent within it. Deep learning poses several difficulties when used in an active learning setting. First, active learning (AL) methods generally rely on being able to learn and update models from small amounts of data. Recent advances in deep learning, on the other hand, are notorious for their dependence on large amounts of data. Second, many AL acquisition functions rely on model uncertainty, yet deep learning methods rarely represent such model uncertainty. In this paper we combine recent advances in Bayesian deep learning into the active learning framework in a practical way. We develop an active learning framework for high dimensional data, a task which has been extremely challenging so far, with very sparse existing literature. Taking advantage of specialised models such as Bayesian convolutional neural networks, we demonstrate our active learning techniques with image data, obtaining a significant improvement on existing active learning approaches. We demonstrate this on both the MNIST dataset, as well as for skin cancer diagnosis from lesion images (ISIC2016 task).
1
0
0
1
0
0
Robust Optical Flow Estimation in Rainy Scenes
Optical flow estimation in the rainy scenes is challenging due to background degradation introduced by rain streaks and rain accumulation effects in the scene. Rain accumulation effect refers to poor visibility of remote objects due to the intense rainfall. Most existing optical flow methods are erroneous when applied to rain sequences because the conventional brightness constancy constraint (BCC) and gradient constancy constraint (GCC) generally break down in this situation. Based on the observation that the RGB color channels receive raindrop radiance equally, we introduce a residue channel as a new data constraint to reduce the effect of rain streaks. To handle rain accumulation, our method decomposes the image into a piecewise-smooth background layer and a high-frequency detail layer. It also enforces the BCC on the background layer only. Results on both synthetic dataset and real images show that our algorithm outperforms existing methods on different types of rain sequences. To our knowledge, this is the first optical flow method specifically dealing with rain.
1
0
0
0
0
0
Thermophysical Phenomena in Metal Additive Manufacturing by Selective Laser Melting: Fundamentals, Modeling, Simulation and Experimentation
Among the many additive manufacturing (AM) processes for metallic materials, selective laser melting (SLM) is arguably the most versatile in terms of its potential to realize complex geometries along with tailored microstructure. However, the complexity of the SLM process, and the need for predictive relation of powder and process parameters to the part properties, demands further development of computational and experimental methods. This review addresses the fundamental physical phenomena of SLM, with a special emphasis on the associated thermal behavior. Simulation and experimental methods are discussed according to three primary categories. First, macroscopic approaches aim to answer questions at the component level and consider for example the determination of residual stresses or dimensional distortion effects prevalent in SLM. Second, mesoscopic approaches focus on the detection of defects such as excessive surface roughness, residual porosity or inclusions that occur at the mesoscopic length scale of individual powder particles. Third, microscopic approaches investigate the metallurgical microstructure evolution resulting from the high temperature gradients and extreme heating and cooling rates induced by the SLM process. Consideration of physical phenomena on all of these three length scales is mandatory to establish the understanding needed to realize high part quality in many applications, and to fully exploit the potential of SLM and related metal AM processes.
1
1
0
0
0
0
Numerical Methods for Pulmonary Image Registration
Due to the complexity and invisibility of human organs, diagnosticians need to analyze medical images to determine where the lesion region is and which kind of disease is present, in order to make precise diagnoses. For satisfying clinical purposes through analyzing medical images, registration plays an essential role. For instance, in Image-Guided Interventions (IGI) and computer-aided surgeries, patient anatomy is registered to preoperative images to guide surgeons in completing procedures. Medical image registration is also very useful in surgical planning, in monitoring disease progression and for atlas construction. Because of this significance, the theory, methods, and implementation of image registration constitute fundamental knowledge in the educational training of medical specialists. In this chapter, we focus on image registration of a specific human organ, i.e. the lung, which is prone to lesions. For pulmonary image registration, improving the accuracy, and how to achieve this improvement for clinical purposes, is an important problem that should be seriously addressed. In this chapter, we provide a survey which focuses on the role of image registration in educational training together with the state of the art of pulmonary image registration. In the first part, we describe clinical applications of image registration, introducing artificial organs in Simulation-based Education. In the second part, we summarize the common methods used in pulmonary image registration and analyze popular papers to obtain a survey of pulmonary image registration.
0
1
0
0
0
0
Solving Non-parametric Inverse Problem in Continuous Markov Random Field using Loopy Belief Propagation
In this paper, we address the inverse problem, or the statistical machine learning problem, in Markov random fields with a non-parametric pair-wise energy function with continuous variables. The inverse problem is formulated by maximum likelihood estimation. The exact treatment of maximum likelihood estimation is intractable because of two problems: (1) it includes the evaluation of the partition function and (2) it is formulated in the form of functional optimization. We avoid Problem (1) by using Bethe approximation. Bethe approximation is an approximation technique equivalent to the loopy belief propagation. Problem (2) can be solved by using orthonormal function expansion. Orthonormal function expansion can reduce a functional optimization problem to a function optimization problem. Our method can provide an analytic form of the solution of the inverse problem within the framework of Bethe approximation.
1
1
0
1
0
0
Topology Adaptive Graph Convolutional Networks
Spectral graph convolutional neural networks (CNNs) require approximation to the convolution to alleviate the computational complexity, resulting in performance loss. This paper proposes the topology adaptive graph convolutional network (TAGCN), a novel graph convolutional network defined in the vertex domain. We provide a systematic way to design a set of fixed-size learnable filters to perform convolutions on graphs. The topologies of these filters are adaptive to the topology of the graph when they scan the graph to perform convolution. The TAGCN not only inherits the properties of convolutions in CNN for grid-structured data, but it is also consistent with convolution as defined in graph signal processing. Since no approximation to the convolution is needed, TAGCN exhibits better performance than existing spectral CNNs on a number of data sets and is also computationally simpler than other recent methods.
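A minimal numpy sketch of the vertex-domain polynomial filtering that TAGCN-style graph convolution builds on, $y = \sum_k g_k A^k x$ with a degree-normalized adjacency matrix; the graph, features and filter coefficients below are hypothetical, and this omits the learnable multi-channel weights and nonlinearities of the actual network.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical undirected graph on 4 nodes (adjacency matrix) and scalar node features.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))   # symmetric degree normalization

x = rng.normal(size=4)                 # input node features
g = rng.normal(size=3)                 # filter coefficients g_0..g_K with K = 2

# Polynomial (K-hop, fixed-size) filter applied in the vertex domain.
y = sum(g[k] * np.linalg.matrix_power(A_norm, k) @ x for k in range(len(g)))
print(y)
```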
1
0
0
1
0
0
Suspensions of finite-size neutrally-buoyant spheres in turbulent duct flow
We study the turbulent square duct flow of dense suspensions of neutrally-buoyant spherical particles. Direct numerical simulations (DNS) are performed in the range of volume fractions $\phi=0-0.2$, using the immersed boundary method (IBM) to account for the dispersed phase. Based on the hydraulic diameter, a Reynolds number of $5600$ is considered. We report flow features and particle statistics specific to this geometry, and compare the results to the case of two-dimensional channel flows. In particular, we observe that for $\phi=0.05$ and $0.1$, particles preferentially accumulate on the corner bisectors, close to the duct corners, as also observed for laminar square duct flows of the same duct-to-particle size ratios. At the highest volume fraction, particles preferentially accumulate in the core region. For channel flows, in the absence of lateral confinement particles are found instead to be uniformly distributed across the channel. We also observe that the intensity of the cross-stream secondary flows increases (with respect to the unladen case) with the volume fraction up to $\phi=0.1$, as a consequence of the high concentration of particles along the corner bisector. For $\phi=0.2$ the turbulence activity is strongly reduced and the intensity of the secondary flows reduces below that of the unladen case. The friction Reynolds number increases with $\phi$ in dilute conditions, as observed for channel flows. However, for $\phi=0.2$ the mean friction Reynolds number decreases below the value for $\phi=0.1$.
0
1
0
0
0
0
Far-from-equilibrium transport of excited carriers in nanostructures
Transport of charged carriers in regimes of strong non-equilibrium is critical in a wide array of applications ranging from solar energy conversion and semiconductor devices to quantum information. Plasmonic hot-carrier science brings this regime of transport physics to the forefront since photo-excited carriers must be extracted far from equilibrium to harvest their energy efficiently. Here, we present a theoretical and computational framework, Non-Equilibrium Scattering in Space and Energy (NESSE), to predict the spatial evolution of carrier energy distributions that combines the best features of phase-space (Boltzmann) and particle-based (Monte Carlo) methods. Within the NESSE framework, we bridge first-principles electronic structure predictions of plasmon decay and carrier collision integrals at the atomic scale, with electromagnetic field simulations at the nano- to mesoscale. Finally, we apply NESSE to predict spatially-resolved energy distributions of photo-excited carriers that impact the surface of experimentally realizable plasmonic nanostructures, enabling first-principles design of hot carrier devices.
0
1
0
0
0
0
On annihilators of bounded $(\frak g, \frak k)$-modules
Let $\frak g$ be a semisimple Lie algebra and $\frak k\subset\frak g$ be a reductive subalgebra. We say that a $\frak g$-module $M$ is a bounded $(\frak g, \frak k)$-module if $M$ is a direct sum of simple finite-dimensional $\frak k$-modules and the multiplicities of all simple $\frak k$-modules in that direct sum are universally bounded. The goal of this article is to show that the "boundedness" property for a simple $(\frak g, \frak k)$-module $M$ is equivalent to a property of the associated variety of the annihilator of $M$ (this is the closure of a nilpotent coadjoint orbit inside $\frak g^*$) under the assumption that the main field is algebraically closed and of characteristic 0. In particular this implies that if $M_1, M_2$ are simple $(\frak g, \frak k)$-modules such that $M_1$ is bounded and the associated varieties of the annihilators of $M_1$ and $M_2$ coincide then $M_2$ is also bounded. This statement is a geometric analogue of a purely algebraic fact due to I. Penkov and V. Serganova and it was posed as a conjecture in my Ph.D. thesis.
0
0
1
0
0
0
Neutron Star Planets: Atmospheric processes and habitability
Of the roughly 3000 neutron stars known, only a handful have sub-stellar companions. The most famous of these are the low-mass planets around the millisecond pulsar B1257+12. New evidence indicates that observational biases could still hide a wide variety of planetary systems around most neutron stars. We consider the environment and physical processes relevant to neutron star planets, in particular the effect of X-ray irradiation and the relativistic pulsar wind on the planetary atmosphere. We discuss the survival time of planet atmospheres and the planetary surface conditions around different classes of neutron stars, and define a neutron star habitable zone. Depending on as-yet poorly constrained aspects of the pulsar wind, both Super-Earths around B1257+12 could lie within its habitable zone.
0
1
0
0
0
0
Discrete Time-Crystalline Order in Cavity and Circuit QED Systems
Discrete time crystals are a recently proposed and experimentally observed out-of-equilibrium dynamical phase of Floquet systems, where the stroboscopic evolution of a local observable repeats itself at an integer multiple of the driving period. We address this issue in a driven-dissipative setup, focusing on the modulated open Dicke model, which can be implemented by cavity or circuit QED systems. In the thermodynamic limit, we employ semiclassical approaches and find rich dynamical phases on top of the discrete time-crystalline order. In a deep quantum regime with few qubits, we find clear signatures of a transient discrete time-crystalline behavior, which is absent in the isolated counterpart. We establish a phenomenology of dissipative discrete time crystals by generalizing the Landau theory of phase transitions to Floquet open systems.
0
1
0
0
0
0