Columns:
text: string, lengths 57 to 2.88k
labels: sequence, length 6
Title: Refining Source Representations with Relation Networks for Neural Machine Translation, Abstract: Although neural machine translation (NMT) with the encoder-decoder framework has achieved great success in recent times, it still suffers from some drawbacks: RNNs tend to forget old information which is often useful, and the encoder operates only over words without considering word relationships. To solve these problems, we introduce a relation network (RN) into NMT to refine the encoding representations of the source. In our method, the RN first augments the representation of each source word with its neighbors and reasons over all possible pairwise relations between them. Then the source representations and all the relations are fed to the attention module and the decoder together, keeping the main encoder-decoder architecture unchanged. Experiments on two Chinese-to-English data sets of different scales both show that our method significantly outperforms the competitive baselines.
[ 1, 0, 0, 0, 0, 0 ]
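Below is a minimal numpy sketch of the pairwise relation computation described in the abstract above: each source state is augmented with a local neighborhood summary, all ordered pairs are scored by a small MLP, and the scores are aggregated per position. The window size, MLP shapes and aggregation by summation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def relation_network(H, W1, b1, W2, b2, window=1):
    """Sketch: augment each source state with its neighbors, then score every
    ordered pair (i, j) with a two-layer MLP and sum over j to obtain one
    relation vector per source position."""
    T, d = H.shape
    # neighbor augmentation: concatenate the mean of a local window
    padded = np.pad(H, ((window, window), (0, 0)), mode="edge")
    ctx = np.stack([padded[i:i + 2 * window + 1].mean(axis=0) for i in range(T)])
    A = np.concatenate([H, ctx], axis=1)                          # (T, 2d)
    # pairwise relations g([a_i; a_j]) for all i, j
    pairs = np.concatenate(
        [np.repeat(A, T, axis=0), np.tile(A, (T, 1))], axis=1)    # (T*T, 4d)
    hidden = np.maximum(pairs @ W1 + b1, 0.0)                     # ReLU MLP
    rel = (hidden @ W2 + b2).reshape(T, T, -1).sum(axis=1)        # (T, d_rel)
    return np.concatenate([H, rel], axis=1)                       # refined states

# toy shapes: 5 source words, 8-dim encoder states, 16-dim relation space
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))
W1, b1 = rng.normal(size=(32, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
print(relation_network(H, W1, b1, W2, b2).shape)                  # (5, 24)
```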
Title: On the Statistical Efficiency of Optimal Kernel Sum Classifiers, Abstract: We propose a novel combination of optimization tools with learning theory bounds in order to analyze the sample complexity of optimal kernel sum classifiers. This contrasts with typical learning-theoretic results, which hold for all (potentially suboptimal) classifiers. Our work also justifies assumptions made in prior work on multiple kernel learning. As a byproduct of our analysis, we also provide a new form of Rademacher complexity for hypothesis classes containing only optimal classifiers.
[ 1, 0, 0, 1, 0, 0 ]
Title: Henkin constructions of models with size continuum, Abstract: We survey the technique of constructing customized models of size continuum in omega steps and illustrate the method by giving new proofs of mostly old results within this rubric. One new theorem, which is joint with Saharon Shelah, is that a pseudominimal theory has an atomic model of size continuum.
[ 0, 0, 1, 0, 0, 0 ]
Title: Semi-supervised and Active-learning Scenarios: Efficient Acoustic Model Refinement for a Low Resource Indian Language, Abstract: We address the problem of efficient acoustic-model refinement (continuous retraining) using semi-supervised and active learning for a low resource Indian language, where the low resource constraints are: i) a small labeled corpus from which to train a baseline `seed' acoustic model, and ii) a large training corpus without orthographic labeling, or from which a data selection for manual labeling can be performed at low cost. The proposed semi-supervised learning decodes the unlabeled large training corpus using the seed model and, through various protocols, selects the decoded utterances with high reliability using confidence levels (which correlate with the WER of the decoded utterances) and iterative bootstrapping. The proposed active learning protocol uses a confidence-level-based metric to select decoded utterances from the large unlabeled corpus for further labeling. The semi-supervised learning protocols can offer a WER reduction, from a poorly trained seed model, of as much as 50% of the best WER reduction realizable from the seed model's WER if the large corpus were labeled and used for acoustic-model training. The active learning protocols require only 60% of the entire training corpus to be manually labeled to reach the same performance as using the entire data.
[ 1, 0, 0, 0, 0, 0 ]
Title: Energy spectrum of cascade showers generated by cosmic ray muons in water, Abstract: The spatial distribution of Cherenkov radiation from cascade showers generated by muons in water has been measured with the Cherenkov water calorimeter (CWC) NEVOD. This result allowed us to improve the techniques for treating cascade showers with unknown axes by means of CWC response analysis. The techniques for selecting events with high-energy cascade showers and reconstructing their parameters are discussed. Preliminary results of measurements of the spectrum of cascade showers in the energy range 100 GeV - 20 TeV generated by cosmic ray muons at large zenith angles, and their comparison with expectations, are presented.
[ 0, 1, 0, 0, 0, 0 ]
Title: The limit point of the pentagram map, Abstract: The pentagram map is a discrete dynamical system defined on the space of polygons in the plane. In the first paper on the subject, R. Schwartz proved that the pentagram map produces from each convex polygon a sequence of successively smaller polygons that converges exponentially to a point. We investigate the limit point itself, giving an explicit description of its Cartesian coordinates as roots of certain degree three polynomials.
[ 0, 0, 1, 0, 0, 0 ]
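As a complement to the abstract above, here is a small numpy sketch of one iteration of the pentagram map (intersecting consecutive short diagonals), iterated to approximate the limit point numerically; the hexagon coordinates and iteration count are arbitrary choices, and the paper's closed-form description of the limit point is not reproduced here.

```python
import numpy as np

def line_intersection(p1, p2, q1, q2):
    """Intersection point of the lines through p1,p2 and q1,q2 (assumed non-parallel)."""
    A = np.array([p2 - p1, q1 - q2]).T
    t, _ = np.linalg.solve(A, q1 - p1)
    return p1 + t * (p2 - p1)

def pentagram(P):
    """One step of the pentagram map: intersect the consecutive short diagonals
    P_i P_{i+2} and P_{i+1} P_{i+3} of a convex polygon."""
    n = len(P)
    return np.array([
        line_intersection(P[i], P[(i + 2) % n], P[(i + 1) % n], P[(i + 3) % n])
        for i in range(n)])

# an irregular convex hexagon; successive images shrink towards the limit point
P = np.array([[1.0, 0.0], [0.6, 0.9], [-0.4, 1.0],
              [-1.1, 0.2], [-0.7, -0.8], [0.5, -1.0]])
for _ in range(12):
    P = pentagram(P)
print(P.mean(axis=0))   # numerical approximation of the limit point
```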
Title: Transfer Learning for Neural Semantic Parsing, Abstract: The goal of semantic parsing is to map natural language to a machine interpretable meaning representation language (MRL). One of the constraints that limits full exploration of deep learning technologies for semantic parsing is the lack of sufficient annotated training data. In this paper, we propose using sequence-to-sequence models in a multi-task setup for semantic parsing with a focus on transfer learning. We explore three multi-task architectures for sequence-to-sequence modeling and compare their performance with an independently trained model. Our experiments show that the multi-task setup aids transfer learning from an auxiliary task with large labeled data to a target task with smaller labeled data. We see absolute accuracy gains ranging from 1.0% to 4.4% on our in-house data set, and we also see good gains ranging from 2.5% to 7.0% on the ATIS semantic parsing tasks with syntactic and semantic auxiliary tasks.
[ 1, 0, 0, 0, 0, 0 ]
Title: Definable Valuations induced by multiplicative subgroups and NIP Fields, Abstract: We study the algebraic implications of the non-independence property (NIP) and variants thereof (dp-minimality) on infinite fields, motivated by the conjecture that all such fields which are neither real closed nor separably closed admit a definable henselian valuation. Our results mainly focus on Hahn fields and build upon Will Johnson's preprint "dp-minimal fields", arXiv: 1507.02745v1, July 2015.
[ 0, 0, 1, 0, 0, 0 ]
Title: Bridging the Gap Between Layout Pattern Sampling and Hotspot Detection via Batch Active Learning, Abstract: Layout hotspot detection is one of the main steps in modern VLSI design. A typical hotspot detection flow is extremely time consuming due to the computationally expensive mask optimization and lithographic simulation. Recent research tries to facilitate the procedure with a reduced flow including feature extraction, training set generation and hotspot detection, where feature extraction methods and hotspot detection engines are deeply studied. However, the performance of hotspot detectors relies highly on the quality of reference layout libraries, which are costly to obtain and usually predetermined or randomly sampled in previous works. In this paper, we propose an active learning-based layout pattern sampling and hotspot detection flow, which simultaneously optimizes the machine learning model and the training set, aiming to achieve similar or better hotspot detection performance with a much smaller number of training instances. Experimental results show that our proposed method can significantly reduce lithography simulation overhead while attaining satisfactory detection accuracy on designs under both DUV and EUV lithography technologies.
[ 0, 0, 0, 1, 0, 0 ]
Title: On the Power Spectral Density Applied to the Analysis of Old Canvases, Abstract: A routine task for art historians is painting diagnostics, such as dating or attribution. Signal processing of the X-ray image of a canvas provides useful information about its fabric. However, previous methods may fail when very old and deteriorated artworks or simply canvases of small size are studied. We present a new framework to analyze and further characterize the paintings from their radiographs. First, we start from a general analysis of lattices and provide new unifying results about the theoretical spectra of weaves. Then, we use these results to infer the main structure of the fabric, like the type of weave and the thread densities. We propose a practical estimation of these theoretical results from paintings with the averaged power spectral density (PSD), which provides a more robust tool. Furthermore, we found that the PSD provides a fingerprint that characterizes the whole canvas. We search and discuss some distinctive features we may find in that fingerprint. We apply these results to several masterpieces of the 17th and 18th centuries from the Museo Nacional del Prado to show that this approach yields accurate results in thread counting and is very useful for paintings comparison, even in situations where previous methods fail.
[ 1, 0, 1, 0, 0, 0 ]
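A minimal sketch of the averaged power spectral density (PSD) computation that the abstract above builds on, applied to a synthetic weave-like signal: averaging the row-wise periodograms makes the spectral peak at the thread frequency stand out. The thread frequency, image size and plain-FFT periodogram are illustrative assumptions rather than the paper's pipeline.

```python
import numpy as np

def averaged_psd(image, axis=0):
    """Average the 1-D periodograms of all rows (axis=0) or columns,
    which stabilizes the spectral peaks produced by a periodic weave."""
    data = image if axis == 0 else image.T
    data = data - data.mean(axis=1, keepdims=True)
    spectra = np.abs(np.fft.rfft(data, axis=1)) ** 2
    return spectra.mean(axis=0)

# synthetic radiograph: 12 threads per 100 px horizontally, plus noise
rng = np.random.default_rng(0)
x = np.arange(512)
canvas = np.sin(2 * np.pi * 0.12 * x)[None, :] + 0.5 * rng.normal(size=(256, 512))
psd = averaged_psd(canvas)
freqs = np.fft.rfftfreq(512)          # cycles per pixel
peak = freqs[np.argmax(psd[1:]) + 1]  # skip the DC bin
print(f"dominant spatial frequency: {peak:.3f} cycles/px (expected about 0.12)")
```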
Title: Eco-evolutionary feedbacks - theoretical models and perspectives, Abstract: 1. Theoretical models pertaining to feedbacks between ecological and evolutionary processes are prevalent in multiple biological fields. An integrative overview is currently lacking, due to little crosstalk between the fields and the use of different methodological approaches. 2. Here we review a wide range of models of eco-evolutionary feedbacks and highlight their underlying assumptions. We discuss models where feedbacks occur both within and between hierarchical levels of ecosystems, including populations, communities, and abiotic environments, and consider feedbacks across spatial scales. 3. Identifying the commonalities among feedback models, and the underlying assumptions, helps us better understand the mechanistic basis of eco-evolutionary feedbacks. Eco-evolutionary feedbacks can be readily modelled by coupling demographic and evolutionary formalisms. We provide an overview of these approaches and suggest future integrative modelling avenues. 4. Our overview highlights that eco-evolutionary feedbacks have been incorporated in theoretical work for nearly a century. Yet, this work does not always include the notion of rapid evolution or concurrent ecological and evolutionary time scales. We discuss the importance of density- and frequency-dependent selection for feedbacks, as well as the importance of dispersal as a central linking trait between ecology and evolution in a spatial context.
[ 0, 0, 0, 0, 1, 0 ]
Title: Unusual behavior of cuprates explained by heterogeneous charge localization, Abstract: The cuprate high-temperature superconductors are among the most intensively studied materials, yet essential questions regarding their principal phases and the transitions between them remain unanswered. Generally thought of as doped charge-transfer insulators, these complex lamellar oxides exhibit pseudogap, strange-metal, superconducting and Fermi-liquid behaviour with increasing hole-dopant concentration. Here we propose a simple inhomogeneous Mott-like (de)localization model wherein exactly one hole per copper-oxygen unit is gradually delocalized with increasing doping and temperature. The model is percolative in nature, with parameters that are experimentally constrained. It comprehensively captures pivotal unconventional experimental results, including the temperature and doping dependence of the pseudogap phenomenon, the strange-metal linear temperature dependence of the planar resistivity, and the doping dependence of the superfluid density. The success and simplicity of our model greatly demystify the cuprate phase diagram and point to a local superconducting pairing mechanism involving the (de)localized hole.
[ 0, 1, 0, 0, 0, 0 ]
Title: Explicit construction of RIP matrices is Ramsey-hard, Abstract: Matrices $\Phi\in\mathbb{R}^{n\times p}$ satisfying the Restricted Isometry Property (RIP) are an important ingredient of compressive sensing methods. While it is known that random matrices satisfy the RIP with high probability even for $n=\log^{O(1)}p$, the explicit construction of such matrices has defied repeated efforts, and the best known approaches hit the so-called $\sqrt{n}$ sparsity bottleneck. The notable exception is the work by Bourgain et al \cite{bourgain2011explicit} constructing an $n\times p$ RIP matrix with sparsity $s=\Theta(n^{{1\over 2}+\epsilon})$, but in the regime $n=\Omega(p^{1-\delta})$. In this short note we resolve this open question in a sense by showing that an explicit construction of a matrix satisfying the RIP in the regime $n=O(\log^2 p)$ and $s=\Theta(n^{1\over 2})$ implies an explicit construction of a three-colored Ramsey graph on $p$ nodes with clique sizes bounded by $O(\log^2 p)$ -- a question in extremal combinatorics which has been open for decades.
[ 0, 0, 0, 1, 0, 0 ]
Title: Rocket Launching: A Universal and Efficient Framework for Training Well-performing Light Net, Abstract: Models applied to real-time response tasks, such as click-through rate (CTR) prediction models, require high accuracy and strict response times. Therefore, top-performing deep models of high depth and complexity are not well suited for these applications, given the limitations on inference time. In order to further improve neural networks' performance under time and computational limitations, we propose an approach that exploits a cumbersome net to help train a lightweight net for prediction. We dub the whole process rocket launching, where the cumbersome booster net is used to guide the learning of the target light net throughout the whole training process. We analyze different loss functions aimed at pushing the light net to behave similarly to the booster net, and adopt the loss with the best performance in our experiments. We use a technique called gradient block to further improve the performance of the light net and the booster net. Experiments on benchmark datasets and real-life industrial advertisement data show that our light model can reach performance previously achievable only with more complex models.
[ 1, 0, 0, 1, 0, 0 ]
Title: Complete Classification of Generalized Santha-Vazirani Sources, Abstract: Let $\mathcal{F}$ be a finite alphabet and $\mathcal{D}$ be a finite set of distributions over $\mathcal{F}$. A Generalized Santha-Vazirani (GSV) source of type $(\mathcal{F}, \mathcal{D})$, introduced by Beigi, Etesami and Gohari (ICALP 2015, SICOMP 2017), is a random sequence $(F_1, \dots, F_n)$ in $\mathcal{F}^n$, where $F_i$ is a sample from some distribution $d \in \mathcal{D}$ whose choice may depend on $F_1, \dots, F_{i-1}$. We show that all GSV source types $(\mathcal{F}, \mathcal{D})$ fall into one of three categories: (1) non-extractable; (2) extractable with error $n^{-\Theta(1)}$; (3) extractable with error $2^{-\Omega(n)}$. This rules out other error rates like $1/\log n$ or $2^{-\sqrt{n}}$. We provide essentially randomness-optimal extraction algorithms for extractable sources. Our algorithm for category (2) sources extracts with error $\varepsilon$ from $n = \mathrm{poly}(1/\varepsilon)$ samples in time linear in $n$. Our algorithm for category (3) sources extracts $m$ bits with error $\varepsilon$ from $n = O(m + \log 1/\varepsilon)$ samples in time $\min\{O(nm2^m),n^{O(\lvert\mathcal{F}\rvert)}\}$. We also give algorithms for classifying a GSV source type $(\mathcal{F}, \mathcal{D})$: Membership in category (1) can be decided in $\mathrm{NP}$, while membership in category (3) is polynomial-time decidable.
[ 1, 0, 0, 0, 0, 0 ]
Title: Hermann Hankel's "On the general theory of motion of fluids", an essay including an English translation of the complete Preisschrift from 1861, Abstract: This is a companion paper to "A contemporary look at Hermann Hankel's 1861 pioneering work on Lagrangian fluid dynamics" by Frisch, Grimberg and Villone (2017). Here we present the English translation of the 1861 prize manuscript from Göttingen University "Zur allgemeinen Theorie der Bewegung der Flüssigkeiten" (On the general theory of the motion of fluids) by Hermann Hankel (1839-1873), which was originally submitted in Latin and then translated into German by the author for publication. We also provide the English translation of two important reports on the manuscript, one written by Bernhard Riemann and the other by Wilhelm Eduard Weber, during the assessment process for the prize. Finally we give a short biography of Hermann Hankel with his complete bibliography.
[ 0, 1, 1, 0, 0, 0 ]
Title: The limit of the Hermitian-Yang-Mills flow on reflexive sheaves, Abstract: In this paper, we study the asymptotic behavior of the Hermitian-Yang-Mills flow on a reflexive sheaf. We prove that the limiting reflexive sheaf is isomorphic to the double dual of the graded sheaf associated to the Harder-Narasimhan-Seshadri filtration; this answers a question by Bando and Siu.
[ 0, 0, 1, 0, 0, 0 ]
Title: Application of backpropagation neural networks to both stages of fingerprinting based WIPS, Abstract: We propose a scheme to employ backpropagation neural networks (BPNNs) for both stages of fingerprinting-based indoor positioning using WLAN/WiFi signal strengths (FWIPS): radio map construction during the offline stage, and localization during the online stage. Given a training radio map (TRM), i.e., a set of coordinate vectors and associated WLAN/WiFi signal strengths of the available access points, a BPNN can be trained to output the expected signal strengths for any input position within the region of interest (BPNN-RM). This can be used to provide a continuous representation of the radio map and to filter, densify or decimate a discrete radio map. Correspondingly, the TRM can also be used to train another BPNN to output the expected position within the region of interest for any input vector of recorded signal strengths and thus carry out localization (BPNN-LA). Key aspects of the design of such artificial neural networks for a specific application are the selection of design parameters such as the number of hidden layers and nodes within the network, and the training procedure. Summarizing extensive numerical simulations based on real measurements in a testbed, we analyze the impact of these design choices on the performance of the BPNN and compare the results in particular to those obtained using the $k$ nearest neighbors ($k$NN) and weighted $k$ nearest neighbors approaches to FWIPS.
[ 1, 0, 0, 1, 0, 0 ]
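The following scikit-learn sketch illustrates the online-stage regression (BPNN-LA) from the abstract above: a multilayer perceptron trained to map received-signal-strength fingerprints to coordinates, compared against a $k$NN baseline. The log-distance path-loss simulator, network sizes and noise level are assumptions made only for this toy example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
aps = rng.uniform(0, 20, size=(6, 2))                 # access point positions (m)

def rss(points, noise=2.0):
    """Toy log-distance path-loss model producing RSS fingerprints (dBm)."""
    d = np.linalg.norm(points[:, None, :] - aps[None, :, :], axis=2)
    return -30 - 20 * np.log10(d + 1e-3) + rng.normal(0, noise, (len(points), len(aps)))

train_xy = rng.uniform(0, 20, size=(400, 2))          # training radio map (TRM)
test_xy = rng.uniform(0, 20, size=(100, 2))
X_train, X_test = rss(train_xy), rss(test_xy)

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=4000, random_state=0)
mlp.fit(X_train, train_xy)
knn = KNeighborsRegressor(n_neighbors=3).fit(X_train, train_xy)

for name, model in [("BPNN-LA (MLP)", mlp), ("3-NN", knn)]:
    err = np.linalg.norm(model.predict(X_test) - test_xy, axis=1)
    print(f"{name}: mean positioning error {err.mean():.2f} m")
```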
Title: Bayesian Bootstraps for Massive Data, Abstract: Recently, two scalable adaptations of the bootstrap have been proposed: the bag of little bootstraps (BLB; Kleiner et al., 2014) and the subsampled double bootstrap (SDB; Sengupta et al., 2016). In this paper, we introduce Bayesian bootstrap analogues to the BLB and SDB that have similar theoretical and computational properties, a strategy to perform lossless inference for a class of functionals of the Bayesian bootstrap, and briefly discuss extensions for Dirichlet Processes.
[ 0, 0, 0, 1, 0, 0 ]
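For context on the abstract above, this is a minimal sketch of the plain (unscaled) Bayesian bootstrap, the building block behind the proposed BLB/SDB analogues: Dirichlet(1, ..., 1) weights over the observations define each posterior draw of a weighted functional. The scalable bag-of-little-bootstraps and subsampled variants themselves are not reproduced here.

```python
import numpy as np

def bayesian_bootstrap(data, statistic, n_draws=2000, seed=0):
    """Plain Bayesian bootstrap: draw Dirichlet(1,...,1) weights over the
    observations and evaluate a weighted statistic for each draw."""
    rng = np.random.default_rng(seed)
    n = len(data)
    weights = rng.dirichlet(np.ones(n), size=n_draws)   # (n_draws, n)
    return np.array([statistic(data, w) for w in weights])

def weighted_mean(x, w):
    return np.sum(w * x)

x = np.random.default_rng(1).normal(loc=3.0, scale=2.0, size=500)
posterior = bayesian_bootstrap(x, weighted_mean)
lo, hi = np.percentile(posterior, [2.5, 97.5])
print(f"posterior mean {posterior.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```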
Title: Show, Adapt and Tell: Adversarial Training of Cross-domain Image Captioner, Abstract: Impressive image captioning results are achieved in domains with plenty of training image and sentence pairs (e.g., MSCOCO). However, transferring to a target domain with significant domain shifts but no paired training data (referred to as cross-domain image captioning) remains largely unexplored. We propose a novel adversarial training procedure to leverage unpaired data in the target domain. Two critic networks are introduced to guide the captioner, namely domain critic and multi-modal critic. The domain critic assesses whether the generated sentences are indistinguishable from sentences in the target domain. The multi-modal critic assesses whether an image and its generated sentence are a valid pair. During training, the critics and captioner act as adversaries -- captioner aims to generate indistinguishable sentences, whereas critics aim at distinguishing them. The assessment improves the captioner through policy gradient updates. During inference, we further propose a novel critic-based planning method to select high-quality sentences without additional supervision (e.g., tags). To evaluate, we use MSCOCO as the source domain and four other datasets (CUB-200-2011, Oxford-102, TGIF, and Flickr30k) as the target domains. Our method consistently performs well on all datasets. In particular, on CUB-200-2011, we achieve 21.8% CIDEr-D improvement after adaptation. Utilizing critics during inference further gives another 4.5% boost.
[ 1, 0, 0, 0, 0, 0 ]
Title: Contego: An Adaptive Framework for Integrating Security Tasks in Real-Time Systems, Abstract: Embedded real-time systems (RTS) are pervasive. Many modern RTS are exposed to unknown security flaws, and threats to RTS are growing in both number and sophistication. However, until recently, cyber-security considerations were an afterthought in the design of such systems. Any security mechanisms integrated into RTS must (a) co-exist with the real-time tasks in the system and (b) operate without impacting the timing and safety constraints of the control logic. We introduce Contego, an approach to integrating security tasks into RTS without affecting temporal requirements. Contego is specifically designed for legacy systems, viz., real-time control systems in which major alterations of the system parameters for constituent tasks are not always feasible. Contego combines the concept of opportunistic execution with hierarchical scheduling to maintain compatibility with legacy systems while still providing flexibility by allowing security tasks to operate in different modes. We also define a metric to measure the effectiveness of such integration. We evaluate Contego using synthetic workloads as well as with an implementation on a realistic embedded platform (an open-source ARM CPU running real-time Linux).
[ 1, 0, 0, 0, 0, 0 ]
Title: Interface currents and magnetization in singlet-triplet superconducting heterostructures: Role of chiral and helical domains, Abstract: Chiral and helical domain walls are generic defects of topological spin-triplet superconductors. We study theoretically the magnetic and transport properties of superconducting singlet-triplet-singlet heterostructure as a function of the phase difference between the singlet leads in the presence of chiral and helical domains inside the spin-triplet region. The local inversion symmetry breaking at the singlet-triplet interface allows the emergence of a static phase-controlled magnetization, and generally yields both spin and charge currents flowing along the edges. The parity of the domain wall number affects the relative orientation of the interface moments and currents, while in some cases the domain walls themselves contribute to spin and charge transport. We demonstrate that singlet-triplet heterostructures are a generic prototype to generate and control non-dissipative spin and charge effects, putting them in a broader class of systems exhibiting spin-Hall, anomalous Hall effects and similar phenomena. Features of the electron transport and magnetic effects at the interfaces can be employed to assess the presence of domains in chiral/helical superconductors.
[ 0, 1, 0, 0, 0, 0 ]
Title: (non)-automaticity of completely multiplicative sequences having negligible many non-trivial prime factors, Abstract: In this article we consider the completely multiplicative sequences $(a_n)_{n \in \mathbf{N}}$ defined on a field $\mathbf{K}$ and satisfying $$\sum_{p| p \leq n, a_p \neq 1, p \in \mathbf{P}}\frac{1}{p}<\infty,$$ where $\mathbf{P}$ is the set of prime numbers. We prove that if such sequences are automatic then they cannot have infinitely many prime numbers $p$ such that $a_{p}\neq 1$. Using this fact, we prove that if a completely multiplicative sequence $(a_n)_{n \in \mathbf{N}}$, vanishing or not, can be written in the form $a_n=b_n\chi_n$ such that $(b_n)_{n \in \mathbf{N}}$ is a non ultimately periodic, completely multiplicative automatic sequence satisfying the above condition, and $(\chi_n)_{n \in \mathbf{N}}$ is a Dirichlet character or a constant sequence, then there exists only one prime number $p$ such that $b_p \neq 1$ or $0$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Timing Aware Dummy Metal Fill Methodology, Abstract: In this paper, we analyzed parasitic coupling capacitance coming from dummy metal fill and its impact on timing. Based on the modeling, we proposed two approaches to minimize the timing impact from dummy metal fill. The first approach applies more spacing between critical nets and metal fill, while the second approach leverages the shielding effects of reference nets. Experimental results show consistent improvement compared to traditional metal fill method.
[ 1, 0, 0, 0, 0, 0 ]
Title: Some results on the existence of t-all-or-nothing transforms over arbitrary alphabets, Abstract: A $(t, s, v)$-all-or-nothing transform is a bijective mapping defined on $s$-tuples over an alphabet of size $v$, which satisfies the condition that the values of any $t$ input co-ordinates are completely undetermined, given only the values of any $s-t$ output co-ordinates. The main question we address in this paper is: for which choices of parameters does a $(t, s, v)$-all-or-nothing transform (AONT) exist? More specifically, if we fix $t$ and $v$, we want to determine the maximum integer $s$ such that a $(t, s, v)$-AONT exists. We mainly concentrate on the case $t=2$ for arbitrary values of $v$, where we obtain various necessary as well as sufficient conditions for existence of these objects. We consider both linear and general (linear or nonlinear) AONT. We also show some connections between AONT, orthogonal arrays and resilient functions.
[ 1, 0, 1, 0, 0, 0 ]
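The sketch below is a brute-force checker for the $(t, s, v)$-AONT property written directly from the definition in the abstract above, applied to an illustrative linear map over $\mathbb{Z}_3$ (not one of the paper's constructions); it shows that the same map can satisfy the property for $t=1$ yet fail it for $t=2$.

```python
from itertools import combinations, product

def is_aont(phi, t, s, v):
    """Brute-force check of the (t, s, v)-AONT property: for every set X of t
    input coordinates and every set Y of s-t output coordinates, the inputs on X
    must be completely undetermined given only the outputs on Y."""
    inputs = list(product(range(v), repeat=s))
    table = [(x, phi(x)) for x in inputs]
    for X in combinations(range(s), t):
        for Y in combinations(range(s), s - t):
            seen = {}
            for x, y in table:
                key = tuple(y[j] for j in Y)
                seen.setdefault(key, set()).add(tuple(x[i] for i in X))
            if any(len(vals) < v ** t for vals in seen.values()):
                return False
    return True

# candidate: the linear map over Z_3 whose matrix has 0 on the diagonal and 1
# elsewhere (an invertible map, used here purely for illustration)
v, s = 3, 3
M = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
phi = lambda x: tuple(sum(M[i][j] * x[j] for j in range(s)) % v for i in range(s))
print("t=1:", is_aont(phi, 1, s, v))   # True: each single input stays hidden
print("t=2:", is_aont(phi, 2, s, v))   # False: pairs of inputs are constrained
```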
Title: Exhaustive Exploration of the Failure-oblivious Computing Search Space, Abstract: High availability of software systems requires automated handling of crashes in the presence of errors. Failure-oblivious computing is one technique that aims to achieve high availability. We note that failure-obliviousness has not been studied in depth yet, and there are very few studies that help understand why failure-oblivious techniques work. In order for failure-oblivious computing to have an impact in practice, we need to deeply understand failure-oblivious behaviors in software. In this paper, we study, design and perform an experiment that analyzes the size and the diversity of failure-oblivious behaviors. Our experiment consists of exhaustively computing the search space of 16 field failures of large-scale open-source Java software. The outcome of this experiment is a much better understanding of what really happens when failure-oblivious computing is used, and this opens new promising research directions.
[ 1, 0, 0, 0, 0, 0 ]
Title: Theoretical Accuracy in Cosmological Growth Estimation, Abstract: We elucidate the importance of the consistent treatment of gravity-model specific non-linearities when estimating the growth of cosmological structures from redshift space distortions (RSD). Within the context of standard perturbation theory (SPT), we compare the predictions of two theoretical templates with redshift space data from COLA (COmoving Lagrangian Acceleration) simulations in the normal branch of DGP gravity (nDGP) and General Relativity (GR). Using COLA for these comparisons is validated using a suite of full N-body simulations for the same theories. The two theoretical templates correspond to the standard general relativistic perturbation equations and those same equations modelled within nDGP. Gravitational clustering non-linear effects are accounted for by modelling the power spectrum up to one loop order and redshift space clustering anisotropy is modelled using the Taruya, Nishimichi and Saito (TNS) RSD model. Using this approach, we attempt to recover the simulation's fiducial logarithmic growth parameter $f$. By assigning the simulation data with errors representing an idealised survey with a volume of $10\mbox{Gpc}^3/h^3$, we find the GR template is unable to recover fiducial $f$ to within 1$\sigma$ at $z=1$ when we match the data up to $k_{\rm max}=0.195h$/Mpc. On the other hand, the DGP template recovers the fiducial value within $1\sigma$. Further, we conduct the same analysis for sets of mock data generated for generalised models of modified gravity using SPT, where again we analyse the GR template's ability to recover the fiducial value. We find that for models with enhanced gravitational non-linearity, the theoretical bias of the GR template becomes significant for stage IV surveys. Thus, we show that for the future large data volume galaxy surveys, the self-consistent modelling of non-GR gravity scenarios will be crucial in constraining theory parameters.
[ 0, 1, 0, 0, 0, 0 ]
Title: Model-Robust Counterfactual Prediction Method, Abstract: We develop a novel method for counterfactual analysis based on observational data using prediction intervals for units under different exposures. Unlike methods that target heterogeneous or conditional average treatment effects of an exposure, the proposed approach aims to take into account the irreducible dispersions of counterfactual outcomes so as to quantify the relative impact of different exposures. The prediction intervals are constructed in a distribution-free and model-robust manner based on the conformal prediction approach. The computational obstacles to this approach are circumvented by leveraging properties of a tuning-free method that learns sparse additive predictor models for counterfactual outcomes. The method is illustrated using both real and synthetic data.
[ 0, 0, 1, 1, 0, 0 ]
Title: Learning with Average Top-k Loss, Abstract: In this work, we introduce the average top-$k$ (AT$_k$) loss as a new aggregate loss for supervised learning, which is the average over the $k$ largest individual losses over a training dataset. We show that the AT$_k$ loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, but can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function over all individual losses, which can lead to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the AT$_k$ loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of MAT$_k$ learning on the classification calibration of the AT$_k$ loss and the error bounds of AT$_k$-SVM. We demonstrate the applicability of minimum average top-$k$ learning for binary classification and regression using synthetic and real datasets.
[ 1, 0, 0, 1, 0, 0 ]
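A minimal numpy sketch of the AT$_k$ aggregate loss defined in the abstract above: the average of the $k$ largest per-sample losses, which reduces to the maximum loss at $k=1$ and to the average loss at $k=n$.

```python
import numpy as np

def average_top_k_loss(individual_losses, k):
    """Average of the k largest individual losses (AT_k aggregate loss)."""
    losses = np.asarray(individual_losses, dtype=float)
    top_k = np.sort(losses)[-k:]          # k largest values
    return top_k.mean()

losses = np.array([0.1, 2.3, 0.4, 1.7, 0.05, 0.9])
print(average_top_k_loss(losses, k=1))            # 2.3 (maximum loss)
print(average_top_k_loss(losses, k=len(losses)))  # 0.908... (average loss)
print(average_top_k_loss(losses, k=3))            # mean of the 3 largest losses
```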
Title: Variable selection for clustering with Gaussian mixture models: state of the art, Abstract: Mixture models have become widely used in clustering, given the probabilistic framework on which they are based. However, for modern databases characterized by their large size, these models behave disappointingly when it comes to setting out the model, making the selection of relevant variables essential for this type of clustering. After recalling the basics of model-based clustering, this article examines variable selection methods for model-based clustering, as well as presenting opportunities for improving these methods.
[ 1, 0, 0, 1, 0, 0 ]
Title: On the essential self-adjointness of singular sub-Laplacians, Abstract: We prove a general essential self-adjointness criterion for sub-Laplacians on complete sub-Riemannian manifolds, defined with respect to singular measures. As a consequence, we show that the intrinsic sub-Laplacian (i.e. defined w.r.t. Popp's measure) is essentially self-adjoint on the equiregular connected components of a sub-Riemannian manifold. This result holds under mild regularity assumptions on the singular region, and when the latter does not contain characteristic points.
[ 0, 0, 1, 0, 0, 0 ]
Title: Tension and chemical efficiency of Myosin-II motors, Abstract: Recent experiments demonstrate that molecular motors from the Myosin II family serve as cross-links inducing active tension in the cytoskeletal network. Here we revise the Brownian ratchet model, previously studied in the context of active transport along polymer tracks, in setups resembling a motor in a polymer network, also taking into account the effect of electrostatic changes in the motor heads. We explore important mechanical quantities and show that such a model is also capable of mechanosensing. Finally, we introduce a novel efficiency based on excess heat production by the chemical cycle which is directly related to the active tension the motor exerts. The chemical efficiencies differ considerably for motors with a different number of heads, while their mechanical properties remain qualitatively similar. For motors with a small number of heads, the chemical efficiency is maximal when they are frustrated, a trait that is not found in larger motors.
[ 0, 1, 0, 0, 0, 0 ]
Title: Token-based Function Computation with Memory, Abstract: In distributed function computation, each node has an initial value and the goal is to compute a function of these values in a distributed manner. In this paper, we propose a novel token-based approach to compute a wide class of target functions, which we refer to as the "Token-based function Computation with Memory" (TCM) algorithm. In this approach, node values are attached to tokens which travel across the network. Each pair of travelling tokens coalesces when they meet, forming a token with a new value that is a function of the original token values. In contrast to the Coalescing Random Walk (CRW) algorithm, where token movement is governed by random walk, the meeting of tokens in our scheme is accelerated by adopting a novel chasing mechanism. We prove that, compared to the CRW algorithm, the TCM algorithm reduces time complexity by a factor of at least $\sqrt{n/\log(n)}$ in Erdös-Renyi and complete graphs, and by a factor of $\log(n)/\log(\log(n))$ in torus networks. Simulation results show that there is at least a constant factor improvement in the message complexity of the TCM algorithm in all considered topologies. The robustness of the CRW and TCM algorithms in the presence of node failure is analyzed. We show that their robustness can be improved by running multiple instances of the algorithms in parallel.
[ 1, 0, 0, 1, 0, 0 ]
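Below is a minimal simulation of token coalescence for distributed function computation on a complete graph, as a rough companion to the abstract above. Movement here is a plain lazy random walk (closer to the CRW baseline); the TCM chasing mechanism and memory are not modeled, and the combining function is assumed commutative and associative (sum, max, ...).

```python
import random

def coalescing_tokens(values, combine, steps=10_000, seed=0):
    """Tokens carry node values and perform a lazy random walk on a complete
    graph; two tokens landing on the same node coalesce via `combine`.
    Returns the final value once a single token remains."""
    rng = random.Random(seed)
    n = len(values)
    tokens = {i: values[i] for i in range(n)}   # token id -> carried value
    position = {i: i for i in range(n)}         # token id -> current node
    for _ in range(steps):
        if len(tokens) == 1:
            return next(iter(tokens.values()))
        for tid in list(tokens):
            position[tid] = rng.randrange(n)    # lazy walk: jump to a uniform node
        occupied = {}
        for tid in list(tokens):
            node = position[tid]
            if node in occupied:                # coalesce with the earlier token
                keep = occupied[node]
                tokens[keep] = combine(tokens[keep], tokens[tid])
                del tokens[tid], position[tid]
            else:
                occupied[node] = tid
    raise RuntimeError("tokens did not coalesce within the step budget")

vals = [3, 1, 4, 1, 5, 9, 2, 6]
print(coalescing_tokens(vals, combine=lambda a, b: a + b))   # sum = 31
print(coalescing_tokens(vals, combine=max))                  # max = 9
```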
Title: Simple property of heterogeneous aspiration dynamics: Beyond weak selection, Abstract: How individuals adapt their behavior in cultural evolution remains elusive. Theoretical studies have shown that the update rules chosen to model individual decision making can dramatically modify the evolutionary outcome of the population as a whole. This hints at the complexities of considering the personality of individuals in a population, where each one uses its own rule. Here, we investigate whether and how heterogeneity in the rules of behavior update alters the evolutionary outcome. We assume that individuals update behaviors by aspiration-based self-evaluation and they do so in their own ways. Under weak selection, we analytically reveal a simple property that holds for any two-strategy multi-player games in well-mixed populations and on regular graphs: the evolutionary outcome in a population with heterogeneous update rules is the weighted average of the outcomes in the corresponding homogeneous populations, and the associated weights are the frequencies of each update rule in the heterogeneous population. Beyond weak selection, we show that this property holds for public goods games. Our finding implies that heterogeneous aspiration dynamics is additive. This additivity greatly reduces the complexity induced by the underlying individual heterogeneity. Our work thus provides an efficient method to calculate evolutionary outcomes under heterogeneous update rules.
[ 0, 0, 0, 0, 1, 0 ]
Title: High quality factor manganese-doped aluminum lumped-element kinetic inductance detectors sensitive to frequencies below 100 GHz, Abstract: Aluminum lumped-element kinetic inductance detectors (LEKIDs) sensitive to millimeter-wave photons have been shown to exhibit high quality factors, making them highly sensitive and multiplexable. The superconducting gap of aluminum limits aluminum LEKIDs to photon frequencies above 100 GHz. Manganese-doped aluminum (Al-Mn) has a tunable critical temperature and could therefore be an attractive material for LEKIDs sensitive to frequencies below 100 GHz if the internal quality factor remains sufficiently high when manganese is added to the film. To investigate, we measured some of the key properties of Al-Mn LEKIDs. A prototype eight-element LEKID array was fabricated using a 40 nm thick film of Al-Mn deposited on a 500 {\mu}m thick high-resistivity, float-zone silicon substrate. The manganese content was 900 ppm, the measured $T_c = 694\pm1$ mK, and the resonance frequencies were near 150 MHz. Using measurements of the forward scattering parameter $S_{21}$ at various bath temperatures between 65 and 250 mK, we determined that the Al-Mn LEKIDs we fabricated have internal quality factors greater than $2 \times 10^5$, which is high enough for millimeter-wave astrophysical observations. In the dark conditions under which these devices were measured, the fractional frequency noise spectrum shows a shallow slope that depends on bath temperature and probe tone amplitude, which could be two-level system noise. The anticipated white photon noise should dominate this level of low-frequency noise when the detectors are illuminated with millimeter-waves in future measurements. The LEKIDs responded to light pulses from a 1550 nm light-emitting diode, and we used these light pulses to determine that the quasiparticle lifetime is 60 {\mu}s.
[ 0, 1, 0, 0, 0, 0 ]
Title: Dark Energy Cosmological Models with General forms of Scale Factor, Abstract: In this paper, we have constructed dark energy models in an anisotropic Bianchi-V space-time and studied the role of anisotropy in the evolution of dark energy. We have considered an anisotropic dark energy fluid with different pressure gradients along different spatial directions. In order to obtain a deterministic solution, we have considered three general forms of the scale factor. The different forms of scale factor considered here produce time-varying deceleration parameters in all cases, which simulates the cosmic transition. The variable equation of state (EoS) parameter and the skewness parameters for all the models are obtained and analyzed. The physical properties of the models are also discussed.
[ 0, 1, 0, 0, 0, 0 ]
Title: Quantum Klein Space and Superspace, Abstract: We give an algebraic quantization, in the sense of quantum groups, of the complex Minkowski space, and we examine the real forms corresponding to the signatures $(3,1)$, $(2,2)$, $(4,0)$, constructing the corresponding quantum metrics and providing an explicit presentation of the quantized coordinate algebras. In particular, we focus on the Kleinian signature $(2,2)$. The quantizations of the complex and real spaces come together with a coaction of the quantizations of the respective symmetry groups. We also extend such quantizations to the $\mathcal{N}=1$ supersetting.
[ 0, 0, 1, 0, 0, 0 ]
Title: Bayesian Lasso Posterior Sampling via Parallelized Measure Transport, Abstract: It is well known that the Lasso can be interpreted as a Bayesian posterior mode estimate with a Laplacian prior. Obtaining samples from the full posterior distribution, the Bayesian Lasso, confers major advantages in performance as compared to having only the Lasso point estimate. Traditionally, the Bayesian Lasso is implemented via Gibbs sampling methods which suffer from lack of scalability, unknown convergence rates, and generation of samples that are necessarily correlated. We provide a measure transport approach to generate i.i.d samples from the posterior by constructing a transport map that transforms a sample from the Laplacian prior into a sample from the posterior. We show how the construction of this transport map can be parallelized into modules that iteratively solve Lasso problems and perform closed-form linear algebra updates. With this posterior sampling method, we perform maximum likelihood estimation of the Lasso regularization parameter via the EM algorithm. We provide comparisons to traditional Gibbs samplers using the diabetes dataset of Efron et al. Lastly, we give an example implementation on a computing system that leverages parallelization, a graphics processing unit, whose execution time has much less dependence on dimension as compared to a standard implementation.
[ 0, 0, 0, 1, 0, 0 ]
Title: Endogeneous Dynamics of Intraday Liquidity, Abstract: In this paper we investigate the endogenous information contained in four liquidity variables at a five-minute time scale on equity markets around the world: the traded volume, the bid-ask spread, the volatility and the volume at the first limits of the orderbook. In the spirit of Granger causality, we measure the level of information by the level of accuracy of linear autoregressive models. This empirical study is carried out on a dataset of more than 300 stocks from four different markets (US, UK, Japan and Hong Kong) over a period of more than five years. We discuss the obtained performance of autoregressive (AR) models on stationarized versions of the variables, focusing on explaining the observed differences between stocks. Since empirical studies are often conducted at this time scale, we believe it is of paramount importance to document endogenous dynamics in a simple framework with no addition of supplemental information. Our study can hence be used as a benchmark to identify exogenous effects. On the other hand, most optimal trading frameworks (like the celebrated Almgren and Chriss one) focus on computing an optimal trading speed at a frequency close to the one we consider. Such frameworks very often make i.i.d. assumptions on liquidity variables; this paper documents the autocorrelations emerging from real data, opening the door to new developments in optimal trading.
[ 0, 0, 0, 0, 0, 1 ]
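A minimal sketch of the kind of measurement described in the abstract above: fit a linear AR($p$) model to a stationarized liquidity series by least squares and use the in-sample $R^2$ as the level of endogenous information. The synthetic AR(1) series, the lag order and the in-sample evaluation are simplifying assumptions for illustration only.

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares AR(p) fit; returns coefficients and in-sample R^2."""
    y = series[p:]
    X = np.column_stack([series[p - k:-k] for k in range(1, p + 1)])
    X = np.column_stack([np.ones(len(y)), X])       # add an intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1 - resid.var() / y.var()
    return beta, r2

# synthetic "stationarized spread" with mild autocorrelation (AR(1), phi = 0.4)
rng = np.random.default_rng(0)
eps = rng.normal(size=5000)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.4 * x[t - 1] + eps[t]

beta, r2 = fit_ar(x, p=5)
print("intercept and AR coefficients:", np.round(beta, 3))
print(f"in-sample R^2: {r2:.3f}  (theoretical phi^2 = 0.16 for this AR(1))")
```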
Title: An analysis of the SPARSEVA estimate for the finite sample data case, Abstract: In this paper, we develop an upper bound for the SPARSEVA (SPARSe Estimation based on a VAlidation criterion) estimation error in a general scheme, i.e., when the cost function is strongly convex and the regularized norm is decomposable for a pair of subspaces. We show how this general bound can be applied to a sparse regression problem to obtain an upper bound for the traditional SPARSEVA problem. Numerical results are used to illustrate the effectiveness of the suggested bound.
[ 0, 0, 1, 1, 0, 0 ]
Title: Rigorous estimates for the relegation algorithm, Abstract: We revisit the relegation algorithm by Deprit et al. (Celest. Mech. Dyn. Astron. 79:157-182, 2001) in the light of rigorous Nekhoroshev-like theory. This relatively recent algorithm is nowadays widely used for implementing closed-form analytic perturbation theories, as it generalises the classical Birkhoff normalisation algorithm. The algorithm, here briefly explained by means of Lie transformations, has so far been introduced and used in a formal way, i.e. without providing any rigorous convergence or asymptotic estimates. The overall aim of this paper is to find such quantitative estimates and to show how results about stability over exponentially long times can be recovered in a simple and effective way, at least in the non-resonant case.
[ 0, 1, 0, 0, 0, 0 ]
Title: Linear Pentapods with a Simple Singularity Variety, Abstract: There exists a bijection between the configuration space of a linear pentapod and all points $(u,v,w,p_x,p_y,p_z)\in\mathbb{R}^{6}$ located on the singular quadric $\Gamma: u^2+v^2+w^2=1$, where $(u,v,w)$ determines the orientation of the linear platform and $(p_x,p_y,p_z)$ its position. Then the set of all singular robot configurations is obtained by intersecting $\Gamma$ with a cubic hypersurface $\Sigma$ in $\mathbb{R}^{6}$, which is only quadratic in the orientation variables and position variables, respectively. This article investigates the restrictions to be imposed on the design of this mechanism in order to obtain a reduction in degree. In detail we study the cases where $\Sigma$ is (1) linear in position variables, (2) linear in orientation variables and (3) quadratic in total. The resulting designs of linear pentapods have the advantage of considerably simplified computation of singularity-free spheres in the configuration space. Finally we propose three kinematically redundant designs of linear pentapods with a simple singularity surface.
[ 1, 0, 0, 0, 0, 0 ]
Title: Neural Networks as Interacting Particle Systems: Asymptotic Convexity of the Loss Landscape and Universal Scaling of the Approximation Error, Abstract: Neural networks, a central tool in machine learning, have demonstrated remarkable, high fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high dimensional functions, potentially of great use in computational and applied mathematics. That said, there are few rigorous results about the representation error and trainability of neural networks. Here we characterize both the error and the scaling of the error with the size of the network by reinterpreting the standard optimization algorithm used in machine learning applications, stochastic gradient descent, as the evolution of a particle system with interactions governed by a potential related to the objective or "loss" function used to train the network. We show that, when the number $n$ of parameters is large, the empirical distribution of the particles descends on a convex landscape towards a minimizer at a rate independent of $n$. We establish a Law of Large Numbers and a Central Limit Theorem for the empirical distribution, which together show that the approximation error of the network universally scales as $O(n^{-1})$. Remarkably, these properties do not depend on the dimensionality of the domain of the function that we seek to represent. Our analysis also quantifies the scale and nature of the noise introduced by stochastic gradient descent and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train neural network to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in as high a dimension as $d=25$.
[ 0, 0, 0, 1, 0, 0 ]
Title: Images of Ideals under Derivations and $\mathcal E$-Derivations of Univariate Polynomial Algebras over a Field of Characteristic Zero, Abstract: Let $K$ be a field of characteristic zero and $x$ a free variable. A $K$-$\mathcal E$-derivation of $K[x]$ is a $K$-linear map of the form $\operatorname{I}-\phi$ for some $K$-algebra endomorphism $\phi$ of $K[x]$, where $\operatorname{I}$ denotes the identity map of $K[x]$. In this paper we study the image of an ideal of $K[x]$ under some $K$-derivations and $K$-$\mathcal E$-derivations of $K[x]$. We show that the LFED conjecture proposed in [Z4] holds for all $K$-$\mathcal E$-derivations and all locally finite $K$-derivations of $K[x]$. We also show that the LNED conjecture proposed in [Z4] holds for all locally nilpotent $K$-derivations of $K[x]$, and also for all locally nilpotent $K$-$\mathcal E$-derivations of $K[x]$ and the ideals $uK[x]$ such that either $u=0$, or $\operatorname{deg}\, u\le 1$, or $u$ has at least one repeated root in the algebraic closure of $K$. As a bi-product, the homogeneous Mathieu subspaces (Mathieu-Zhao spaces) of the univariate polynomial algebra over an arbitrary field have also been classified.
[ 0, 0, 1, 0, 0, 0 ]
Title: Cloud-based Deep Learning of Big EEG Data for Epileptic Seizure Prediction, Abstract: Developing a Brain-Computer Interface~(BCI) for seizure prediction can help epileptic patients have a better quality of life. However, there are many difficulties and challenges in developing such a system as a real-life support for patients. Because of the nonstationary nature of EEG signals, normal and seizure patterns vary across different patients. Thus, finding a group of manually extracted features for the prediction task is not practical. Moreover, when using implanted electrodes for brain recording, massive amounts of data are produced. This big data calls for safe storage and high computational resources for real-time processing. To address these challenges, a cloud-based BCI system for the analysis of this big EEG data is presented. First, a dimensionality-reduction technique is developed to increase classification accuracy as well as to decrease the communication bandwidth and computation time. Second, following a deep-learning approach, a stacked autoencoder is trained in two steps for unsupervised feature extraction and classification. Third, a cloud-computing solution is proposed for real-time analysis of big EEG data. The results on a benchmark clinical dataset illustrate the superiority of the proposed patient-specific BCI as an alternative method and its expected usefulness in real-life support of epilepsy patients.
[ 1, 0, 0, 1, 0, 0 ]
Title: Anharmonicity and the isotope effect in superconducting lithium at high pressures: a first-principles approach, Abstract: Recent experiments [Schaeffer 2015] have shown that lithium presents an extremely anomalous isotope effect in the 15-25 GPa pressure range. In this article we have calculated the anharmonic phonon dispersion of $\mathrm{^7Li}$ and $\mathrm{^6Li}$ under pressure, their superconducting transition temperatures, and the associated isotope effect. We have found a huge anharmonic renormalization of a transverse acoustic soft mode along $\Gamma$K in the fcc phase, the expected structure at the pressure range of interest. In fact, the anharmonic correction dynamically stabilizes the fcc phase above 25 GPa. However, we have not found any anomalous scaling of the superconducting temperature with the isotopic mass. Additionally, we have also analyzed whether the two lithium isotopes adopting different structures could explain the observed anomalous behavior. According to our enthalpy calculations including zero-point motion and anharmonicity it would not be possible in a stable regime.
[ 0, 1, 0, 0, 0, 0 ]
Title: Deep Bayesian Active Learning with Image Data, Abstract: Even though active learning forms an important pillar of machine learning, deep learning tools are not prevalent within it. Deep learning poses several difficulties when used in an active learning setting. First, active learning (AL) methods generally rely on being able to learn and update models from small amounts of data. Recent advances in deep learning, on the other hand, are notorious for their dependence on large amounts of data. Second, many AL acquisition functions rely on model uncertainty, yet deep learning methods rarely represent such model uncertainty. In this paper we combine recent advances in Bayesian deep learning into the active learning framework in a practical way. We develop an active learning framework for high dimensional data, a task which has been extremely challenging so far, with very sparse existing literature. Taking advantage of specialised models such as Bayesian convolutional neural networks, we demonstrate our active learning techniques with image data, obtaining a significant improvement on existing active learning approaches. We demonstrate this on both the MNIST dataset, as well as for skin cancer diagnosis from lesion images (ISIC2016 task).
[ 1, 0, 0, 1, 0, 0 ]
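The numpy sketch below illustrates the acquisition step of Bayesian active learning with Monte Carlo dropout, in the spirit of the abstract above: given $T$ stochastic forward passes over a pool, score each point by predictive entropy and by BALD (mutual information) and query the top scorers. The random "probabilities" here stand in for a Bayesian CNN's outputs.

```python
import numpy as np

def acquisition_scores(probs):
    """probs: (T, N, C) class probabilities from T stochastic forward passes
    (MC dropout). Returns predictive entropy and BALD scores per pool point."""
    mean_p = probs.mean(axis=0)                                   # (N, C)
    pred_entropy = -np.sum(mean_p * np.log(mean_p + 1e-12), axis=1)
    exp_entropy = -np.mean(np.sum(probs * np.log(probs + 1e-12), axis=2), axis=0)
    bald = pred_entropy - exp_entropy           # mutual information I(y; w | x)
    return pred_entropy, bald

# stand-in for a Bayesian CNN: 20 dropout passes, 1000 pool points, 10 classes
rng = np.random.default_rng(0)
logits = rng.normal(size=(20, 1000, 10)) + rng.normal(size=(1, 1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=2, keepdims=True)

entropy, bald = acquisition_scores(probs)
query = np.argsort(-bald)[:10]                  # 10 most informative pool points
print("points to label next:", query)
```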
Title: Robust Optical Flow Estimation in Rainy Scenes, Abstract: Optical flow estimation in the rainy scenes is challenging due to background degradation introduced by rain streaks and rain accumulation effects in the scene. Rain accumulation effect refers to poor visibility of remote objects due to the intense rainfall. Most existing optical flow methods are erroneous when applied to rain sequences because the conventional brightness constancy constraint (BCC) and gradient constancy constraint (GCC) generally break down in this situation. Based on the observation that the RGB color channels receive raindrop radiance equally, we introduce a residue channel as a new data constraint to reduce the effect of rain streaks. To handle rain accumulation, our method decomposes the image into a piecewise-smooth background layer and a high-frequency detail layer. It also enforces the BCC on the background layer only. Results on both synthetic dataset and real images show that our algorithm outperforms existing methods on different types of rain sequences. To our knowledge, this is the first optical flow method specifically dealing with rain.
[ 1, 0, 0, 0, 0, 0 ]
Title: Thermophysical Phenomena in Metal Additive Manufacturing by Selective Laser Melting: Fundamentals, Modeling, Simulation and Experimentation, Abstract: Among the many additive manufacturing (AM) processes for metallic materials, selective laser melting (SLM) is arguably the most versatile in terms of its potential to realize complex geometries along with tailored microstructure. However, the complexity of the SLM process, and the need for predictive relation of powder and process parameters to the part properties, demands further development of computational and experimental methods. This review addresses the fundamental physical phenomena of SLM, with a special emphasis on the associated thermal behavior. Simulation and experimental methods are discussed according to three primary categories. First, macroscopic approaches aim to answer questions at the component level and consider for example the determination of residual stresses or dimensional distortion effects prevalent in SLM. Second, mesoscopic approaches focus on the detection of defects such as excessive surface roughness, residual porosity or inclusions that occur at the mesoscopic length scale of individual powder particles. Third, microscopic approaches investigate the metallurgical microstructure evolution resulting from the high temperature gradients and extreme heating and cooling rates induced by the SLM process. Consideration of physical phenomena on all of these three length scales is mandatory to establish the understanding needed to realize high part quality in many applications, and to fully exploit the potential of SLM and related metal AM processes.
[ 1, 1, 0, 0, 0, 0 ]
Title: Numerical Methods for Pulmonary Image Registration, Abstract: Due to the complexity and invisibility of human organs, diagnosticians need to analyze medical images to determine where the lesion region is and which kind of disease it indicates, in order to make precise diagnoses. For satisfying clinical purposes through analyzing medical images, registration plays an essential role. For instance, in Image-Guided Interventions (IGI) and computer-aided surgeries, patient anatomy is registered to preoperative images to guide surgeons in completing procedures. Medical image registration is also very useful in surgical planning, monitoring disease progression and atlas construction. Due to this significance, the theories, methods, and implementation of image registration constitute fundamental knowledge in educational training for medical specialists. In this chapter, we focus on image registration of a specific human organ, namely the lung, which is prone to lesions. For pulmonary image registration, improving the accuracy, and determining how to achieve it for clinical purposes, is an important problem that should be seriously addressed. In this chapter, we provide a survey focused on the role of image registration in educational training, together with the state of the art of pulmonary image registration. In the first part, we describe clinical applications of image registration, introducing artificial organs in Simulation-based Education. In the second part, we summarize the common methods used in pulmonary image registration and analyze popular papers to obtain a survey of pulmonary image registration.
[ 0, 1, 0, 0, 0, 0 ]
Title: Topology Adaptive Graph Convolutional Networks, Abstract: Spectral graph convolutional neural networks (CNNs) require approximation to the convolution to alleviate the computational complexity, resulting in performance loss. This paper proposes the topology adaptive graph convolutional network (TAGCN), a novel graph convolutional network defined in the vertex domain. We provide a systematic way to design a set of fixed-size learnable filters to perform convolutions on graphs. The topologies of these filters are adaptive to the topology of the graph when they scan the graph to perform convolution. The TAGCN not only inherits the properties of convolutions in CNN for grid-structured data, but it is also consistent with convolution as defined in graph signal processing. Since no approximation to the convolution is needed, TAGCN exhibits better performance than existing spectral CNNs on a number of data sets and is also computationally simpler than other recent methods.
[ 1, 0, 0, 1, 0, 0 ]
Title: Far-from-equilibrium transport of excited carriers in nanostructures, Abstract: Transport of charged carriers in regimes of strong non-equilibrium is critical in a wide array of applications ranging from solar energy conversion and semiconductor devices to quantum information. Plasmonic hot-carrier science brings this regime of transport physics to the forefront since photo-excited carriers must be extracted far from equilibrium to harvest their energy efficiently. Here, we present a theoretical and computational framework, Non-Equilibrium Scattering in Space and Energy (NESSE), to predict the spatial evolution of carrier energy distributions that combines the best features of phase-space (Boltzmann) and particle-based (Monte Carlo) methods. Within the NESSE framework, we bridge first-principles electronic structure predictions of plasmon decay and carrier collision integrals at the atomic scale, with electromagnetic field simulations at the nano- to mesoscale. Finally, we apply NESSE to predict spatially-resolved energy distributions of photo-excited carriers that impact the surface of experimentally realizable plasmonic nanostructures, enabling first-principles design of hot carrier devices.
[ 0, 1, 0, 0, 0, 0 ]
Title: On annihilators of bounded $(\frak g, \frak k)$-modules, Abstract: Let $\frak g$ be a semisimple Lie algebra and $\frak k\subset\frak g$ be a reductive subalgebra. We say that a $\frak g$-module $M$ is a bounded $(\frak g, \frak k)$-module if $M$ is a direct sum of simple finite-dimensional $\frak k$-modules and the multiplicities of all simple $\frak k$-modules in that direct sum are universally bounded. The goal of this article is to show that the "boundedness" property for a simple $(\frak g, \frak k)$-module $M$ is equivalent to a property of the associated variety of the annihilator of $M$ (this is the closure of a nilpotent coadjoint orbit inside $\frak g^*$) under the assumption that the main field is algebraically closed and of characteristic 0. In particular this implies that if $M_1, M_2$ are simple $(\frak g, \frak k)$-modules such that $M_1$ is bounded and the associated varieties of the annihilators of $M_1$ and $M_2$ coincide then $M_2$ is also bounded. This statement is a geometric analogue of a purely algebraic fact due to I. Penkov and V. Serganova and it was posed as a conjecture in my Ph.D. thesis.
[ 0, 0, 1, 0, 0, 0 ]
Title: Discrete Time-Crystalline Order in Cavity and Circuit QED Systems, Abstract: Discrete time crystals are a recently proposed and experimentally observed out-of-equilibrium dynamical phase of Floquet systems, where the stroboscopic evolution of a local observable repeats itself at an integer multiple of the driving period. We address the realization of this phase in a driven-dissipative setup, focusing on the modulated open Dicke model, which can be implemented by cavity or circuit QED systems. In the thermodynamic limit, we employ semiclassical approaches and find rich dynamical phases on top of the discrete time-crystalline order. In a deep quantum regime with few qubits, we find clear signatures of a transient discrete time-crystalline behavior, which is absent in the isolated counterpart. We establish a phenomenology of dissipative discrete time crystals by generalizing the Landau theory of phase transitions to Floquet open systems.
[ 0, 1, 0, 0, 0, 0 ]
Title: Relaxation of p-growth integral functionals under space-dependent differential constraints, Abstract: A representation formula for the relaxation of integral energies $$(u,v)\mapsto\int_{\Omega} f(x,u(x),v(x))\,dx,$$ is obtained, where $f$ satisfies $p$-growth assumptions, $1<p<+\infty$, and the fields $v$ are subjected to space-dependent first order linear differential constraints in the framework of $\mathscr{A}$-quasiconvexity with variable coefficients.
[ 0, 0, 1, 0, 0, 0 ]
Title: Magma oceans and enhanced volcanism on TRAPPIST-1 planets due to induction heating, Abstract: Low-mass M stars are plentiful in the Universe and often host small, rocky planets detectable with the current instrumentation. Recently, seven small planets have been discovered orbiting the ultracool dwarf TRAPPIST-1 (Gillon et al. 2016, 2017). We examine the role of electromagnetic induction heating of these planets, caused by the star's rotation and the planet's orbital motion. If the stellar rotation and magnetic dipole axes are inclined with respect to each other, induction heating can melt the upper mantle and enormously increase volcanic activity, sometimes producing a magma ocean below the planetary surface. We show that induction heating leads the three innermost planets, one of which is in the habitable zone, to either evolve towards a molten mantle planet, or to experience increased outgassing and volcanic activity, while the four outermost planets remain mostly unaffected.
[ 0, 1, 0, 0, 0, 0 ]
Title: Buildup of Speaking Skills in an Online Learning Community: A Network-Analytic Exploration, Abstract: In this study, we explore peer-interaction effects in online networks on speaking skill development. In particular, we present evidence for a gradual buildup of skills in a small-group setting that has not been reported in the literature. We introduce a novel dataset of six online communities consisting of 158 participants focusing on improving their speaking skills. They video-record speeches for 5 prompts in 10 days and exchange comments and performance-ratings with their peers. We ask (i) whether the participants' ratings are affected by their interaction patterns with peers, and (ii) whether there is any gradual buildup of speaking skills in the communities towards homogeneity. To analyze the data, we employ tools from the emerging field of Graph Signal Processing (GSP). GSP enjoys a distinction from Social Network Analysis in that the latter is concerned primarily with the connection structures of graphs, while the former studies signals on top of graphs. We study the performance ratings of the participants as graph signals atop underlying interaction topologies. Total variation analysis of the graph signals shows that the participants' rating differences decrease with time (slope=-0.04, p<0.01), while average ratings increase (slope=0.07, p<0.05)--thereby gradually building up the ratings towards community-wide homogeneity. We provide evidence for peer-influence through a prediction formulation. Our consensus-based prediction model outperforms baseline network-agnostic regression models by about 23% in predicting performance ratings. This, in turn, shows that participants' ratings are affected by their peers' ratings and the associated interaction patterns, corroborating previous findings. Then, we formulate a consensus-based diffusion model that captures these observations of peer-influence from our analyses.
[ 1, 0, 0, 0, 0, 0 ]
Title: A Note on Kaldi's PLDA Implementation, Abstract: Some explanations of Kaldi's PLDA implementation to make the formula derivations easier to follow.
[ 0, 0, 0, 1, 0, 0 ]
Title: Magnetic field--induced modification of selection rules for Rb D$_2$ line monitored by selective reflection from a vapor nanocell, Abstract: Magnetic field-induced giant modification of the probabilities of five transitions of $5S_{1/2}, F_g=2 \rightarrow 5P_{3/2}, F_e=4$ of $^{85}$Rb and three transitions of $5S_{1/2}, F_g=1 \rightarrow 5P_{3/2}, F_e=3$ of $^{87}$Rb forbidden by selection rules for zero magnetic field has been observed experimentally and described theoretically for the first time. For the case of excitation with circularly-polarized ($\sigma^+$) laser radiation, the probability of $F_g=2, ~m_F=-2 \rightarrow F_e=4, ~m_F=-1$ transition becomes the largest among the seventeen transitions of $^{85}$Rb $F_g=2 \rightarrow F_e=1,2,3,4$ group, and the probability of $F_g=1,~m_F=-1 \rightarrow F_e=3,~m_F=0$ transition becomes the largest among the nine transitions of $^{87}$Rb $F_g=1 \rightarrow F_e=0,1,2,3$ group, in a wide range of magnetic field 200 -- 1000 G. Complete frequency separation of individual Zeeman components was obtained by implementation of derivative selective reflection technique with a 300 nm-thick nanocell filled with Rb, allowing formation of narrow optical resonances. Possible applications are addressed. The theoretical model is perfectly consistent with the experimental results.
[ 0, 1, 0, 0, 0, 0 ]
Title: Twists of quantum Borel algebras, Abstract: We classify Drinfeld twists for the quantum Borel subalgebra u_q(b) in the Frobenius-Lusztig kernel u_q(g), where g is a simple Lie algebra over C and q an odd root of unity. More specifically, we show that alternating forms on the character group of the group of grouplikes for u_q(b) generate all twists for u_q(b), under a certain algebraic group action. This implies a simple classification of Hopf algebras whose categories of representations are tensor equivalent to that of u_q(b). We also show that cocycle twists for the corresponding De Concini-Kac algebra are in bijection with alternating forms on the aforementioned character group.
[ 0, 0, 1, 0, 0, 0 ]
Title: Uniqueness of the von Neumann continuous factor, Abstract: For a division ring $D$, denote by $\mathcal M_D$ the $D$-ring obtained as the completion of the direct limit $\varinjlim_n M_{2^n}(D)$ with respect to the metric induced by its unique rank function. We prove that, for any ultramatricial $D$-ring $\mathcal B$ and any non-discrete extremal pseudo-rank function $N$ on $\mathcal B$, there is an isomorphism of $D$-rings $\overline{\mathcal B} \cong \mathcal M_D$, where $\overline{\mathcal B}$ stands for the completion of $\mathcal B$ with respect to the pseudo-metric induced by $N$. This generalizes a result of von Neumann. We also show a corresponding uniqueness result for $*$-algebras over fields $F$ with positive definite involution, where the algebra $\mathcal M_F$ is endowed with its natural involution coming from the $*$-transpose involution on each of the factors $M_{2^n}(F)$.
[ 0, 0, 1, 0, 0, 0 ]
Title: A hybrid finite volume -- finite element method for bulk--surface coupled problems, Abstract: The paper develops a hybrid method for solving a system of advection--diffusion equations in a bulk domain coupled to advection--diffusion equations on an embedded surface. A monotone nonlinear finite volume method for equations posed in the bulk is combined with a trace finite element method for equations posed on the surface. In our approach, the surface is not fitted by the mesh and is allowed to cut through the background mesh in an arbitrary way. Moreover, a triangulation of the surface into regular shaped elements is not required. The background mesh is an octree grid with cubic cells. As an example of an application, we consider the modeling of contaminant transport in fractured porous media. One standard model leads to a coupled system of advection--diffusion equations in a bulk (matrix) and along a surface (fracture). A series of numerical experiments with both steady and unsteady problems and different embedded geometries illustrate the numerical properties of the hybrid approach. The method demonstrates great flexibility in handling curvilinear or branching lower dimensional embedded structures.
[ 0, 1, 1, 0, 0, 0 ]
Title: From Quenched Disorder to Continuous Time Random Walk, Abstract: This work focuses on the quantitative representation of transport in systems with quenched disorder. An explicit mapping of the quenched trap model to the continuous time random walk is presented. A linear temporal transformation, $t\to t/\Lambda^{1/\alpha}$, for a transient process on a translationally invariant lattice in the sub-diffusive regime is sufficient for the asymptotic mapping. The exact form of the constant $\Lambda^{1/\alpha}$ is established. The disorder-averaged position probability density function for the quenched trap model is obtained, and analytic expressions for the diffusion coefficient and drift are provided.
[ 0, 1, 0, 0, 0, 0 ]
Title: Network Flows that Solve Least Squares for Linear Equations, Abstract: This paper presents a first-order {distributed continuous-time algorithm} for computing the least-squares solution to a linear equation over networks. Given the uniqueness of the solution, with nonintegrable and diminishing step size, convergence results are provided for fixed graphs. The exact rate of convergence is also established for various types of step size choices falling into that category. For the case where non-unique solutions exist, convergence to one such solution is proved for constantly connected switching graphs with square integrable step size, and for uniformly jointly connected switching graphs under the boundedness assumption on system states. Validation of the results and illustration of the impact of step size on the convergence speed are made using a few numerical examples.
[ 1, 0, 0, 0, 0, 0 ]
Title: Bipartite Envy-Free Matching, Abstract: Bipartite Envy-Free Matching (BEFM) is a relaxation of perfect matching. In a bipartite graph with parts X and Y, a BEFM is a matching of some vertices in X to some vertices in Y, such that each unmatched vertex in X is not adjacent to any matched vertex in Y (so the unmatched vertices do not "envy" the matched ones). The empty matching is always a BEFM. This paper presents sufficient and necessary conditions for the existence of a non-empty BEFM. These conditions are based on cardinality of neighbor-sets, similarly to Hall's condition for the existence of a perfect matching. The conditions can be verified in polynomial time, and in case they are satisfied, a non-empty BEFM can be found by a polynomial-time algorithm. The paper presents some applications of BEFM as a subroutine in fair division algorithms.
[ 1, 0, 0, 0, 0, 0 ]
Title: Hierarchical star formation across the grand design spiral NGC1566, Abstract: We investigate how star formation is spatially organized in the grand-design spiral NGC 1566 from deep HST photometry with the Legacy ExtraGalactic UV Survey (LEGUS). Our contour-based clustering analysis reveals 890 distinct stellar conglomerations at various levels of significance. These star-forming complexes are organized in a hierarchical fashion with the larger congregations consisting of smaller structures, which themselves fragment into even smaller and more compact stellar groupings. Their size distribution, covering a wide range in length-scales, shows a power-law as expected from scale-free processes. We explain this shape with a simple "fragmentation and enrichment" model. The hierarchical morphology of the complexes is confirmed by their mass--size relation which can be represented by a power-law with a fractional exponent, analogous to that determined for fractal molecular clouds. The surface stellar density distribution of the complexes shows a log-normal shape similar to that for supersonic non-gravitating turbulent gas. Between 50 and 65 per cent of the recently-formed stars, as well as about 90 per cent of the young star clusters, are found inside the stellar complexes, located along the spiral arms. We find an age-difference between young stars inside the complexes and those in their direct vicinity in the arms of at least 10 Myr. This timescale may relate to the minimum time for stellar evaporation, although we cannot exclude the in situ formation of stars. As expected, star formation preferentially occurs in spiral arms. Our findings reveal turbulent-driven hierarchical star formation along the arms of a grand-design galaxy.
[ 0, 1, 0, 0, 0, 0 ]
Title: Certifying coloring algorithms for graphs without long induced paths, Abstract: Let $P_k$ be a path, $C_k$ a cycle on $k$ vertices, and $K_{k,k}$ a complete bipartite graph with $k$ vertices on each side of the bipartition. We prove that (1) for any integers $k, t>0$ and a graph $H$ there are finitely many subgraph minimal graphs with no induced $P_k$ and $K_{t,t}$ that are not $H$-colorable and (2) for any integer $k>4$ there are finitely many subgraph minimal graphs with no induced $P_k$ that are not $C_{k-2}$-colorable. The former generalizes the result of Hell and Huang [Complexity of coloring graphs without paths and cycles, Discrete Appl. Math. 216: 211--232 (2017)] and the latter extends a result of Bruce, Hoang, and Sawada [A certifying algorithm for 3-colorability of $P_5$-Free Graphs, ISAAC 2009: 594--604]. Both our results lead to polynomial-time certifying algorithms for the corresponding coloring problems.
[ 1, 0, 0, 0, 0, 0 ]
Title: Efficient Estimation for Dimension Reduction with Censored Data, Abstract: We propose a general index model for survival data, which generalizes many commonly used semiparametric survival models and belongs to the framework of dimension reduction. Using a combination of a geometric approach in semiparametrics and a martingale treatment in survival data analysis, we devise estimation procedures that are feasible and do not require covariate-independent censoring as assumed in many dimension reduction methods for censored survival data. We establish the root-$n$ consistency and asymptotic normality of the proposed estimators and derive the most efficient estimator in this class for the general index model. Numerical experiments are carried out to demonstrate the empirical performance of the proposed estimators, and an application to an AIDS data set further illustrates the usefulness of the work.
[ 0, 0, 1, 1, 0, 0 ]
Title: Subdifferential characterization of probability functions under Gaussian distribution, Abstract: Probability functions figure prominently in optimization problems of engineering. They may be nonsmooth even if all input data are smooth. This fact motivates the consideration of subdifferentials for such typically just continuous functions. The aim of this paper is to provide subdifferential formulae in the case of Gaussian distributions for possibly infinite-dimensional decision variables and nonsmooth (locally Lipschitzian) input data. These formulae are based on the spheric-radial decomposition of Gaussian random vectors on the one hand and on a cone of directions of moderate growth on the other. By successively adding additional hypotheses, conditions are established under which the probability function is locally Lipschitzian or even differentiable.
[ 0, 0, 1, 0, 0, 0 ]
Title: EPTL - A temporal logic for weakly consistent systems, Abstract: The high availability and scalability of weakly consistent systems attract system designers. Yet, writing correct application code for this type of system is difficult; even how to specify the intended behavior of such systems is still an open question. No standard method has been established for specifying the intended dynamic behavior of a weakly consistent system. There exist specifications of various consistency models for distributed and concurrent systems, and the semantics of replicated datatypes like CRDTs have been specified in axiomatic and operational models based on visibility relations. In this paper, we present a temporal logic, EPTL, that is tailored to specifying properties of weakly consistent systems. In contrast to LTL and CTL, EPTL takes into account that operations of weakly consistent systems are in many cases not serializable and have to be treated accordingly to capture the behavior. We embed our temporal logic in Isabelle/HOL and can thereby leverage strong semi-automatic proving capabilities.
[ 1, 0, 0, 0, 0, 0 ]
Title: Absence of chaos in Digital Memcomputing Machines with solutions, Abstract: Digital memcomputing machines (DMMs) are non-linear dynamical systems designed so that their equilibrium points are solutions of the Boolean problem they solve. In a previous work [Chaos 27, 023107 (2017)] it was argued that when DMMs support solutions of the associated Boolean problem then strange attractors cannot coexist with such equilibria. In this work, we prove this conjecture. In particular, we show that both topological transitivity and the strongest property of topological mixing are inconsistent with the point dissipative property of DMMs when equilibrium points are present. This is true for both the whole phase space and the global attractor. Absence of topological transitivity is enough to imply absence of chaotic behavior. In a similar vein, we prove that if DMMs do not have equilibrium points, the only attractors present are invariant tori/periodic orbits with periods that may possibly increase with system size (quasi-attractors).
[ 1, 1, 0, 0, 0, 0 ]
Title: Fine-resolution analysis of exoplanetary distributions by wavelets: hints of an overshooting iceline accumulation, Abstract: We investigate 1D exoplanetary distributions using a novel analysis algorithm based on the continuous wavelet transform. The analysis pipeline includes an estimation of the wavelet transform of the probability density function (p.d.f.) without pre-binning, use of optimized wavelets, a rigorous significance testing of the patterns revealed in the p.d.f., and an optimized minimum-noise reconstruction of the p.d.f. via matching pursuit iterations. In the distribution of orbital periods, $P$, our analysis revealed a narrow subfamily of exoplanets within the broad family of "warm jupiters", or massive giants with $P\gtrsim 300$~d, which are often deemed to be related to the iceline accumulation in a protoplanetary disk. We detected a p.d.f. pattern that represents an upturn followed by an overshooting peak spanning $P\sim 300-600$~d, right beyond the "period valley". It is separated from the other planets by p.d.f. concavities on both sides. It has at least two-sigma significance. In the distribution of planet radii, $R$, and using the properly cleaned California Kepler Survey sample, we confirm the hints of a bimodality with two peaks at about $R=1.3 R_\oplus$ and $R=2.4 R_\oplus$, and the "evaporation valley" between them. However, we obtain just a modest significance for this pattern, two sigma only at best. Besides, our follow-up application of the Hartigan & Hartigan dip test for unimodality returns a $3$ per cent false alarm probability (merely $2.2$-sigma significance), contrary to $0.14$ per cent (or $3.2$-sigma), as claimed by Fulton et al. (2017).
[ 0, 1, 0, 0, 0, 0 ]
Title: Meta-Learning MCMC Proposals, Abstract: Effective implementations of sampling-based probabilistic inference often require manually constructed, model-specific proposals. Inspired by recent progress in meta-learning for training learning agents that can generalize to unseen environments, we propose a meta-learning approach to building effective and generalizable MCMC proposals. We parametrize the proposal as a neural network to provide fast approximations to block Gibbs conditionals. The learned neural proposals generalize to occurrences of common structural motifs across different models, allowing for the construction of a library of learned inference primitives that can accelerate inference on unseen models with no model-specific training required. We explore several applications including open-universe Gaussian mixture models, in which our learned proposals outperform a hand-tuned sampler, and a real-world named entity recognition task, in which our sampler yields higher final F1 scores than classical single-site Gibbs sampling.
[ 1, 0, 0, 1, 0, 0 ]
Title: Putting Self-Supervised Token Embedding on the Tables, Abstract: Information distribution by electronic messages is a privileged means of transmission for many businesses and individuals, often under the form of plain-text tables. As their number grows, it becomes necessary to use an algorithm, rather than a human, to extract the text and numbers. Usual methods focus on regular expressions or on a strict structure in the data, but are not efficient when there are many variations, fuzzy structure or implicit labels. In this paper we introduce SC2T, a totally self-supervised model for constructing vector representations of tokens in semi-structured messages by using characters and context levels that address these issues. It can then be used for an unsupervised labeling of tokens, or be the basis for a semi-supervised information extraction system.
[ 1, 0, 0, 0, 0, 0 ]
Title: Getting the public involved in Quantum Error Correction, Abstract: The Decodoku project seeks to let users get hands-on with cutting-edge quantum research through a set of simple puzzle games. The design of these games is explicitly based on the problem of decoding qudit variants of surface codes. This problem is presented such that it can be tackled by players with no prior knowledge of quantum information theory, or any other high-level physics or mathematics. Methods devised by the players to solve the puzzles can then directly be incorporated into decoding algorithms for quantum computation. In this paper we give a brief overview of the novel decoding methods devised by players, and provide a short postmortem for Decodoku v1.0-v4.1.
[ 0, 1, 0, 0, 0, 0 ]
Title: Second-grade fluids in curved pipes, Abstract: This paper is concerned with the application of finite element methods to obtain solutions for steady fully developed second-grade flows in a curved pipe of circular cross-section and arbitrary curvature ratio, under a given axial pressure gradient. The qualitative and quantitative behavior of the secondary flows is analyzed with respect to inertia and viscoelasticity.
[ 0, 1, 1, 0, 0, 0 ]
Title: Designing nearly tight window for improving time-frequency masking, Abstract: Many audio signal processing methods are formulated in the time-frequency (T-F) domain, which is obtained by the short-time Fourier transform (STFT). The properties of the STFT are fully characterized by the window function, and thus designing a better window is important for improving the performance of the processing, especially when a less redundant T-F representation is desirable. While many window functions have been proposed in the literature, they are designed to have a good frequency response for analysis, which may not perform well in terms of signal processing. The window design must take the effect of the reconstruction (from the T-F domain into the time domain) into account to improve the performance. In this paper, an optimization-based design method for a nearly tight window is proposed to obtain a window that performs well for T-F domain signal processing.
[ 1, 0, 0, 0, 0, 0 ]
Title: Asymptotic Properties of Recursive Maximum Likelihood Estimation in Non-Linear State-Space Models, Abstract: Using stochastic gradient search and the optimal filter derivative, it is possible to perform recursive (i.e., online) maximum likelihood estimation in a non-linear state-space model. As the optimal filter and its derivative are analytically intractable for such a model, they need to be approximated numerically. In [Poyiadjis, Doucet and Singh, Biometrika 2018], a recursive maximum likelihood algorithm based on a particle approximation to the optimal filter derivative has been proposed and studied through numerical simulations. Here, this algorithm and its asymptotic behavior are analyzed theoretically. We show that the algorithm accurately estimates maxima to the underlying (average) log-likelihood when the number of particles is sufficiently large. We also derive (relatively) tight bounds on the estimation error. The obtained results hold under (relatively) mild conditions and cover several classes of non-linear state-space models met in practice.
[ 0, 0, 0, 1, 0, 0 ]
Title: Microservices in Practice: A Survey Study, Abstract: Microservices architectures have become largely popular in the last years. However, we still lack empirical evidence about the use of microservices and the practices followed by practitioners. Thereupon, in this paper, we report the results of a survey with 122 professionals who work with microservices. We report how the industry is using this architectural style and whether the perception of practitioners regarding the advantages and challenges of microservices is in line with the literature.
[ 1, 0, 0, 0, 0, 0 ]
Title: The relationship between $k$-forcing and $k$-power domination, Abstract: Zero forcing and power domination are iterative processes on graphs where an initial set of vertices are observed, and additional vertices become observed based on some rules. In both cases, the goal is to eventually observe the entire graph using the fewest number of initial vertices. Chang et al. introduced $k$-power domination in [Generalized power domination in graphs, {\it Discrete Applied Math.} 160 (2012) 1691-1698] as a generalization of power domination and standard graph domination. Independently, Amos et al. defined $k$-forcing in [Upper bounds on the $k$-forcing number of a graph, {\it Discrete Applied Math.} 181 (2015) 1-10] to generalize zero forcing. In this paper, we combine the study of $k$-forcing and $k$-power domination, providing a new approach to analyze both processes. We give a relationship between the $k$-forcing and the $k$-power domination numbers of a graph that bounds one in terms of the other. We also obtain results using the contraction of subgraphs that allow the parallel computation of $k$-forcing and $k$-power dominating sets.
[ 0, 0, 1, 0, 0, 0 ]
Title: Finding Root Causes of Floating Point Error with Herbgrind, Abstract: Floating-point arithmetic plays a central role in science, engineering, and finance by enabling developers to approximate real arithmetic. To address numerical issues in large floating-point applications, developers must identify root causes, which is difficult because floating-point errors are generally non-local, non-compositional, and non-uniform. This paper presents Herbgrind, a tool to help developers identify and address root causes in numerical code written in low-level C/C++ and Fortran. Herbgrind dynamically tracks dependencies between operations and program outputs to avoid false positives and abstracts erroneous computations to a simplified program fragment whose improvement can reduce output error. We perform several case studies applying Herbgrind to large, expert-crafted numerical programs and show that it scales to applications spanning hundreds of thousands of lines, correctly handling the low-level details of modern floating point hardware and mathematical libraries, and tracking error across function boundaries and through the heap.
[ 1, 0, 0, 0, 0, 0 ]
Title: Catalog of Candidates for Quasars at 3 < z < 5.5 Selected among X-Ray Sources from the 3XMM-DR4 Survey of the XMM-Newton Observatory, Abstract: We have compiled a catalog of 903 candidates for type 1 quasars at redshifts 3<z<5.5 selected among the X-ray sources of the serendipitous XMM-Newton survey presented in the 3XMM-DR4 catalog (the median X-ray flux is 5x10^{-15} erg/s/cm^2 in the 0.5-2 keV energy band) and located at high Galactic latitudes >20 deg in Sloan Digital Sky Survey (SDSS) fields with a total area of about 300 deg^2. Photometric SDSS data as well as infrared 2MASS and WISE data were used to select the objects. We selected the point sources from the photometric SDSS catalog with a magnitude error Delta z<0.2 and a color i-z<0.6 (to first eliminate the M-type stars). For the selected sources, we have calculated the dependences chi^2(z) for various spectral templates from the library that we compiled for these purposes using the EAZY software. Based on these data, we have rejected the objects whose spectral energy distributions are better described by the templates of stars at z=0 and obtained a sample of quasars with photometric redshift estimates 2.75<zphot<5.5. The selection completeness of known quasars at z>3 in the investigated fields is shown to be about 80%. The normalized median absolute deviation is 0.07, while the outlier fraction is eta = 9%. The number of objects per unit area in our sample exceeds the number of quasars in the spectroscopic SDSS sample at the same redshifts approximately by a factor of 1.5. The subsequent spectroscopic testing of the redshifts of our selected candidates for quasars at 3<z<5.5 will allow the purity of this sample to be estimated more accurately.
[ 0, 1, 0, 0, 0, 0 ]
Title: The Theta Number of Simplicial Complexes, Abstract: We introduce a generalization of the celebrated Lovász theta number of a graph to simplicial complexes of arbitrary dimension. Our generalization takes advantage of real simplicial cohomology theory, in particular combinatorial Laplacians, and provides a semidefinite programming upper bound of the independence number of a simplicial complex. We consider properties of the graph theta number such as the relationship to Hoffman's ratio bound and to the chromatic number and study how they extend to higher dimensions. Like in the case of graphs, the higher dimensional theta number can be extended to a hierarchy of semidefinite programming upper bounds reaching the independence number. We analyze the value of the theta number and of the hierarchy for dense random simplicial complexes.
[ 1, 0, 1, 0, 0, 0 ]
Title: Prediction Scores as a Window into Classifier Behavior, Abstract: Most multi-class classifiers make their prediction for a test sample by scoring the classes and selecting the one with the highest score. Analyzing these prediction scores is useful to understand the classifier behavior and to assess its reliability. We present an interactive visualization that facilitates per-class analysis of these scores. Our system, called Classilist, enables relating these scores to the classification correctness and to the underlying samples and their features. We illustrate how such analysis reveals varying behavior of different classifiers. Classilist is available for use online, along with source code, video tutorials, and plugins for R, RapidMiner, and KNIME at this https URL.
[ 1, 0, 0, 1, 0, 0 ]
Title: Effective perturbation theory for linear operators, Abstract: We propose a new approach to the spectral theory of perturbed linear operators, in the case of a simple isolated eigenvalue. We obtain two kinds of results: "radius bounds" which ensure perturbation theory applies for perturbations up to an explicit size, and "regularity bounds" which control the variations of eigendata to any order. Our method is based on the Implicit Function Theorem and proceeds by establishing differential inequalities on two natural quantities: the norm of the projection to the eigendirection, and the norm of the reduced resolvent. We obtain completely explicit results without any assumption on the underlying Banach space. In companion articles, on the one hand we apply the regularity bounds to Markov chains, obtaining non-asymptotic concentration and Berry-Esséen inequalities with explicit constants, and on the other hand we apply the radius bounds to transfer operators of intermittent maps, obtaining explicit high-temperature regimes where a spectral gap occurs.
[ 0, 0, 1, 0, 0, 0 ]
Title: I-MMSE relations in random linear estimation and a sub-extensive interpolation method, Abstract: Consider random linear estimation with Gaussian measurement matrices and noise. One can compute infinitesimal variations of the mutual information under infinitesimal variations of the signal-to-noise ratio or of the measurement rate. We discuss how each variation is related to the minimum mean-square error and deduce that the two variations are directly connected through a very simple identity. The main technical ingredient is a new interpolation method called "sub-extensive interpolation method". We use it to provide a new proof of an I-MMSE relation recently found by Reeves and Pfister [1] when the measurement rate is varied. Our proof makes it clear that this relation is intimately related to another I-MMSE relation also recently proved in [2]. One can directly verify that the identity relating the two types of variation of mutual information is indeed consistent with the one letter replica symmetric formula for the mutual information, first derived by Tanaka [3] for binary signals, and recently proved in more generality in [1,2,4,5] (by independent methods). However our proof is independent of any knowledge of Tanaka's formula.
[ 1, 1, 0, 0, 0, 0 ]
Title: Layered semi-convection and tides in giant planet interiors - I. Propagation of internal waves, Abstract: Layered semi-convection is a possible candidate to explain Saturn's luminosity excess and the abnormally large radius of some hot Jupiters. In giant planet interiors, it could lead to the creation of density staircases, which are convective layers separated by thin stably stratified interfaces. We study the propagation of internal waves in a region of layered semi-convection, with the aim to predict energy transport by internal waves incident upon a density staircase. The goal is then to understand the resulting tidal dissipation when these waves are excited by other bodies such as moons in giant planets systems. We use a local Cartesian analytical model, taking into account the complete Coriolis acceleration at any latitude, thus generalizing previous works. We find transmission of incident internal waves to be strongly affected by the presence of a density staircase, even if these waves are initially pure inertial waves (which are restored by the Coriolis acceleration). In particular, low-frequency waves of all wavelengths are perfectly transmitted near the critical latitude. Otherwise, short-wavelength waves are only efficiently transmitted if they are resonant with a free mode (interfacial gravity wave or short-wavelength inertial mode) of the staircase. In all other cases, waves are primarily reflected unless their wavelengths are longer than the vertical extent of the entire staircase (not just a single step). We expect incident internal waves to be strongly affected by the presence of a density staircase in a frequency-, latitude- and wavelength-dependent manner. First, this could lead to new criteria to probe the interior of giant planets by seismology; and second, this may have important consequences for tidal dissipation and our understanding of the evolution of giant planet systems.
[ 0, 1, 0, 0, 0, 0 ]
Title: Non-commutative Discretize-then-Optimize Algorithms for Elliptic PDE-Constrained Optimal Control Problems, Abstract: In this paper, we analyze the convergence of several discretize-then-optimize algorithms, based on either a second-order or a fourth-order finite difference discretization, for solving elliptic PDE-constrained optimization or optimal control problems. To ensure the convergence of a discretize-then-optimize algorithm, one well-accepted criterion is to choose or redesign the discretization scheme such that the resultant discretize-then-optimize algorithm commutes with the corresponding optimize-then-discretize algorithm. In other words, both types of algorithms would give rise to exactly the same discrete optimality system. However, such an approach is not trivial. In this work, by investigating a simple distributed elliptic optimal control problem, we first show that enforcing such a stringent condition of commutative property is only sufficient but not necessary for achieving the desired convergence. We then propose to add some suitable $H_1$ semi-norm penalty/regularization terms to recover the lost convergence due to the inconsistency caused by the loss of commutativity. Numerical experiments are carried out to verify our theoretical analysis and also validate the effectiveness of our proposed regularization techniques.
[ 0, 0, 1, 0, 0, 0 ]
Title: Translation matrix elements for spherical Gauss-Laguerre basis functions, Abstract: Spherical Gauss-Laguerre (SGL) basis functions, i.e., normalized functions of the type $L_{n-l-1}^{(l + 1/2)}(r^2) r^{l} Y_{lm}(\vartheta,\varphi)$, $|m| \leq l < n \in \mathbb{N}$, constitute an orthonormal polynomial basis of the space $L^{2}$ on $\mathbb{R}^{3}$ with radial Gaussian weight $\exp(-r^{2})$. We have recently described reliable fast Fourier transforms for the SGL basis functions. The main application of the SGL basis functions and our fast algorithms is in solving certain three-dimensional rigid matching problems, where the center is prioritized over the periphery. For this purpose, so-called SGL translation matrix elements are required, which describe the spectral behavior of the SGL basis functions under translations. In this paper, we derive a closed-form expression of these translation matrix elements, allowing for a direct computation of these quantities in practice.
[ 0, 0, 1, 0, 0, 0 ]
Title: Parallel Concatenation of Bayesian Filters: Turbo Filtering, Abstract: In this manuscript a method for developing novel filtering algorithms through the parallel concatenation of two Bayesian filters is illustrated. Our description of this method, called turbo filtering, is based on a new graphical model; this allows us to efficiently describe both the processing accomplished inside each of the constituent filters and the interactions between them. This model is exploited to develop two new filtering algorithms for conditionally linear Gaussian systems. Numerical results for a specific dynamic system show that such filters can achieve a better complexity-accuracy tradeoff than marginalized particle filtering.
[ 0, 0, 0, 1, 0, 0 ]
Title: Shot noise in ultrathin superconducting wires, Abstract: Quantum phase slips (QPS) may produce non-equilibrium voltage fluctuations in current-biased superconducting nanowires. Making use of the Keldysh technique and employing phase-charge duality arguments, we investigate such fluctuations within the four-point measurement scheme and demonstrate that the shot noise of the voltage detected in such nanowires may essentially depend on the particular measurement setup. In long wires the shot noise power decreases with increasing frequency $\Omega$ and vanishes beyond a threshold value of $\Omega$ at $T \to 0$.
[ 0, 1, 0, 0, 0, 0 ]
Title: Urban Data Streams and Machine Learning: A Case of Swiss Real Estate Market, Abstract: In this paper, we show how, using publicly available data streams and machine learning algorithms, one can develop practical data-driven services with no input from domain experts as a form of prior knowledge. We report the initial steps toward the development of a real estate portal in Switzerland. Based on continuous web crawling of publicly available real estate advertisements and using building data from Open Street Map, we developed a system where we roughly estimate the rental and sale price indexes of 1.7 million buildings across the country. In addition to these rough estimates, we developed a web-based API for accurate automated valuation of rental prices of individual properties and spatial sensitivity analysis of the rental market. We tested several established function approximation methods against the test data to check the quality of the rental price estimations, and based on our experiments, Random Forest gives very reasonable results with a median absolute relative error of 6.57 percent, which is comparable with the state of the art in the industry. We argue that while there have recently been successful cases of real estate portals based on Big Data, the majority of existing solutions are expensive, limited to certain users, and mostly built on non-transparent underlying systems. As an alternative, we discuss how, using the crawled data sets and other open data sets provided by different institutes, it is easily possible to develop data-driven services for spatial and temporal sensitivity analysis in the real estate market to be used by different stakeholders. We believe that this kind of digital literacy can disrupt many other existing business concepts across many domains.
[ 1, 0, 0, 1, 0, 0 ]
Title: Unsupervised Learning of Neural Networks to Explain Neural Networks (extended abstract), Abstract: This paper presents an unsupervised method to learn a neural network, namely an explainer, to interpret a pre-trained convolutional neural network (CNN), i.e., the explainer uses interpretable visual concepts to explain features in middle conv-layers of a CNN. Given feature maps of a conv-layer of the CNN, the explainer performs like an auto-encoder, which decomposes the feature maps into object-part features. The object-part features are learned to reconstruct CNN features without much loss of information. We can consider the disentangled representations of object parts a paraphrase of CNN features, which help people understand the knowledge encoded by the CNN. More crucially, we learn the explainer via knowledge distillation without using any annotations of object parts or textures for supervision. In experiments, our method was widely used to interpret features of different benchmark CNNs, and explainers significantly boosted the feature interpretability without hurting the discrimination power of the CNNs.
[ 1, 0, 0, 1, 0, 0 ]
Title: Measuring Cognitive Conflict in Virtual Reality with Feedback-Related Negativity, Abstract: As virtual reality (VR) emerges as a mainstream platform, designers have started to experiment new interaction techniques to enhance the user experience. This is a challenging task because designers not only strive to provide designs with good performance but also carefully ensure not to disrupt users' immersive experience. There is a dire need for a new evaluation tool that extends beyond traditional quantitative measurements to assist designers in the design process. We propose an EEG-based experiment framework that evaluates interaction techniques in VR by measuring intentionally elicited cognitive conflict. Through the analysis of the feedback-related negativity (FRN) as well as other quantitative measurements, this framework allows designers to evaluate the effect of the variables of interest. We studied the framework by applying it to the fundamental task of 3D object selection using direct 3D input, i.e. tracked hand in VR. The cognitive conflict is intentionally elicited by manipulating the selection radius of the target object. Our first behavior experiment validated the framework in line with the findings of conflict-induced behavior adjustments like those reported in other classical psychology experiment paradigms. Our second EEG-based experiment examines the effect of the appearance of virtual hands. We found that the amplitude of FRN correlates with the level of realism of the virtual hands, which concurs with the Uncanny Valley theory.
[ 1, 0, 0, 0, 0, 0 ]
Title: Strong deformations of DNA: Effect on the persistence length, Abstract: Extreme deformations of the DNA double helix attracted a lot of attention during the past decades. Particularly, the determination of the persistence length of DNA with extreme local disruptions, or kinks, has become a crucial problem in the studies of many important biological processes. In this paper we review an approach to calculate the persistence length of the double helix by taking into account the formation of kinks of arbitrary configuration. The reviewed approach improves the Kratky--Porod model to determine the type and nature of kinks that occur in the double helix, by measuring a reduction of the persistence length of the kinkable DNA.
[ 0, 0, 0, 0, 1, 0 ]
Title: Dust in the reionization era: ALMA observations of a $z$=8.38 Galaxy, Abstract: We report on the detailed analysis of a gravitationally-lensed Y-band dropout, A2744_YD4, selected from deep Hubble Space Telescope imaging in the Frontier Field cluster Abell 2744. Band 7 observations with the Atacama Large Millimeter Array (ALMA) indicate the proximate detection of a significant 1mm continuum flux suggesting the presence of dust for a star-forming galaxy with a photometric redshift of $z\simeq8$. Deep X-SHOOTER spectra confirm the high redshift identity of A2744_YD4 via the detection of Lyman $\alpha$ emission at a redshift $z$=8.38. The association with the ALMA detection is confirmed by the presence of [OIII] 88$\mu$m emission at the same redshift. Although both emission features are only significant at the 4 $\sigma$ level, we argue their joint detection and the positional coincidence with a high redshift dropout in the HST images confirm the physical association. Analysis of the available photometric data and the modest gravitational magnification ($\mu\simeq2$) indicates A2744_YD4 has a stellar mass of $\sim$ 2$\times$10$^9$ M$_{\odot}$, a star formation rate of $\sim20$ M$_{\odot}$/yr and a dust mass of $\sim$6$\times$10$^{6}$ M$_{\odot}$. We discuss the implications of the formation of such a dust mass only $\simeq$200 Myr after the onset of cosmic reionisation.
[ 0, 1, 0, 0, 0, 0 ]
Title: It Takes (Only) Two: Adversarial Generator-Encoder Networks, Abstract: We present a new autoencoder-type architecture that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the prior distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex architectures.
[ 1, 0, 0, 1, 0, 0 ]
Title: The Montecinos-Balsara ADER-FV Polynomial Basis: Convergence Properties & Extension to Non-Conservative Multidimensional Systems, Abstract: Hyperbolic systems of PDEs can be solved to arbitrary orders of accuracy by using the ADER Finite Volume method. These PDE systems may be non-conservative and non-homogeneous, and contain stiff source terms. ADER-FV requires a spatio-temporal polynomial reconstruction of the data in each spacetime cell, at each time step. This reconstruction is obtained as the root of a nonlinear system, resulting from the use of a Galerkin method. It was proved in Jackson [7] that for traditional choices of basis polynomials, the eigenvalues of certain matrices appearing in these nonlinear systems are always 0, regardless of the number of spatial dimensions of the PDEs or the chosen order of accuracy of the ADER-FV method. This guarantees fast convergence to the Galerkin root for certain classes of PDEs. In Montecinos and Balsara [9] a new, more efficient class of basis polynomials for the one-dimensional ADER-FV method was presented. This new class of basis polynomials, originally presented for conservative systems, is extended to multidimensional, non-conservative systems here, and the corresponding property regarding the eigenvalues of the Galerkin matrices is proved.
[ 0, 1, 0, 0, 0, 0 ]
Title: Reducibility of the Quantum Harmonic Oscillator in $d$-dimensions with Polynomial Time Dependent Perturbation, Abstract: We prove a reducibility result for a quantum harmonic oscillator in arbitrary dimensions with arbitrary frequencies perturbed by a linear operator which is a polynomial of degree two in $x_j$, $-i \partial_j$ with coefficients which depend quasiperiodically on time.
[ 0, 0, 1, 0, 0, 0 ]
Title: Model Order Selection Rules For Covariance Structure Classification, Abstract: The adaptive classification of the interference covariance matrix structure for radar signal processing applications is addressed in this paper. This represents a key issue because many detection architectures are synthesized assuming a specific covariance structure which may not necessarily coincide with the actual one due to the joint action of the system and environment uncertainties. The considered classification problem is cast in terms of a multiple hypotheses test with some nested alternatives and the theory of Model Order Selection (MOS) is exploited to devise suitable decision rules. Several MOS techniques, such as the Akaike, Takeuchi, and Bayesian information criteria are adopted and the corresponding merits and drawbacks are discussed. At the analysis stage, illustrating examples for the probability of correct model selection are presented showing the effectiveness of the proposed rules.
[ 0, 0, 1, 1, 0, 0 ]