Column schema: title (string, 7 to 239 characters), abstract (string, 7 to 2.76k characters), and six binary int64 subject flags taking the values 0 or 1: cs, phy, math, stat, quantitative biology, quantitative finance. Each record below gives a paper title, its abstract, and a Labels line listing the value of each flag.
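The listing that follows is nothing more than rows under this schema. As a quick illustration, the sketch below shows one way such records could be loaded and the six 0/1 flags turned back into a per-paper list of subject labels. It assumes the rows are stored in a CSV file with exactly the column names above and uses pandas; the file name arxiv_abstracts.csv is hypothetical and not part of the listing.

```python
import pandas as pd

# Hypothetical file name; substitute whatever file actually holds the rows
# with the columns described in the schema above.
df = pd.read_csv("arxiv_abstracts.csv")

LABEL_COLUMNS = [
    "cs", "phy", "math", "stat",
    "quantitative biology", "quantitative finance",
]

def active_labels(row):
    # Collect the subject names whose binary flag is set for this paper.
    return [name for name in LABEL_COLUMNS if row[name] == 1]

df["labels"] = df.apply(active_labels, axis=1)

# Papers tagged with more than one subject (multi-label rows).
multi = df[df["labels"].apply(len) > 1]
print(multi[["title", "labels"]].head())
```

Multi-label rows do occur below (for example, records flagged as both cs and stat), which is why the flags are kept as a list of labels rather than collapsed into a single category.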
Self-Organizing Maps as a Storage and Transfer Mechanism in Reinforcement Learning
The idea of reusing information from previously learned tasks (source tasks) for the learning of new tasks (target tasks) has the potential to significantly improve the sample efficiency of reinforcement learning agents. In this work, we describe an approach to concisely store and represent learned task knowledge, and reuse it by allowing it to guide the exploration of an agent while it learns new tasks. In order to do so, we use a measure of similarity that is defined directly in the space of parameterized representations of the value functions. This similarity measure is also used as a basis for a variant of the growing self-organizing map algorithm, which is simultaneously used to enable the storage of previously acquired task knowledge in an adaptive and scalable manner. We empirically validate our approach in a simulated navigation environment and discuss possible extensions to this approach along with potential applications where it could be particularly useful.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Distributed model predictive control for continuous-time nonlinear systems based on suboptimal ADMM
The paper presents a distributed model predictive control (DMPC) scheme for continuous-time nonlinear systems based on the alternating direction method of multipliers (ADMM). A stopping criterion in the ADMM algorithm limits the iterations and therefore the required communication effort during the distributed MPC solution at the expense of a suboptimal solution. Stability results are presented for the suboptimal DMPC scheme under two different ADMM convergence assumptions. In particular, it is shown that the required iterations in each ADMM step are bounded, which is also confirmed in simulation studies.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Implementing focal-plane phase masks optimized for real telescope apertures with SLM-based digital adaptive coronagraphy
Direct imaging of exoplanets or circumstellar disk material requires extreme contrast at the $10^{-6}$ to $10^{-12}$ levels at < 100 mas angular separation from the star. Focal-plane mask (FPM) coronagraphic imaging has played a key role in this field, taking advantage of progress in Adaptive Optics on ground-based 8+m class telescopes. However, large telescope entrance pupils usually consist of complex, sometimes segmented, non-ideal apertures, which include a central obstruction for the secondary mirror and its support structure. In practice, this negatively impacts wavefront quality and coronagraphic performance, in terms of achievable contrast and inner working angle. Recent theoretical works on structured darkness have shown that solutions for FPM phase profiles, optimized for non-ideal apertures, can be numerically derived. Here we present and discuss a first experimental validation of this concept, using reflective liquid crystal spatial light modulators as adaptive FPM coronagraphs.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Verifying Asynchronous Interactions via Communicating Session Automata
The relationship between communicating automata and session types is the cornerstone of many diverse theories and tools, including type checking, code generation, and runtime verification. A serious limitation of session types is that, while endpoint programs interact asynchronously, the underlying property which guarantees safety of session types is too synchronous: it requires a one-to-one synchronisation between send and receive actions. This paper proposes a sound procedure to verify properties of communicating session automata (CSA), i.e., communicating automata that correspond to multiparty session types. We introduce a new asynchronous compatibility property for CSA, called k-multiparty compatibility (k-MC), which is a strict superset of the synchronous multiparty compatibility proposed in the literature. It is decomposed into two bounded properties: (i) a condition called k-safety which guarantees that, within the bound, all sent messages can be received and each automaton can make a move; and (ii) a condition called k-exhaustivity which guarantees that all k-reachable send actions can be fired within the bound. We show that k-exhaustive systems soundly and completely characterise systems where each automaton behaves uniformly for any bound greater or equal to k. We show that checking k-MC is PSPACE-complete, but can be done efficiently over large systems by using partial order reduction techniques. We demonstrate that several examples from the literature are k-MC, but not synchronous compatible.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Phase induced transparency mediated structured beam generation in a closed-loop tripod configuration
We present a phase induced transparency based scheme to generate structured beam patterns in a closed four level atomic system. We employ a phase-structured probe beam and a transverse magnetic field (TMF) to create a phase dependent medium susceptibility. We show that such phase dependent modulation of absorption holds the key to the formation of a structured beam. We use a full density matrix formalism to explain the experiments of Radwell et al. [Phys. Rev. Lett. 114, 123603 (2015)] in the weak probe limit. Our numerical results on beam propagation confirm that the phase information present in the absorption profile gets encoded on the spatial probe envelope, which creates petal-like structures even in the strong field limit. The contrast of the formed structured beam can be enhanced by changing the strength of the TMF as well as the probe intensity. In the weak field limit the absorption profile is solely responsible for creating a structured beam, whereas in the strong probe regime both dispersion and absorption profiles facilitate the generation of a high contrast structured beam. Furthermore, we find that the structured beams rotate owing to strong-field-induced nonlinear magneto-optical rotation (NMOR).
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Additional cases of positive twisted torus knots
A twisted torus knot is a knot obtained from a torus knot by twisting adjacent strands by full twists. The twisted torus knots lie in $F$, the genus 2 Heegaard surface for $S^3$. Primitive/primitive and primitive/Seifert knots lie in $F$ in a particular way. Dean gives sufficient conditions for the parameters of the twisted torus knots to ensure they are primitive/primitive or primitive/Seifert. Using Dean's conditions, Doleshal shows that there are infinitely many twisted torus knots that are fibered and that there are twisted torus knots with distinct primitive/Seifert representatives with the same slope in $F$. In this paper, we extend Doleshal's results to show there is a four parameter family of positive twisted torus knots. Additionally, we provide new examples of twisted torus knots with distinct representatives with the same surface slope in $F$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Action-conditional Sequence Modeling for Recommendation
In many online applications, interactions between a user and a web service are organized in a sequential way, e.g., a user browsing an e-commerce website. In this setting, the recommendation system acts throughout user navigation by showing items. Previous works have addressed this recommendation setup through the task of predicting the next item a user will interact with. In particular, Recurrent Neural Networks (RNNs) have been shown to achieve substantial improvements over collaborative filtering baselines. In this paper, we consider interactions triggered by the recommendations of a deployed recommender system in addition to browsing behavior. Indeed, it is reported that in online services interactions with recommendations represent up to 30\% of total interactions. Moreover, in practice, a recommender system can greatly influence user behavior by promoting specific items. In this paper, we extend the RNN modeling framework by taking into account user interaction with recommended items. We propose and evaluate RNN architectures that consist of a recommendation action module and a state-action fusion module. Using real-world large-scale datasets, we demonstrate improved performance on the next-item prediction task compared to the baselines.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Limits of End-to-End Learning
End-to-end learning refers to training a possibly complex learning system by applying gradient-based learning to the system as a whole. An end-to-end learning system is specifically designed so that all modules are differentiable. In effect, not only a central learning machine, but also all "peripheral" modules like representation learning and memory formation are covered by a holistic learning process. The power of end-to-end learning has been demonstrated on many tasks, like playing a whole array of Atari video games with a single architecture. While pushing for solutions to more challenging tasks, network architectures keep growing more and more complex. In this paper we ask whether, and to what extent, end-to-end learning is a future-proof technique in the sense of scaling to complex and diverse data processing architectures. We point out potential inefficiencies, and we argue in particular that end-to-end learning does not make optimal use of the modular design of present neural networks. Our surprisingly simple experiments demonstrate these inefficiencies, up to the complete breakdown of learning.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Quantum Graphs: $ \mathcal{PT}$-symmetry and reflection symmetry of the spectrum
Not necessarily self-adjoint quantum graphs -- differential operators on metric graphs -- are considered. Assume in addition that the underlying metric graph possesses an automorphism (symmetry) $ \mathcal P $. If the differential operator is $ \mathcal P \mathcal T$-symmetric, then its spectrum has reflection symmetry with respect to the real line. Our goal is to understand whether the opposite statement holds, namely whether the reflection symmetry of the spectrum of a quantum graph implies that the underlying metric graph possesses a non-trivial automorphism and the differential operator is $ \mathcal P \mathcal T$-symmetric. We give a partial answer to this question by considering equilateral star-graphs. The corresponding Laplace operator with Robin vertex conditions possesses a reflection-symmetric spectrum if and only if the operator is $ \mathcal P \mathcal T$-symmetric with $ \mathcal P $ being an automorphism of the metric graph.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On Diophantine equations involving sums of Fibonacci numbers and powers of $2$
In this paper, we completely solve the Diophantine equations $F_{n_1} + F_{n_2} = 2^{a_1} + 2^{a_2} + 2^{a_3}$ and $ F_{m_1} + F_{m_2} + F_{m_3} =2^{t_1} + 2^{t_2} $, where $F_k$ denotes the $k$-th Fibonacci number. In particular, we prove that $\max \{n_1, n_2, a_1, a_2, a_3 \}\leq 18$ and $\max \{ m_1, m_2, m_3, t_1, t_2 \}\leq 16$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
T-duality in rational homotopy theory via $L_\infty$-algebras
We combine Sullivan models from rational homotopy theory with Stasheff's $L_\infty$-algebras to describe a duality in string theory. Namely, what in string theory is known as topological T-duality between $K^0$-cocycles in type IIA string theory and $K^1$-cocycles in type IIB string theory, or as Hori's formula, can be recognized as a Fourier-Mukai transform between twisted cohomologies when looked through the lenses of rational homotopy theory. We show this as an example of topological T-duality in rational homotopy theory, which in turn can be completely formulated in terms of morphisms of $L_\infty$-algebras.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Phasebook and Friends: Leveraging Discrete Representations for Source Separation
Deep learning based speech enhancement and source separation systems have recently reached unprecedented levels of quality, to the point that performance is reaching a new ceiling. Most systems rely on estimating the magnitude of a target source by estimating a real-valued mask to be applied to a time-frequency representation of the mixture signal. A limiting factor in such approaches is a lack of phase estimation: the phase of the mixture is most often used when reconstructing the estimated time-domain signal. Here, we propose `MagBook', `phasebook', and `Combook', three new types of layers based on discrete representations that can be used to estimate complex time-frequency masks. MagBook layers extend classical sigmoidal units and a recently introduced convex softmax activation for mask-based magnitude estimation. Phasebook layers use a similar structure to give an estimate of the phase mask without suffering from phase wrapping issues. Combook layers are an alternative to the MagBook-Phasebook combination that directly estimate complex masks. We present various training and inference regimes involving these representations, and explain in particular how to include them in an end-to-end learning framework. We also present an oracle study to assess upper bounds on performance for various types of masks using discrete phase representations. We evaluate the proposed methods on the wsj0-2mix dataset, a well-studied corpus for single-channel speaker-independent speaker separation, matching the performance of state-of-the-art mask-based approaches without requiring additional phase reconstruction steps.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
RankDCG: Rank-Ordering Evaluation Measure
Ranking is used for a wide array of problems, most notably information retrieval (search). There are a number of popular approaches to the evaluation of ranking such as Kendall's $\tau$, Average Precision, and nDCG. When dealing with problems such as user ranking or recommendation systems, all these measures suffer from various problems, including an inability to deal with elements of the same rank, inconsistent and ambiguous lower bound scores, and an inappropriate cost function. We propose a new measure, rankDCG, that addresses these problems. This is a modification of the popular nDCG algorithm. We provide a number of criteria for any effective ranking algorithm and show that only rankDCG satisfies all of them. Results are presented on constructed and real data sets. We release a publicly available rankDCG evaluation package.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
FUV Spectral Signatures of Molecules and the Evolution of the Gaseous Coma of Comet 67P/Churyumov-Gerasimenko
The Alice far-ultraviolet imaging spectrograph onboard Rosetta observed emissions from atomic and molecular species from within the coma of comet 67P/Churyumov-Gerasimenko during the entire escort phase of the mission from 2014 August to 2016 September. The initial observations showed that emissions of atomic hydrogen and oxygen close to the surface were produced by energetic electron impact dissociation of H2O. Following delivery of the lander, Philae, on 2014 November 12, the trajectory of Rosetta shifted to near-terminator orbits that allowed for these emissions to be observed against the shadowed nucleus that, together with the compositional heterogeneity, enabled us to identify unique spectral signatures of dissociative electron impact excitation of H2O, CO2, and O2. CO emissions were found to be due to both electron and photoexcitation processes. Thus we are able, from far-ultraviolet spectroscopy, to qualitatively study the evolution of the primary molecular constituents of the gaseous coma from start to finish of the escort phase. Our results show asymmetric outgassing of H2O and CO2 about perihelion, H2O dominant before and CO2 dominant after, consistent with the results from both the in situ and other remote sensing instruments on Rosetta.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Adsorption and desorption of hydrogen at nonpolar GaN(1-100) surfaces: Kinetics and impact on surface vibrational and electronic properties
The adsorption of hydrogen at nonpolar GaN(1-100) surfaces and its impact on the electronic and vibrational properties is investigated using surface electron spectroscopy in combination with density functional theory (DFT) calculations. For the surface mediated dissociation of H2 and the subsequent adsorption of H, an energy barrier of 0.55 eV has to be overcome. The calculated kinetic surface phase diagram indicates that the reaction is kinetically hindered at low pressures and low temperatures. At higher temperatures, ab-initio thermodynamics shows that the H-free surface is energetically favored. To validate these theoretical predictions, experiments at room temperature and under ultrahigh vacuum conditions were performed. They reveal that molecular hydrogen does not dissociatively adsorb at the GaN(1-100) surface. Only activated atomic hydrogen atoms attach to the surface. At temperatures above 820 K, the attached hydrogen desorbs. The adsorbed hydrogen atoms saturate the dangling bonds of the gallium and nitrogen surface atoms and result in an inversion of the Ga-N surface dimer buckling. The signatures of the Ga-H and N-H vibrational modes on the H-covered surface have been experimentally identified and are in good agreement with the DFT calculations of the surface phonon modes. Both theory and experiment show that H adsorption results in a removal of occupied and unoccupied intragap electron states of the clean GaN(1-100) surface and a reduction of the surface upward band bending by 0.4 eV. The latter mechanism largely reduces surface electron depletion.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Application of the Fast Multipole Fully Coupled Poroelastic Displacement Discontinuity Method to Hydraulic Fracturing Problems
In this study, a fast multipole method (FMM) is used to decrease the computational time of a fully-coupled poroelastic hydraulic fracture model with a controllable effect on its accuracy. The hydraulic fracture model is based on the poroelastic formulation of the displacement discontinuity method (DDM) which is a special formulation of the boundary element method (BEM). DDM is a powerful and efficient method for problems involving fractures. However, this method becomes slow as the number of temporal, or spatial elements increases, or necessary details such as poroelasticity, that makes the solution history-dependent, are added to the model. FMM is a technique to expedite matrix-vector multiplications within a controllable error without forming the matrix explicitly. Fully-coupled poroelastic formulation of DDM involves the multiplication of a dense matrix with a vector in several places. A crucial modification to DDM is suggested in two places in the algorithm to leverage the speed efficiency of FMM for carrying out these multiplications. The first modification is in the time-marching scheme, which accounts for the solution of previous time steps to compute the current time step. The second modification is in the generalized minimal residual method (GMRES) to iteratively solve for the problem unknowns. Several examples are provided to show the efficiency of the proposed approach in problems with large degrees of freedom (in time and space). Examples include hydraulic fracturing of a horizontal well and randomly distributed pressurized fractures at different orientations with respect to horizontal stresses. The results are compared to the conventional DDM in terms of computational processing time and accuracy. Accordingly, the proposed algorithm may be used for fracture propagation studies while substantially reducing the processing time with a controllable error.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Conduction Channels of an InAs-Al nanowire Josephson weak link
We present a quantitative characterization of an electrically tunable Josephson junction defined in an InAs nanowire proximitized by an epitaxially grown superconducting Al shell. The gate-dependence of the number of conduction channels and of the set of transmission coefficients are extracted from the highly nonlinear current-voltage characteristics. Although the transmissions evolve non-monotonically, the number of independent channels can be tuned, and configurations with a single quasi-ballistic channel achieved.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Two-level Chebyshev filter based complementary subspace method: pushing the envelope of large-scale electronic structure calculations
We describe a novel iterative strategy for Kohn-Sham density functional theory calculations aimed at large systems (> 1000 electrons), applicable to metals and insulators alike. In lieu of explicit diagonalization of the Kohn-Sham Hamiltonian on every self-consistent field (SCF) iteration, we employ a two-level Chebyshev polynomial filter based complementary subspace strategy to: 1) compute a set of vectors that span the occupied subspace of the Hamiltonian; 2) reduce subspace diagonalization to just partially occupied states; and 3) obtain those states in an efficient, scalable manner via an inner Chebyshev-filter iteration. By reducing the necessary computation to just partially occupied states, and obtaining these through an inner Chebyshev iteration, our approach reduces the cost of large metallic calculations significantly, while eliminating subspace diagonalization for insulating systems altogether. We describe the implementation of the method within the framework of the Discontinuous Galerkin (DG) electronic structure method and show that this results in a computational scheme that can effectively tackle bulk and nano systems containing tens of thousands of electrons, with chemical accuracy, within a few minutes or less of wall clock time per SCF iteration on large-scale computing platforms. We anticipate that our method will be instrumental in pushing the envelope of large-scale ab initio molecular dynamics. As a demonstration of this, we simulate a bulk silicon system containing 8,000 atoms at finite temperature, and obtain an average SCF step wall time of 51 seconds on 34,560 processors; thus allowing us to carry out 1.0 ps of ab initio molecular dynamics in approximately 28 hours (of wall time).
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Using the Tsetlin Machine to Learn Human-Interpretable Rules for High-Accuracy Text Categorization with Medical Applications
Medical applications challenge today's text categorization techniques by demanding both high accuracy and ease-of-interpretation. Although deep learning has provided a leap ahead in accuracy, this leap comes at the sacrifice of interpretability. To address this accuracy-interpretability challenge, we here introduce, for the first time, a text categorization approach that leverages the recently introduced Tsetlin Machine. In all brevity, we represent the terms of a text as propositional variables. From these, we capture categories using simple propositional formulae, such as: if "rash" and "reaction" and "penicillin" then Allergy. The Tsetlin Machine learns these formulae from a labelled text, utilizing conjunctive clauses to represent the particular facets of each category. Indeed, even the absence of terms (negated features) can be used for categorization purposes. Our empirical comparison with Naïve Bayes, decision trees, linear support vector machines (SVMs), random forest, long short-term memory (LSTM) neural networks, and other techniques, is quite conclusive. The Tsetlin Machine either performs on par with or outperforms all of the evaluated methods on both the 20 Newsgroups and IMDb datasets, as well as on a non-public clinical dataset. On average, the Tsetlin Machine delivers the best recall and precision scores across the datasets. Finally, our GPU implementation of the Tsetlin Machine executes 5 to 15 times faster than the CPU implementation, depending on the dataset. We thus believe that our novel approach can have a significant impact on a wide range of text analysis applications, forming a promising starting point for deeper natural language understanding with the Tsetlin Machine.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Deep Learning with Experience Ranking Convolutional Neural Network for Robot Manipulator
Supervised learning, more specifically Convolutional Neural Networks (CNN), has surpassed human ability in some visual recognition tasks such as detection of traffic signs, faces and handwritten numbers. On the other hand, even state-of-the-art reinforcement learning (RL) methods have difficulties in environments with sparse and binary rewards. They require manual shaping of reward functions, which might be challenging to come up with. These tasks, however, are trivial for humans. One of the reasons that humans are better learners in these tasks is that we are equipped with much prior knowledge of the world. This knowledge might be either embedded in our genes or learned from imitation - a type of supervised learning. For that reason, the best way to narrow the gap between machine and human learning ability should be to mimic how we learn so well in various tasks by a combination of RL and supervised learning. Our method, which integrates Deep Deterministic Policy Gradients and Hindsight Experience Replay (an RL method specifically dealing with sparse rewards) with an experience ranking CNN, provides a significant speedup over the learning curve on simulated robotics tasks. Experience ranking allows high-reward transitions to be replayed more frequently, and therefore helps the agent learn more efficiently. Our proposed approach can also speed up learning in any other task that provides additional information for experience ranking.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Isotope Shifts in the 7s$\rightarrow$8s Transition of Francium: Measurements and Comparison to \textit{ab initio} Theory
We observe the electric-dipole forbidden $7s\rightarrow8s$ transition in the francium isotopes $^{208-211}$Fr and $^{213}$Fr using a two-photon excitation scheme. We collect the atoms online from an accelerator and confine them in a magneto-optical trap for the measurements. In combination with previous measurements of the $7s\rightarrow7p_{1/2}$ transition, we perform a King Plot analysis. We compare the ratio of the field shift constants determined in this way (1.230 $\pm$ 0.019) to results obtained from new ab initio calculations (1.234 $\pm$ 0.010) and find excellent agreement.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
SRM: An Efficient Framework for Autonomous Robotic Exploration in Indoor Environments
In this paper, we propose an integrated framework for autonomous robotic exploration in indoor environments. Specifically, we present a hybrid map, named Semantic Road Map (SRM), to represent the topological structure of the explored environment and facilitate decision-making in the exploration. The SRM is built incrementally along with the exploration process. It is a graph structure with collision-free nodes and edges that are generated within the sensor coverage. Moreover, each node has a semantic label and the expected information gain at that location. Based on the concise SRM, we present a novel and effective decision-making model to determine the next-best-target (NBT) during the exploration. The model takes into account the semantic information, the information gain, and the path cost to the target location. We use the nodes of the SRM to represent the candidate targets, which enables the target evaluation to be performed directly on the SRM. With the SRM, both the information gain of a node and the path cost to the node can be obtained efficiently. In addition, we adopt the cross-entropy method to optimize the path to make it more informative. We conduct experimental studies in both simulated and real-world environments, which demonstrate the effectiveness of the proposed method.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Mechanisms for bacterial gliding motility on soft substrates
The motility mechanism of certain rod-shaped bacteria has long been a mystery, since no external appendages are involved in their motion which is known as gliding. However, the physical principles behind gliding motility still remain poorly understood. Using myxobacteria as a canonical example of such organisms, we identify here the physical principles behind gliding motility, and develop a theoretical model that predicts a two-regime behavior of the gliding speed as a function of the substrate stiffness. Our theory describes the elastic, viscous, and capillary interactions between the bacterial membrane carrying a traveling wave, the secreted slime acting as a lubricating film, and the substrate which we model as a soft solid. Defining the myxobacterial gliding as the horizontal motion on the substrate under zero net force, we find the two-regime behavior is due to two different mechanisms of motility thrust. On stiff substrates, the thrust arises from the bacterial shape deformations creating a flow of slime that exerts a pressure along the bacterial length. This pressure in conjunction with the bacterial shape provides the necessary thrust for propulsion. However, we show that such a mechanism cannot lead to gliding on very soft substrates. Instead, we show that capillary effects lead to the formation of a ridge at the slime-substrate-air interface, which creates a thrust in the form of a localized pressure gradient at the tip of the bacteria. To test our theory, we perform experiments with isolated cells on agar substrates of varying stiffness and find the measured gliding speeds to be in good agreement with the predictions from our elasto-capillary-hydrodynamic model. The physical mechanisms reported here serve as an important step towards an accurate theory of friction and substrate-mediated interaction between bacteria in a swarm of cells proliferating in soft media.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Random Walk in a N-cube Without Hamiltonian Cycle to Chaotic Pseudorandom Number Generation: Theoretical and Practical Considerations
Designing a pseudorandom number generator (PRNG) is a difficult and complex task. Many recent works have considered chaotic functions as the basis for building PRNGs: the quality of the output would indeed be an obvious consequence of some chaos properties. However, there is no direct reasoning that goes from chaotic functions to uniform distribution of the output. Moreover, embedding such functions into a PRNG does not necessarily allow one to obtain a chaotic output, which could be required for simulating some chaotic behaviors. In a previous work, some of the authors have proposed the idea of walking into a $\mathsf{N}$-cube where a balanced Hamiltonian cycle has been removed as the basis of a chaotic PRNG. In this article, all the difficult issues observed in the previous work have been tackled. The chaotic behavior of the whole PRNG is proven. The construction of the balanced Hamiltonian cycle is theoretically and practically solved. An upper bound of the expected length of the walk to obtain a uniform distribution is calculated. Finally, practical experiments show that the generators successfully pass the classical statistical tests.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Hardware Platform for Efficient Multi-Modal Sensing with Adaptive Approximation
We present Warp, a hardware platform to support research in approximate computing, sensor energy optimization, and energy-scavenged systems. Warp incorporates 11 state-of-the-art sensor integrated circuits, computation, and an energy-scavenged power supply, all within a miniature system that is just 3.6 cm x 3.3 cm x 0.5 cm. Warp's sensor integrated circuits together contain a total of 21 sensors with a range of precisions and accuracies for measuring eight sensing modalities of acceleration, angular rate, magnetic flux density (compass heading), humidity, atmospheric pressure (elevation), infrared radiation, ambient temperature, and color. Warp uses a combination of analog circuits and digital control to facilitate further tradeoffs between sensor and communication accuracy, energy efficiency, and performance. This article presents the design of Warp and presents an evaluation of our hardware implementation. The results show how Warp's design enables performance and energy efficiency versus accuracy tradeoffs.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Electrocaloric effects in the lead-free Ba(Zr,Ti)O$_{3}$ relaxor ferroelectric from atomistic simulations
Atomistic effective Hamiltonian simulations are used to investigate electrocaloric (EC) effects in the lead-free Ba(Zr$_{0.5}$Ti$_{0.5}$)O$_{3}$ (BZT) relaxor ferroelectric. We find that the EC coefficient varies non-monotonically with the field at any temperature, presenting a maximum that can be traced back to the behavior of BZT's polar nanoregions. We also introduce a simple Landau-based model that reproduces the EC behavior of BZT as a function of field and temperature, and which is directly applicable to other compounds. Finally, we confirm that, for low temperatures (i.e., in non-ergodic conditions), the usual indirect approach to measure the EC response provides an estimate that differs quantitatively from a direct evaluation of the field-induced temperature change.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Accelerating Permutation Testing in Voxel-wise Analysis through Subspace Tracking: A new plugin for SnPM
Permutation testing is a non-parametric method for obtaining the max null distribution used to compute corrected $p$-values that provide strong control of false positives. In neuroimaging, however, the computational burden of running such an algorithm can be significant. We find that by viewing the permutation testing procedure as the construction of a very large permutation testing matrix, $T$, one can exploit structural properties derived from the data and the test statistics to reduce the runtime under certain conditions. In particular, we see that $T$ is low-rank plus a low-variance residual. This makes $T$ a good candidate for low-rank matrix completion, where only a very small number of entries of $T$ ($\sim0.35\%$ of all entries in our experiments) have to be computed to obtain a good estimate. Based on this observation, we present RapidPT, an algorithm that efficiently recovers the max null distribution commonly obtained through regular permutation testing in voxel-wise analysis. We present an extensive validation on a synthetic dataset and four datasets of varying size against two baselines: Statistical NonParametric Mapping (SnPM13) and a standard permutation testing implementation (referred to as NaivePT). We find that RapidPT achieves its best runtime performance on medium sized datasets ($50 \leq n \leq 200$), with speedups of 1.5x - 38x (vs. SnPM13) and 20x - 1000x (vs. NaivePT). For larger datasets ($n \geq 200$) RapidPT outperforms NaivePT (6x - 200x) on all datasets, and provides large speedups over SnPM13 (2x - 15x) when more than 10000 permutations are needed. The implementation is a standalone toolbox and is also integrated within SnPM13, able to leverage multi-core architectures when available.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Image restoration of solar spectra
When recording spectra from the ground, atmospheric turbulence causes degradation of the spatial resolution. We present a data reduction method that restores the spatial resolution of the spectra to their undegraded state. By assuming that the point spread function (PSF) estimated from a strictly synchronized, broadband slit-jaw camera is the same as the PSF that spatially degraded the spectra, we can quantify what linear combination of undegraded spectra is present in each degraded data point. The set of equations obtained in this way is found to be generally well-conditioned and sufficiently diagonal to be solved using an iterative linear solver. The resulting solution has regained a spatial resolution comparable to that of the restored slit-jaw images.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Who Said What: Modeling Individual Labelers Improves Classification
Data are often labeled by many different experts with each expert only labeling a small fraction of the data and each data point being labeled by several experts. This reduces the workload on individual experts and also gives a better estimate of the unobserved ground truth. When experts disagree, the standard approaches are to treat the majority opinion as the correct label or to model the correct label as a distribution. These approaches, however, do not make any use of potentially valuable information about which expert produced which label. To make use of this extra information, we propose modeling the experts individually and then learning averaging weights for combining them, possibly in sample-specific ways. This allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data. Here we show that our approach leads to improvements in computer-aided diagnosis of diabetic retinopathy. We also show that our method performs better than competing algorithms by Welinder and Perona (2010), and by Mnih and Hinton (2012). Our work offers an innovative approach for dealing with the myriad real-world settings that use expert opinions to define labels for training.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Automated Tiling of Unstructured Mesh Computations with Application to Seismological Modelling
Sparse tiling is a technique to fuse loops that access common data, thus increasing data locality. Unlike traditional loop fusion or blocking, the loops may have different iteration spaces and access shared datasets through indirect memory accesses, such as A[map[i]] -- hence the name "sparse". One notable example of such loops arises in discontinuous-Galerkin finite element methods, because of the computation of numerical integrals over different domains (e.g., cells, facets). The major challenge with sparse tiling is implementation -- not only is it cumbersome to understand and synthesize, but it is also onerous to maintain and generalize, as it requires a complete rewrite of the bulk of the numerical computation. In this article, we propose an approach to extend the applicability of sparse tiling based on raising the level of abstraction. Through a sequence of compiler passes, the mathematical specification of a problem is progressively lowered, and eventually sparse-tiled C for-loops are generated. Besides automation, we advance the state-of-the-art by introducing: a revisited, more efficient sparse tiling algorithm; support for distributed-memory parallelism; a range of fine-grained optimizations for increased run-time performance; implementation in a publicly-available library, SLOPE; and an in-depth study of the performance impact in Seigen, a real-world elastic wave equation solver for seismological problems, which shows speed-ups up to 1.28x on a platform consisting of 896 Intel Broadwell cores.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A copula approach for dependence modeling in multivariate nonparametric time series
This paper is concerned with modeling the dependence structure of two (or more) time-series in the presence of a (possible multivariate) covariate which may include past values of the time series. We assume that the covariate influences only the conditional mean and the conditional variance of each of the time series but the distribution of the standardized innovations is not influenced by the covariate and is stable in time. The joint distribution of the time series is then determined by the conditional means, the conditional variances and the marginal distributions of the innovations, which we estimate nonparametrically, and the copula of the innovations, which represents the dependency structure. We consider a nonparametric as well as a semiparametric estimator based on the estimated residuals. We show that under suitable assumptions these copula estimators are asymptotically equivalent to estimators that would be based on the unobserved innovations. The theoretical results are illustrated by simulations and a real data example.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
A geometric second-order-rectifiable stratification for closed subsets of Euclidean space
Defining the $m$-th stratum of a closed subset of an $n$ dimensional Euclidean space to consist of those points, where it can be touched by a ball from at least $n-m$ linearly independent directions, we establish that the $m$-th stratum is second-order rectifiable of dimension $m$ and a Borel set. This was known for convex sets, but is new even for sets of positive reach. The result is based on a new criterion for second-order rectifiability.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Probabilistic Formulations of Regression with Mixed Guidance
Regression problems assume every instance is annotated (labeled) with a real value, a form of annotation we call \emph{strong guidance}. In order for these annotations to be accurate, they must be the result of a precise experiment or measurement. However, in some cases additional \emph{weak guidance} might be given by imprecise measurements, a domain expert or even crowd sourcing. Current formulations of regression are unable to use both types of guidance. We propose a regression framework that can also incorporate weak guidance based on relative orderings, bounds, neighboring and similarity relations. Consider learning to predict ages from portrait images: these new types of guidance allow weaker forms of supervision, such as stating that a person is in their 20s or that two people are similar in age. These types of annotations can be easier to generate than strong guidance. We introduce a probabilistic formulation for these forms of weak guidance and show that the resulting optimization problems are convex. Our experimental results show the benefits of these formulations on several data sets.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
DCN+: Mixed Objective and Deep Residual Coattention for Question Answering
Traditional models for question answering optimize using cross entropy loss, which encourages exact answers at the cost of penalizing nearby or overlapping answers that are sometimes equally accurate. We propose a mixed objective that combines cross entropy loss with self-critical policy learning. The objective uses rewards derived from word overlap to solve the misalignment between evaluation metric and optimization objective. In addition to the mixed objective, we improve dynamic coattention networks (DCN) with a deep residual coattention encoder that is inspired by recent work in deep self-attention and residual networks. Our proposals improve model performance across question types and input lengths, especially for long questions that require the ability to capture long-term dependencies. On the Stanford Question Answering Dataset, our model achieves state-of-the-art results with 75.1% exact match accuracy and 83.1% F1, while the ensemble obtains 78.9% exact match accuracy and 86.0% F1.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
An optimal unrestricted learning procedure
We study learning problems involving arbitrary classes of functions $F$, distributions $X$ and targets $Y$. Because proper learning procedures, i.e., procedures that are only allowed to select functions in $F$, tend to perform poorly unless the problem satisfies some additional structural property (e.g., that $F$ is convex), we consider unrestricted learning procedures that are free to choose functions outside the given class. We present a new unrestricted procedure that is optimal in a very strong sense: the required sample complexity is essentially the best one can hope for, and the estimate holds for (almost) any problem, including heavy-tailed situations. Moreover, the sample complexity coincides with what one would expect if $F$ were convex, even when $F$ is not. And if $F$ is convex, the procedure turns out to be proper. Thus, the unrestricted procedure is actually optimal in both realms, for convex classes as a proper procedure and for arbitrary classes as an unrestricted procedure.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Means of infinite sets I
We open a new field on how one can define means on infinite sets. We investigate many different ways in which such means can be constructed. One method is based on sequences of ideals, another deals with accumulation points, one uses isolated points, another defines an average using an integral, another takes the limit of averages over surroundings, and one deals with evenly distributed samples. We study various properties of such means and their relations to each other.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On tamed almost complex four manifolds
This paper proves that on any tamed closed almost complex four-manifold $(M,J)$ whose dimension of $J$-anti-invariant cohomology is equal to the self-dual second Betti number minus one, there exists a new symplectic form compatible with the given almost complex structure $J$. In particular, if the self-dual second Betti number is one, we give an affirmative answer to Donaldson's question for tamed closed almost complex four-manifolds, which is a conjecture in a joint paper of Tosatti, Weinkove and Yau. Our approach is along the lines used by Buchdahl to give a unified proof of the Kodaira conjecture. Thus, our main result gives an affirmative answer to the symplectic version of the Kodaira conjecture.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Evaluation and Prediction of Polygon Approximations of Planar Contours for Shape Analysis
Contours may be viewed as the 2D outline of the image of an object. This type of data arises in medical imaging as well as in computer vision and can be modeled as data on a manifold and studied using statistical shape analysis. Practically speaking, each observed contour, while theoretically infinite dimensional, must be discretized for computations. As such, the coordinates for each contour are obtained at k sampling times, resulting in the contour being represented as a k-dimensional complex vector. While choosing large values of k will result in closer approximations to the original contour, this will also result in higher computational costs in the subsequent analysis. The goal of this study is to determine reasonable values for k so as to keep the computational cost low while maintaining accuracy. To do this, we consider two methods for selecting sample points and determine lower bounds for k for obtaining a desired level of approximation error using two different criteria. Because this process is computationally inefficient to perform on a large scale, we then develop models for predicting the lower bounds for k based on simple characteristics of the contours.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Perturbative Expansion of Irreversible Work in Fokker-Planck Equation a la Quantum Mechanics
We discuss the systematic expansion of the solution of the Fokker-Planck equation with the help of the eigenfunctions of the time-dependent Fokker-Planck operator. The expansion parameter is the time derivative of the external parameter which controls the form of an external potential. Our expansion corresponds to the perturbative calculation of the adiabatic motion in quantum mechanics. With this method, we derive a new formula to calculate the irreversible work order by order, which is expressed as the expectation value with a pseudo density matrix. Applying this method to the case of the harmonic potential, we show that the first order term of the expansion gives the exact result. Because we do not need to solve the coupled differential equations of moments, our method simplifies the calculations of various functions such as the fluctuation of the irreversible work per unit time. We further investigate the exact optimized protocol to minimize the irreversible work by calculating its variation with respect to the control parameter itself.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Operational Semantics of Process Monitors
CSPe is a specification language for runtime monitors that can directly express concurrency in a bottom-up manner that composes the system from simpler, interacting components. It includes constructs to explicitly flag failures to the monitor, which, unlike deadlocks and livelocks in conventional process algebras, propagate globally and abort the whole system's execution. Although CSPe has a trace semantics along with an implementation demonstrating acceptable performance, it lacks an operational semantics. An operational semantics is not only more accessible than trace semantics but also indispensable for ensuring the correctness of the implementation. Furthermore, a process algebra like CSPe admits multiple denotational semantics appropriate for different purposes, and an operational semantics is the basis for justifying such semantics' integrity and relevance. In this paper, we develop an SOS-style operational semantics for CSPe, which properly accounts for explicit failures and will serve as a basis for further study of its properties, its optimization, and its use in runtime verification.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Anyonic Entanglement and Topological Entanglement Entropy
We study the properties of entanglement in two-dimensional topologically ordered phases of matter. Such phases support anyons, quasiparticles with exotic exchange statistics. The emergent nonlocal state spaces of anyonic systems admit a particular form of entanglement that does not exist in conventional quantum mechanical systems. We study this entanglement by adapting standard notions of entropy to anyonic systems. We use the algebraic theory of anyon models (modular tensor categories) to illustrate the nonlocal entanglement structure of anyonic systems. Using this formalism, we present a general method of deriving the universal topological contributions to the entanglement entropy for general system configurations of a topological phase, including surfaces of arbitrary genus, punctures, and quasiparticle content. We analyze a number of examples in detail. Our results recover and extend prior results for anyonic entanglement and the topological entanglement entropy.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Water-based and Biocompatible 2D Crystal Inks: from Ink Formulation to All-Inkjet Printed Heterostructures
Fully exploiting the properties of 2D crystals requires a mass production method able to produce heterostructures of arbitrary complexity on any substrate, including plastic. Solution processing of graphene allows simple and low-cost techniques such as inkjet printing to be used for device fabrication. However, available inkjet printable formulations are still far from ideal as they are either based on toxic solvents, have low concentration, or require time-consuming and expensive formulation processing. In addition, none of those formulations are suitable for thin-film heterostructure fabrication due to the re-mixing of different 2D crystals, giving rise to uncontrolled interfaces, which results in poor device performance and lack of reproducibility. In this work we show a general formulation engineering approach to achieve highly concentrated and inkjet printable water-based 2D crystal formulations, which also provides optimal film formation for multi-stack fabrication. We show examples of all-inkjet printed heterostructures, such as large area arrays of photosensors on plastic and paper and programmable logic memory devices, fully exploiting the design flexibility of inkjet printing. Finally, dose-escalation cytotoxicity assays in vitro also confirm the inks' biocompatible character, revealing the possibility of extending the use of such 2D crystal formulations to drug delivery and biomedical applications.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Clustering to Reduce Spatial Data Set Size
Traditionally it had been a problem that researchers did not have access to enough spatial data to answer pressing research questions or build compelling visualizations. Today, however, the problem is often that we have too much data. Spatially redundant or approximately redundant points may refer to a single feature (plus noise) rather than many distinct spatial features. We use a machine learning approach with density-based clustering to compress such spatial data into a set of representative features.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Thermal diffusivity and chaos in metals without quasiparticles
We study the thermal diffusivity $D_T$ in models of metals without quasiparticle excitations (`strange metals'). The many-body quantum chaos and transport properties of such metals can be efficiently described by a holographic representation in a gravitational theory in an emergent curved spacetime with an additional spatial dimension. We find that at generic infra-red fixed points $D_T$ is always related to parameters characterizing many-body quantum chaos: the butterfly velocity $v_B$, and Lyapunov time $\tau_L$ through $D_T \sim v_B^2 \tau_L$. The relationship holds independently of the charge density, periodic potential strength or magnetic field at the fixed point. The generality of this result follows from the observation that the thermal conductivity of strange metals depends only on the metric near the horizon of a black hole in the emergent spacetime, and is otherwise insensitive to the profile of any matter fields.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Quest for Scalability and Accuracy in the Simulation of the Internet of Things: an Approach based on Multi-Level Simulation
This paper presents a methodology for simulating the Internet of Things (IoT) using multi-level simulation models. With respect to conventional simulators, this approach allows us to tune the level of detail of different parts of the model without compromising the scalability of the simulation. As a use case, we have developed a two-level simulator to study the deployment of smart services over rural territories. The higher level is based on a coarse-grained, agent-based, adaptive parallel and distributed simulator. When needed, this simulator spawns OMNeT++ model instances to evaluate in more detail the issues concerned with wireless communications in restricted areas of the simulated world. The performance evaluation confirms the viability of multi-level simulations for IoT environments.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Harmonic spinors from twistors and potential forms
Symmetry operators of twistor spinors and harmonic spinors can be constructed from conformal Killing-Yano forms. Transformation operators relating twistors to harmonic spinors are found in terms of potential forms. These constructions are generalized to gauged twistor spinors and gauged harmonic spinors. The operators that transform gauged twistor spinors to gauged harmonic spinors are found. Symmetry operators of gauged harmonic spinors in terms of conformal Killing-Yano forms are obtained. Algebraic conditions to obtain solutions of the Seiberg-Witten equations are discussed.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Goodness-of-fit tests for the functional linear model based on randomly projected empirical processes
We consider marked empirical processes indexed by a randomly projected functional covariate to construct goodness-of-fit tests for the functional linear model with scalar response. The test statistics are built from continuous functionals over the projected process, resulting in computationally efficient tests that exhibit root-n convergence rates and circumvent the curse of dimensionality. The weak convergence of the empirical process is obtained conditionally on a random direction, whilst the almost-sure equivalence between testing for significance on the original and on the projected functional covariate is proved. In practice, the computation of the test involves calibration by wild bootstrap resampling and the combination of several p-values, arising from different projections, by means of the false discovery rate method. The finite sample properties of the tests are illustrated in a simulation study for a variety of linear models, underlying processes, and alternatives. The software provided implements the tests and allows the replication of simulations and data applications.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Efficient method for estimating the number of communities in a network
While there exist a wide range of effective methods for community detection in networks, most of them require one to know in advance how many communities one is looking for. Here we present a method for estimating the number of communities in a network using a combination of Bayesian inference with a novel prior and an efficient Monte Carlo sampling scheme. We test the method extensively on both real and computer-generated networks, showing that it performs accurately and consistently, even in cases where groups are widely varying in size or structure.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Pluto System After New Horizons
The discovery of Pluto in 1930 presaged the discoveries of both the Kuiper Belt and ice dwarf planets, which are the third class of planets in our solar system. From the 1970s to the 1990s numerous fascinating attributes of the Pluto system were discovered, including multiple surface volatile species, Pluto's large satellite Charon, and its atmosphere. These attributes, and the 1990s discovery of the Kuiper Belt and Pluto's cohort of small Kuiper Belt planets, motivated the exploration of Pluto. That mission, called New Horizons (NH), revolutionized knowledge of Pluto and its system of satellites in 2015. Beyond providing rich geological, compositional, and atmospheric data sets, New Horizons demonstrated that Pluto itself has been surprisingly geologically active throughout the past 4 billion years, and that the planet exhibits a surprisingly complex range of atmospheric phenomenology and geologic expression that rival Mars in their richness.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Global-in-time Strichartz estimates and cubic Schrodinger equation on metric cone
We study the Strichartz estimates for the Schrödinger equation on a metric cone $X$, where the metric cone $X=C(Y)=(0,\infty)_r\times Y$ and the cross section $Y$ is an $(n-1)$-dimensional closed Riemannian manifold $(Y,h)$. The equipped metric on $X$ is given by $g=dr^2+r^2h$. Let $\Delta_g$ be the Friedrichs extension of the positive Laplacian on $X$ and $V=V_0 r^{-2}$, where $V_0\in C^\infty(Y)$ is a real function such that the operator $\Delta_h+V_0+(n-2)^2/4$ is a strictly positive operator on $L^2(Y)$. We establish the full range of global-in-time Strichartz estimates without loss for the Schrödinger equation associated with the operator $\mathcal{L}_V=\Delta_g+V_0 r^{-2}$, including the endpoint estimate in both the homogeneous and inhomogeneous cases. As an application, we study the well-posedness theory and scattering theory for the Schrödinger equation with cubic nonlinearity in this setting.
0
0
1
0
0
0
Compressive Embedding and Visualization using Graphs
Visualizing high-dimensional data has been a focus in data analysis communities for decades, which has led to the design of many algorithms, some of which are now considered references (such as t-SNE for example). In our era of overwhelming data volumes, the scalability of such methods has become more and more important. In this work, we present a method which allows one to apply any visualization or embedding algorithm to very large datasets by considering only a fraction of the data as input and then extending the information to all data points using a graph encoding its global similarity. We show that in most cases, using only $\mathcal{O}(\log(N))$ samples is sufficient to diffuse the information to all $N$ data points. In addition, we propose quantitative methods to measure the quality of embeddings and demonstrate the validity of our technique on both synthetic and real-world datasets.
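A possible reading of the recipe in this abstract (embed only a fraction of the points with a reference method, then extend the coordinates to all points through a similarity graph) is sketched below. The choice of PCA as the reference embedder, the k-nearest-neighbour extension, and all parameter values are assumptions made for this illustration; it is not the authors' algorithm.

```python
# One possible reading of the recipe in the abstract, not the authors' algorithm:
# embed only a small sample with a reference method, then propagate the coordinates
# to all points through a nearest-neighbour similarity graph.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X, _ = make_blobs(n_samples=5000, n_features=50, centers=8, random_state=0)
n, n_sample = X.shape[0], 400              # only a small fraction is embedded directly

sample_idx = rng.choice(n, size=n_sample, replace=False)
sample_embedding = PCA(n_components=2).fit_transform(X[sample_idx])  # stand-in for t-SNE etc.

# Extend to all points: similarity-weighted average of the nearest embedded samples.
nbrs = NearestNeighbors(n_neighbors=10).fit(X[sample_idx])
dist, idx = nbrs.kneighbors(X)
weights = np.exp(-dist**2 / (dist.mean() ** 2))
weights /= weights.sum(axis=1, keepdims=True)
full_embedding = np.einsum("ij,ijk->ik", weights, sample_embedding[idx])

print(full_embedding.shape)   # (5000, 2): coordinates for every point, only 400 embedded directly
```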
1
0
0
1
0
0
Electrical control of metallic heavy-metal/ferromagnet interfacial states
Voltage control effects provide an energy-efficient means of tailoring material properties, especially in highly integrated nanoscale devices. However, only insulating and semiconducting systems can be controlled so far. In metallic systems, there is no electric field due to electron screening effects and thus no such control effect exists. Here we demonstrate that metallic systems can also be controlled electrically, through ionic rather than electronic effects. In a Pt/Co structure, the control of the metallic Pt/Co interface can lead to unprecedented control effects on the magnetic properties of the entire structure. Consequently, the magnetization and perpendicular magnetic anisotropy of the Co layer can be independently manipulated to any desired state, the efficiency of spin torques can be enhanced by about 3.5 times, and the switching current can be reduced by about one order of magnitude. This ability to control a metallic system may be extended to control other physical phenomena.
0
1
0
0
0
0
An Expectation-Maximization Algorithm for the Fractal Inverse Problem
We present an Expectation-Maximization algorithm for the fractal inverse problem: the problem of fitting a fractal model to data. In our setting the fractals are Iterated Function Systems (IFS), with similitudes as the family of transformations. The data is a point cloud in ${\mathbb R}^H$ with arbitrary dimension $H$. Each IFS defines a probability distribution on ${\mathbb R}^H$, so that the fractal inverse problem can be cast as a problem of parameter estimation. We show that the algorithm reconstructs well-known fractals from data, with the model converging to high precision parameters. We also show the utility of the model as an approximation for data sources outside the IFS model class.
1
0
0
1
0
0
An Improved SCFlip Decoder for Polar Codes
This paper focuses on the recently introduced Successive Cancellation Flip (SCFlip) decoder of polar codes. Our contribution is twofold. First, we propose the use of an optimized metric to determine the flipping positions within the SCFlip decoder, which improves its ability to find the first error that occurred during the initial SC decoding attempt. We also show that the proposed metric allows closely approaching the performance of an ideal SCFlip decoder. Second, we introduce a generalisation of the SCFlip decoder to a number $\omega$ of nested flips, denoted by SCFlip-$\omega$, using a similar optimized metric to determine the positions of the nested flips. We show that the SCFlip-2 decoder yields significant gains in terms of decoding performance and competes with the performance of the CRC-aided SC-List decoder with list size L=4, while having an average decoding complexity similar to that of standard SC decoding, at medium to high signal-to-noise ratio.
1
0
0
0
0
0
Generation of controllable plasma wakefield noise in particle-in-cell simulations
Numerical simulations of beam-plasma instabilities may produce quantitatively incorrect results because of unrealistically high initial noise from which the instabilities develop. Of particular importance is the wakefield noise, the potential perturbations that have a phase velocity which is equal to the beam velocity. Controlling the noise level in simulations may offer the possibility of extrapolating simulation results to the more realistic low-noise case. We propose a novel method for generating wakefield noise with a controllable amplitude by randomly located charged rods propagating ahead of the beam. We also illustrate the method with particle-in-cell simulations. The generation of this noise is not accompanied by parasitic Cherenkov radiation waves.
0
1
0
0
0
0
Variable Exponent Fock Spaces
In this article we introduce Variable exponent Fock spaces and study some of their basic properties such as the boundedness of evaluation functionals, density of polynomials, boundedness of a Bergman-type projection and duality.
0
0
1
0
0
0
Pattern representation and recognition with accelerated analog neuromorphic systems
Despite being originally inspired by the central nervous system, artificial neural networks have diverged from their biological archetypes as they have been remodeled to fit particular tasks. In this paper, we review several possibilities to reverse map these architectures to biologically more realistic spiking networks with the aim of emulating them on fast, low-power neuromorphic hardware. Since many of these devices employ analog components, which cannot be perfectly controlled, finding ways to compensate for the resulting effects represents a key challenge. Here, we discuss three different strategies to address this problem: the addition of auxiliary network components for stabilizing activity, the utilization of inherently robust architectures and a training method for hardware-emulated networks that functions without perfect knowledge of the system's dynamics and parameters. For all three scenarios, we corroborate our theoretical considerations with experimental results on accelerated analog neuromorphic platforms.
1
0
0
1
0
0
Dynamic Policies for Cooperative Networked Systems
A set of economic entities embedded in a network graph collaborate by opportunistically exchanging their resources to satisfy their dynamically generated needs. Under what conditions does their collaboration lead to a sustainable economy? Which online policy can ensure that a feasible resource exchange point will be attained, and what information is needed to implement it? Furthermore, assuming there are different resources and the entities have diverse production capabilities, which production policy should each entity employ in order to maximize the economy's sustainability? Importantly, can we design such policies that are also incentive compatible even when there is no a priori information about the entities' needs? We introduce a dynamic production scheduling and resource exchange model to capture this fundamental problem and provide answers to the above questions. Applications range from infrastructure sharing, trade and organisation management, to social networks and sharing economy services.
1
0
0
0
0
0
On certain families of planar patterns and fractals
This survey article is dedicated to some families of fractals that were introduced and studied during the last decade, more precisely, families of Sierpiński carpets: limit net sets, generalised Sierpiński carpets and labyrinth fractals. We give a unifying approach of these fractals and several of their topological and geometrical properties, by using the framework of planar patterns.
0
0
1
0
0
0
Static Dalvik VM bytecode instrumentation
This work proposes a novel approach to restricting access to blacklisted Android system API calls. The main feature of the suggested method is that it requires only rootless (user-mode) access to the system, unlike previous works. For that reason the method is valuable for end-users, since the resulting project can be distributed via Play Market and it does not require any phone system modifications and/or updates. This paper explains the background of Android OS necessary for understanding the approach and describes the method for modifying an Android application. A proof-of-concept implementation that is able to block the application's IMEI requests is introduced. The paper also lists unsuccessful methods that were tried in order to provide user security. Naturally, with those restrictions an application may lack some features that can only be granted in an unsecured environment.
1
0
0
0
0
0
PMU-Based Estimation of Dynamic State Jacobian Matrix
In this paper, a hybrid measurement- and model-based method is proposed which can estimate the dynamic state Jacobian matrix in near real-time. The proposed method is computationally efficient and robust to the variation of network topology. A numerical example is given to show that the proposed method is able to provide a good estimate of the dynamic state Jacobian matrix and is superior to the model-based method under undetectable network topology change. The proposed method may also help identify large discrepancies in the assumed network model.
1
0
0
0
0
0
DeepDownscale: a Deep Learning Strategy for High-Resolution Weather Forecast
Running high-resolution physical models is computationally expensive and essential for many disciplines. Agriculture, transportation, and energy are sectors that depend on high-resolution weather models, which typically consume many hours of large High Performance Computing (HPC) systems to deliver timely results. Many users cannot afford to run the desired resolution and are forced to use low resolution output. One simple solution is to interpolate results for visualization. It is also possible to combine an ensemble of low resolution models to obtain a better prediction. However, these approaches fail to capture the redundant information and patterns in the low-resolution input that could help improve the quality of prediction. In this paper, we propose and evaluate a strategy based on a deep neural network to learn a high-resolution representation from low-resolution predictions using weather forecast as a practical use case. We take a supervised learning approach, since obtaining labeled data can be done automatically. Our results show significant improvement when compared with standard practices and the strategy is still lightweight enough to run on modest computer systems.
0
0
0
1
0
0
Open quantum random walks on the half-line: the Karlin-McGregor formula, path counting and Foster's Theorem
In this work we consider open quantum random walks on the non-negative integers. By considering orthogonal matrix polynomials we are able to describe transition probability expressions for classes of walks via a matrix version of the Karlin-McGregor formula. We focus on absorbing boundary conditions and, for simpler classes of examples, we consider path counting and the corresponding combinatorial tools. A non-commutative version of the gambler's ruin is studied by obtaining the probability of reaching a certain fortune and the mean time to reach a fortune or ruin in terms of generating functions. In the case of the Hadamard coin, a counting technique for boundary restricted paths in a lattice is also presented. We discuss an open quantum version of Foster's Theorem for the expected return time together with applications.
0
0
1
0
0
0
Nevanlinna classes associated to a closed set on $\partial$D
We introduce Nevanlinna classes of holomorphic functions associated to a closed set on the boundary of the unit disc in the complex plane and we get Blaschke type theorems relative to these classes by use of several complex variables methods. This gives alternative proofs of some results of Favorov and Golinskii, useful, in particular, for the study of eigenvalues of non self-adjoint Schrödinger operators.
0
0
1
0
0
0
Exploring the nuances in the relationship "culture-strategy" for the business world
The current article explores interesting, significant and recently identified nuances in the relationship "culture-strategy". The shared views of leading scholars at the University of National and World Economy regarding the essence, direction, structure, role and hierarchy of the "culture-strategy" relation are taken as the starting point of the analysis. The research emphasis is directed toward recent developments in interpreting the observed realizations of the aforementioned link among the community of international scholars and consultants publishing in selected electronic scientific databases. In this way a contemporary notion of the nature of the "culture-strategy" relationship for entities from the world of business is outlined.
0
0
0
0
0
1
Audio style transfer
'Style transfer' among images has recently emerged as a very active research topic, fuelled by the power of convolutional neural networks (CNNs), and has fast become a very popular technology in social media. This paper investigates the analogous problem in the audio domain: How to transfer the style of a reference audio signal to a target audio content? We propose a flexible framework for the task, which uses a sound texture model to extract statistics characterizing the reference audio style, followed by an optimization-based audio texture synthesis to modify the target content. In contrast to mainstream optimization-based visual transfer methods, the proposed process is initialized by the target content instead of random noise and the optimized loss is only about texture, not structure. These differences proved key for audio style transfer in our experiments. In order to extract features of interest, we investigate different architectures, whether pre-trained on other tasks, as done in image style transfer, or engineered based on the human auditory system. Experimental results on different types of audio signals confirm the potential of the proposed approach.
1
1
0
0
0
0
Hyperbolic pseudoinverses for kinematics in the Euclidean group
The kinematics of a robot manipulator are described in terms of the mapping connecting its joint space and the 6-dimensional Euclidean group of motions $SE(3)$. The associated Jacobian matrices map into its Lie algebra $\mathfrak{se}(3)$, the space of twists describing infinitesimal motion of a rigid body. Control methods generally require knowledge of an inverse for the Jacobian. However for an arm with fewer or greater than six actuated joints or at singularities of the kinematic mapping this breaks down. The Moore-Penrose pseudoinverse has frequently been used as a surrogate but is not invariant under change of coordinates. Since the Euclidean Lie algebra carries a pencil of invariant bilinear forms that are indefinite, a family of alternative hyperbolic pseudoinverses is available. Generalised Gram matrices and the classification of screw systems are used to determine conditions for their existence. The existence or otherwise of these pseudoinverses also relates to a classical problem addressed by Sylvester concerning the conditions for a system of lines to be in involution or, equivalently, the corresponding system of generalised forces to be in equilibrium.
0
0
1
0
0
0
Nonlinear Unknown Input and State Estimation Algorithm in Mobile Robots
This technical report provides the description and the derivation of a novel nonlinear unknown input and state estimation algorithm (NUISE) for mobile robots. The algorithm is designed for real-world robots with nonlinear dynamic models and subject to stochastic noises on sensing and actuation. Leveraging sensor readings and planned control commands, the algorithm detects and quantifies anomalies on both sensors and actuators. Later, we elaborate on the dynamic models of two distinct mobile robots for the purpose of demonstrating the application of NUISE. This report serves as a supplementary document for [1].
1
0
0
0
0
0
Acute sets
A set of points in $\mathbb{R}^d$ is acute, if any three points from this set form an acute triangle. In this note we construct an acute set in $\mathbb{R}^d$ of size at least $1.618^d$. Also, we present a simple example of an acute set of size at least $2^{\frac{d}{2}}$.
0
0
1
0
0
0
Session Analysis using Plan Recognition
This paper presents preliminary results of our work with a major financial company, where we try to use methods of plan recognition in order to investigate the interactions of a customer with the company's online interface. In this paper, we present the first steps of integrating a plan recognition algorithm in a real-world application for detecting and analyzing the interactions of a customer. It uses a novel approach for plan recognition from bare-bone UI data, which reasons about the plan library at the lowest recognition level in order to define the relevancy of actions in our domain, and then uses it to perform plan recognition. We present preliminary results of inference on three different use-cases modeled by domain experts from the company, and show that this approach manages to decrease the overload of information required from an analyst to evaluate a customer's session - whether this is a malicious or benign session, whether the intended tasks were completed, and if not - what actions are expected next.
1
0
0
0
0
0
Personalized Driver Stress Detection with Multi-task Neural Networks using Physiological Signals
Stress can be seen as a physiological response to everyday emotional, mental and physical challenges. A long-term exposure to stressful situations can have negative health consequences, such as increased risk of cardiovascular diseases and immune system disorder. Therefore, a timely stress detection can lead to systems for better management and prevention in future circumstances. In this paper, we suggest a multi-task learning based neural network approach (with hard parameter sharing of mutual representation and task-specific layers) for personalized stress recognition using skin conductance and heart rate from wearable devices. The proposed method is tested on multi-modal physiological responses collected during real-world and simulator driving tasks.
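As a purely illustrative sketch of the hard parameter sharing idea mentioned in this abstract (a mutual representation shared across tasks plus task-specific output layers), the following PyTorch snippet defines such a network and runs one training step on random data. The layer sizes, the input feature vector, and the training loop are assumptions for the sketch, not the architecture used in the paper.

```python
# Minimal sketch of hard parameter sharing for multi-task learning (illustrative only):
# a shared ("mutual") representation trunk plus one task-specific head per subject/task.
import torch
import torch.nn as nn

class MultiTaskStressNet(nn.Module):
    def __init__(self, n_inputs=8, n_tasks=5, hidden=32):
        super().__init__()
        self.shared = nn.Sequential(               # shared representation layers
            nn.Linear(n_inputs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList(                # one personalized head per task
            [nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x, task_id):
        return self.heads[task_id](self.shared(x))

model = MultiTaskStressNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# One illustrative training step on random "physiological feature" windows for task 2.
x = torch.randn(16, 8)
y = torch.randint(0, 2, (16, 1)).float()           # stress / no-stress labels
loss = loss_fn(model(x, task_id=2), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("one-step loss:", loss.item())
```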
1
0
0
0
0
0
Budgeted Experiment Design for Causal Structure Learning
We study the problem of causal structure learning when the experimenter is limited to perform at most $k$ non-adaptive experiments of size $1$. We formulate the problem of finding the best intervention target set as an optimization problem, which aims to maximize the average number of edges whose directions are resolved. We prove that the corresponding objective function is submodular and a greedy algorithm suffices to achieve $(1-\frac{1}{e})$-approximation of the optimal value. We further present an accelerated variant of the greedy algorithm, which can lead to orders of magnitude performance speedup. We validate our proposed approach on synthetic and real graphs. The results show that compared to the purely observational setting, our algorithm orients the majority of the edges through a considerably small number of interventions.
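The abstract states that the objective is submodular, so a greedy algorithm attains a (1-1/e)-approximation. The snippet below illustrates only that generic greedy mechanism; the toy coverage objective is a stand-in for the paper's expected-number-of-oriented-edges objective, which would require a full causal graph model.

```python
# Illustrative sketch only: generic greedy maximization of a monotone submodular
# set function, the mechanism the abstract appeals to. The toy coverage objective
# below stands in for the paper's expected-number-of-oriented-edges objective.

def greedy_submodular(candidates, objective, k):
    """Pick up to k elements, each time adding the one with the largest marginal gain."""
    selected = []
    current_value = objective(selected)
    for _ in range(k):
        best_gain, best_elem = 0.0, None
        for c in candidates:
            if c in selected:
                continue
            gain = objective(selected + [c]) - current_value
            if gain > best_gain:
                best_gain, best_elem = gain, c
        if best_elem is None:          # no remaining element improves the objective
            break
        selected.append(best_elem)
        current_value += best_gain
    return selected, current_value

if __name__ == "__main__":
    # Toy stand-in objective: number of edges touched by the chosen intervention targets.
    edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (1, 4)]
    def coverage(targets):
        return sum(1 for u, v in edges if u in targets or v in targets)
    targets, value = greedy_submodular(candidates=range(5), objective=coverage, k=2)
    print("chosen targets:", targets, "edges covered:", value)
```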
1
0
0
1
0
0
Evaluating the Robustness of Rogue Waves Under Perturbations
Rogue waves, and their periodic counterparts, have been shown to exist in a number of integrable models. However, relatively little is known about the existence of these objects in models where an exact formula is unattainable. In this work, we develop a novel numerical perspective towards identifying such states as localized solutions in space-time. Importantly, we illustrate that this methodology in addition to benchmarking known solutions (and confirming their numerical propagation under controllable error) enables the continuation of such solutions over parametric variations to non-integrable models. As a result, we can answer in the positive the question about the parametric robustness of Peregrine-like waveforms and even of generalizations thereof on a cnoidal wave background.
0
1
0
0
0
0
Feeble fish in time-dependent waters and homogenization of the G-equation
We study the following control problem. A fish with bounded aquatic locomotion speed swims in fast waters. Can this fish, under reasonable assumptions, get to a desired destination? It can, even if the flow is time-dependent. Moreover, given a prescribed sufficiently large time $t$, it can be there at exactly the time $t$. The major difference from our previous work is the time-dependence of the flow. We also give an application to homogenization of the G-equation.
0
0
1
0
0
0
Betti tables for indecomposable matrix factorizations of $XY(X-Y)(X-λY)$
We classify the Betti tables of indecomposable graded matrix factorizations over the simple elliptic singularity $f_\lambda = XY(X-Y)(X-\lambda Y)$ by making use of an associated weighted projective line of genus one.
0
0
1
0
0
0
Discrete Integrable Systems, Supersymmetric Quantum Mechanics, and Framed BPS States - I
It is possible to understand whether a given BPS spectrum is generated by a relevant deformation of a 4D N=2 SCFT or of an asymptotically free theory from the periodicity properties of the corresponding quantum monodromy. With the aim of giving a better understanding of the above conjecture, in this paper we revisit the description of framed BPS states of four-dimensional relativistic quantum field theories with eight conserved supercharges in terms of supersymmetric quantum mechanics. We unveil aspects of the deep interrelationship in between the Seiberg-dualities of the latter, the discrete symmetries of the theory in the bulk, and quantum discrete integrable systems.
0
1
1
0
0
0
Generalized Robust Bayesian Committee Machine for Large-scale Gaussian Process Regression
In order to scale standard Gaussian process (GP) regression to large-scale datasets, aggregation models employ a factorized training process and then combine predictions from distributed experts. The state-of-the-art aggregation models, however, either provide inconsistent predictions or require a time-consuming aggregation process. We first prove the inconsistency of typical aggregations using disjoint or random data partition, and then present a consistent yet efficient aggregation model for large-scale GP. The proposed model inherits the advantages of aggregations, e.g., closed-form inference and aggregation, parallelization and distributed computing. Furthermore, theoretical and empirical analyses reveal that the new aggregation model performs better due to the consistent predictions that converge to the true underlying function when the training size approaches infinity.
0
0
0
1
0
0
Network Inference from a Link-Traced Sample using Approximate Bayesian Computation
We present a new inference method based on approximate Bayesian computation for estimating parameters governing an entire network based on link-traced samples of that network. To do this, we first take summary statistics from an observed link-traced network sample, such as a recruitment network of subjects in a hard-to-reach population. Then we assume prior distributions, such as multivariate uniform, for the distribution of some parameters governing the structure of the network and the behaviour of its nodes. Then, we draw many independent and identically distributed values for these parameters. For each set of values, we simulate a population network, take a link-traced sample from that network, and find the summary statistics for that sample. The statistics from the sample, and the parameters that eventually led to that sample, are collectively treated as a single point. We take a kernel density estimate of the points from many simulations, and observe the density across the hyperplane coinciding with the statistic values of the originally observed sample. This density function is treated as a posterior estimate of the parameters of the network that provided the observed sample. We also apply this method to an observed network of precedence citations between legal documents, centered around cases overseen by the Supreme Court of Canada. The features of certain cases that lead to their frequent citation are inferred, and their effects estimated by ABC. Future work and extensions are also briefly discussed.
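The abstract spells out the ABC loop step by step (draw parameters from the prior, simulate a network, compute summary statistics, weight by closeness to the observed statistics). The sketch below illustrates that generic loop only; the Erdős-Rényi simulator, the single mean-degree summary statistic and the Gaussian kernel are placeholder assumptions, not the link-traced sampling model or summaries used in the paper.

```python
# Minimal illustration of the ABC loop described in the abstract (not the authors' code).
# Placeholders: an Erdos-Renyi simulator and the mean degree as the only summary
# statistic stand in for the link-traced sampling model and richer summaries.
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 100

def simulate_summary(p):
    """Simulate a network with edge probability p and return its mean degree."""
    upper = np.triu(rng.random((n_nodes, n_nodes)) < p, k=1)
    degrees = upper.sum(axis=0) + upper.sum(axis=1)
    return degrees.mean()

observed_stat = 3.0                          # summary computed from the observed sample
n_draws, bandwidth = 2000, 0.5

draws = np.asarray(rng.uniform(0.0, 0.1, size=n_draws))    # prior over the parameter
stats = np.array([simulate_summary(p) for p in draws])

# Smooth (kernel) ABC: weight each prior draw by how close its simulated summary
# lands to the observed one, then read off a posterior estimate.
weights = np.exp(-0.5 * ((stats - observed_stat) / bandwidth) ** 2)
weights /= weights.sum()
print("ABC posterior mean of p:", np.sum(weights * draws))
```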
1
1
0
1
0
0
Variational Dropout Sparsifies Deep Neural Networks
We explore a recently proposed Variational Dropout technique that provided an elegant Bayesian interpretation to Gaussian Dropout. We extend Variational Dropout to the case when dropout rates are unbounded, propose a way to reduce the variance of the gradient estimator and report first experimental results with individual dropout rates per weight. Interestingly, it leads to extremely sparse solutions both in fully-connected and convolutional layers. This effect is similar to automatic relevance determination effect in empirical Bayes but has a number of advantages. We reduce the number of parameters up to 280 times on LeNet architectures and up to 68 times on VGG-like networks with a negligible decrease of accuracy.
1
0
0
1
0
0
A note on clustered cells
This note contains additions to the paper 'Clustered cell decomposition in P-minimal structures' (arXiv:1612.02683). We discuss a question which was raised in that paper, on the order of clustered cells. We also consider a notion of cells of minimal order, which is a slight optimisation of the theorem from the original paper.
0
0
1
0
0
0
Record statistics of a strongly correlated time series: random walks and Lévy flights
We review recent advances on the record statistics of strongly correlated time series, whose entries denote the positions of a random walk or a Lévy flight on a line. After a brief survey of the theory of records for independent and identically distributed random variables, we focus on random walks. During the last few years, it was indeed realized that random walks are a very useful "laboratory" to test the effects of correlations on the record statistics. We start with the simple one-dimensional random walk with symmetric jumps (both continuous and discrete) and discuss in detail the statistics of the number of records, as well as of the ages of the records, i.e., the lapses of time between two successive record breaking events. Then we review the results that were obtained for a wide variety of random walk models, including random walks with a linear drift, continuous time random walks, constrained random walks (like the random walk bridge) and the case of multiple independent random walkers. Finally, we discuss further observables related to records, like the record increments, as well as some questions raised by physical applications of record statistics, like the effects of measurement error and noise.
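As a small companion to the quantities reviewed in this survey, the following Monte Carlo snippet counts the records of a symmetric random walk with Gaussian jumps (an arbitrary choice among the symmetric continuous jump distributions covered); it only reproduces the basic observable, not the survey's analytical results.

```python
# Monte Carlo count of the records of a symmetric random walk (Gaussian jumps chosen
# arbitrarily); for large n the mean record number is expected to grow like sqrt(n).
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_walks = 1000, 2000

jumps = rng.normal(size=(n_walks, n_steps))
positions = np.cumsum(jumps, axis=1)
# Prepend the starting point x_0 = 0 so records are counted relative to it as well.
walk = np.concatenate([np.zeros((n_walks, 1)), positions], axis=1)
running_max = np.maximum.accumulate(walk, axis=1)
# A record is set at step k whenever the walker strictly exceeds every earlier position.
new_record = walk[:, 1:] > running_max[:, :-1]
mean_records = 1 + new_record.sum(axis=1).mean()      # "+1" counts x_0 as the first record

print(f"mean number of records after {n_steps} steps: {mean_records:.2f}")
```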
0
1
1
0
0
0
On nonlinear instability of Prandtl's boundary layers: the case of Rayleigh's stable shear flows
In this paper, we study Prandtl's boundary layer asymptotic expansion for incompressible fluids on the half-space in the inviscid limit. In \cite{Gr1}, E. Grenier proved that Prandtl's Ansatz is false for data with Sobolev regularity near Rayleigh's unstable shear flows. In this paper, we show that this Ansatz is also false for Rayleigh's stable shear flows. Namely we construct unstable solutions near arbitrary stable monotonic boundary layer profiles. Such shear flows are stable for Euler equations, but not for Navier-Stokes equations: adding a small viscosity destabilizes the flow.
0
0
1
0
0
0
Subspace Clustering with Missing and Corrupted Data
Given full or partial information about a collection of points that lie close to a union of several subspaces, subspace clustering refers to the process of clustering the points according to their subspace and identifying the subspaces. One popular approach, sparse subspace clustering (SSC), represents each sample as a weighted combination of the other samples, with weights of minimal $\ell_1$ norm, and then uses those learned weights to cluster the samples. SSC is stable in settings where each sample is contaminated by a relatively small amount of noise. However, when there is a significant amount of additive noise, or a considerable number of entries are missing, theoretical guarantees are scarce. In this paper, we study a robust variant of SSC and establish clustering guarantees in the presence of corrupted or missing data. We give explicit bounds on the amount of noise and missing data that the algorithm can tolerate, both in deterministic settings and in a random generative model. Notably, our approach provides guarantees for higher tolerance to noise and missing data than existing analyses for this method. By design, the results hold even when we do not know the locations of the missing data; e.g., as in presence-only data.
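For readers unfamiliar with the base method, here is a minimal sketch of the vanilla SSC pipeline the abstract describes (each sample expressed as an l1-sparse combination of the others, then spectral clustering on the learned weights). The robust variant analyzed in the paper, and its handling of missing entries, are not reproduced; the toy two-subspace data and the Lasso penalty value are assumptions.

```python
# Minimal sketch of the base SSC pipeline the abstract describes (not the robust
# variant analyzed in the paper): sparse self-representation + spectral clustering.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def sample_subspace(n_points, ambient=10, dim=2):
    """Toy data: points drawn from a random dim-dimensional subspace of R^ambient."""
    basis = np.linalg.qr(rng.normal(size=(ambient, dim)))[0]
    return (basis @ rng.normal(size=(dim, n_points))).T

X = np.vstack([sample_subspace(40), sample_subspace(40)])
X = X + 0.01 * rng.normal(size=X.shape)               # mild additive noise
n = X.shape[0]

# Express each point as an l1-sparse combination of the other points.
C = np.zeros((n, n))
for i in range(n):
    others = np.delete(np.arange(n), i)
    lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000)
    C[i, others] = lasso.fit(X[others].T, X[i]).coef_

affinity = np.abs(C) + np.abs(C).T                    # symmetrize the learned weights
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(labels[:40], labels[40:])   # points from the same subspace should mostly share a label
```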
0
0
0
1
0
0
Monte Carlo Tensor Network Renormalization
Techniques for approximately contracting tensor networks are limited in how efficiently they can make use of parallel computing resources. In this work we demonstrate and characterize a Monte Carlo approach to the tensor network renormalization group method which can be used straightforwardly on modern computing architectures. We demonstrate the efficiency of the technique and show that Monte Carlo tensor network renormalization provides an attractive path to improving the accuracy of a wide class of challenging computations while also providing useful estimates of uncertainty and a statistical guarantee of unbiased results.
0
1
0
0
0
0
Experimental study of electron and phonon dynamics in nanoscale materials by ultrafast laser time-domain spectroscopy
With the rapid advances in the development of nanotechnology, the size of the elementary unit of micro- and nanoelectronic devices, i.e. the transistor, is now well into the nanoscale. In the pursuit of cheaper and faster nanoscale electronic devices, the size of transistors keeps scaling down. With the miniaturization of nanoelectronic devices, the electrical resistivity increases dramatically, resulting in a rapid growth of heat generation. The heat generation and limited thermal dissipation in nanoscale materials have become a critical problem in the development of the next generation of nanoelectronic devices. Copper (Cu) is a widely used conducting material in nanoelectronic devices, and electron-phonon scattering is the dominant contributor to the resistivity in Cu nanowires at room temperature. Meanwhile, phonons are the main carriers of heat in insulators and in intrinsic and lightly doped semiconductors. Thermal transport is an ensemble of phonon transport processes, which strongly depend on the phonon frequency. In addition, phonon transport in nanoscale materials can behave fundamentally differently than in bulk materials because of spatial confinement. However, the size effect on electron-phonon scattering and frequency-dependent phonon transport in nanoscale materials remain largely unexplored, due to the lack of suitable experimental techniques. This thesis mainly focuses on the study of carrier dynamics and acoustic phonon transport in nanoscale materials.
0
1
0
0
0
0
SSGP topologies on abelian groups of positive finite divisible rank
Let G be an abelian group. For a subset A of G, Cyc(A) denotes the set of all elements x of G such that the cyclic subgroup generated by x is contained in A, and G is said to have the small subgroup generating property (abbreviated to SSGP) if the smallest subgroup of G generated by Cyc(U) is dense in G for every neighbourhood U of zero of G. SSGP groups form a proper subclass of the class of minimally almost periodic groups. Comfort and Gould asked for a characterization of abelian groups G which admit an SSGP group topology, and they solved this problem for bounded torsion groups (which have divisible rank zero). Dikranjan and the first author proved that an abelian group of infinite divisible rank admits an SSGP group topology. In the remaining case of positive finite divisible rank, the same authors found a necessary condition on G in order to admit an SSGP group topology and asked if this condition is also sufficient. We answer this question positively, thereby completing the characterization of abelian groups which admit an SSGP group topology.
0
0
1
0
0
0
Intelligent Pothole Detection and Road Condition Assessment
Poor road conditions are a public nuisance, causing passenger discomfort, damage to vehicles, and accidents. In the U.S., road-related conditions are a factor in 22,000 of the 42,000 traffic fatalities each year. Although we often complain about bad roads, we have no way to detect or report them at scale. To address this issue, we developed a system to detect potholes and assess road conditions in real-time. Our solution is a mobile application that captures data on a car's movement from gyroscope and accelerometer sensors in the phone. To assess roads using this sensor data, we trained SVM models to classify road conditions with 93% accuracy and potholes with 92% accuracy, beating the base rate for both problems. As the user drives, the models use the sensor data to classify whether the road is good or bad, and whether it contains potholes. Then, the classification results are used to create data-rich maps that illustrate road conditions across the city. Our system will empower civic officials to identify and repair damaged roads which inconvenience passengers and cause accidents. This paper details our data science process for collecting training data on real roads, transforming noisy sensor data into useful signals, training and evaluating machine learning models, and deploying those models to production through a real-time classification app. It also highlights how cities can use our system to crowdsource data and deliver road repair resources to areas in need.
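Purely as an illustration of the classification stage described in this abstract (windowed motion-sensor data, hand-crafted features, an SVM), the snippet below trains such a classifier on synthetic windows; the signals, features and model settings are assumptions, not the authors' data or pipeline.

```python
# Sketch of the classification stage only (not the authors' pipeline or data):
# windowed accelerometer signals -> simple statistical features -> SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_window(pothole):
    """Synthetic window of vertical acceleration; a pothole adds a sharp spike."""
    signal = rng.normal(0.0, 0.3, size=100)
    if pothole:
        spike_at = rng.integers(10, 90)
        signal[spike_at:spike_at + 3] += rng.uniform(2.0, 4.0)
    return signal

def features(window):
    # Hand-crafted stand-ins for the kind of features such systems typically use.
    return [window.std(), window.max() - window.min(),
            np.abs(np.diff(window)).max(), np.mean(np.abs(window))]

labels = rng.integers(0, 2, size=600)
X = np.array([features(make_window(y)) for y in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```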
1
0
0
0
0
0
Concrete Autoencoders for Differentiable Feature Selection and Reconstruction
We introduce the concrete autoencoder, an end-to-end differentiable method for global feature selection, which efficiently identifies a subset of the most informative features and simultaneously learns a neural network to reconstruct the input data from the selected features. Our method is unsupervised, and is based on using a concrete selector layer as the encoder and using a standard neural network as the decoder. During the training phase, the temperature of the concrete selector layer is gradually decreased, which encourages a user-specified number of discrete features to be learned. During test time, the selected features can be used with the decoder network to reconstruct the remaining input features. We evaluate concrete autoencoders on a variety of datasets, where they significantly outperform state-of-the-art methods for feature selection and data reconstruction. In particular, on a large-scale gene expression dataset, the concrete autoencoder selects a small subset of genes whose expression levels can be used to impute the expression levels of the remaining genes. In doing so, it improves on the current widely-used expert-curated L1000 landmark genes, potentially reducing measurement costs by 20%. The concrete autoencoder can be implemented by adding just a few lines of code to a standard autoencoder.
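The concrete selector layer mentioned in this abstract is commonly built from a Gumbel-softmax (concrete) relaxation over the input features, with the temperature annealed so that each selector neuron collapses onto a single feature. The numpy toy below illustrates that mechanism only; the logits, annealing schedule and sizes are made-up values for the illustration, not the paper's implementation.

```python
# Toy numpy demonstration of a concrete (Gumbel-softmax) selector layer: each selector
# neuron outputs a convex combination of the input features, and as the temperature is
# annealed towards zero it concentrates on a single feature. Illustrative values only.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_select = 20, 3

# One vector of selection logits per output neuron of the selector layer.
logits = rng.normal(size=(n_select, n_features))

def concrete_selector(x, logits, temperature):
    gumbel = -np.log(-np.log(rng.random(logits.shape)))       # Gumbel(0, 1) noise
    weights = np.exp((logits + gumbel) / temperature)
    weights /= weights.sum(axis=1, keepdims=True)             # softmax over features
    return weights @ x, weights

x = rng.normal(size=n_features)
for temperature in (10.0, 1.0, 0.1):
    _, w = concrete_selector(x, logits, temperature)
    print(f"T={temperature:>4}: max selection weight per neuron = {w.max(axis=1).round(2)}")
# At high T the weights are spread out; at low T each neuron effectively picks one
# feature, which is what yields a discrete feature subset at test time.
```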
1
0
0
1
0
0
Emergence and complexity in theoretical models of self-organized criticality
In this thesis we present a few theoretical studies of models of self-organized criticality. Following a brief introduction to self-organized criticality, we discuss three main problems. The first problem is about growing patterns formed in the abelian sandpile model (ASM). The patterns exhibit proportionate growth where different parts of the pattern grow at the same rate, keeping the overall shape unchanged. This non-trivial property, often found in biological growth, has received increasing attention in recent years. In this thesis, we present a mathematical characterization of a large class of such patterns in terms of discrete holomorphic functions. In the second problem, we discuss a well known model of self-organized criticality introduced by Zhang in 1989. We present an exact analysis of the model and quantitatively explain an intriguing property known as the emergence of quasi-units. In the third problem, we introduce an operator algebra to determine the steady state of a class of stochastic sandpile models.
0
1
1
0
0
0
An Extension of Averaged-Operator-Based Algorithms
Many of the algorithms used to solve minimization problems with sparsity-inducing regularizers are generic in the sense that they do not take into account the sparsity of the solution in any particular way. However, algorithms known as semismooth Newton methods are able to take advantage of this sparsity to accelerate their convergence. We show how to extend these algorithms in different directions, and study the convergence of the resulting algorithms by showing that they are a particular case of an extension of the well-known Krasnosel'skiĭ-Mann scheme.
0
0
0
1
0
0
Along the sun-drenched roadside: On the interplay between urban street orientation entropy and the buildings' solar potential
We explore the relation between urban road network characteristics, particularly circuity, street orientation entropy and the city's topography on the one hand, and the buildings' orientation entropy on the other, in order to quantify their effect on the city's solar potential. These statistical measures of the road network reveal the interplay between the built environment's design and its sustainability.
0
1
0
0
0
0
$HD(M\setminus L)>0.353$
The complement $M\setminus L$ of the Lagrange spectrum $L$ in the Markov spectrum $M$ was studied by many authors (including Freiman, Berstein, Cusick and Flahive). After their works, we had at our disposal a countable collection of points in $M\setminus L$. In this article, we describe the structure of $M\setminus L$ near a non-isolated point $\alpha_{\infty}$ found by Freiman in 1973, and we use this description to exhibit a concrete Cantor set $X$ whose Hausdorff dimension coincides with the Hausdorff dimension of $M\setminus L$ near $\alpha_{\infty}$. A consequence of our results is the lower bound $HD(M\setminus L)>0.353$ on the Hausdorff dimension $HD(M\setminus L)$ of $M\setminus L$. Another by-product of our analysis is the explicit construction of new elements of $M\setminus L$, including its largest known member $c\in M\setminus L$ (surpassing the former largest known number $\alpha_4\in M\setminus L$ obtained by Cusick and Flahive in 1989).
0
0
1
0
0
0
Comparing the dark matter models, modified Newtonian dynamics and modified gravity in accounting for the galaxy rotation curves
We compare six models (including the baryonic model, two dark matter models, two modified Newtonian dynamics models and one modified gravity model) in accounting for the galaxy rotation curves. For the dark matter models, we assume an NFW profile and a core-modified profile for the dark halo, respectively. For the modified Newtonian dynamics models, we discuss Milgrom's MOND theory with two different interpolation functions, i.e. the standard and the simple interpolation functions. As for the modified gravity, we focus on Moffat's MSTG theory. We fit these models to the observed rotation curves of 9 high-surface-brightness and 9 low-surface-brightness galaxies. We apply the Bayesian Information Criterion and the Akaike Information Criterion to test the goodness-of-fit of each model. It is found that none of the six models can fit all the galaxy rotation curves well. Two galaxies can be best fitted by the baryonic model without involving nonluminous dark matter. MOND can fit the largest number of galaxies, and only one galaxy can be best fitted by the MSTG model. The core-modified model can fit about one half of the LSB galaxies well but no HSB galaxy, while the NFW model can fit only a small fraction of HSB galaxies but no LSB galaxy. This may imply that the oversimplified NFW and core-modified profiles cannot mimic the postulated dark matter halo well.
0
1
0
0
0
0
Good Clusterings Have Large Volume
The clustering of a data set is one of the core tasks in data analytics. Many clustering algorithms exhibit a strong contrast between a favorable performance in practice and bad theoretical worst-cases. Prime examples are least-squares assignments and the popular $k$-means algorithm. We are interested in this contrast and study it through polyhedral theory. Several popular clustering algorithms can be connected to finding a vertex of the so-called bounded-shape partition polytopes. The vertices correspond to clusterings with extraordinary separation properties, in particular allowing the construction of a separating power diagram, defined by its so-called sites, such that each cluster has its own cell. First, we quantitatively measure the space of all sites that allow construction of a separating power diagram for a clustering by the volume of the normal cone at the corresponding vertex. This gives rise to a new quality criterion for clusterings, and explains why good clusterings are also the most likely to be found by some classical algorithms. Second, we characterize the edges of the bounded-shape partition polytopes. Through this, we obtain an explicit description of the normal cones. This allows us to compute measures with respect to the new quality criterion, and even compute "most stable" sites, and thereby "most stable" power diagrams, for the separation of clusters. The hardness of these computations depends on the number of edges incident to a vertex, which may be exponential. However, the computational effort is rewarded with a wealth of information that can be gained from the results, which we highlight through some proof-of-concept computations.
0
0
1
0
0
0
Regularity of Lie Groups
We solve the regularity problem for Milnor's infinite dimensional Lie groups in the $C^0$-topological context, and provide necessary and sufficient regularity conditions for the standard setting ($C^k$-topology). We prove that the evolution map is $C^0$-continuous on its domain $\textit{iff}\hspace{1pt}$ the Lie group $G$ is locally $\mu$-convex. We furthermore show that if the evolution map is defined on all smooth curves, then $G$ is Mackey complete - This is a completeness condition formulated in terms of the Lie group operations that generalizes Mackey completeness as defined for locally convex vector spaces. Then, under the presumption that $G$ is locally $\mu$-convex, we show that each $C^k$-curve, for $k\in \mathbb{N}_{\geq 1}\sqcup\{\mathrm{lip},\infty\}$, is integrable (contained in the domain of the evolution map) $\textit{iff}\hspace{1pt}$ $G$ is Mackey complete and $\mathrm{k}$-confined. The latter condition states that each $C^k$-curve in the Lie algebra $\mathfrak{g}$ of $G$ can be uniformly approximated by a special type of sequence consisting of piecewise integrable curves - A similar result is proven for the case $k\equiv 0$; and we provide several mild conditions that ensure that $G$ is $\mathrm{k}$-confined for each $k\in \mathbb{N}\sqcup\{\mathrm{lip},\infty\}$. We finally discuss the differentiation of parameter-dependent integrals in the standard topological context. In particular, we show that if the evolution map is well defined and continuous on $C^k([0,1],\mathfrak{g})$ for $k\in \mathbb{N}\sqcup\{\infty\}$, then it is smooth thereon $\textit{iff}\hspace{1pt}$ $\mathfrak{g}$ is $\hspace{0.2pt}$ Mackey complete for $k\in \mathbb{N}_{\geq 1}\sqcup\{\infty\}$ $\hspace{1pt}/\hspace{1pt}$ integral complete for $k\equiv 0$. This result is obtained by calculating the directional derivatives explicitly - recovering the standard formulas that hold in the Banach case.
0
0
1
0
0
0
A study on text-score disagreement in online reviews
In this paper, we focus on online reviews and employ artificial intelligence tools, taken from the cognitive computing field, to help understanding the relationships between the textual part of the review and the assigned numerical score. We move from the intuitions that 1) a set of textual reviews expressing different sentiments may feature the same score (and vice-versa); and 2) detecting and analyzing the mismatches between the review content and the actual score may benefit both service providers and consumers, by highlighting specific factors of satisfaction (and dissatisfaction) in texts. To prove the intuitions, we adopt sentiment analysis techniques and we concentrate on hotel reviews, to find polarity mismatches therein. In particular, we first train a text classifier with a set of annotated hotel reviews, taken from the Booking website. Then, we analyze a large dataset, with around 160k hotel reviews collected from Tripadvisor, with the aim of detecting a polarity mismatch, indicating if the textual content of the review is in line, or not, with the associated score. Using well established artificial intelligence techniques and analyzing in depth the reviews featuring a mismatch between the text polarity and the score, we find that -on a scale of five stars- those reviews ranked with middle scores include a mixture of positive and negative aspects. The approach proposed here, besides acting as a polarity detector, provides an effective selection of reviews -on an initial very large dataset- that may allow both consumers and providers to focus directly on the review subset featuring a text/score disagreement, which conveniently conveys to the user a summary of positive and negative features of the review target.
1
0
0
0
0
0
Convergence rates for nonequilibrium Langevin dynamics
We study the exponential convergence to the stationary state for nonequilibrium Langevin dynamics, by a perturbative approach based on hypocoercive techniques developed for equilibrium Langevin dynamics. The Hamiltonian and overdamped limits (corresponding respectively to frictions going to zero or infinity) are carefully investigated. In particular, the maximal magnitude of admissible perturbations is quantified as a function of the friction. Numerical results based on a Galerkin discretization of the generator of the dynamics confirm the theoretical lower bounds on the spectral gap.
0
1
1
0
0
0
Natural Extension of Hartree-Fock through extremal $1$-fermion information: Overview and application to the lithium atom
Fermionic natural occupation numbers do not only obey Pauli's exclusion principle but are even stronger restricted by so-called generalized Pauli constraints. Whenever given natural occupation numbers lie on the boundary of the allowed region the corresponding $N$-fermion quantum state has a significantly simpler structure. We recall the recently proposed natural extension of the Hartree-Fock ansatz based on this structural simplification. This variational ansatz is tested for the lithium atom. Intriguingly, the underlying mathematical structure yields universal geometrical bounds on the correlation energy reconstructed by this ansatz.
0
1
0
0
0
0
Homotopy Parametric Simplex Method for Sparse Learning
High dimensional sparse learning has imposed a great computational challenge to large scale data analysis. In this paper, we are interested in a broad class of sparse learning approaches formulated as linear programs parametrized by a regularization factor, and solve them by the parametric simplex method (PSM). Our parametric simplex method offers significant advantages over other competing methods: (1) PSM naturally obtains the complete solution path for all values of the regularization parameter; (2) PSM provides a high precision dual certificate stopping criterion; (3) PSM yields sparse solutions through very few iterations, and the solution sparsity significantly reduces the computational cost per iteration. Particularly, we demonstrate the superiority of PSM over various sparse learning approaches, including Dantzig selector for sparse linear regression, LAD-Lasso for sparse robust linear regression, CLIME for sparse precision matrix estimation, sparse differential network estimation, and sparse Linear Programming Discriminant (LPD) analysis. We then provide sufficient conditions under which PSM always outputs sparse solutions such that its computational performance can be significantly boosted. Thorough numerical experiments are provided to demonstrate the outstanding performance of the PSM method.
1
0
1
1
0
0
Fractional Operators with Inhomogeneous Boundary Conditions: Analysis, Control, and Discretization
In this paper we introduce new characterizations of spectral fractional Laplacian to incorporate nonhomogeneous Dirichlet and Neumann boundary conditions. The classical cases with homogeneous boundary conditions arise as a special case. We apply our definition to fractional elliptic equations of order $s \in (0,1)$ with nonzero Dirichlet and Neumann boundary condition. Here the domain $\Omega$ is assumed to be a bounded, quasi-convex Lipschitz domain. To impose the nonzero boundary conditions, we construct fractional harmonic extensions of the boundary data. It is shown that solving for the fractional harmonic extension is equivalent to solving for the standard harmonic extension in the very-weak form. The latter result is of independent interest as well. The remaining fractional elliptic problem (with homogeneous boundary data) can be realized using the existing techniques. We introduce finite element discretizations and derive discretization error estimates in natural norms, which are confirmed by numerical experiments. We also apply our characterizations to Dirichlet and Neumann boundary optimal control problems with fractional elliptic equation as constraints.
0
0
1
0
0
0