column                  type     range
title                   string   length 7 - 239
abstract                string   length 7 - 2.76k
cs                      int64    0 - 1
phy                     int64    0 - 1
math                    int64    0 - 1
stat                    int64    0 - 1
quantitative biology    int64    0 - 1
quantitative finance    int64    0 - 1
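Read as a table, each record below pairs a title and an abstract with six binary subject flags. The following minimal sketch shows one way such records could be represented and tallied; the use of pandas, the truncated abstracts, and the variable names are illustrative assumptions, not something the dataset itself prescribes.

```python
# Minimal sketch (assumption: this dump is a multi-label classification table
# with one binary column per subject area). pandas is a tooling choice made
# here for illustration only.
import pandas as pd

columns = ["title", "abstract", "cs", "phy", "math", "stat",
           "quantitative biology", "quantitative finance"]

# Two records copied from this file (abstracts truncated for brevity).
records = [
    ("Deep Neural Networks",
     "Deep Neural Networks (DNNs) are universal function approximators ...",
     1, 0, 0, 1, 0, 0),
    ("The Rosetta mission orbiter science overview: the comet phase",
     "The International Rosetta Mission was launched in 2004 ...",
     0, 1, 0, 0, 0, 0),
]

df = pd.DataFrame(records, columns=columns)

# Per-subject counts of positive flags; a single record may carry several 1s.
print(df[columns[2:]].sum())
```

Running the sketch prints the per-subject counts for the two example records, illustrating that one record can belong to several subjects at once (multi-label).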
Deep Neural Networks
Deep Neural Networks (DNNs) are universal function approximators providing state-of-the-art solutions on a wide range of applications. Common perceptual tasks such as speech recognition, image classification, and object tracking are now commonly tackled via DNNs. Some fundamental problems remain: (1) the lack of a mathematical framework providing an explicit and interpretable input-output formula for any topology; (2) quantification of DNN stability with respect to adversarial examples (i.e., modified inputs that fool DNN predictions while remaining undetectable to humans); (3) the absence of generalization guarantees and controllable behaviors for ambiguous patterns; (4) ways to leverage unlabeled data so that DNNs can be applied to domains where expert labeling is scarce, as in the medical field. Answering these points would provide theoretical perspectives for further developments based on a common ground. Furthermore, DNNs are now deployed in numerous societal applications, pushing the need to fill this theoretical gap to ensure control, reliability, and interpretability.
cs: 1, phy: 0, math: 0, stat: 1, quantitative biology: 0, quantitative finance: 0
The Rosetta mission orbiter science overview: the comet phase
The International Rosetta Mission was launched in 2004 and consists of the orbiter spacecraft Rosetta and the lander Philae. The aim of the mission is to map the comet 67P/Churyumov-Gerasimenko by remote sensing, and to examine its environment in situ and its evolution in the inner solar system. Rosetta was the first spacecraft to rendezvous with and orbit a comet, accompanying it as it passes through the inner solar system, and to deploy a lander, Philae, to perform in situ science on the comet surface. The primary goals of the mission were to: characterize the comet's nucleus; examine the chemical, mineralogical and isotopic composition of volatiles and refractories; examine the physical properties and interrelation of volatiles and refractories in a cometary nucleus; study the development of cometary activity and the processes in the surface layer of the nucleus and in the coma; detail the origin of comets, the relationship between cometary and interstellar material, and the implications for the origin of the solar system; and characterize the asteroids 2867 Steins and 21 Lutetia. This paper presents a summary of mission operations and science, focusing on the Rosetta orbiter component of the mission during its comet phase, from early 2014 up to September 2016.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Back to the Future: an Even More Nearly Optimal Cardinality Estimation Algorithm
We describe a new cardinality estimation algorithm that is extremely space-efficient. It applies one of three novel estimators to the compressed state of the Flajolet-Martin-85 coupon collection process. In an apples-to-apples empirical comparison against compressed HyperLogLog sketches, the new algorithm simultaneously wins on all three dimensions of the time/space/accuracy tradeoff. Our prototype uses the zstd compression library, and produces sketches that are smaller than the entropy of HLL, so no possible implementation of compressed HLL can match its space efficiency. The paper's technical contributions include analyses and simulations of the three new estimators, accurate values for the entropies of FM85 and HLL, and a non-trivial method for estimating a double asymptotic limit via simulation.
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Tracking Systems as Thinging Machine: A Case Study of a Service Company
Object tracking systems play an important role in tracking moving objects and in addressing concerns such as safety, security, and other location-related applications. Problems arise from the difficulty of creating a well-defined and understandable description of tracking systems. Currently, describing such processes yields fragmentary representations that often make documentation difficult. Additionally, once tasks are learned by the assigned personnel, repetition leads them to continue on autopilot in a way that often degrades their effectiveness. This paper proposes modeling tracking systems in terms of a new diagrammatic methodology that produces engineering-like schemata. The resulting diagrams can be used in documentation, explanation, communication, education, and control.
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Spectral Evidence for an Inner Carbon-Rich Circumstellar Dust Belt in the Young HD36546 A-Star System
Using the NASA/IRTF SpeX & BASS spectrometers we have obtained novel 0.7 - 13 um observations of the newly imaged HD36546 debris disk system. The SpeX spectrum is most consistent with the photospheric emission expected from an Lstar ~ 20 Lsun, solar abundance A1.5V star with little/no extinction and excess emission from circumstellar dust detectable beyond 4.5 um. Non-detections of CO emission lines and accretion signatures point to the gas-poor circumstellar environment of a very old transition disk. Combining the SpeX and BASS spectra with archival WISE/AKARI/IRAS/Herschel photometry, we find an outer cold dust belt at ~135K and 20 - 40 AU from the primary, likely coincident with the disk imaged by Subaru (Currie et al. 2017), and a new second inner belt with temperature ~570K and an unusual, broad SED maximum in the 6 - 9 um region, tracing dust at 1.1 - 2.2 AU. An SED maximum at 6 - 9 um has been reported in just two other A-star systems, HD131488 and HD121191, both of ~10 Myr age (Melis et al. 2013). From Spitzer, we have also identified the ~12 Myr old A7V HD148567 system as having similar 5 - 35 um excess spectral features (Mittal et al. 2015). The Spitzer data allow us to rule out water emission and rule in carbonaceous materials - organics, carbonates, SiC - as the source of the 6 - 9 um excess. Assuming a common origin for the four young A-star systems' disks, we suggest they are experiencing an early era of carbon-rich planetesimal processing.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
The flyby anomaly: A multivariate analysis approach
The flyby anomaly is the unexpected variation of the asymptotic post-encounter velocity of a spacecraft with respect to the pre-encounter velocity as it performs a slingshot manoeuvre. This effect has been detected in at least six flybys of the Earth but has not appeared in other recent flybys. In order to find a pattern in these apparently contradictory data, several phenomenological formulas have been proposed, but all have failed to predict a new result in agreement with the observations. In this paper we use a multivariate dimensional analysis approach to propose a fitting of the data in terms of the local parameters at perigee, as would occur if this anomaly arises from an unknown fifth force with latitude dependence. Under this assumption, we estimate the range of this force to be around 300 km.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Opto-Valleytronic Spin Injection in Monolayer MoS2/Few-Layer Graphene Hybrid Spin Valves
Two dimensional (2D) materials provide a unique platform for spintronics and valleytronics due to the ability to combine vastly different functionalities into one vertically-stacked heterostructure, where the strengths of each of the constituent materials can compensate for the weaknesses of the others. Graphene has been demonstrated to be an exceptional material for spin transport at room temperature, however it lacks a coupling of the spin and optical degrees of freedom. In contrast, spin/valley polarization can be efficiently generated in monolayer transition metal dichalcogenides (TMD) such as MoS2 via absorption of circularly-polarized photons, but lateral spin or valley transport has not been realized at room temperature. In this letter, we fabricate monolayer MoS2/few-layer graphene hybrid spin valves and demonstrate, for the first time, the opto-valleytronic spin injection across a TMD/graphene interface. We observe that the magnitude and direction of spin polarization is controlled by both helicity and photon energy. In addition, Hanle spin precession measurements confirm optical spin injection, spin transport, and electrical detection up to room temperature. Finally, analysis by a one-dimensional drift-diffusion model quantifies the optically injected spin current and the spin transport parameters. Our results demonstrate a 2D spintronic/valleytronic system that achieves optical spin injection and lateral spin transport at room temperature in a single device, which paves the way for multifunctional 2D spintronic devices for memory and logic applications.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
On singular limit equations for incompressible fluids in moving thin domains
We consider the incompressible Euler and Navier-Stokes equations in a three-dimensional moving thin domain. Under the assumption that the moving thin domain degenerates into a two-dimensional moving closed surface as the width of the thin domain goes to zero, we give a heuristic derivation of singular limit equations on the degenerate moving surface of the Euler and Navier-Stokes equations in the moving thin domain and investigate relations between their energy structures. We also compare the limit equations with the Euler and Navier-Stokes equations on a stationary manifold, which are described in terms of the Levi-Civita connection.
cs: 0, phy: 1, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
Flying, Hopping Pit-Bots for Cave and Lava Tube Exploration on the Moon and Mars
Wheeled ground robots are limited from exploring extreme environments such as caves, lava tubes and skylights. Small robots that utilize unconventional mobility through hopping, flying and rolling can overcome many roughness limitations and thus extend exploration to sites of interest on the Moon and Mars. In this paper we introduce a network of 3 kg, 0.30 m diameter ball robots (pit-bots) that can fly, hop and roll using an onboard miniature propulsion system. These pit-bots can be deployed from a lander or large rover. Each robot is equipped with a smartphone-sized computer, stereo camera and laser rangefinder to perform navigation and mapping. The ball robot can carry a payload of 1 kg or perform sample return. Our studies show a range of 5 km and 0.7 hours flight time on the Moon.
cs: 1, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Design of a Robotic System for Diagnosis and Rehabilitation of Lower Limbs
Lower limb robotic rehabilitation is currently widely developed; however, the devices used so far do not seem to follow uniform design criteria, since each new mechanism is often presented as if it did not take into account the criteria used in previous designs. On the other hand, the diagnosis of the lower limb using robotic devices has been little studied. This chapter presents a guide for the design of robotic devices for the diagnosis of the lower limbs, taking into account the mobility of the human leg, the techniques used by physiotherapists in the execution of rehabilitation exercises and diagnosis tests, and the recommendations made by various authors, among other aspects. The proposed guide is illustrated through a case study based on a parallel robot RPU+3UPS able to perform the movements applied during the processes of rehabilitation and diagnosis. The proposal presents advantages over some existing devices, such as the load capacity it can support, and it also allows the motion to be restricted in the directions required by the rehabilitation and diagnosis movements.
cs: 1, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Multi-parametric sensitivity analysis of the band structure for tetrachiral acoustic metamaterials
Tetrachiral materials are characterized by a cellular microstructure made by a periodic pattern of stiff rings and flexible ligaments. Their mechanical behaviour can be described by a planar lattice of rigid massive bodies and elastic massless beams. The periodic cell dynamics is governed by a monoatomic structural model, conveniently reduced to the only active degrees-of-freedom. The paper presents an explicit parametric description of the band structure governing the free propagation of elastic waves. By virtue of multiparametric perturbation techniques, sensitivity analyses are performed to achieve analytical asymptotic approximation of the dispersion functions. The parametric conditions for the existence of full band gaps in the low-frequency range are established. Furthermore, the band gap amplitude is analytically assessed in the admissible parameter range. In tetrachiral acoustic metamaterials, stop bands can be opened by the introduction of intra-ring resonators. Perturbation methods can efficiently deal with the consequent enlargement of the mechanical parameter space. Indeed high-accuracy parametric approximations are achieved for the band structure, enriched by the new optical branches related to the resonator frequencies. In particular, target stop bands in the metamaterial spectrum are analytically designed through the asymptotic solution of inverse spectral problems.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
What do the US West Coast Public Libraries Post on Twitter?
Twitter has provided a great opportunity for public libraries to disseminate information for a variety of purposes. Twitter data have been applied in different domains such as health, politics, and history. There are thousands of public libraries in the US, but no study has yet investigated the content of their social media posts, such as tweets, to find their interests. Moreover, traditional content analysis is not an efficient way to explore thousands of tweets. Therefore, there is a need for automatic methods to overcome the limitations of manual approaches. This paper proposes a computational approach to collecting and analyzing tweets using the Twitter Application Programming Interface (API) and investigates more than 138,000 tweets from 48 US west coast libraries using topic modeling. We found 20 topics and assigned them to five categories: public relations, books, events, training, and social good. Our results show that the US west coast libraries are more interested in using Twitter for public relations and book-related events. This research has both practical and theoretical applications for libraries, as well as other organizations, seeking to explore the social media activities of their customers and of themselves.
cs: 0, phy: 0, math: 0, stat: 1, quantitative biology: 0, quantitative finance: 0
Dynamic Boltzmann Machines for Second Order Moments and Generalized Gaussian Distributions
The Dynamic Boltzmann Machine (DyBM) has been shown to be highly efficient for predicting time-series data. The Gaussian DyBM is a DyBM that assumes the predicted data are generated by a Gaussian distribution whose first-order moment (mean) changes dynamically over time but whose second-order moment (variance) is fixed. However, in many financial applications, this assumption is quite limiting in two aspects. First, even when the data follow a Gaussian distribution, their variance may change over time. Such variance is also related to important temporal economic indicators such as market volatility. Second, financial time-series data often require learning from datasets generated by a generalized Gaussian distribution with an additional shape parameter that is important for approximating heavy-tailed distributions. Addressing these aspects, we show how to extend the DyBM, which results in significant performance improvements in predicting financial time-series data.
cs: 1, phy: 0, math: 0, stat: 1, quantitative biology: 0, quantitative finance: 0
Edge switching transformations of quantum graphs
Discussed here are the effects of basic graph transformations on the spectra of associated quantum graphs. In particular, it is shown that under an edge switch the spectrum of the transformed Schrödinger operator is interlaced with that of the original one. By implication, under an edge swap the spectra before and after the transformation, denoted by $\{ E_n\}_{n=1}^{\infty}$ and $\{\widetilde E_n\}_{n=1}^{\infty}$ correspondingly, are level-2 interlaced, so that $E_{n-2}\le \widetilde E_n\le E_{n+2}$. The proofs are guided by considerations of the quantum graphs' discrete analogs.
cs: 0, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
Resilient Monotone Submodular Function Maximization
In this paper, we focus on applications in machine learning, optimization, and control that call for the resilient selection of a few elements, e.g. features, sensors, or leaders, against a number of adversarial denial-of-service attacks or failures. In general, such resilient optimization problems are hard, and cannot be solved exactly in polynomial time, even though they often involve objective functions that are monotone and submodular. Notwithstanding, in this paper we provide the first scalable, curvature-dependent algorithm for their approximate solution, that is valid for any number of attacks or failures, and which, for functions with low curvature, guarantees superior approximation performance. Notably, the curvature has been known to tighten approximations for several non-resilient maximization problems, yet its effect on resilient maximization had hitherto been unknown. We complement our theoretical analyses with supporting empirical evaluations.
cs: 1, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
Antireflection Coated Semiconductor Laser Amplifier
This paper presents a laser amplifier based on an antireflection-coated laser diode. The amplifier operates without active temperature stabilisation at any wavelength within its gain profile and without restrictions on the injection current. Using active feedback from an external detector to the laser current, the power is stabilized to better than $10^{-4}$, even after additional optical elements such as an optical fiber and/or a polarization cleaner. This power can also be modulated and tuned arbitrarily. In the absence of the seeding light, the laser amplifier does not lase, resulting in an extremely simple setup, which requires neither an external Fabry-Perot cavity for monitoring the mode purity nor temperature stabilization.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
On the structure of radial solutions for some quasilinear elliptic equations
In this paper we study entire radial solutions for the quasilinear $p$-Laplace equation $\Delta_p u + k(x) f(u) = 0$ where $k$ is a radial positive weight and the nonlinearity behaves e.g. as $f(u)=u|u|^{q-2}-u|u|^{Q-2}$ with $q<Q$. In particular we focus our attention on solutions (positive and sign changing) which are infinitesimal at infinity, thus providing an extension of a previous result by Tang (2001).
cs: 0, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
Bi-class classification of humpback whale sound units against complex background noise with Deep Convolution Neural Network
Automatically detecting the sound units of humpback whales in complex, time-varying background noise is a current challenge for scientists. In this paper, we explore the applicability of the Convolutional Neural Network (CNN) method for this task. In the evaluation stage, we present six bi-class classification experiments on whale sound detection against different background noise types (e.g., rain, wind). In comparison to classical FFT-based representations such as spectrograms, we show that the use of image-based pretrained CNN features brings higher performance in classifying whale sounds against background noise.
cs: 1, phy: 0, math: 0, stat: 1, quantitative biology: 0, quantitative finance: 0
A model for a Lindenmayer reconstruction algorithm
Given an input string s and a specific Lindenmayer system (the so-called Fibonacci grammar), we define an automaton which is capable of (i) determining whether s belongs to the set of strings that the Fibonacci grammar can generate (in other words, if s corresponds to a generation of the grammar) and, if so, (ii) reconstructing the previous generation.
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Spacetime symmetries and conformal data in the continuous multi-scale entanglement renormalization ansatz
The generalization of the multi-scale entanglement renormalization ansatz (MERA) to continuous systems, or cMERA [Haegeman et al., Phys. Rev. Lett, 110, 100402 (2013)], is expected to become a powerful variational ansatz for the ground state of strongly interacting quantum field theories. In this paper we investigate, in the simpler context of Gaussian cMERA for free theories, the extent to which the cMERA state $|\Psi^\Lambda\rangle$ with finite UV cut-off $\Lambda$ can capture the spacetime symmetries of the ground state $|\Psi\rangle$. For a free boson conformal field theory (CFT) in 1+1 dimensions as a concrete example, we build a quasi-local unitary transformation $V$ that maps $|\Psi\rangle$ into $|\Psi^\Lambda\rangle$ and show two main results. (i) Any spacetime symmetry of the ground state $|\Psi\rangle$ is also mapped by $V$ into a spacetime symmetry of the cMERA $|\Psi^\Lambda\rangle$. However, while in the CFT the stress-energy tensor $T_{\mu\nu}(x)$ (in terms of which all the spacetime symmetry generators are expressed) is local, the corresponding cMERA stress-energy tensor $T_{\mu\nu}^{\Lambda}(x) = V T_{\mu\nu}(x) V^{\dagger}$ is quasi-local. (ii) From the cMERA, we can extract quasi-local scaling operators $O^{\Lambda}_{\alpha}(x)$ characterized by the exact same scaling dimensions $\Delta_{\alpha}$, conformal spins $s_{\alpha}$, operator product expansion coefficients $C_{\alpha\beta\gamma}$, and central charge $c$ as the original CFT. Finally, we argue that these results should also apply to interacting theories.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Augmenting Input Method Language Model with user Location Type Information
Geo-tags from micro-blog posts have been shown to be useful in many data mining applications. This work seeks to find out if the location type derived from these geo-tags can benefit input methods, which attempts to predict the next word a user will input during typing. If a correlation between different location types and a change in word distribution can be found, the location type information can be used to make the input method more accurate. This work queried micro-blog posts from Twitter API and location type of these posts from Google Place API, forming a dataset of around 500k samples. A statistical study on the word distribution found weak support for the assumption. An LSTM based prediction experiment found a 2% edge in the accuracy from language models leveraging location type information when compared to a baseline without that information.
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Cauchy-Lipschitz theory for fractional multi-order dynamics -- State-transition matrices, Duhamel formulas and duality theorems
The aim of the present paper is to contribute to the development of the study of Cauchy problems involving Riemann-Liouville and Caputo fractional derivatives. Firstly, existence-uniqueness results for solutions of non-linear Cauchy problems with vector fractional multi-order are addressed. A qualitative result about the behavior of local but non-global solutions is also provided. Finally, the major aim of this paper is to introduce notions of fractional state-transition matrices and to derive fractional versions of the classical Duhamel formula. We also prove duality theorems relating left state-transition matrices to right state-transition matrices.
cs: 0, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
Tanaka formula for strictly stable processes
For symmetric Lévy processes, if the local times exist, the Tanaka formula has already been constructed via techniques from potential theory by Salminen and Yor (2007). In this paper, we study the Tanaka formula for arbitrary strictly stable processes with index $\alpha \in (1,2)$, including the spectrally positive and negative cases, in the framework of Itô's stochastic calculus. Our approach to the existence of local times for such processes is different from that of Bertoin (1996).
cs: 0, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
The Classical Complexity of Boson Sampling
We study the classical complexity of the exact Boson Sampling problem where the objective is to produce provably correct random samples from a particular quantum mechanical distribution. The computational framework was proposed by Aaronson and Arkhipov in 2011 as an attainable demonstration of `quantum supremacy', that is a practical quantum computing experiment able to produce output at a speed beyond the reach of classical (that is non-quantum) computer hardware. Since its introduction Boson Sampling has been the subject of intense international research in the world of quantum computing. On the face of it, the problem is challenging for classical computation. Aaronson and Arkhipov show that exact Boson Sampling is not efficiently solvable by a classical computer unless $P^{\#P} = BPP^{NP}$ and the polynomial hierarchy collapses to the third level. The fastest known exact classical algorithm for the standard Boson Sampling problem takes $O({m + n -1 \choose n} n 2^n )$ time to produce samples for a system with input size $n$ and $m$ output modes, making it infeasible for anything but the smallest values of $n$ and $m$. We give an algorithm that is much faster, running in $O(n 2^n + \operatorname{poly}(m,n))$ time and $O(m)$ additional space. The algorithm is simple to implement and has low constant factor overheads. As a consequence our classical algorithm is able to solve the exact Boson Sampling problem for system sizes far beyond current photonic quantum computing experimentation, thereby significantly reducing the likelihood of achieving near-term quantum supremacy in the context of Boson Sampling.
cs: 1, phy: 0, math: 0, stat: 1, quantitative biology: 0, quantitative finance: 0
Impurity band conduction in group-IV ferromagnetic semiconductor Ge1-xFex with nanoscale fluctuations in Fe concentration
We study the carrier transport and magnetic properties of group-IV-based ferromagnetic semiconductor Ge1-xFex thin films (Fe concentration x = 2.3 - 14 %) with and without boron (B) doping, by measuring their transport characteristics: the temperature dependence of the resistivity, hole concentration, and mobility, and the relation between the anomalous Hall conductivity and the conductivity. At relatively low x (= 2.3 %), the transport in the undoped Ge1-xFex film is dominated by hole hopping between Fe-rich hopping sites in the Fe impurity band, whereas that in the B-doped Ge1-xFex film is dominated by the holes in the valence band in the degenerate Fe-poor regions. As x increases (x = 2.3 - 14 %), the transport in both the undoped and B-doped Ge1-xFex films is dominated by hole hopping between the Fe-rich hopping sites of the impurity band. The magnetic properties of the Ge1-xFex films are studied by various methods including magnetic circular dichroism, magnetization, and anomalous Hall resistance, and are not influenced by B-doping. We present band profile models of both undoped and B-doped Ge1-xFex films, which can explain the transport and magnetic properties of the Ge1-xFex films.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
On the metastable Mabillard-Wagner conjecture
The purpose of this note is to attract attention to the following conjecture (the metastable $r$-fold Whitney trick) by clarifying its status as not having a complete proof, in the sense described in the paper. Assume that $D=D_1\sqcup\ldots\sqcup D_r$ is a disjoint union of $r$ disks of dimension $s$, $f:D\to B^d$ is a proper PL map such that $f\partial D_1\cap\ldots\cap f\partial D_r=\emptyset$, $rd\ge (r+1)s+3$ and $d\ge s+3$. If the map $$f^r:\partial(D_1\times\ldots\times D_r)\to (B^d)^r-\{(x,x,\ldots,x)\in(B^d)^r\ |\ x\in B^d\}$$ extends to $D_1\times\ldots\times D_r$, then there is a PL map $\overline f:D\to B^d$ such that $$\overline f=f \quad\text{on}\quad D_r\cup\partial D\quad\text{and}\quad \overline fD_1\cap\ldots\cap \overline fD_r=\emptyset.$$
cs: 1, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
Reference Publication Year Spectroscopy (RPYS) of Eugene Garfield's publications
Which studies, theories, and ideas have influenced Eugene Garfield's scientific work? Recently, the method reference publication year spectroscopy (RPYS) has been introduced, which can be used to answer this and related questions. Since then, several studies have been published dealing with the historical roots of research fields and scientists. The program CRExplorer (this http URL) was specifically developed for RPYS. In this study, we use this program to investigate the historical roots of Eugene Garfield's oeuvre.
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Neurogenesis-Inspired Dictionary Learning: Online Model Adaption in a Changing World
In this paper, we focus on online representation learning in non-stationary environments which may require continuous adaptation of model architecture. We propose a novel online dictionary-learning (sparse-coding) framework which incorporates the addition and deletion of hidden units (dictionary elements), and is inspired by the adult neurogenesis phenomenon in the dentate gyrus of the hippocampus, known to be associated with improved cognitive function and adaptation to new environments. In the online learning setting, where new input instances arrive sequentially in batches, the neuronal-birth is implemented by adding new units with random initial weights (random dictionary elements); the number of new units is determined by the current performance (representation error) of the dictionary, higher error causing an increase in the birth rate. Neuronal-death is implemented by imposing l1/l2-regularization (group sparsity) on the dictionary within the block-coordinate descent optimization at each iteration of our online alternating minimization scheme, which iterates between the code and dictionary updates. Finally, hidden unit connectivity adaptation is facilitated by introducing sparsity in dictionary elements. Our empirical evaluation on several real-life datasets (images and language) as well as on synthetic data demonstrates that the proposed approach can considerably outperform the state-of-art fixed-size (nonadaptive) online sparse coding of Mairal et al. (2009) in the presence of nonstationary data. Moreover, we identify certain properties of the data (e.g., sparse inputs with nearly non-overlapping supports) and of the model (e.g., dictionary sparsity) associated with such improvements.
cs: 1, phy: 0, math: 0, stat: 1, quantitative biology: 0, quantitative finance: 0
Transiently enhanced interlayer tunneling in optically driven high $T_c$ superconductors
Recent pump-probe experiments reported an enhancement of superconducting transport along the $c$-axis of underdoped YBa$_2$Cu$_3$O$_{6+\delta}$ (YBCO), induced by a mid-infrared optical pump pulse tuned to a specific lattice vibration. To understand this transient non-equilibrium state, we develop a pump-probe formalism for a stack of Josephson junctions, and we consider the tunneling strengths in the presence of modulation with an ultrashort optical pulse. We demonstrate that a transient enhancement of the Josephson coupling can be obtained for pulsed excitation and that this can be even larger than in a continuously driven steady state. Especially interesting is the conclusion that the effect is largest when the material is parametrically driven at a frequency immediately above the plasma frequency, in agreement with what is found experimentally. For bilayer Josephson junctions, an enhancement similar to that observed experimentally is predicted below the critical temperature $T_c$. This model reproduces the essential features of the enhancement measured below $T_c$. To reproduce the experimental results above $T_c$, we will explore extensions of this model, such as in-plane and amplitude fluctuations, elsewhere.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
An Integer Programming Formulation of the Key Management Problem in Wireless Sensor Networks
With the advent of modern communications systems, much attention has been put on developing methods for securely transferring information between constituents of wireless sensor networks. To this effect, we introduce a mathematical programming formulation for the key management problem, which broadly serves as a mechanism for encrypting communications. In particular, an integer programming model of the q-Composite scheme is proposed and utilized to distribute keys among nodes of a network whose topology is known. Numerical experiments demonstrating the effectiveness of the proposed model are conducted using a well-known optimization solver package. An illustrative example depicting an optimal encryption for a small-scale network is also presented.
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Laplacian-Steered Neural Style Transfer
Neural Style Transfer based on Convolutional Neural Networks (CNN) aims to synthesize a new image that retains the high-level structure of a content image, rendered in the low-level texture of a style image. This is achieved by constraining the new image to have high-level CNN features similar to the content image, and lower-level CNN features similar to the style image. However in the traditional optimization objective, low-level features of the content image are absent, and the low-level features of the style image dominate the low-level detail structures of the new image. Hence in the synthesized image, many details of the content image are lost, and a lot of inconsistent and unpleasing artifacts appear. As a remedy, we propose to steer image synthesis with a novel loss function: the Laplacian loss. The Laplacian matrix ("Laplacian" in short), produced by a Laplacian operator, is widely used in computer vision to detect edges and contours. The Laplacian loss measures the difference of the Laplacians, and correspondingly the difference of the detail structures, between the content image and a new image. It is flexible and compatible with the traditional style transfer constraints. By incorporating the Laplacian loss, we obtain a new optimization objective for neural style transfer named Lapstyle. Minimizing this objective will produce a stylized image that better preserves the detail structures of the content image and eliminates the artifacts. Experiments show that Lapstyle produces more appealing stylized images with less artifacts, without compromising their "stylishness".
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
A quantized physical framework for understanding the working mechanism of ion channels
A quantized physical framework, called the five-anchor model, is developed for a general understanding of the working mechanism of ion channels. According to the hypotheses of this model, the following two basic physical principles are assigned to each anchor: the polarity change induced by an electron transition and the mutual repulsion and attraction induced by an electrostatic force. Consequently, many unique phenomena, such as fast and slow inactivation, the stochastic gating pattern and constant conductance of a single ion channel, the difference between electrical and optical stimulation (optogenetics), nerve conduction block and the generation of an action potential, become intrinsic features of this physical model. Moreover, this model also provides a foundation for the probability equation used to calculate the results of electrical stimulation in our previous C-P theory.
cs: 0, phy: 0, math: 0, stat: 0, quantitative biology: 1, quantitative finance: 0
Reinforcement learning for non-prehensile manipulation: Transfer from simulation to physical system
Reinforcement learning has emerged as a promising methodology for training robot controllers. However, most results have been limited to simulation due to the need for a large number of samples and the lack of automated-yet-safe data collection methods. Model-based reinforcement learning methods provide an avenue to circumvent these challenges, but the traditional concern has been the mismatch between the simulator and the real world. Here, we show that control policies learned in simulation can successfully transfer to a physical system, composed of three Phantom robots pushing an object to various desired target positions. We use a modified form of the natural policy gradient algorithm for learning, applied to a carefully identified simulation model. The resulting policies, trained entirely in simulation, work well on the physical system without additional training. In addition, we show that training with an ensemble of models makes the learned policies more robust to modeling errors, thus compensating for difficulties in system identification.
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
A contour for the entanglement entropies in harmonic lattices
We construct a contour function for the entanglement entropies in generic harmonic lattices. In one spatial dimension, numerical analyses are performed by considering harmonic chains with either periodic or Dirichlet boundary conditions. In the massless regime and for some configurations where the subsystem is a single interval, the numerical results for the contour function are compared to the inverse of the local weight function which multiplies the energy-momentum tensor in the corresponding entanglement hamiltonian, found through conformal field theory methods, and good agreement is observed. A numerical analysis of the contour function for the entanglement entropy is also performed in a massless harmonic chain for a subsystem made of two disjoint intervals.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
A Constrained Coupled Matrix-Tensor Factorization for Learning Time-evolving and Emerging Topics
Topic discovery has witnessed significant growth as a field of data mining at large. In particular, time-evolving topic discovery, where the evolution of a topic is taken into account, has been instrumental in understanding the historical context of an emerging topic in a dynamic corpus. Traditionally, time-evolving topic discovery has focused on this notion of time. However, especially in settings where content is contributed by a community or a crowd, an orthogonal notion of time is the one that pertains to the level of expertise of the content creator: the more experienced the creator, the more advanced the topic. In this paper, we propose a novel time-evolving topic discovery method which, in addition to the extracted topics, is able to identify the evolution of each topic over time, as well as its level of difficulty, as inferred from the level of expertise of its main contributors. Our method is based on a novel formulation of Constrained Coupled Matrix-Tensor Factorization, which adopts constraints that are well motivated for, and, as we demonstrate, essential to, high-quality topic discovery. We qualitatively evaluate our approach using real data from the Physics and Programming Stack Exchange forums, and we were able to identify topics of varying levels of difficulty which can be linked to external events, such as the announcement of gravitational waves by the LIGO lab in the Physics forum. We provide a quantitative evaluation of our method by conducting a user study where experts were asked to judge the coherence and quality of the extracted topics. Finally, our proposed method has implications for automatic curriculum design using the extracted topics, where the notion of the level of difficulty is necessary for the proper modeling of prerequisites and advanced concepts.
cs: 0, phy: 0, math: 0, stat: 1, quantitative biology: 0, quantitative finance: 0
Generalized Commutative Association Schemes, Hypergroups, and Positive Product Formulas
It is well known that finite commutative association schemes in the sense of the monograph of Bannai and Ito lead to finite commutative hypergroups with positive dual convolutions and even dual hypergroup structures. In this paper we present several discrete generalizations of association schemes which also lead to associated hypergroups. We show that discrete commutative hypergroups associated with such generalized association schemes admit dual positive convolutions at least on the support of the Plancherel measure. We hope that examples for this theory will lead to the existence of new dual positive product formulas in the near future.
cs: 0, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
BD-19 5044L: discovery of a short-period SB2 system with a magnetic Bp primary in the open cluster IC 4725
Until recently almost nothing was known about the evolution of magnetic fields found in upper main sequence Ap/Bp stars during their long main sequence lifetime. We are thus studying magnetic Ap/Bp stars in open clusters in order to obtain observational evidence of how the properties of Ap/Bp magnetic stars, such as field strength and structure, evolve with age during the main sequence. One important aspect of this study is to search for the very rare examples of hot magnetic stars in short-period binary systems among magnetic cluster members. In this paper we characterize the object BD-19~5044L, which is both a member of the open cluster IC 4725 = M~25, and a short-period SB2 system containing a magnetic primary star. We have obtained a series of intensity and circular polarisation spectra distributed through the orbital and rotation cycles of BD-19 5044L with the ESPaDOnS spectropolarimeter at CFHT. We find that the orbit of BD-19 5044L AB is quite eccentric (e = 0.477), with a period of 17.63 d. The primary is a magnetic Bp star with a variable longitudinal magnetic field, a polar field strength of ~1400 G and a low obliquity, while the secondary is probably a hot Am star and does not appear to be magnetic. The rotation period of the primary (5.04 d) is not synchronised with the orbit, but the rotation angular velocity is close to being synchronised with the orbital angular velocity of the secondary at periastron, perhaps as a result of tidal interactions. The periastron separation is small enough (about 12 times the radius of the primary star) that BD-19 5044L may be one of the very rare known cases of a tidally interacting SB2 binary system containing a magnetic Ap/Bp star.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
A Submillimeter Perspective on the GOODS Fields (SUPER GOODS) - II. The High Radio Power Population in the GOODS-N
We use ultradeep 20 cm data from the Karl G. Jansky Very Large Array and 850 micron data from SCUBA-2 and the Submillimeter Array of an 124 arcmin^2 region of the Chandra Deep Field-north to analyze the high radio power (P_20cm>10^31 erg s^-1 Hz^-1) population. We find that 20 (42+/-9%) of the spectroscopically identified z>0.8 sources have consistent star formation rates (SFRs) inferred from both submillimeter and radio observations, while the remaining sources have lower (mostly undetected) submillimeter fluxes, suggesting that active galactic nucleus (AGN) activity dominates the radio power in these sources. We develop a classification scheme based on the ratio of submillimeter flux to radio power versus radio power and find that it agrees with AGN and star-forming galaxy classifications from Very Long Baseline Interferometry. Our results provide support for an extremely rapid drop in the number of high SFR galaxies above about a thousand solar masses per year (Kroupa initial mass function) and for the locally determined relation between X-ray luminosity and radio power for star-forming galaxies applying at high redshifts and high radio powers. We measure far-infrared (FIR) luminosities and find that some AGNs lie on the FIR-radio correlation, while others scatter below. The AGNs that lie on the correlation appear to do so based on their emission from the AGN torus. We measure a median radio size of 1.0+/-0.3 arcsecond for the star-forming galaxies. The radio sizes of the star-forming galaxies are generally larger than those of the AGNs.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Discrete Spacetime Quantum Field Theory
This paper begins with a theoretical explanation of why spacetime is discrete. The derivation shows that there exists an elementary length which is essentially Planck's length. We then show how the existence of this length affects time dilation in special relativity. We next consider the symmetry group for discrete spacetime. This symmetry group gives a discrete version of the usual Lorentz group. However, it is much simpler and is actually a discrete version of the rotation group. From the form of the symmetry group we deduce a possible explanation for the structure of elementary particle classes. Energy-momentum space is introduced and mass operators are defined. Discrete versions of the Klein-Gordon and Dirac equations are derived. The final section concerns discrete quantum field theory. Interaction Hamiltonians and scattering operators are considered. In particular, we study the scalar spin~0 and spin~1 bosons as well as the spin~$1/2$ fermion case.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Neural network-based arithmetic coding of intra prediction modes in HEVC
In both H.264 and HEVC, context-adaptive binary arithmetic coding (CABAC) is adopted as the entropy coding method. CABAC relies on manually designed binarization processes as well as handcrafted context models, which may restrict the compression efficiency. In this paper, we propose an arithmetic coding strategy by training neural networks, and make preliminary studies on coding of the intra prediction modes in HEVC. Instead of binarization, we propose to directly estimate the probability distribution of the 35 intra prediction modes with the adoption of a multi-level arithmetic codec. Instead of handcrafted context models, we utilize convolutional neural network (CNN) to perform the probability estimation. Simulation results show that our proposed arithmetic coding leads to as high as 9.9% bits saving compared with CABAC.
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Securing Information-Centric Networking without negating Middleboxes
Information-Centric Networking is a promising networking paradigm that overcomes many of the limitations of current networking architectures. Various research efforts investigate solutions for securing ICN. Nevertheless, most of these solutions relax security requirements in favor of network performance. In particular, they weaken end-user privacy and the architecture's tolerance to security breaches in order to support middleboxes that offer services such as caching and content replication. In this paper, we adapt TLS, a widely used security standard, to an ICN context. We design solutions that allow session reuse and migration among multiple stakeholders and we propose an extension that allows authorized middleboxes to lawfully and transparently intercept secured communications.
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Inference in Deep Gaussian Processes using Stochastic Gradient Hamiltonian Monte Carlo
Deep Gaussian Processes (DGPs) are hierarchical generalizations of Gaussian Processes that combine well calibrated uncertainty estimates with the high flexibility of multilayer models. One of the biggest challenges with these models is that exact inference is intractable. The current state-of-the-art inference method, Variational Inference (VI), employs a Gaussian approximation to the posterior distribution. This can be a potentially poor unimodal approximation of the generally multimodal posterior. In this work, we provide evidence for the non-Gaussian nature of the posterior and we apply the Stochastic Gradient Hamiltonian Monte Carlo method to generate samples. To efficiently optimize the hyperparameters, we introduce the Moving Window MCEM algorithm. This results in significantly better predictions at a lower computational cost than its VI counterpart. Thus our method establishes a new state-of-the-art for inference in DGPs.
cs: 0, phy: 0, math: 0, stat: 1, quantitative biology: 0, quantitative finance: 0
Probabilistic Numerical Methods for PDE-constrained Bayesian Inverse Problems
This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse problems. This method is tested on a challenging inverse problem with a nonlinear forward model.
cs: 1, phy: 0, math: 1, stat: 1, quantitative biology: 0, quantitative finance: 0
Some discussions on the Read Paper "Beyond subjective and objective in statistics" by A. Gelman and C. Hennig
This note is a collection of several discussions of the paper "Beyond subjective and objective in statistics", read by A. Gelman and C. Hennig to the Royal Statistical Society on April 12, 2017, and to appear in the Journal of the Royal Statistical Society, Series A.
cs: 0, phy: 0, math: 0, stat: 1, quantitative biology: 0, quantitative finance: 0
FASER: ForwArd Search ExpeRiment at the LHC
New physics has traditionally been expected in the high-$p_T$ region at high-energy collider experiments. If new particles are light and weakly-coupled, however, this focus may be completely misguided: light particles are typically highly concentrated within a few mrad of the beam line, allowing sensitive searches with small detectors, and even extremely weakly-coupled particles may be produced in large numbers there. We propose a new experiment, ForwArd Search ExpeRiment, or FASER, which would be placed downstream of the ATLAS or CMS interaction point (IP) in the very forward region and operated concurrently there. Two representative on-axis locations are studied: a far location, $400~\text{m}$ from the IP and just off the beam tunnel, and a near location, just $150~\text{m}$ from the IP and right behind the TAN neutral particle absorber. For each location, we examine leading neutrino- and beam-induced backgrounds. As a concrete example of light, weakly-coupled particles, we consider dark photons produced through light meson decay and proton bremsstrahlung. We find that even a relatively small and inexpensive cylindrical detector, with a radius of $\sim 10~\text{cm}$ and length of $5-10~\text{m}$, depending on the location, can discover dark photons in a large and unprobed region of parameter space with dark photon mass $m_{A'} \sim 10~\text{MeV} - 1~\text{GeV}$ and kinetic mixing parameter $\epsilon \sim 10^{-7} - 10^{-3}$. FASER will clearly also be sensitive to many other forms of new physics. We conclude with a discussion of topics for further study that will be essential for understanding FASER's feasibility, optimizing its design, and realizing its discovery potential.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Size Agnostic Change Point Detection Framework for Evolving Networks
Changes in the structure of observed social and complex networks can indicate a significant underlying change in an organization, or reflect the response of the network to an external event. Automatic detection of change points in evolving networks is fundamental to the research on, and the understanding of, the effect of such events on networks. Here we present an easy-to-implement and fast framework for change point detection in temporal evolving networks. Unlike previous approaches, our method is size agnostic: it requires neither prior knowledge about the network's size and structure, nor historical information or nodal identities over time. We use both synthetic data derived from dynamic models and two real datasets: Enron email exchange and the Ask Ubuntu forum. Our framework succeeds in terms of both precision and recall and outperforms previous solutions.
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
On distributions determined by their upward, space-time Wiener-Hopf factor
According to the Wiener-Hopf factorization, the characteristic function $\varphi$ of any probability distribution $\mu$ on $\mathbb{R}$ can be decomposed in a unique way as \[1-s\varphi(t)=[1-\chi_-(s,it)][1-\chi_+(s,it)]\,,\;\;\;|s|\le1,\,t\in\mathbb{R}\,,\] where $\chi_-(e^{iu},it)$ and $\chi_+(e^{iu},it)$ are the characteristic functions of possibly defective distributions in $\mathbb{Z}_+\times(-\infty,0)$ and $\mathbb{Z}_+\times[0,\infty)$, respectively. We prove that $\mu$ can be characterized by the sole data of the upward factor $\chi_+(s,it)$, $s\in[0,1)$, $t\in\mathbb{R}$ in many cases including the cases where: 1) $\mu$ has some exponential moments; 2) the function $t\mapsto\mu(t,\infty)$ is completely monotone on $(0,\infty)$; 3) the density of $\mu$ on $[0,\infty)$ admits an analytic continuation on $\mathbb{R}$. We conjecture that any probability distribution is actually characterized by its upward factor. This conjecture is equivalent to the following: {\it Any probability measure $\mu$ on $\mathbb{R}$ whose support is not included in $(-\infty,0)$ is determined by its convolution powers $\mu^{*n}$, $n\ge1$ restricted to $[0,\infty)$}. We show that in many instances, the sole knowledge of $\mu$ and $\mu^{*2}$ restricted to $[0,\infty)$ is actually sufficient to determine $\mu$. Then we investigate the analogous problem in the framework of infinitely divisible distributions.
cs: 0, phy: 0, math: 1, stat: 0, quantitative biology: 0, quantitative finance: 0
Spontaneous Octahedral Tilting in the Cubic Inorganic Caesium Halide Perovskites CsSnX$_3$ and CsPbX$_3$ (X = F, Cl, Br, I)
The local crystal structures of many perovskite-structured materials deviate from the average space group symmetry. We demonstrate, from lattice-dynamics calculations based on quantum chemical force constants, that all the caesium-lead and caesium-tin halide perovskites exhibit vibrational instabilities associated with octahedral tilting in their high-temperature cubic phase. Anharmonic double-well potentials are found for zone-boundary phonon modes in all compounds, with barriers ranging from 108 to 512 meV. The well depth is correlated with the tolerance factor and the chemistry of the composition, but is not proportional to the imaginary harmonic phonon frequency. We provide quantitative insights into the thermodynamic driving forces and distinguish between dynamic and static disorder based on the potential-energy landscape. A positive band gap deformation (spectral blueshift) accompanies the structural distortion, with implications for understanding the performance of these materials in application areas including solar cells and light-emitting diodes.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Kinetics of the Phospholipid Multilayer Formation at the Surface of the Silica Sol
The ordering of a multilayer consisting of DSPC bilayers on a silica sol substrate is studied within a model-independent approach to the reconstruction of electron density profiles from X-ray reflectometry data. It is found that the electroporation of the bilayers in the field of the anionic silica nanoparticles significantly accelerates the process of their saturation with Na+ and H2O, which explains both the relatively short formation time of the multilayer structure, 10^5 - 7x10^5 s, and the ~13 % excess of the electron density in it.
cs: 0, phy: 1, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Asymmetric Actor Critic for Image-Based Robot Learning
Deep reinforcement learning (RL) has proven a powerful technique in many sequential decision making domains. However, robotics poses many challenges for RL; most notably, training on a physical system can be expensive and dangerous, which has sparked significant interest in learning control policies using a physics simulator. While several recent works have shown promising results in transferring policies trained in simulation to the real world, they often do not fully utilize the advantage of working with a simulator. In this work, we exploit the full state observability in the simulator to train better policies which take as input only partial observations (RGBD images). We do this by employing an actor-critic training algorithm in which the critic is trained on full states while the actor (or policy) gets rendered images as input. We show experimentally on a range of simulated tasks that using these asymmetric inputs significantly improves performance. Finally, we combine this method with domain randomization and show real robot experiments for several tasks like picking, pushing, and moving a block. We achieve this simulation-to-real-world transfer without training on any real world data.
cs: 1, phy: 0, math: 0, stat: 0, quantitative biology: 0, quantitative finance: 0
Effects of particles on spinodal decomposition: A Phase field study
In this thesis, we study the interplay of phase separation and wetting in multicomponent systems. For this purpose, we have examined the phase separation pattern of a binary mixture (AB) in the presence of stationary spherical particles (C) which prefer one of the components of the binary (say, A). The binary AB is studied at the critical composition (50:50) and at off-critical compositions (60:40, 40:60). Particle sizes of 8 units and 16 units are used in the simulations. Two types of particle loading are used: 5\% and 10\%. We have employed a ternary form of the Cahn-Hilliard equation to incorporate immobile fillers in our system. To elucidate the effect of wetting on phase separation, we have designed three sets of $\chi_{ij}$ and $\kappa_{ij}$ to include the effects of neutral preference, weak preference, and strong preference of the particle for one of the binary components. If the particles are preferentially wetted by one of the components, then early-stage microstructures show transient concentric alternating layers of the preferred and non-preferred phases around the particles. When the particles are neutral to the binary components, such a ring pattern does not form. At late times, a neutral preference between particles and binary components yields a continuous morphology, whereas preferential wetting produces isolated domains of the non-preferred phase dispersed in a continuous matrix of the preferred phase. For off-critical compositions, if the minor component wets the particles then a bicontinuous morphology results, whereas if the major component wets the network a droplet morphology is seen. When the majority component wets the particles, a possibility of double phase separation is reported. In such alloys, phase separation starts near the particle surface and propagates to the bulk at intermediate to late times, forming spherical or nearly spherical droplets of the minor component.
0
1
0
0
0
0
Tail asymptotics of signal-to-interference ratio distribution in spatial cellular network models
We consider a spatial stochastic model of wireless cellular networks, where the base stations (BSs) are deployed according to a simple and stationary point process on $\mathbb{R}^d$, $d\ge2$. In this model, we investigate tail asymptotics of the distribution of signal-to-interference ratio (SIR), which is a key quantity in wireless communications. In the case where the path-loss function representing signal attenuation is unbounded at the origin, we derive the exact tail asymptotics of the SIR distribution under an appropriate sufficient condition. While we show that widely-used models based on a Poisson point process and on a determinantal point process meet the sufficient condition, we also give a counterexample violating it. In the case of bounded path-loss functions, we derive a logarithmically asymptotic upper bound on the SIR tail distribution for the Poisson-based and $\alpha$-Ginibre-based models. A logarithmically asymptotic lower bound with the same order as the upper bound is also obtained for the Poisson-based model.
1
0
1
0
0
0
An Optimal Combination of Proportional and Stop-Loss Reinsurance Contracts From Insurer's and Reinsurer's Viewpoints
A reinsurance contract should address the conflicting interests of the insurer and the reinsurer. Most existing optimal reinsurance contracts consider the interests of only one party. This article combines the proportional and stop-loss reinsurance contracts and introduces a new reinsurance contract called proportional-stop-loss reinsurance. Using the balanced loss function, the unknown parameters of the proportional-stop-loss reinsurance are estimated such that the expected surpluses of both the insurer and the reinsurer are maximized. Several characteristics of the new reinsurance contract are provided.
0
0
0
1
0
0
Computational complexity and 3-manifolds and zombies
We show the problem of counting homomorphisms from the fundamental group of a homology $3$-sphere $M$ to a finite, non-abelian simple group $G$ is #P-complete, in the case that $G$ is fixed and $M$ is the computational input. Similarly, deciding if there is a non-trivial homomorphism is NP-complete. In both reductions, we can guarantee that every non-trivial homomorphism is a surjection. As a corollary, for any fixed integer $m \ge 5$, it is NP-complete to decide whether $M$ admits a connected $m$-sheeted covering. Our construction is inspired by universality results in topological quantum computation. Given a classical reversible circuit $C$, we construct $M$ so that evaluations of $C$ with certain initialization and finalization conditions correspond to homomorphisms $\pi_1(M) \to G$. An intermediate state of $C$ likewise corresponds to a homomorphism $\pi_1(\Sigma_g) \to G$, where $\Sigma_g$ is a pointed Heegaard surface of $M$ of genus $g$. We analyze the action on these homomorphisms by the pointed mapping class group $\text{MCG}_*(\Sigma_g)$ and its Torelli subgroup $\text{Tor}_*(\Sigma_g)$. By results of Dunfield-Thurston, the action of $\text{MCG}_*(\Sigma_g)$ is as large as possible when $g$ is sufficiently large; we can pass to the Torelli group using the congruence subgroup property of $\text{Sp}(2g,\mathbb{Z})$. Our results can be interpreted as a sharp classical universality property of an associated combinatorial $(2+1)$-dimensional TQFT.
1
0
1
0
0
0
Learning to Schedule Deadline- and Operator-Sensitive Tasks
The use of semi-autonomous and autonomous robotic assistants to aid in the care of the elderly is expected to ease the burden on human caretakers, with small-scale testing already occurring in a variety of countries. Yet, it is likely that these robots will need to request human assistance via teleoperation when domain expertise is needed for a specific task. As deployment of robotic assistants moves to scale, mapping these requests for human aid to the teleoperators themselves will be a difficult online optimization problem. In this paper, we design a system that allocates requests to a limited number of teleoperators, each with different specialities, in an online fashion. We generalize a recent model of online job scheduling with a worst-case competitive-ratio bound to our setting. Next, we design a scalable machine-learning-based teleoperator-aware task scheduling algorithm and show, experimentally, that it performs well when compared to an omniscient optimal scheduling algorithm.
1
0
0
0
0
0
r-BTN: Cross-domain Face Composite and Synthesis from Limited Facial Patches
We start by asking an interesting yet challenging question, "If an eyewitness can only recall the eye features of the suspect, such that the forensic artist can only produce a sketch of the eyes (e.g., the top-left sketch shown in Fig. 1), can advanced computer vision techniques help generate the whole face image?" A more general question is whether, if a large proportion (e.g., more than 50%) of the face/sketch is missing, a realistic whole face sketch/image can still be estimated. Existing face completion and generation methods either do not conduct domain transfer learning or cannot handle large missing areas. For example, the inpainting approach tends to blur the generated region when the missing area is large (i.e., more than 50%). In this paper, we exploit the potential of deep learning networks in filling large missing regions (e.g., as high as 95% missing) and generating realistic faces with high fidelity across domains. We propose recursive generation by bidirectional transformation networks (r-BTN), which recursively generates a whole face/sketch from a small sketch/face patch. The large missing area and the cross-domain challenge make it difficult to generate satisfactory results using a unidirectional cross-domain learning structure. On the other hand, forward and backward bidirectional learning between the face and sketch domains enables recursive estimation of the missing region in an incremental manner (Fig. 1) and yields appealing results. r-BTN also adopts an adversarial constraint to encourage the generation of realistic faces/sketches. Extensive experiments have been conducted to demonstrate the superior performance of r-BTN as compared to existing potential solutions.
1
0
0
0
0
0
Regularity of Kleinian limit sets and Patterson-Sullivan measures
We consider several (related) notions of geometric regularity in the context of limit sets of geometrically finite Kleinian groups and associated Patterson-Sullivan measures. We begin by computing the upper and lower regularity dimensions of the Patterson-Sullivan measure, which involves controlling the relative measure of concentric balls. We then compute the Assouad and lower dimensions of the limit set, which involves controlling local doubling properties. Unlike the Hausdorff, packing, and box-counting dimensions, we show that the Assouad and lower dimensions are not necessarily given by the Poincaré exponent.
0
0
1
0
0
0
Large-Scale Sleep Condition Analysis Using Selfies from Social Media
Sleep condition is closely related to an individual's health. Poor sleep conditions such as sleep disorder and sleep deprivation affect one's daily performance, and may also cause many chronic diseases. Many efforts have been devoted to monitoring people's sleep conditions. However, traditional methodologies require sophisticated equipment and consume a significant amount of time. In this paper, we attempt to develop a novel way to predict an individual's sleep condition by scrutinizing facial cues as doctors would. Rather than measuring the sleep condition directly, we measure sleep-deprived fatigue, which indirectly reflects the sleep condition. Our method can predict a sleep-deprived fatigue rate based on a selfie provided by a subject. This rate is used to indicate the sleep condition. To gain deeper insights into human sleep conditions, we collected around 100,000 faces from selfies posted on Twitter and Instagram, and identified their age, gender, and race using automatic algorithms. Next, we investigated the sleep condition distributions with respect to age, gender, and race. Our study suggests that, among the age groups, the fatigue percentage of the 0-20 youth and adolescent group is the highest, implying that poor sleep condition is more prevalent in this age group. For gender, the fatigue percentage of females is higher than that of males, implying that more females suffer from sleep issues than males. Among ethnic groups, the fatigue percentage of Caucasians is the highest, followed by Asians and African Americans.
1
0
0
0
0
0
A note on the Diophantine equation $2^{n-1}(2^{n}-1)=x^3+y^3+z^3$
Motivated by the recent result of Farhi we show that for each $n\equiv \pm 1\pmod{6}$ the title Diophantine equation has at least two solutions in integers. As a consequence, we get that each (even) perfect number is a sum of three cubes of integers. Moreover, we present some computational results concerning the considered equation and state some questions and conjectures.
0
0
1
0
0
0
Modeling of nonlinear audio effects with end-to-end deep neural networks
In the context of music production, distortion effects are mainly used for aesthetic reasons and are usually applied to electric musical instruments. Most existing methods for nonlinear modeling are either simplified or optimized for a very specific circuit. In this work, we investigate deep learning architectures for audio processing and aim to find a general-purpose end-to-end deep neural network to model nonlinear audio effects. We show that the network models various nonlinearities and we discuss its generalization capabilities across different instruments.
1
0
0
0
0
0
Weak Adaptive Submodularity and Group-Based Active Diagnosis with Applications to State Estimation with Persistent Sensor Faults
In this paper, we consider adaptive decision-making problems for stochastic state estimation with partial observations. First, we introduce the concept of weak adaptive submodularity, a generalization of adaptive submodularity, which has found great success in solving challenging adaptive state estimation problems. Then, for the problem of active diagnosis, i.e., discrete state estimation via active sensing, we show that an adaptive greedy policy has a near-optimal performance guarantee when the reward function possesses this property. We further show that the reward function for group-based active diagnosis, which arises in applications such as medical diagnosis and state estimation with persistent sensor faults, is also weakly adaptive submodular. Finally, in experiments of state estimation for an aircraft electrical system with persistent sensor faults, we observe that an adaptive greedy policy performs equally well as an exhaustive search.
1
0
1
1
0
0
Robust Decentralized Learning Using ADMM with Unreliable Agents
Many machine learning problems can be formulated as consensus optimization problems which can be solved efficiently via a cooperative multi-agent system. However, the agents in the system can be unreliable due to a variety of reasons: noise, faults, and attacks. Providing erroneous updates leads the optimization process in the wrong direction and degrades the performance of distributed machine learning algorithms. This paper considers the problem of decentralized learning using ADMM in the presence of unreliable agents. First, we rigorously analyze the effect of erroneous updates (in ADMM learning iterations) on the convergence behavior of the multi-agent system. We show that the algorithm linearly converges to a neighborhood of the optimal solution under certain conditions and characterize the neighborhood size analytically. Next, we provide guidelines for network design to achieve faster convergence. We also provide conditions on the erroneous updates for exact convergence to the optimal solution. Finally, to mitigate the influence of unreliable agents, we propose \textsf{ROAD}, a robust variant of ADMM, and show its resilience to unreliable agents with exact convergence to the optimum.
1
0
0
1
0
0
Applying the Delta method in metric analytics: A practical guide with novel ideas
During the last decade, the information technology industry has adopted a data-driven culture, relying on online metrics to measure and monitor business performance. Under the setting of big data, the majority of such metrics approximately follow normal distributions, opening up potential opportunities to model them directly without extra model assumptions and solve big data problems via closed-form formulas using distributed algorithms at a fraction of the cost of simulation-based procedures like bootstrap. However, certain attributes of the metrics, such as their corresponding data generating processes and aggregation levels, pose numerous challenges for constructing trustworthy estimation and inference procedures. Motivated by four real-life examples in metric development and analytics for large-scale A/B testing, we provide a practical guide to applying the Delta method, one of the most important tools from the classic statistics literature, to address the aforementioned challenges. We emphasize the central role of the Delta method in metric analytics by highlighting both its classic and novel applications.
0
0
0
1
0
0
Chebyshev Approximation and Higher Order Derivatives of Lyapunov Functions for Estimating the Domain of Attraction
Estimating the Domain of Attraction (DA) of non-polynomial systems is a challenging problem. Taylor expansion is widely adopted for transforming a nonlinear analytic function into a polynomial function, but the performance of Taylor expansion is not always satisfactory. This paper provides solvable ways for estimating the DA via Chebyshev approximation. Firstly, for Chebyshev approximation without the remainder, higher order derivatives of Lyapunov functions are used for estimating the DA, and the largest estimate is obtained by solving a generalized eigenvalue problem. Moreover, for Chebyshev approximation with the remainder, an uncertain polynomial system is reformulated, and a condition is proposed for ensuring the convergence to the largest estimate with a selected Lyapunov function. Numerical examples demonstrate that both accuracy and efficiency are improved compared to Taylor approximation.
1
0
0
0
0
0
Ranking Median Regression: Learning to Order through Local Consensus
This article is devoted to the problem of predicting the value taken by a random permutation $\Sigma$, describing the preferences of an individual over a set of numbered items $\{1,\; \ldots,\; n\}$ say, based on the observation of an input/explanatory r.v. $X$ (e.g. characteristics of the individual), when error is measured by the Kendall $\tau$ distance. In the probabilistic formulation of the 'Learning to Order' problem we propose, which extends the framework for statistical Kemeny ranking aggregation developed in \citet{CKS17}, this boils down to recovering conditional Kemeny medians of $\Sigma$ given $X$ from i.i.d. training examples $(X_1, \Sigma_1),\; \ldots,\; (X_N, \Sigma_N)$. For this reason, this statistical learning problem is referred to as \textit{ranking median regression} here. Our contribution is twofold. We first propose a probabilistic theory of ranking median regression: the set of optimal elements is characterized, the performance of empirical risk minimizers is investigated in this context, and situations where fast learning rates can be achieved are also exhibited. Next we introduce the concept of local consensus/median, in order to derive efficient methods for ranking median regression. The major advantage of this local learning approach lies in its close connection with the widely studied Kemeny aggregation problem. From an algorithmic perspective, this permits building predictive rules for ranking median regression by implementing efficient techniques for (approximate) Kemeny median computations at a local level in a tractable manner. In particular, versions of $k$-nearest neighbor and tree-based methods, tailored to ranking median regression, are investigated. Accuracy of piecewise constant ranking median regression rules is studied under a specific smoothness assumption for $\Sigma$'s conditional distribution given $X$.
0
0
1
1
0
0
Anisotropic power-law inflation in a two-scalar-field model with a mixed kinetic term
We examine whether an extended scenario of a two-scalar-field model, in which a mixed kinetic term of canonical and phantom scalar fields is involved, admits the Bianchi type I metric, a homogeneous but anisotropic spacetime, as its power-law solutions. We then analyze the stability of the anisotropic power-law solutions to see whether these solutions respect the cosmic no-hair conjecture during the inflationary phase. In addition, we also investigate a special scenario, in which the pure kinetic terms of the canonical and phantom fields disappear altogether from the field equations, to test again the validity of the cosmic no-hair conjecture. As a result, the cosmic no-hair conjecture always holds in both scenarios due to the instability of the corresponding anisotropic inflationary solutions.
0
1
0
0
0
0
Higher-dimensional SYK Non-Fermi Liquids at Lifshitz transitions
We address the key open problem of a higher dimensional generalization of the Sachdev-Ye-Kitaev (SYK) model. We construct a model on a lattice of SYK dots with non-random intersite hopping. The crucial feature of the resulting band dispersion is the presence of a Lifshitz point where two bands touch with a tunable power-law divergent density of states (DOS). For a certain regime of the power-law exponent, we obtain a new class of interaction-dominated non-Fermi liquid (NFL) states, which exhibits exciting features such as a zero-temperature scaling symmetry, an emergent (approximate) time reparameterization invariance, a power-law entropy-temperature relationship, and a fermion dimension that depends continuously on the DOS exponent. Notably, we further demonstrate that these NFL states are fast scramblers with a Lyapunov exponent $\lambda_L\propto T$, although they do not saturate the upper bound of chaos, rendering them truly unique.
0
1
0
0
0
0
Convex-constrained Sparse Additive Modeling and Its Extensions
Sparse additive modeling is a class of effective methods for performing high-dimensional nonparametric regression. In this work we show how shape constraints, such as convexity/concavity and their extensions, can be integrated into additive models. The proposed sparse difference of convex additive models (SDCAM) can estimate most continuous functions without any a priori smoothness assumption. Motivated by a characterization of difference of convex functions, our method incorporates a natural regularization functional to avoid overfitting and to reduce model complexity. Computationally, we develop an efficient backfitting algorithm with linear per-iteration complexity. Experiments on both synthetic and real data verify that our method is competitive against state-of-the-art sparse additive models, with improved performance in most scenarios.
1
0
0
1
0
0
Comparative study of finite element methods using the Time-Accuracy-Size (TAS) spectrum analysis
We present a performance analysis appropriate for comparing algorithms using different numerical discretizations. By taking into account the total time-to-solution, numerical accuracy with respect to an error norm, and the computation rate, a cost-benefit analysis can be performed to determine which algorithm and discretization are particularly suited for an application. This work extends the performance spectrum model of Chang et al. (2017) for interpretation of hardware and algorithmic tradeoffs in numerical PDE simulation. As a proof of concept, popular finite element software packages are used to illustrate this analysis for Poisson's equation.
1
0
0
0
0
0
A sharpened Riesz-Sobolev inequality
The Riesz-Sobolev inequality provides an upper bound, in integral form, for the convolution of indicator functions of subsets of Euclidean space. We formulate and prove a sharper form of the inequality. This can be equivalently phrased as a stability result, quantifying an inverse theorem of Burchard that characterizes cases of equality.
0
0
1
0
0
0
Unoriented Cobordism Maps on Link Floer Homology
We study the problem of defining maps on link Floer homology induced by unoriented link cobordisms. We provide a natural notion of link cobordism, disoriented link cobordism, which tracks the motion of index zero and index three critical points. Then we construct a map on unoriented link Floer homology associated to a disoriented link cobordism. Furthermore, we give a comparison with Ozsváth-Stipsicz-Szabó's and Manolescu's constructions of link cobordism maps for an unoriented band move.
0
0
1
0
0
0
3D Object Reconstruction from Hand-Object Interactions
Recent advances have enabled 3D object reconstruction approaches using a single off-the-shelf RGB-D camera. Although these approaches are successful for a wide range of object classes, they rely on stable and distinctive geometric or texture features. Many objects like mechanical parts, toys, household or decorative articles, however, are textureless and characterized by minimalistic shapes that are simple and symmetric. Existing in-hand scanning systems and 3D reconstruction techniques fail for such symmetric objects in the absence of highly distinctive features. In this work, we show that extracting 3D hand motion for in-hand scanning effectively facilitates the reconstruction of even featureless and highly symmetric objects, and we present an approach that fuses the rich additional information of hands into a 3D reconstruction pipeline, significantly contributing to the state of the art of in-hand scanning.
1
0
0
0
0
0
Secular dynamics of a planar model of the Sun-Jupiter-Saturn-Uranus system; effective stability into the light of Kolmogorov and Nekhoroshev theories
We investigate the long-time stability of the Sun-Jupiter-Saturn-Uranus system by considering a planar secular model, that can be regarded as a major refinement of the approach first introduced by Lagrange. Indeed, concerning the planetary orbital revolutions, we improve the classical circular approximation by replacing it with a solution that is invariant up to order two in the masses; therefore, we investigate the stability of the secular system for rather small values of the eccentricities. First, we explicitly construct a Kolmogorov normal form, so as to find an invariant KAM torus which approximates very well the secular orbits. Finally, we adapt the approach that is at the basis of the analytic part of Nekhoroshev's theorem, so as to show that there is a neighborhood of that torus for which the estimated stability time is larger than the lifetime of the Solar System. The size of such a neighborhood is about ten times smaller than the uncertainties of the astronomical observations.
0
0
1
0
0
0
Variants on the Berz sublinearity theorem
We consider variants on the classical Berz sublinearity theorem, using only DC, the Axiom of Dependent Choices, rather than AC, the Axiom of Choice which Berz used. We consider thinned versions, in which conditions are imposed on only part of the domain of the function -- results of quantifier-weakening type. There are connections with classical results on subadditivity. We close with a discussion of the extensive related literature.
0
0
1
0
0
0
Linguistic Markers of Influence in Informal Interactions
There has been a long-standing interest in understanding `Social Influence' both in the Social Sciences and in Computational Linguistics. In this paper, we present a novel approach to studying and measuring interpersonal influence in daily interactions. Motivated by the basic principles of influence, we attempt to identify indicative linguistic features of the posts in an online knitting community. We present the scheme used to operationalize and label the posts with indicator features. Experiments with the identified features show an improvement in the classification accuracy of influence by 3.15%. Our results illustrate the important correlation between the characteristics of the language and its potential to influence others.
1
0
0
0
0
0
Finding multiple core-periphery pairs in networks
In a core-periphery structure of networks, core nodes are densely interconnected, peripheral nodes are connected to core nodes to different extents, and peripheral nodes are sparsely interconnected. A core-periphery structure composed of a single core and periphery has been identified for various networks. However, analogous to the observation that many empirical networks are composed of densely interconnected groups of nodes, i.e., communities, a network may be better regarded as a collection of multiple cores and peripheries. We propose a scalable algorithm to detect multiple non-overlapping groups of core-periphery structure in a network. We illustrate our algorithm using synthesised and empirical networks. For example, we find distinct core-periphery pairs with different political leanings in a network of political blogs and a separation between international and domestic subnetworks of airports in some single countries in a world-wide airport network.
1
1
0
0
0
0
Central Limit Theorems of Local Polynomial Threshold Estimators for Diffusion Processes with Jumps
Central limit theorems play an important role in the study of statistical inference for stochastic processes. However, when the nonparametric local polynomial threshold estimator, especially in the local linear case, is employed to estimate the diffusion coefficients of diffusion processes, the adaptive and predictable structure of the estimator conditionally on the $\sigma$-field generated by the diffusion process is destroyed, so the classical central limit theorem for martingale difference sequences cannot be applied. In this paper, we prove central limit theorems for local polynomial threshold estimators of the volatility function in diffusion processes with jumps. We believe that our proof for local polynomial threshold estimators provides a new method in this field, especially in the local linear case.
0
0
1
0
0
0
Honey from the Hives: A Theoretical and Computational Exploration of Combinatorial Hives
In the first half of this manuscript, we begin with a brief review of combinatorial hives as introduced by Knutson and Tao, and focus on a conjecture by Danilov and Koshevoy for generating such a hive from Hermitian matrix pairs through an optimization scheme. We examine a proposal by Appleby and Whitehead in the spirit of this conjecture and analytically elucidate an obstruction in their construction for guaranteeing hive generation, while detailing stronger conditions under which we can produce hives with almost certain probability. We provide the first mapping of this prescription onto a practical algorithmic space that enables us to produce affirming computational results and open a new area of research into the analysis of the random geometries and curvatures of hive surfaces from select matrix ensembles. The second part of this manuscript concerns Littlewood-Richardson coefficients and methods of estimating them from the hive construction. We illustrate experimental confirmation of two numerical algorithms that we provide as tools for the community: one as a rounded estimator on the continuous hive polytope volume following a proposal by Narayanan, and the other as a novel construction using a coordinate hit-and-run on the hive lattice itself. We compare the advantages of each, and include numerical results on their accuracies for some tested cases.
0
0
0
1
0
0
Pressure-induced ferromagnetism due to an anisotropic electronic topological transition in Fe1.08Te
A rapid and anisotropic modification of the Fermi-surface shape can be associated with abrupt changes in crystalline lattice geometry or in the magnetic state of a material. In this study we show that such an electronic topological transition is at the basis of the formation of an unusual pressure-induced tetragonal ferromagnetic phase in Fe$_{1.08}$Te. Around 2 GPa, the orthorhombic and incommensurate antiferromagnetic ground state of Fe$_{1.08}$Te is transformed upon increasing pressure into a tetragonal ferromagnetic state via a conventional first-order transition. On the other hand, an isostructural transition takes place from the paramagnetic high-temperature state into the ferromagnetic phase as a rare case of a `type 0' transformation with anisotropic properties. Electronic-structure calculations in combination with electrical resistivity, magnetization, and x-ray diffraction experiments show that the electronic system of Fe$_{1.08}$Te is unstable with respect to profound topological transitions that can drive fundamental changes of the lattice anisotropy and the associated magnetic order.
0
1
0
0
0
0
DataSlicer: Task-Based Data Selection for Visual Data Exploration
In visual exploration and analysis of data, determining how to select and transform the data for visualization is a challenge for data-unfamiliar or inexperienced users. Our main hypothesis is that for many data sets and common analysis tasks, there are relatively few "data slices" that result in effective visualizations. By focusing human users on appropriate and suitably transformed parts of the underlying data sets, these data slices can help the users carry their task to correct completion. To verify this hypothesis, we develop a framework that permits us to capture exemplary data slices for a user task, and to explore and parse visual-exploration sequences into a format that makes them distinct and easy to compare. We develop a recommendation system, DataSlicer, that matches a "currently viewed" data slice with the most promising "next effective" data slices for the given exploration task. We report the results of controlled experiments with an implementation of the DataSlicer system, using four common analytical task types. The experiments demonstrate statistically significant improvements in accuracy and exploration speed versus users without access to our system.
1
0
0
0
0
0
Convolutional Analysis Operator Learning: Acceleration, Convergence, Application, and Neural Networks
Convolutional operator learning is increasingly gaining attention in many signal processing and computer vision applications. Learning kernels has mostly relied on so-called local approaches that extract and store many overlapping patches across training signals. Due to memory demands, local approaches have limitations when learning kernels from large datasets -- particularly with multi-layered structures, e.g., convolutional neural network (CNN) -- and/or applying the learned kernels to high-dimensional signal recovery problems. The so-called global approach has been studied within the "synthesis" signal model, e.g., convolutional dictionary learning, overcoming the memory problems by careful algorithmic designs. This paper proposes a new convolutional analysis operator learning (CAOL) framework in the global approach, and develops a new convergent Block Proximal Gradient method using a Majorizer (BPG-M) to solve the corresponding block multi-nonconvex problems. To learn diverse filters within the CAOL framework, this paper introduces an orthogonality constraint that enforces a tight-frame (TF) filter condition, and a regularizer that promotes diversity between filters. Numerical experiments show that, for tight majorizers, BPG-M significantly accelerates the CAOL convergence rate compared to the state-of-the-art method, BPG. Numerical experiments for sparse-view computational tomography show that CAOL using TF filters significantly improves reconstruction quality compared to a conventional edge-preserving regularizer. Finally, this paper shows that CAOL can be useful to mathematically model a CNN, and the corresponding updates obtained via BPG-M coincide with core modules of the CNN.
0
0
0
1
0
0
Several Classes of Permutation Trinomials over $\mathbb F_{5^n}$ From Niho Exponents
The construction of permutation trinomials over finite fields has recently attracted much interest due to their simple form and some additional properties. Motivated by some results on the construction of permutation trinomials with Niho exponents, by constructing some new fractional polynomials that permute the set of the $(q+1)$-th roots of unity in $\mathbb F_{q^2}$, we present several classes of permutation trinomials with Niho exponents over $\mathbb F_{q^2}$, where $q=5^k$.
1
0
0
0
0
0
Core-powered mass loss and the radius distribution of small exoplanets
Recent observations identify a valley in the radius distribution of small exoplanets, with planets in the range $1.5-2.0\,{\rm R}_{\oplus}$ significantly less common than somewhat smaller or larger planets. This valley may suggest a bimodal population of rocky planets that are either engulfed by massive gas envelopes that significantly enlarge their radius, or do not have detectable atmospheres at all. One explanation of such a bimodal distribution is atmospheric erosion by high-energy stellar photons. We investigate an alternative mechanism: the luminosity of the cooling rocky core, which can completely erode light envelopes while preserving heavy ones, produces a deficit of intermediate sized planets. We evolve planetary populations that are derived from observations using a simple analytical prescription, accounting self-consistently for envelope accretion, cooling and mass loss, and demonstrate that core-powered mass loss naturally reproduces the observed radius distribution, regardless of the high-energy incident flux. Observations of planets around different stellar types may distinguish between photoevaporation, which is powered by the high-energy tail of the stellar radiation, and core-powered mass loss, which depends on the bolometric flux through the planet's equilibrium temperature that sets both its cooling and mass-loss rates.
0
1
0
0
0
0
Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront
The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, where the gradient modulus of the complex refocused hologram is calculated and a sparsity metric is applied to it. Here, we compare two different choices of sparsity metrics used in SoG, specifically the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistant to background noise. These predictions are also confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that ToG (TC of the gradient) and GoG (GI of the gradient) offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GoG should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while ToG offers more flexibility in choosing a larger ROI containing more background pixels.
0
1
0
0
0
0
Probabilistic interpretation of HJB equations by the representation theorem for generators of BSDEs
The purpose of this note is to propose a new approach to the probabilistic interpretation of Hamilton-Jacobi-Bellman equations associated with stochastic recursive optimal control problems, utilizing the representation theorem for generators of backward stochastic differential equations. The key idea of our approach for proving this interpretation consists of transmitting the signs between the solution and the generator via the identity given by the representation theorem. Compared with existing methods, our approach seems to be more applicable in general settings. This can also be regarded as a new application of the representation theorem.
0
0
1
0
0
0
A Review on Deep Learning Techniques Applied to Semantic Segmentation
Semantic image segmentation is attracting increasing interest from computer vision and machine learning researchers. Many applications on the rise need accurate and efficient segmentation mechanisms: autonomous driving, indoor navigation, and even virtual or augmented reality systems, to name a few. This demand coincides with the rise of deep learning approaches in almost every field or application target related to computer vision, including semantic segmentation or scene understanding. This paper provides a review of deep learning methods for semantic segmentation applied to various application areas. Firstly, we describe the terminology of this field as well as mandatory background concepts. Next, the main datasets and challenges are presented to help researchers decide which ones best suit their needs and targets. Then, existing methods are reviewed, highlighting their contributions and their significance in the field. Finally, quantitative results are given for the described methods and the datasets on which they were evaluated, followed by a discussion of the results. Lastly, we point out a set of promising directions for future work and draw our own conclusions about the state of the art of semantic segmentation using deep learning techniques.
1
0
0
0
0
0
Sum-Product Networks for Hybrid Domains
While all kinds of mixed data (from personal data, through panel and scientific data, to public and commercial data) are collected and stored, building probabilistic graphical models for these hybrid domains becomes more difficult. Users spend significant amounts of time identifying the parametric forms of the random variables (Gaussian, Poisson, Logit, etc.) involved and learning the mixed models. To make this difficult task easier, we propose the first trainable probabilistic deep architecture for hybrid domains that features tractable queries. It is based on Sum-Product Networks (SPNs) with piecewise polynomial leaf distributions together with novel nonparametric decomposition and conditioning steps using the Hirschfeld-Gebelein-Rényi Maximum Correlation Coefficient. This relieves the user from deciding a priori the parametric form of the random variables but is still expressive enough to effectively approximate any continuous distribution and permits efficient learning and inference. Our empirical evidence shows that the architecture, called Mixed SPNs, can indeed capture complex distributions across a wide range of hybrid domains.
1
0
0
1
0
0
Unified Hypersphere Embedding for Speaker Recognition
Incremental improvements in the accuracy of Convolutional Neural Networks are usually achieved through the use of deeper and more complex models trained on larger datasets. However, enlarging the dataset and models increases computation and storage costs and cannot be done indefinitely. In this work, we seek to improve the identification and verification accuracy of a text-independent speaker recognition system without the use of extra data or deeper and more complex models, by augmenting the training and testing data, finding the optimal dimensionality of the embedding space, and using more discriminative loss functions. Results of experiments on the VoxCeleb dataset suggest that: (i) Simple repetition and random time-reversion of utterances can reduce prediction errors by up to 18%. (ii) Lower dimensional embeddings are more suitable for verification. (iii) Use of the proposed logistic margin loss function leads to unified embeddings with state-of-the-art identification and competitive verification accuracies.
1
0
0
0
0
0
Linearly-Recurrent Autoencoder Networks for Learning Dynamics
This paper describes a method for learning low-dimensional approximations of nonlinear dynamical systems, based on neural-network approximations of the underlying Koopman operator. Extended Dynamic Mode Decomposition (EDMD) provides a useful data-driven approximation of the Koopman operator for analyzing dynamical systems. This paper addresses a fundamental problem associated with EDMD: a trade-off between representational capacity of the dictionary and over-fitting due to insufficient data. A new neural network architecture combining an autoencoder with linear recurrent dynamics in the encoded state is used to learn a low-dimensional and highly informative Koopman-invariant subspace of observables. A method is also presented for balanced model reduction of over-specified EDMD systems in feature space. Nonlinear reconstruction using partially linear multi-kernel regression aims to improve reconstruction accuracy from the low-dimensional state when the data has complex but intrinsically low-dimensional structure. The techniques demonstrate the ability to identify Koopman eigenfunctions of the unforced Duffing equation, create accurate low-dimensional models of an unstable cylinder wake flow, and make short-time predictions of the chaotic Kuramoto-Sivashinsky equation.
1
0
0
1
0
0
Macros to Conduct Tests of Multimodality in SAS
The Dip Test of Unimodality and Silverman's Critical Bandwidth Test are two popular tests to determine if an unknown density contains more than one mode. While the tests can be easily run in R, they are not included in SAS software. We provide implementations of the Dip Test and Silverman Test as macros in the SAS software, capitalizing on the capability of SAS to execute R code internally. Descriptions of the macro parameters, installation steps, and sample macro calls are provided, along with an appendix for troubleshooting. We illustrate the use of the macros on data simulated from one or more Gaussian distributions as well as on the famous $\textit{iris}$ dataset.
0
0
0
1
0
0
The James construction and $π_4(\mathbb{S}^3)$ in homotopy type theory
In the first part of this paper we present a formalization in Agda of the James construction in homotopy type theory. We include several fragments of code to show what the Agda code looks like, and we explain several techniques that we used in the formalization. In the second part, we use the James construction to give a constructive proof that $\pi_4(\mathbb{S}^3)$ is of the form $\mathbb{Z}/n\mathbb{Z}$ (but we do not compute the $n$ here).
1
0
1
0
0
0
Effective One-Dimensional Coupling in the Highly-Frustrated Square-Lattice Itinerant Magnet CaCo$_{\mathrm{2}-y}$As$_{2}$
Inelastic neutron scattering measurements on the itinerant antiferromagnet (AFM) CaCo$_{\mathrm{2}-y}$As$_{2}$ at a temperature of 8 K reveal two orthogonal planes of scattering perpendicular to the Co square lattice in reciprocal space, demonstrating the presence of effective one-dimensional spin interactions. These results are shown to arise from near-perfect bond frustration within the $J_1$-$J_2$ Heisenberg model on a square lattice with ferromagnetic $J_1$, and hence indicate that the extensive previous experimental and theoretical study of the $J_1$-$J_2$ Heisenberg model on local-moment square spin lattices should be expanded to include itinerant spin systems.
0
1
0
0
0
0
Graph-Based Ascent Algorithms for Function Maximization
We study the problem of finding the maximum of a function defined on the nodes of a connected graph. The goal is to identify a node where the function obtains its maximum. We focus on local iterative algorithms, which traverse the nodes of the graph along a path, and the next iterate is chosen from the neighbors of the current iterate with probability distribution determined by the function values at the current iterate and its neighbors. We study two algorithms corresponding to a Metropolis-Hastings random walk with different transition kernels: (i) The first algorithm is an exponentially weighted random walk governed by a parameter $\gamma$. (ii) The second algorithm is defined with respect to the graph Laplacian and a smoothness parameter $k$. We derive convergence rates for the two algorithms in terms of total variation distance and hitting times. We also provide simulations showing the relative convergence rates of our algorithms in comparison to an unbiased random walk, as a function of the smoothness of the graph function. Our algorithms may be categorized as a new class of "descent-based" methods for function maximization on the nodes of a graph.
1
0
0
1
0
0
Multiequilibria analysis for a class of collective decision-making networked systems
The models of collective decision-making considered in this paper are nonlinear interconnected cooperative systems with saturating interactions. These systems encode the possible outcomes of a decision process into different steady states of the dynamics. In particular, they are characterized by two main attractors in the positive and negative orthant, representing two choices of agreement among the agents, associated to the Perron-Frobenius eigenvector of the system. In this paper we give conditions for the appearance of other equilibria of mixed sign. The conditions are inspired by Perron-Frobenius theory and are related to the algebraic connectivity of the network. We also show how all these equilibria must be contained in a solid disk of radius given by the norm of the equilibrium point which is located in the positive orthant.
1
0
1
0
0
0
Pattern Recognition Techniques for the Identification of Activities of Daily Living using Mobile Device Accelerometer
This paper focuses on the recognition of Activities of Daily Living (ADL) by applying pattern recognition techniques to the data acquired by the accelerometer available in mobile devices. The recognition of ADL is composed of several stages, including data acquisition, data processing, and artificial intelligence methods. The artificial intelligence methods used are related to pattern recognition, and this study focuses on the use of Artificial Neural Networks (ANN). The data processing includes data cleaning and feature extraction techniques to define the inputs for the ANN. Due to the low processing power and memory of mobile devices, they should mainly be used to acquire the data, applying an ANN previously trained for the identification of the ADL. The main purpose of this paper is to present a new method, implemented with an ANN, for the identification of a defined set of ADL with reliable accuracy. This paper also presents a comparison of different types of ANN in order to choose the type for the implementation of the final method. The results of this research show that the best accuracies are achieved with deep learning techniques, with an accuracy higher than 80%.
1
0
0
0
0
0
Simultaneous dynamic characterization of charge and structural motion during ferroelectric switching
Monitoring structural changes in ferroelectric thin films during electric field-induced polarization switching is important for a full microscopic understanding of the coupled motion of charges, atoms, and domain walls. We combine standard ferroelectric test cycles with time-resolved x-ray diffraction to investigate the response of a nanoscale ferroelectric oxide capacitor upon charging, discharging, and switching. Piezoelectric strain develops during the electronic RC time constant and additionally during structural domain-wall creep. The complex atomic motion during ferroelectric polarization reversal starts with a negative piezoelectric response to the charge flow triggered by voltage pulses. Incomplete screening limits the compressive strain. The piezoelectric modulation of the unit cell tweaks the energy barrier between the two polarization states. Domain-wall motion is evidenced by a broadening of the in-plane components of Bragg reflections. Such simultaneous measurements on a working device elucidate and visualize the complex interplay of charge flow and structural motion and challenge theoretical modelling.
0
1
0
0
0
0
Continuous Relaxations for the Traveling Salesman Problem
In this work, we aim to explore connections between dynamical systems techniques and combinatorial optimization problems. In particular, we construct heuristic approaches for the traveling salesman problem (TSP) based on embedding the relaxed discrete optimization problem into appropriate manifolds. We explore multiple embedding techniques -- namely, the construction of new dynamical systems on the manifold of orthogonal matrices and associated Procrustes approximations of the TSP cost function. Using these dynamical systems, we analyze the local neighborhood around the optimal TSP solutions (which are equilibria) using computations to approximate the associated \emph{stable manifolds}. We find that these flows frequently converge to undesirable equilibria. However, the solutions of the dynamical systems and the associated Procrustes approximation provide an interesting biasing approach for the popular Lin--Kernighan heuristic which yields fast convergence. The Lin--Kernighan heuristic is typically based on the computation of edges that have a `high probability' of being in the shortest tour, thereby effectively pruning the search space. Our new approach, instead, relies on a natural relaxation of the combinatorial optimization problem to the manifold of orthogonal matrices and the subsequent use of this solution to bias the Lin--Kernighan heuristic. Although the initial cost of computing these edges using the Procrustes solution is higher than existing methods, we find that the Procrustes solution, when coupled with a homotopy computation, contains valuable information regarding the optimal edges. We explore the Procrustes based approach on several TSP instances and find that our approach often requires fewer $k$-opt moves than existing approaches. Broadly, we hope that this work initiates more work in the intersection of dynamical systems theory and combinatorial optimization.
1
0
1
0
0
0
Isotopes of Octonion Algebras, G2-Torsors and Triality
Octonion algebras over rings are, in contrast to those over fields, not determined by their norm forms. Octonion algebras whose norm is isometric to the norm q of a given algebra C are twisted forms of C by means of the Aut(C)-torsor O(q) -> O(q)/Aut(C). We show that, over any commutative unital ring, these twisted forms are precisely the isotopes C(a,b) of C, with multiplication given by x*y=(xa)(by), for unit norm octonions a,b of C. The link is provided by the triality phenomenon, which we study from new and classical perspectives. We then study these twisted forms using the interplay thus obtained between torsor geometry and isotope computations, obtaining new results on octonion algebras over, e.g., rings of (Laurent) polynomials.
0
0
1
0
0
0
Second Order Statistics Analysis and Comparison between Arithmetic and Geometric Average Fusion
Two fundamental approaches to information averaging are based on linear and logarithmic combination, yielding the arithmetic average (AA) and geometric average (GA) of the fusing initials, respectively. In the context of target tracking, the two most common formats of data to be fused are random variables and probability density functions, namely $v$-fusion and $f$-fusion, respectively. In this work, we analyze and compare the second order statistics (including variance and mean square error) of AA and GA in terms of both $v$-fusion and $f$-fusion. The case of weighted Gaussian mixtures representing multitarget densities in the presence of false alarms and misdetection (whose weight sums are not necessarily unit) is also considered, the result of which appears significantly different from that for a single target. In addition to exact derivation, exemplifying analysis and illustrations are provided.
1
0
1
1
0
0
Phonon lasing from optical frequency comb illumination of a trapped ion
An atomic transition can be addressed by a single tooth of an optical frequency comb if the excited state lifetime ($\tau$) is significantly longer than the pulse repetition period ($T_\mathrm{r}$). In the crossover regime between fully-resolved and unresolved comb teeth ($\tau \lessapprox T_\mathrm{r}$), we observe Doppler cooling of a pre-cooled trapped atomic ion by a single tooth of a frequency-doubled optical frequency comb. We find that for initially hot ions, a multi-tooth effect gives rise to lasing of the ion's harmonic motion in the trap, verified by acoustic injection locking. The gain saturation of this phonon laser action leads to a comb of steady-state oscillation amplitudes, allowing hot ions to be loaded directly into the trap and laser cooled to crystallization despite the presence of hundreds of blue-detuned teeth.
0
1
0
0
0
0