Columns: title (string, length 7 to 239); abstract (string, length 7 to 2.76k); cs, phy, math, stat, quantitative biology, quantitative finance (int64 flags, 0 or 1).
VQABQ: Visual Question Answering by Basic Questions
Given an image and a question as input, our method outputs a text-based answer to the query question about the given image, a task known as Visual Question Answering (VQA). There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and outputs the basic questions of the given main question. The second module takes the main question, the image and these basic questions as input and outputs the text-based answer to the main question. We formulate the basic question generation problem as a LASSO optimization problem, and also propose a criterion for how to exploit these basic questions to help answer the main question. Our method is evaluated on the challenging VQA dataset and yields state-of-the-art accuracy, 60.34% in the open-ended task.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
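The abstract above casts basic-question selection as a LASSO problem. As an illustrative sketch only (the feature matrix `A` of candidate basic questions and the main-question vector `b` are hypothetical stand-ins, not the paper's actual pipeline), a minimal proximal-gradient (ISTA) LASSO solver looks like:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(A, b, lam, n_iter=500):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by ISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

A sparse solution `x` would then indicate which candidate basic questions to retain for answering the main question.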
Power-Constrained Secrecy Rate Maximization for Joint Relay and Jammer Selection Assisted Wireless Networks
In this paper, we examine physical layer security for cooperative wireless networks with multiple intermediate nodes, where the decode-and-forward (DF) protocol is considered. We propose a new joint relay and jammer selection (JRJS) scheme for protecting wireless communications against eavesdropping, where an intermediate node is selected as the relay for the sake of forwarding the source signal to the destination, while the remaining intermediate nodes are employed as friendly jammers which broadcast artificial noise to disturb the eavesdropper. We further investigate the power allocation among the source, relay and friendly jammers for maximizing the secrecy rate of the proposed JRJS scheme and derive a closed-form sub-optimal solution. Specifically, all the intermediate nodes which successfully decode the source signal are considered as relay candidates. For each candidate, we derive the sub-optimal closed-form power allocation solution and obtain the secrecy rate of the corresponding JRJS scheme. Then, the candidate achieving the highest secrecy rate is selected as the relay. Two assumptions about the channel state information (CSI), namely full CSI (FCSI) and partial CSI (PCSI), are considered. Simulation results show that the proposed JRJS scheme outperforms the conventional pure relay selection, pure jamming and GSVD-based beamforming schemes in terms of secrecy rate. Additionally, the proposed FCSI-based power allocation (FCSI-PA) and PCSI-based power allocation (PCSI-PA) schemes both achieve higher secrecy rates than the equal power allocation (EPA) scheme.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Demagnetization of cubic Gd-Ba-Cu-O bulk superconductor by cross-fields: measurements and 3D modelling
Superconducting bulks, acting as high-field permanent magnets, are promising for many applications. An important effect in bulk permanent magnets is crossed-field demagnetization, in which relatively small transverse fields can reduce the magnetic field trapped in the superconductor. Crossed-field demagnetization has not previously been studied in sample shapes such as rectangular prisms or cubes. This contribution presents a study based on both 3D numerical modelling and experiments. We study a cubic Gd-Ba-Cu-O bulk superconductor sample of side 6 mm magnetized by field cooling in an external field of around 1.3 T, which is later subjected to crossed magnetic fields of up to 164 mT. Modelling results agree with experiments, except at transverse fields of 50% or more of the initial trapped field. The current paths present a strong 3D nature. For instance, at the mid-plane perpendicular to the initial magnetizing field, the current density in this direction changes smoothly from the critical magnitude, ${J_c}$, at the lateral sides to zero at a certain penetration depth. This indicates a rotation of the current density with magnitude ${J_c}$, and hence force-free effects like flux cutting are expected to play a significant role.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Resonance-Free Light Recycling
The inability to efficiently tune the optical properties of waveguiding structures has been one of the major hurdles for the future scalability of integrated photonic systems. In silicon photonics, although dynamic tuning has been achieved with various mechanisms, even the most effective thermo-optic effect offers a refractive index change of only $1.86 \times 10^{-4} K^{-1}$. To enhance this small change, light recycling based on resonators has been employed in order to realize efficient modulators, phase shifters, and optical switches. However, the resonant enhancement comes at a great cost of optical bandwidth, fabrication tolerance and system scalability. Here we demonstrate a scalable light recycling approach based on spatial-mode multiplexing. Our approach offers a fabrication tolerance of ${\pm}$ 15 nm, in stark contrast to the non-scalable subnanometer tolerance in typical silicon resonators. We experimentally demonstrate light recycling up to 7 passes with an optical bandwidth greater than 100 nm. We realize power-efficient thermo-optic phase shifters that require only 1.7 mW per ${\pi}$, representing more than an 8-fold reduction in the power consumption.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Computationally Efficient and Practically Feasible Two Microphones Blind Speech Separation Method
Traditionally, blind speech separation techniques are computationally expensive, as they update the demixing matrix at every time frame index, making them impractical for many real-time applications. In this paper, a robust data-driven two-microphone sound source localization method is used as a criterion to reduce the computational complexity of the Independent Vector Analysis (IVA) blind speech separation (BSS) method. IVA is used to separate convolutively mixed speech and noise sources. The practical feasibility of the proposed method is demonstrated by implementing it on a smartphone device to separate speech and noise in real-world scenarios for hearing-aid applications. Experimental results with objective and subjective tests reveal the practical usability of the developed method in many real-world applications.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Magnetic ground state and magnon-phonon interaction in multiferroic h-YMnO$_3$
Inelastic neutron scattering has been used to study the magneto-elastic excitations in the multiferroic manganite hexagonal YMnO$_3$. An avoided crossing is found between magnon and phonon modes close to the Brillouin zone boundary in the $(a,b)$-plane. Neutron polarization analysis reveals that this mode has mixed magnon-phonon character. An external magnetic field along the $c$-axis is observed to cause a linear field-induced splitting of one of the spin wave branches. A theoretical description is performed, using a Heisenberg model of localized spins, acoustic phonon modes and a magneto-elastic coupling via the single-ion magnetostriction. The model quantitatively reproduces the dispersion and intensities of all modes in the full Brillouin zone, describes the observed magnon-phonon hybridized modes, and quantifies the magneto-elastic coupling. The combined information, including the field-induced magnon splitting, allows us to exclude several of the earlier proposed models and point to the correct magnetic ground state symmetry, and provides an effective dynamic model relevant for the multiferroic hexagonal manganites.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Neuromodulation of Neuromorphic Circuits
We present a novel methodology to enable control of a neuromorphic circuit in close analogy with the physiological neuromodulation of a single neuron. The methodology is general in that it only relies on a parallel interconnection of elementary voltage-controlled current sources. In contrast to controlling a nonlinear circuit through the parameter tuning of a state-space model, our approach is purely input-output. The circuit elements are controlled and interconnected to shape the current-voltage characteristics (I-V curves) of the circuit in prescribed timescales. In turn, shaping those I-V curves determines the excitability properties of the circuit. We show that this methodology enables both robust and accurate control of the circuit behavior and resembles the biophysical mechanisms of neuromodulation. As a proof of concept, we simulate a SPICE model composed of MOSFET transconductance amplifiers operating in the weak inversion regime.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Frank-Wolfe Style Algorithms for Large Scale Optimization
We introduce a few variants on Frank-Wolfe style algorithms suitable for large scale optimization. We show how to modify the standard Frank-Wolfe algorithm using stochastic gradients, approximate subproblem solutions, and sketched decision variables in order to scale to enormous problems while preserving (up to constants) the optimal convergence rate $\mathcal{O}(\frac{1}{k})$.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
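For reference, the standard Frank-Wolfe iteration the abstract builds on can be sketched in a few lines; the feasible set (the probability simplex) and objective here are illustrative choices, not the paper's:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iter=200):
    """Frank-Wolfe over the probability simplex.

    grad: callable returning the gradient at x.  The linear minimization
    oracle over the simplex simply picks the vertex (coordinate) where the
    gradient is smallest, so each iterate stays feasible.
    """
    x = x0.copy()
    for k in range(n_iter):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0              # LMO: best simplex vertex
        gamma = 2.0 / (k + 2.0)            # classic step size, O(1/k) rate
        x = (1 - gamma) * x + gamma * s
    return x
```

For example, with `grad = lambda x: x - y` this projects `y` onto the simplex; each iterate is a convex combination of vertices, so feasibility is preserved without any projection step.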
Theoretical Description of Micromaser in the Ultrastrong-Coupling Regime
We theoretically investigate an ultrastrongly-coupled micromaser based on Rydberg atoms interacting with a superconducting LC resonator, where the common rotating-wave approximation and slowly-varying-envelope approximation are no longer applicable. The effect of counter-rotating terms on the masing dynamics is studied in detail. We find that the intraresonator electric energy declines and the microwave oscillation frequency shifts significantly in the regime of ultrastrong coupling. Additionally, the micromaser phase fluctuation is suppressed, resulting in a reduced spectral linewidth.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Compatibility of quasi-orderings and valuations; A Baer-Krull Theorem for quasi-ordered Rings
In his work of 1969, Merle E. Manis introduced valuations on commutative rings. Recently, the class of totally quasi-ordered rings was developed by the second author. In the present paper, we establish the notion of compatibility between valuations and quasi-orders on rings, leading to a definition of the rank of a quasi-ordered ring. Moreover, we prove a Baer-Krull theorem for quasi-ordered rings: fixing a Manis valuation v on R, we characterize all v-compatible quasi-orders of R by lifting the quasi-orders from the residue class ring to R itself.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Feature Decomposition Based Saliency Detection in Electron Cryo-Tomograms
Electron Cryo-Tomography (ECT) allows 3D visualization of subcellular structures at submolecular resolution in close to the native state. However, due to the high degree of structural complexity and imaging limits, the automatic segmentation of cellular components from ECT images is very difficult. To complement and speed up existing segmentation methods, it is desirable to develop a generic cell component segmentation method that is 1) not specific to particular types of cellular components, 2) able to segment unknown cellular components, and 3) fully unsupervised, not relying on the availability of training data. As an important step towards this goal, in this paper we propose a saliency detection method that computes the likelihood that a subregion in a tomogram stands out from the background. Our method consists of four steps: supervoxel over-segmentation, feature extraction, feature matrix decomposition, and computation of saliency. The method produces a distribution map that represents the regions' saliency in tomograms. Our experiments show that our method can successfully label most salient regions detected by a human observer, and is able to filter out regions not containing cellular components. Therefore, our method can remove the majority of the background region and significantly speed up the subsequent segmentation and recognition of cellular components captured by ECT.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=1, quantitative finance=0
A Characterization of Integral ISS for Switched and Time-varying Systems
Most of the existing characterizations of the integral input-to-state stability (iISS) property are not valid for time-varying or switched systems in cases where converse Lyapunov theorems for stability are not available. This note provides a characterization that is valid for switched and time-varying systems, and shows that natural extensions of some of the existing characterizations result in only sufficient but not necessary conditions. The results provided also pinpoint suitable iISS gains and relate these to supply functions and bounds on the function defining the system dynamics.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Low-temperature marginal ferromagnetism explains anomalous scale-free correlations in natural flocks
We introduce a new ferromagnetic model capable of reproducing one of the most intriguing properties of collective behaviour in starling flocks, namely the fact that strong collective order of the system coexists with scale-free correlations of the modulus of the microscopic degrees of freedom, that is, the birds' speeds. The key idea of the new theory is that the single-particle potential needed to bound the modulus of the microscopic degrees of freedom around a finite value is marginal, that is, it has zero curvature. We study the model using a mean-field approximation and Monte Carlo simulations in three dimensions, complemented by finite-size scaling analysis. While at the standard critical temperature, $T_c$, the properties of the marginal model are exactly the same as those of a normal ferromagnet with continuous symmetry breaking, our results show that a novel zero-temperature critical point emerges, so that in its deeply ordered phase the marginal model develops a divergent susceptibility and correlation length of the modulus of the microscopic degrees of freedom, in complete analogy with experimental data on natural flocks of starlings.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Historical and personal recollections of Guido Altarelli
In this paper I will present a short scientific biography of Guido Altarelli, briefly describing some of his most important seminal works. I will analyze in great detail the paper on the $q^2$ evolution of the effective quark distribution: I will put this paper in a historical perspective, describing our theoretical understanding at that time and the reasons why the paper was so successful.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Segment Parameter Labelling in MCMC Mean-Shift Change Detection
This work addresses the problem of segmentation in time series data with respect to a statistical parameter of interest in Bayesian models. It is common to assume that the parameters are distinct within each segment. As such, many Bayesian change point detection models do not exploit the segment parameter patterns, which can improve performance. This work proposes a Bayesian mean-shift change point detection algorithm that makes use of repetition in segment parameters, by introducing segment class labels that utilise a Dirichlet process prior. The performance of the proposed approach was assessed on both synthetic and real world data, highlighting the enhanced performance when using parameter labelling.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
M/G/c/c state dependent queuing model for a road traffic system of two sections in tandem
We propose in this article an M/G/c/c state-dependent queuing model for road traffic flow. The model is based on finite capacity queuing theory, which captures the stationary density-flow relationships. It is also inspired by the deterministic Godunov scheme for road traffic simulation. We first present a reformulation of the existing linear case of the M/G/c/c state-dependent model, in order to use flow rather than speed variables. We then extend this model to consider upstream traffic demand and downstream traffic supply. After that, we propose the model for two road sections in tandem, where both sections influence each other. In order to deal with this mutual dependence, we solve an implicit system given by an algebraic equation. Finally, we derive some performance measures (throughput and expected travel time). A comparison with results predicted by M/G/c/c state-dependent queuing networks shows that the model proposed here indeed captures the dynamics of road traffic.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
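For context, the product form commonly used in the finite-capacity queuing literature for M/G/c/c state-dependent queues can be sketched as follows. The service-rate scaling `f` and the parameter values are illustrative (the linear speed-density case would take f(n) = (C - n + 1)/C), and the exact form used in the paper may differ:

```python
import math

def mgcc_dist(lam, ES, C, f):
    """Stationary occupancy distribution of an M/G/c/c state-dependent queue.

    Product form:  P_n  is proportional to  (lam*ES)^n / (n! * f(1)*...*f(n)),
    n = 0..C, where ES is the mean travel time of a lone vehicle and f(n)
    scales the per-vehicle speed when n vehicles occupy the section (f(1)=1).
    """
    weights = [1.0]
    prod_f = 1.0
    for n in range(1, C + 1):
        prod_f *= f(n)
        weights.append((lam * ES) ** n / (math.factorial(n) * prod_f))
    Z = sum(weights)                       # normalizing constant
    return [w / Z for w in weights]
```

With `f` identically 1 this reduces to the classical Erlang loss (M/M/c/c) distribution, which is a useful sanity check.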
Laplace approximation and the natural gradient for Gaussian process regression with the heteroscedastic Student-t model
This paper considers the Laplace method to derive approximate inference for Gaussian process (GP) regression in the location and scale parameters of the Student-t probabilistic model. This allows both the mean and the variance of the data to vary as a function of covariates, with the attractive feature that the Student-t model has been widely used as a useful tool for robustifying data analysis. The challenge in approximate inference for GP regression with the Student-t probabilistic model lies in the analytical intractability of the posterior distribution and the lack of concavity of the log-likelihood function. We present a natural gradient adaptation for the estimation process which primarily relies on the property that the Student-t model naturally has an orthogonal parametrization with respect to the location and scale parameters. Due to this particular property of the model, we also introduce an alternative Laplace approximation by using the Fisher information matrix in place of the Hessian matrix of the negative log-likelihood function. According to our experiments, this alternative approximation provides very similar posterior approximations and predictive performance to the traditional Laplace approximation. We also compare both of these Laplace approximations with the Markov chain Monte Carlo (MCMC) method. Moreover, we compare our heteroscedastic Student-t model with GP regression with the heteroscedastic Gaussian model. We also discuss how our approach can improve the inference algorithm in cases where the probabilistic model assumed for the data is not log-concave.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
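The core operation of the natural-gradient adaptation described above, with the Fisher information matrix standing in for the Hessian, is a preconditioned gradient step. This minimal sketch uses generic parameters, not the paper's GP model:

```python
import numpy as np

def natural_gradient_step(theta, grad, fisher, lr=0.1):
    """One natural-gradient update: theta <- theta - lr * F^{-1} grad.

    `fisher` is the Fisher information matrix used in place of the Hessian
    of the negative log-likelihood.
    """
    return theta - lr * np.linalg.solve(fisher, grad)
```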
Fixed points of n-valued maps, the fixed point property and the case of surfaces -- a braid approach
We study the fixed point theory of n-valued maps of a space X using the fixed point theory of maps between X and its configuration spaces. We give some general results to decide whether an n-valued map can be deformed to a fixed point free n-valued map. In the case of surfaces, we provide an algebraic criterion in terms of the braid groups of X to study this problem. If X is either the k-dimensional ball or an even-dimensional real or complex projective space, we show that the fixed point property holds for n-valued maps for all n $\ge$ 1, and we prove the same result for even-dimensional spheres for all n $\ge$ 2. If X is the 2-torus, we classify the homotopy classes of 2-valued maps in terms of the braid groups of X. We do not currently have a complete characterisation of the homotopy classes of split 2-valued maps of the 2-torus that contain a fixed point free representative, but we give an infinite family of such homotopy classes.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Large Synoptic Survey Telescope Galaxies Science Roadmap
The Large Synoptic Survey Telescope (LSST) will enable revolutionary studies of galaxies, dark matter, and black holes over cosmic time. The LSST Galaxies Science Collaboration has identified a host of preparatory research tasks required to leverage fully the LSST dataset for extragalactic science beyond the study of dark energy. This Galaxies Science Roadmap provides a brief introduction to critical extragalactic science to be conducted ahead of LSST operations, and a detailed list of preparatory science tasks including the motivation, activities, and deliverables associated with each. The Galaxies Science Roadmap will serve as a guiding document for researchers interested in conducting extragalactic science in anticipation of the forthcoming LSST era.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Multiresolution Coupled Vertical Equilibrium Model for Fast Flexible Simulation of CO$_2$ Storage
CO2 capture and storage is an important technology for mitigating climate change. The design of efficient strategies for safe, long-term storage requires the capability to efficiently simulate processes taking place on very different temporal and spatial scales. The physical laws describing CO2 storage are the same as for hydrocarbon recovery, but the characteristic spatial and temporal scales are quite different. Petroleum reservoirs seldom extend more than tens of kilometers and have operational horizons spanning decades. Injected CO2 needs to be safely contained for hundreds or thousands of years, during which it can migrate hundreds or thousands of kilometers. Because of the vast scales involved, conventional 3D reservoir simulation quickly becomes computationally unfeasible. The large density difference between injected CO2 and resident brine means that vertical segregation will take place relatively quickly, and depth-integrated models assuming vertical equilibrium (VE) often represent a better strategy to simulate long-term migration of CO2 in large-scale aquifer systems. VE models have primarily been formulated for relatively simple rock formations and have not been coupled to 3D simulation in a uniform way. In particular, known VE simulations have not been applied to models of realistic geology in which many flow compartments may exist in-between impermeable layers. In this paper, we generalize the concept of VE models, formulated in terms of well-proven reservoir simulation technology, to complex aquifer systems with multiple layers and regions. We also introduce novel formulations for multi-layered VE models by use of both direct spill and diffuse leakage between individual layers. This new layered 3D model is then coupled to a state-of-the-art 3D black-oil type model.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Improved GelSight Tactile Sensor for Measuring Geometry and Slip
A GelSight sensor uses an elastomeric slab covered with a reflective membrane to measure tactile signals. It measures 3D geometry and contact force information with high spatial resolution, and has successfully supported many challenging robot tasks. A previous sensor, based on a semi-specular membrane, produces high resolution but with limited geometric accuracy. In this paper, we describe a new design of GelSight for robot grippers, using a Lambertian membrane and a new illumination system, which gives greatly improved geometric accuracy while retaining the compact size. We demonstrate its use in measuring surface normals and reconstructing height maps using photometric stereo. We also use it for the task of slip detection, using a combination of information about relative motions on the membrane surface and the shear distortions. Using a robotic arm and a set of 37 everyday objects with varied properties, we find that the sensor can detect translational and rotational slip in general cases, and can be used to improve the stability of the grasp.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
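The surface-normal measurement mentioned above relies on classical Lambertian photometric stereo: with at least three known light directions, per-pixel normals follow from a least-squares solve. A minimal sketch (the light directions and intensities below are synthetic placeholders, not GelSight calibration data):

```python
import numpy as np

def photometric_stereo(L, I):
    """Least-squares Lambertian photometric stereo.

    L : (m, 3) known light directions, m >= 3
    I : (m, n_pixels) observed intensities under each light
    Model I = L @ (rho * n); returns unit normals and albedo per pixel.
    """
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # G = rho * n, shape (3, n_pixels)
    rho = np.linalg.norm(G, axis=0)             # per-pixel albedo
    return G / rho, rho
```

Height maps can then be reconstructed by integrating the recovered normal field.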
Deep Learning for Design and Retrieval of Nano-photonic Structures
Our visual perception of our surroundings is ultimately limited by the diffraction limit, which stipulates that optical information smaller than roughly half the illumination wavelength is not retrievable. Over the past decades, many breakthroughs have led to unprecedented imaging capabilities beyond the diffraction limit, with applications in biology and nanotechnology. In this context, nano-photonics has revolutionized the field of optics in recent years by enabling the manipulation of light-matter interaction with subwavelength structures. However, despite the many advances in this field, its impact and penetration in our daily life has been hindered by a convoluted and iterative process, cycling through modeling, nanofabrication and nano-characterization. The fundamental reason is that not only is the prediction of the optical response very time consuming, requiring the solution of Maxwell's equations with dedicated numerical packages, but, more significantly, the inverse problem, i.e. designing a nanostructure with an on-demand optical response, is currently a prohibitive task even with the most advanced numerical tools due to the high non-linearity of the problem. Here, we harness the power of Deep Learning, a new path in modern machine learning, and show its ability to predict the geometry of nanostructures based solely on their far-field response. This approach also addresses in a direct way the currently inaccessible inverse problem, paving the way for on-demand design of optical responses with applications such as sensing and imaging, as well as plasmon-mediated cancer thermotherapy.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Sparse Completely Positive Relaxation of the Modularity Maximization for Community Detection
In this paper, we consider the community detection problem under either the stochastic block model (SBM) assumption or the degree-correlated stochastic block model (DCSBM) assumption. The modularity maximization formulation for the community detection problem is NP-hard in general. We propose a sparse and low-rank completely positive relaxation for the modularity maximization problem, and then develop an efficient row-by-row (RBR) type block coordinate descent (BCD) algorithm to solve the relaxation, proving an $\mathcal{O}(1/\sqrt{N})$ convergence rate to a stationary point, where $N$ is the number of iterations. A fast rounding scheme is constructed to retrieve the community structure from the solution. Non-asymptotic high probability bounds on the misclassification rate are established to justify our approach. We further develop an asynchronous parallel RBR algorithm to speed up the convergence. Extensive numerical experiments on both synthetic and real-world networks show that the proposed approach enjoys advantages in both clustering accuracy and numerical efficiency. Our numerical results indicate that the newly proposed method is a quite competitive alternative for community detection on sparse networks with over 50 million nodes.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
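For reference, the objective being relaxed above is the standard Newman modularity. A direct, dense evaluation (illustrative only; the paper works with a relaxation, not this brute-force form):

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity of a partition of an undirected graph.

    A      : (n, n) symmetric adjacency matrix
    labels : (n,) community label per node
    Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * [c_i == c_j]
    """
    k = A.sum(axis=1)                          # node degrees
    two_m = k.sum()                            # twice the edge count
    same = labels[:, None] == labels[None, :]  # same-community mask
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m
```

Two disconnected cliques labeled as two communities give the textbook value Q = 0.5, a handy sanity check.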
Automated Algorithm Selection on Continuous Black-Box Problems By Combining Exploratory Landscape Analysis and Machine Learning
In this paper, we build upon previous work on designing informative and efficient Exploratory Landscape Analysis features for characterizing problem landscapes, and show their effectiveness in automatically constructing algorithm selection models for continuous black-box optimization problems. Focussing on several years of algorithm performance results from the COCO platform, we construct a representative set of high-performing complementary solvers and present an algorithm selection model that, compared to the portfolio's single best solver, on average requires less than half of the resources for solving a given problem. This is a huge gain in efficiency compared to classical ensemble methods, combined with an increased insight into problem characteristics and algorithm properties obtained by using informative features. Acting on the assumption that the function set of the Black-Box Optimization Benchmark is representative enough for practical applications, the model allows for selecting the best-suited optimization algorithm within the considered set for unseen problems prior to the optimization itself, based on a small sample of function evaluations. Note that such a sample can even be reused for the initial population of an evolutionary (optimization) algorithm, so that even the feature costs become negligible.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Car-following behavior of connected vehicles in a mixed traffic flow: modeling and stability analysis
Vehicle-to-vehicle communications can change driving behavior significantly by providing drivers with rich information on downstream traffic flow conditions. This study seeks to model the varying car-following behaviors involving connected vehicles and human-driven vehicles in mixed traffic flow. A revised car-following model is developed using the intelligent driver model (IDM) to capture drivers' perceptions of their preceding traffic conditions through vehicle-to-vehicle communications. Stability analysis of the mixed traffic flow is conducted for a specific case. Numerical results show that the stable region is clearly enlarged compared with the IDM.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
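The IDM underlying the revised model above is standard; a minimal sketch with illustrative default parameters (not the calibrated values of the study):

```python
import math

def idm_acceleration(v, gap, dv, v0=33.3, T=1.5, a_max=1.0, b=2.0, s0=2.0, delta=4):
    """Intelligent Driver Model (IDM) acceleration.

    v   : own speed (m/s)
    gap : bumper-to-bumper gap to the leader (m)
    dv  : approaching rate, v - v_leader (m/s)
    Remaining arguments are the usual IDM parameters (desired speed v0,
    time headway T, max acceleration a_max, comfortable deceleration b,
    jam distance s0, acceleration exponent delta).
    """
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))  # desired gap
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)
```

On a free road the model accelerates towards the desired speed, while a short gap produces braking, which is the qualitative behavior the stability analysis builds on.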
A note on a two-weight estimate for the dyadic square function
We show that the two-weight estimate for the dyadic square function proved by Lacey--Li in [2] is sharp.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The Oblique Orbit of WASP-107b from K2 Photometry
Observations of nine transits of WASP-107 during the {\it K2} mission reveal three separate occasions when the planet crossed in front of a starspot. The data confirm the stellar rotation period to be 17 days --- approximately three times the planet's orbital period --- and suggest that large spots persist for at least one full rotation. If the star had a low obliquity, at least two additional spot crossings should have been observed. They were not observed, giving evidence for a high obliquity. We use a simple geometric model to show that the obliquity is likely in the range 40-140$^\circ$, i.e., both spin-orbit alignment and anti-alignment can be ruled out. WASP-107 thereby joins the small collection of relatively low-mass stars hosting a giant planet with a high obliquity. Most such stars have been observed to have low obliquities; all the exceptions, including WASP-107, involve planets with relatively wide orbits ("warm Jupiters", with $a_{\rm min}/R_\star \gtrsim 8$). This demonstrates a connection between stellar obliquity and planet properties, in contradiction to some theories for obliquity excitation.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A characterization of signed discrete infinitely divisible distributions
In this article, we first give a brief review of negative probability models and quasi-infinitely divisible distributions. We then extend Feller's characterization of discrete infinitely divisible distributions to signed discrete infinitely divisible distributions, which are discrete pseudo compound Poisson (DPCP) distributions with connections to the Lévy-Wiener theorem. This is a special case of an open problem proposed by Sato (2014) and by Chaumont and Yor (2012). An analogous result involving characteristic functions is shown for signed integer-valued infinitely divisible distributions. We show that many distributions are DPCP by the non-zero p.g.f. property, such as the mixed Poisson distribution and the fractional Poisson process. The DPCP class has some surprising properties; one is that the parameter $\lambda$ in the DPCP class cannot be arbitrarily small.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Frequency Domain Singular Value Decomposition for Efficient Spatial Audio Coding
Advances in virtual reality have generated substantial interest in accurately reproducing and storing spatial audio in the higher order ambisonics (HOA) representation, given its rendering flexibility. Recent standardization for HOA compression adopted a framework wherein HOA data are decomposed into principal components that are then encoded by standard audio coding, i.e., frequency domain quantization and entropy coding to exploit psychoacoustic redundancy. A noted shortcoming of this approach is the occasional mismatch in principal components across blocks, and the resulting suboptimal transitions in the data fed to the audio coder. Instead, we propose a framework where singular value decomposition (SVD) is performed after transformation to the frequency domain via the modified discrete cosine transform (MDCT). This framework not only ensures smooth transition across blocks, but also enables frequency dependent SVD for better energy compaction. Moreover, we introduce a novel noise substitution technique to compensate for suppressed ambient energy in discarded higher order ambisonics channels, which significantly enhances the perceptual quality of the reconstructed HOA signal. Objective and subjective evaluation results provide evidence for the effectiveness of the proposed framework in terms of both higher compression gains and better perceptual quality, compared to existing methods.
1
0
0
0
0
0
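The energy-compaction idea behind the SVD step can be illustrated in a few lines. This is a hedged sketch, not the paper's codec: the channel count, the underlying rank, and the use of raw time samples rather than MDCT coefficients are all illustrative assumptions.

```python
import numpy as np

# Sketch of SVD energy compaction on a block of multichannel coefficients.
# Nine channels are mixtures of three sources, so the block has rank ~3 and
# three principal components capture essentially all of the energy.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
sources = np.stack([np.sin(2 * np.pi * f * t) for f in (5, 11, 23)])
mix = rng.normal(size=(9, 3))
X = mix @ sources  # shape (channels, samples)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 3  # keep only the dominant principal components
X_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error with {k} components: {rel_err:.2e}")
```

Truncating to the dominant components is what lets the subsequent audio coder spend bits only where the energy actually is.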
Functional Dynamical Structures in Complex Systems: an Information-Theoretic Approach
Understanding the dynamical behavior of complex systems is of exceptional relevance in everyday life, from biology to economics. In order to describe the dynamical organization of complex systems, existing methods require knowledge of the network topology. By contrast, in this thesis we develop a new method based on Information Theory which does not require any topological knowledge. We introduce the Dynamical Cluster Index to detect those groups of system elements which have strong mutual interactions, termed Relevant Subsets. Among them, we identify those which exchange the most information with the rest of the system, and are thus the most influential for its dynamics. In order to detect such Functional Dynamical Structures, we introduce another information-theoretic measure, called the D-index. The experimental results make us confident that our method can be effectively used to study both artificial and natural complex systems.
0
1
0
0
0
0
Angle-resolved photoemission spectroscopy with quantum gas microscopes
Quantum gas microscopes are a promising tool to study interacting quantum many-body systems and bridge the gap between theoretical models and real materials. So far they were limited to measurements of instantaneous correlation functions of the form $\langle \hat{O}(t) \rangle$, even though extensions to frequency-resolved response functions $\langle \hat{O}(t) \hat{O}(0) \rangle$ would provide important information about the elementary excitations in a many-body system. For example, single particle spectral functions, which are usually measured using photoemission experiments in electron systems, contain direct information about fractionalization and the quasiparticle excitation spectrum. Here, we propose a measurement scheme to experimentally access the momentum and energy resolved spectral function in a quantum gas microscope with currently available techniques. As an example for possible applications, we numerically calculate the spectrum of a single hole excitation in one-dimensional $t-J$ models with isotropic and anisotropic antiferromagnetic couplings. A sharp asymmetry in the distribution of spectral weight appears when a hole is created in an isotropic Heisenberg spin chain. This effect slowly vanishes for anisotropic spin interactions and disappears completely in the case of pure Ising interactions. The asymmetry strongly depends on the total magnetization of the spin chain, which can be tuned in experiments with quantum gas microscopes. An intuitive picture for the observed behavior is provided by a slave-fermion mean field theory. The key properties of the spectra are visible at currently accessible temperatures.
0
1
0
0
0
0
A note on MLE of covariance matrix
For a multivariate normal setup, it is well known that the maximum likelihood estimator of the covariance matrix is neither admissible nor minimax under the Stein loss function. Over the past six decades, a large body of research has followed this line on Stein's phenomenon. In this note, our results are twofold. First, with respect to a Stein-type loss function, we use the full Iwasawa decomposition to highlight the unpleasant phenomenon that the minimum risks of the maximum likelihood estimators for different coordinate systems (the Cholesky decomposition and the full Iwasawa decomposition) are different. Second, we introduce a new class of loss functions under which the minimum risks of the maximum likelihood estimators for the two coordinate systems, the Cholesky decomposition and the full Iwasawa decomposition, are the same, and hence Stein's paradox disappears.
0
0
1
1
0
0
Resource Management in Cloud Computing: Classification and Taxonomy
Cloud Computing is a new era of remote, Internet-based computing in which users can easily access their personal resources from any computer through the Internet. The cloud delivers computing as a utility, available to consumers on demand under a simple pay-per-use consumer-provider service model, and it contains a large number of shared resources. Resource management is therefore a major issue in cloud computing, as in any other computing paradigm. Due to the finite availability of resources, it is very challenging for cloud providers to supply all the requested resources; from the provider's perspective, cloud resources must be allocated in a fair and efficient manner. No research survey is available that treats resource management as a process in cloud computing, so this paper provides a detailed, sequential view of the steps of resource management in cloud computing. First, it classifies the various resources in cloud computing. It then gives a taxonomy of resource management in cloud computing through which further research can be pursued. Lastly, a comparison of various resource management algorithms is presented.
1
0
0
0
0
0
What can the programming language Rust do for astrophysics?
The astrophysics community uses different tools for computational tasks such as complex systems simulations, radiative transfer calculations or big data. Programming languages like Fortran, C or C++ are commonly present in these tools and, generally, the language choice was made based on the need for performance. However, this comes at a cost: safety. For instance, a common source of error is the access to invalid memory regions, which produces random execution behaviors and affects the scientific interpretation of the results. In 2015, Mozilla Research released the first stable version of a new programming language named Rust. Many features make this new language attractive for the scientific community, it is open source and it guarantees memory safety while offering zero-cost abstraction. We explore the advantages and drawbacks of Rust for astrophysics by re-implementing the fundamental parts of Mercury-T, a Fortran code that simulates the dynamical and tidal evolution of multi-planet systems.
1
1
0
0
0
0
Eliashberg theory with the external pair potential
Based on the BCS model with the external pair potential formulated in the work of \emph{K.V. Grigorishin}, arXiv:1605.07080, an analogous model with electron-phonon coupling and Coulomb coupling is proposed. The generalized Eliashberg equations in the regime of renormalization of the order parameter are obtained. The high-temperature asymptotics and the influence of the Coulomb pseudopotential on them are investigated: as in the BCS model, the order parameter asymptotically tends to zero as temperature rises, but accounting for the Coulomb pseudopotential leads to the existence of a critical temperature. The effective Ginzburg-Landau theory is formulated for this model, where the temperature dependencies near $T_{c}$ of the basic characteristics of a superconductor (coherence length, magnetic penetration depth, GL parameter, thermodynamical critical field, first and second critical fields) reduce to the temperature dependencies of the ordinary GL theory applied to the BCS model with the external pair potential.
0
1
0
0
0
0
An exponential limit shape of random $q$-proportion Bulgarian solitaire
We introduce \emph{$p_n$-random $q_n$-proportion Bulgarian solitaire} ($0<p_n,q_n\le 1$), played on $n$ cards distributed in piles. In each pile, a number of cards equal to the proportion $q_n$ of the pile size rounded upward to the nearest integer are candidates to be picked. Each candidate card is picked with probability $p_n$, independently of other candidate cards. This generalizes Popov's random Bulgarian solitaire, in which there is a single candidate card in each pile. Popov showed that a triangular limit shape is obtained for a fixed $p$ as $n$ tends to infinity. Here we let both $p_n$ and $q_n$ vary with $n$. We show that under the conditions $q_n^2 p_n n/{\log n}\rightarrow \infty$ and $p_n q_n \rightarrow 0$ as $n\to\infty$, the $p_n$-random $q_n$-proportion Bulgarian solitaire has an exponential limit shape.
0
0
1
0
0
0
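A single round of the generalized solitaire is easy to simulate. The following is a minimal sketch under the abstract's rules (ceil(q·size) candidate cards per pile, each picked independently with probability p); the pile sizes are illustrative.

```python
import math
import random

def bulgarian_step(piles, p, q, rng=None):
    """One round of p-random q-proportion Bulgarian solitaire (sketch).

    From each pile, ceil(q * size) cards are candidates; each candidate is
    picked with probability p. All picked cards form one new pile.
    """
    rng = rng or random.Random(0)
    new_piles, picked_total = [], 0
    for size in piles:
        candidates = math.ceil(q * size)
        picked = sum(rng.random() < p for _ in range(candidates))
        picked_total += picked
        if size - picked > 0:
            new_piles.append(size - picked)
    if picked_total > 0:
        new_piles.append(picked_total)
    return new_piles

# With p = 1 and a tiny q, exactly one card is taken from each pile,
# recovering the classical (deterministic) Bulgarian solitaire move.
piles = [10, 7, 3]
nxt = bulgarian_step(piles, p=1.0, q=1e-9)
print(nxt)
```

The total number of cards is conserved by every move, which is what makes the limit-shape question well posed.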
Constraining Effective Temperature, Mass and Radius of Hot White Dwarfs
By introducing a simplified transport model of the outer layers of white dwarfs, we derive an analytical semi-empirical relation which constrains the effective temperature, mass and radius of white dwarfs. This relation is used to classify recent white dwarf data according to their time evolution in the non-accretion cooling process. The formula permits us to study the population map of white dwarfs in the central temperature-mass plane and to discuss its relation with the ignition temperature of C-O material. Our effective temperature-mass-radius relation provides a quick method to estimate the mass of newly observed white dwarfs from spectral measurements of their effective temperature and surface gravity.
0
1
0
0
0
0
Hierarchical Multinomial-Dirichlet model for the estimation of conditional probability tables
We present a novel approach for estimating conditional probability tables, based on a joint, rather than independent, estimate of the conditional distributions belonging to the same table. We derive exact analytical expressions for the estimators and we analyse their properties both analytically and via simulation. We then apply this method to the estimation of parameters in a Bayesian network. Given the structure of the network, the proposed approach better estimates the joint distribution and significantly improves the classification performance with respect to traditional approaches.
0
0
0
1
0
0
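For context, the traditional independent approach that the abstract contrasts with is a per-parent Dirichlet-smoothed estimate of each conditional distribution; the joint hierarchical estimator itself is not reproduced here. A minimal sketch with hypothetical variable names:

```python
from collections import Counter

def cpt_estimate(samples, parent_states, child_states, alpha=1.0):
    """Independent Dirichlet-smoothed CPT estimate (the traditional baseline,
    not the paper's joint hierarchical estimator).

    samples: list of (parent_value, child_value) pairs.
    Returns {parent: {child: P(child | parent)}}.
    """
    counts = Counter(samples)
    cpt = {}
    for p in parent_states:
        denom = sum(counts[(p, c)] for c in child_states) + alpha * len(child_states)
        cpt[p] = {c: (counts[(p, c)] + alpha) / denom for c in child_states}
    return cpt

data = [("rain", "wet"), ("rain", "wet"), ("rain", "dry"), ("sun", "dry")]
cpt = cpt_estimate(data, ["rain", "sun"], ["wet", "dry"])
print(cpt["rain"]["wet"])
```

Each row of the table is estimated in isolation here; the paper's point is that estimating the rows jointly shares strength across sparsely observed parent configurations.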
Proceedings of the Fifth Workshop on Proof eXchange for Theorem Proving
This volume of EPTCS contains the proceedings of the Fifth Workshop on Proof Exchange for Theorem Proving (PxTP 2017), held on September 23-24, 2017 as part of the Tableaux, FroCoS and ITP conferences in Brasilia, Brazil. The PxTP workshop series brings together researchers working on various aspects of communication, integration, and cooperation between reasoning systems and formalisms, with a special focus on proofs. The progress in computer-aided reasoning, both automated and interactive, during the past decades, made it possible to build deduction tools that are increasingly more applicable to a wider range of problems and are able to tackle larger problems progressively faster. In recent years, cooperation between such tools in larger systems has demonstrated the potential to reduce the amount of manual intervention. Cooperation between reasoning systems relies on availability of theoretical formalisms and practical tools to exchange problems, proofs, and models. The PxTP workshop series strives to encourage such cooperation by inviting contributions on all aspects of cooperation between reasoning tools, whether automatic or interactive.
1
0
0
0
0
0
Photoinduced charge-order melting dynamics in a one-dimensional interacting Holstein model
Transient quantum dynamics in an interacting fermion-phonon system are investigated. In particular, a charge order (CO) melting after a short optical-pulse irradiation and roles of the quantum phonons on the transient dynamics are focused on. A spinless-fermion model in a one-dimensional chain coupled with local phonons is analyzed numerically. The infinite time-evolving block decimation algorithm is adopted as a reliable numerical method for one-dimensional quantum many-body systems. Numerical results for the photoinduced CO melting dynamics without phonons are well interpreted by the soliton picture for the CO domains. This interpretation is confirmed by the numerical simulation for an artificial local excitation and the classical soliton model. In the case of the large phonon frequency corresponding to the antiadiabatic condition, the CO melting is induced by propagations of the polaronic solitons with the renormalized soliton velocity. On the other hand, in the case of the small phonon frequency corresponding to the adiabatic condition, the first stage of the CO melting dynamics occurs due to the energy transfer from the fermionic to phononic systems, and the second stage is brought about by the soliton motions around the bottom of the soliton band. Present analyses provide a standard reference for the photoinduced CO melting dynamics in low-dimensional many-body quantum systems.
0
1
0
0
0
0
Compactness of the automorphism group of a topological parallelism on real projective 3-space: The disconnected case
We prove that the automorphism group of a topological parallelism on real projective 3-space is compact. In a preceding article it was proved that at least the connected component of the identity is compact. The present proof does not depend on that earlier result.
0
0
1
0
0
0
A model provides insight into electric field-induced rupture mechanism of water-in-toluene emulsion films
This paper presents the first MD simulations of a model which we have designed for understanding the development of electro-induced instability of a thin toluene emulsion film in contact with a saline aqueous phase. This study demonstrates the role of charge accumulation in toluene film rupture when a DC electric field is applied. The critical value of the external field at which the film ruptures, the thin-film charge distribution, capacitance, number densities and film structure have been obtained by simulating the system within the NVT and NPT ensembles. A mechanism of thin-film rupture driven by the electric discharge is suggested. We show that the NPT ensemble with a constant surface tension is a better choice for further modeling of systems that more closely resemble real films.
0
1
0
0
0
0
A Rich-Variant Architecture for a User-Aware multi-tenant SaaS approach
The Software as a Service cloud computing model favors Multi-Tenancy as a key factor in exploiting economies of scale. However, Multi-Tenancy presents several disadvantages. Our approach therefore assigns instances to multiple tenants with an optimal solution, ensuring greater economies of scale while avoiding tenants' hesitation to share resources. The present paper presents the architecture of our user-aware multi-tenancy SaaS approach based on the use of rich-variant components. The proposed approach seeks to model the functional customization of services as well as to automate the computation of the optimal distribution of instances across tenants. The proposed model takes into consideration tenants' functional requirements and deployment requirements to deduce an optimal distribution, using essentially a specific variability engine and a graph-based execution framework.
1
0
0
0
0
0
Comparison of Parallelisation Approaches, Languages, and Compilers for Unstructured Mesh Algorithms on GPUs
Efficiently exploiting GPUs is increasingly essential in scientific computing, as many current and upcoming supercomputers are built using them. To facilitate this, there are a number of programming approaches, such as CUDA, OpenACC and OpenMP 4, supporting different programming languages (mainly C/C++ and Fortran). There are also several compiler suites (clang, nvcc, PGI, XL), each supporting different combinations of languages. In this study, we take a detailed look at some of the currently available options, and carry out a comprehensive analysis and comparison using computational loops and applications from the domain of unstructured mesh computations. Beyond runtimes and performance metrics (GB/s), we explore factors that influence performance such as register counts, occupancy, usage of different memory types, instruction counts, and algorithmic differences. Results of this work show how clang's CUDA compiler frequently outperforms NVIDIA's nvcc, reveal performance issues with directive-based approaches on complex kernels, and show OpenMP 4 support maturing in clang and XL, currently around 10% slower than CUDA.
1
0
0
0
0
0
Notes on Growing a Tree in a Graph
We study the height of a spanning tree $T$ of a graph $G$ obtained by starting with a single vertex of $G$ and repeatedly selecting, uniformly at random, an edge of $G$ with exactly one endpoint in $T$ and adding this edge to $T$.
1
0
0
0
0
0
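The growth process described in the abstract can be sketched directly: keep a tree T, repeatedly pick a uniformly random edge of G with exactly one endpoint in T, and add it. The adjacency-dict encoding below is an assumption for illustration.

```python
import random

def grow_tree(adj, root, rng=None):
    """Grow a spanning tree of the graph `adj` (dict: vertex -> set of
    neighbors) by repeatedly adding a uniformly random boundary edge.
    Returns a parent-pointer dict."""
    rng = rng or random.Random(0)
    in_tree = {root}
    parent = {root: None}
    while True:
        # boundary = edges with exactly one endpoint in the current tree
        boundary = [(u, v) for u in in_tree for v in adj[u] if v not in in_tree]
        if not boundary:
            return parent
        u, v = rng.choice(boundary)
        parent[v] = u
        in_tree.add(v)

def height(parent):
    """Length of the longest root-to-leaf path in a parent-pointer tree."""
    def depth(v):
        d = 0
        while parent[v] is not None:
            v, d = parent[v], d + 1
        return d
    return max(depth(v) for v in parent)

# On a 4-cycle grown from vertex 0, the resulting spanning tree is either
# two length-2 branches (height 2) or a single path (height 3).
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
t = grow_tree(adj, 0)
print(height(t))
```

Rebuilding the boundary each step is quadratic; it keeps the sketch short, and the random process it samples is exactly the one studied in the note.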
Vortex pinning by the point potential in topological superconductors: a scheme for braiding Majorana bound states
We propose theoretically an effective scheme for braiding Majorana bound states by manipulating the point potential. The vortex pinning effect is carefully elucidated. This effect may be used to control the vortices and Majorana bound states in topological superconductors. The exchange of two vortices induced by moving the potentials is simulated numerically. The zero-energy state in the vortex core is robust with respect to the strength of the potential. The Majorana bound states in a pinned vortex are identified numerically.
0
1
0
0
0
0
T* : A Heuristic Search Based Algorithm for Motion Planning with Temporal Goals
Motion planning is the core problem to solve for developing any application involving an autonomous mobile robot. The fundamental motion planning problem involves generating a trajectory for a robot for point-to-point navigation while avoiding obstacles. Heuristic-based search algorithms like A* have been shown to be extremely efficient in solving such planning problems. Recently, there has been an increased interest in specifying complex motion plans using temporal logic. In the state-of-the-art algorithm, the temporal logic motion planning problem is reduced to a graph search problem and Dijkstra's shortest path algorithm is used to compute the optimal trajectory satisfying the specification. The A* algorithm when used with a proper heuristic for the distance from the destination can generate an optimal path in a graph efficiently. The primary challenge for using A* algorithm in temporal logic path planning is that there is no notion of a single destination state for the robot. In this thesis, we present a novel motion planning algorithm T* that uses the A* search procedure in temporal logic path planning \emph{opportunistically} to generate an optimal trajectory satisfying a temporal logic query. Our experimental results demonstrate that T* achieves an order of magnitude improvement over the state-of-the-art algorithm to solve many temporal logic motion planning problems.
1
0
0
0
0
0
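For reference, the A* procedure that T* invokes opportunistically looks as follows on a simple grid. This is the textbook algorithm with a Manhattan heuristic, not the T* algorithm itself, and the grid encoding is illustrative.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 4-connected grid; grid[r][c] == 1 is an obstacle.
    Returns the length of a shortest path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible
    open_set = [(h(start), 0, start)]  # (f = g + h, g, position)
    best = {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g
        if g > best.get(cur, float("inf")):
            continue  # stale queue entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))
```

The thesis's challenge is precisely that a temporal-logic query has no single `goal` state to feed into `h`, which is what T* works around.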
Discontinuous Homomorphisms of $C(X)$ with $2^{\aleph_0}>\aleph_2$
Assume that $M$ is a c.t.m. of $ZFC+CH$ containing a simplified $(\omega_1,2)$-morass, $P\in M$ is the poset adding $\aleph_3$ generic reals and $G$ is $P$-generic over $M$. In $M$ we construct a function between sets of terms in the forcing language, that interpreted in $M[G]$ is an $\mathbb R$-linear order-preserving monomorphism from the finite elements of an ultrapower of the reals, over a non-principal ultrafilter on $\omega$, into the Esterle algebra of formal power series. Therefore it is consistent that $2^{\aleph_0}=\aleph_3$ and, for any infinite compact Hausdorff space $X$, there exists a discontinuous homomorphism of $C(X)$, the algebra of continuous real-valued functions on $X$. For $n\in \mathbb N$, If $M$ contains a simplified $(\omega_1,n)$-morass, then in the Cohen extension of $M$ adding $\aleph_n$ generic reals there exists a discontinuous homomorphism of $C(X)$, for any infinite compact Hausdorff space $X$.
0
0
1
0
0
0
Investigation of beam self-polarization in the future $e^{+}e^{-}$ circular collider
The use of resonant depolarization has been suggested for precise beam energy measurements (better than 100 keV) in the $e^{+}e^{-}$ Future Circular Collider (FCC-$e^{+}e^{-}$) for Z and WW physics at 45 and 80 GeV beam energy, respectively. Longitudinal beam polarization would benefit the Z-peak physics program; however, it is not essential and therefore will not be investigated here. In this paper the possibility of self-polarized leptons is considered. Preliminary results of simulations in the presence of quadrupole misalignments and beam position monitor (BPM) errors for a simplified FCC-$e^{+}e^{-}$ ring are presented.
0
1
0
0
0
0
Learning to Grasp from a Single Demonstration
Learning-based approaches for robotic grasping using visual sensors typically require collecting a large dataset, either manually labeled or generated through many trials and errors of a robotic manipulator in the real or simulated world. We propose a simpler learning-from-demonstration approach that is able to detect the object to grasp from merely a single demonstration, using a convolutional neural network we call GraspNet. In order to increase robustness and decrease training time even further, we leverage data from previous demonstrations to quickly fine-tune a GraspNet for each new demonstration. We present some preliminary results on a grasping experiment with the Franka Panda cobot, for which we can train a GraspNet with only hundreds of training iterations.
1
0
0
0
0
0
Tensor Minkowski Functionals for random fields on the sphere
We generalize the translation invariant tensor-valued Minkowski Functionals which are defined on two-dimensional flat space to the unit sphere. We apply them to level sets of random fields. The contours enclosing boundaries of level sets of random fields give a spatial distribution of random smooth closed curves. We obtain analytic expressions for the ensemble expectation values for the matrix elements of the tensor-valued Minkowski Functionals for isotropic Gaussian and Rayleigh fields. We elucidate the way in which the elements of the tensor Minkowski Functionals encode information about the nature and statistical isotropy (or departure from isotropy) of the field. We then implement our method to compute the tensor-valued Minkowski Functionals numerically and demonstrate how they encode statistical anisotropy and departure from Gaussianity by applying the method to maps of the Galactic foreground emissions from the PLANCK data.
0
1
0
0
0
0
Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: Statistical and systematic error budgets for future experiments
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
0
1
0
0
0
0
The Incremental Proximal Method: A Probabilistic Perspective
In this work, we highlight a connection between the incremental proximal method and stochastic filters. We begin by showing that the proximal operators coincide with, and hence can be realized by, Bayes updates. We give the explicit form of the updates for the linear regression problem and show that there is a one-to-one correspondence between the proximal operator of least-squares regression and the Bayes update when the prior and the likelihood are Gaussian. We then carry this observation over to a general sequential setting: we consider the incremental proximal method, an algorithm for large-scale optimization, and show that, for a linear-quadratic cost function, it can naturally be realized by the Kalman filter. We then discuss the implications of this idea for nonlinear optimization problems where proximal operators are in general not realizable. In such settings, we argue that the extended Kalman filter can provide a systematic way to derive practical procedures.
0
0
0
1
0
0
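The core identity, that the proximal step on a least-squares term equals a Gaussian Bayes (Kalman-type) update, can be checked numerically for a single scalar observation. A sketch with illustrative dimensions, unit observation noise, and a prior covariance of lam*I mirroring the proximal step size:

```python
import numpy as np

# prox_{lam * f}(theta0) for f(theta) = 0.5 * (y - x.theta)^2 has the
# closed form theta0 + lam*x*(y - x.theta0) / (1 + lam*||x||^2), which is
# exactly the posterior mean under prior N(theta0, lam*I) and unit noise.
rng = np.random.default_rng(1)
d = 3
theta0 = rng.normal(size=d)
x = rng.normal(size=d)
y = 0.7
lam = 0.5

# proximal step, closed form
prox = theta0 + lam * x * (y - x @ theta0) / (1.0 + lam * (x @ x))

# Kalman-style update: gain K = P x / (x' P x + 1) with P = lam * I
K = lam * x / (x @ (lam * x) + 1.0)
bayes = theta0 + K * (y - x @ theta0)

print(np.allclose(prox, bayes))
```

The same correspondence, applied observation by observation, is what lets the incremental proximal method for a linear-quadratic cost be realized by a Kalman filter.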
List Decoding of Insertions and Deletions
List decoding of insertions and deletions in the Levenshtein metric is considered. The Levenshtein distance between two sequences is the minimum number of insertions and deletions needed to turn one of the sequences into the other. In this paper, a Johnson-like upper bound on the maximum list size when list decoding in the Levenshtein metric is derived. This bound depends only on the length and minimum Levenshtein distance of the code, the length of the received word, and the alphabet size. It shows that polynomial-time list decoding beyond half the Levenshtein distance is possible for many parameters. Further, we also prove a lower bound on list decoding of deletions with the well-known binary Varshamov-Tenengolts (VT) codes, which shows that the maximum list size grows exponentially with the number of deletions. Finally, an efficient list decoding algorithm for two insertions/deletions with VT codes is given. This decoder can be modified to a polynomial-time list decoder for any constant number of insertions/deletions.
1
0
1
0
0
0
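In the insertion/deletion-only setting, the Levenshtein distance between two sequences reduces to deleting everything outside a longest common subsequence and inserting the rest, i.e. |a| + |b| - 2*LCS(a, b). A minimal sketch:

```python
def lcs_len(a, b):
    """Length of a longest common subsequence of sequences a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if ca == cb
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[-1][-1]

def levenshtein_indel(a, b):
    """Levenshtein distance when only insertions and deletions are allowed:
    delete the symbols of a outside an LCS, then insert the rest of b."""
    return len(a) + len(b) - 2 * lcs_len(a, b)

# "10110" -> delete one symbol -> "1010"
print(levenshtein_indel("10110", "1010"))
```

This is the metric in which the paper's Johnson-like bound and the VT-code list decoders operate.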
Almost sure scattering for the energy-critical NLS with radial data below $H^1(\mathbb{R}^4)$
We prove almost sure global existence and scattering for the energy-critical nonlinear Schrödinger equation with randomized spherically symmetric initial data in $H^s(\mathbb{R}^4)$ with $\frac56<s<1$. We were inspired to consider this problem by the recent work of Dodson--Lührmann--Mendelson, which treated the analogous problem for the energy-critical wave equation.
0
0
1
0
0
0
Almost Boltzmann Exploration
Boltzmann exploration is widely used in reinforcement learning to provide a trade-off between exploration and exploitation. Recently, in (Cesa-Bianchi et al., 2017) it has been shown that pure Boltzmann exploration does not perform well from a regret perspective, even in the simplest setting of stochastic multi-armed bandit (MAB) problems. In this paper, we show that a simple modification to Boltzmann exploration, motivated by a variation of the standard doubling trick, achieves $O(K\log^{1+\alpha} T)$ regret for a stochastic MAB problem with $K$ arms, where $\alpha>0$ is a parameter of the algorithm. This improves on the result in (Cesa-Bianchi et al., 2017), where an algorithm inspired by the Gumbel-softmax trick achieves $O(K\log^2 T)$ regret. We also show that our algorithm achieves $O(\beta(G) \log^{1+\alpha} T)$ regret in stochastic MAB problems with graph-structured feedback, without knowledge of the graph structure, where $\beta(G)$ is the independence number of the feedback graph. Additionally, we present extensive experimental results on real datasets and applications for multi-armed bandits with both traditional bandit feedback and graph-structured feedback. In all cases, our algorithm performs as well or better than the state-of-the-art.
1
0
0
1
0
0
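Pure Boltzmann exploration, the baseline the paper modifies, samples arms in proportion to exp(Q/temperature). A minimal sketch (the paper's doubling-trick modification is not reproduced here; the Q-values are illustrative):

```python
import math
import random

def boltzmann_pull(q_values, temperature, rng=None):
    """Sample an arm with probability proportional to exp(q / temperature)."""
    rng = rng or random.Random(0)
    weights = [math.exp(q / temperature) for q in q_values]
    r, acc = rng.random() * sum(weights), 0.0
    for arm, w in enumerate(weights):
        acc += w
        if r <= acc:
            return arm
    return len(q_values) - 1

# With a low temperature, selection is nearly greedy on the best arm.
q = [0.1, 0.9, 0.5]
picks = [boltzmann_pull(q, temperature=0.05, rng=random.Random(i))
         for i in range(200)]
print(picks.count(1) / len(picks))
```

The regret pathology analyzed by Cesa-Bianchi et al. comes from how this temperature is scheduled over time, which is exactly what the proposed modification changes.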
A Time Localization System in Smart Home Using Hierarchical Structure and Dynamic Frequency
Both GPS- and WiFi-based localization have been explored in recent years, yet most research focuses on localization inside the home without environmental context. Moreover, the area near the home or workplace is complex and has received little attention in smart home or IoT research. Therefore, after exploring realistic routes in and out of buildings, we built a time localization system (TLS) based on off-the-shelf smartphones with WiFi identification. TLS can identify the received signal strength indication (RSSI) of the home and construct a radio map of a user's time route without a site survey. Designed to serve smart devices in the home, TLS uses time intervals as the distance between positions and as variables of the WiFi environment to mark time points. Experimental results with real users show that TLS, as a service system for timeline localization, achieves a median accuracy of 70 seconds and is more robust than a nearest-neighbor localization approach.
1
0
0
0
0
0
Vision-based Real Estate Price Estimation
Since the advent of online real estate database companies like Zillow, Trulia and Redfin, the problem of automatic estimation of market values for houses has received considerable attention. Several real estate websites provide such estimates using a proprietary formula. Although these estimates are often close to the actual sale prices, in some cases they are highly inaccurate. One of the key factors that affects the value of a house is its interior and exterior appearance, which is not considered in calculating automatic value estimates. In this paper, we evaluate the impact of visual characteristics of a house on its market value. Using deep convolutional neural networks on a large dataset of photos of home interiors and exteriors, we develop a method for estimating the luxury level of real estate photos. We also develop a novel framework for automated value assessment using the above photos in addition to home characteristics including size, offered price and number of bedrooms. Finally, by applying our proposed method for price estimation to a new dataset of real estate photos and metadata, we show that it outperforms Zillow's estimates.
1
0
0
0
0
0
Deep Rewiring: Training very sparse deep networks
Neuromorphic hardware tends to pose limits on the connectivity of deep networks that one can run on it. But generic hardware and software implementations of deep learning also run more efficiently for sparse networks. Several methods exist for pruning the connections of a neural network after it has been trained without connectivity constraints. We present an algorithm, DEEP R, that enables us to directly train a sparsely connected neural network. DEEP R automatically rewires the network during supervised training so that connections are placed where they are most needed for the task, while the total number of connections remains strictly bounded at all times. We demonstrate that DEEP R can be used to train very sparse feedforward and recurrent neural networks on standard benchmark tasks with just a minor loss in performance. DEEP R is based on a rigorous theoretical foundation that views rewiring as stochastic sampling of network configurations from a posterior.
1
0
0
1
0
0
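A much-simplified caricature of rewiring under a fixed connection budget (prune the weakest active weights, regrow the same number at random inactive positions) can be sketched as follows. This is not DEEP R itself, which samples configurations from a posterior during SGD; the sizes and regrowth scale are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
mask = rng.random((8, 8)) < 0.25  # ~25% connectivity budget
W = W * mask

def rewire(W, mask, n_swap, rng):
    """Prune the n_swap weakest active connections, regrow n_swap random
    inactive ones, keeping the total number of connections fixed."""
    active = np.argwhere(mask)
    order = np.argsort(np.abs(W[mask]))  # same C-order as argwhere
    for idx in active[order[:n_swap]]:
        mask[tuple(idx)] = False
        W[tuple(idx)] = 0.0
    inactive = np.argwhere(~mask)
    chosen = rng.choice(len(inactive), size=n_swap, replace=False)
    for idx in inactive[chosen]:
        mask[tuple(idx)] = True
        W[tuple(idx)] = rng.normal(scale=0.01)  # small re-initialization
    return W, mask

n_before = int(mask.sum())
W, mask = rewire(W, mask, n_swap=3, rng=rng)
print(int(mask.sum()) == n_before)  # the connection count is preserved
```

Keeping the number of active connections invariant across rewiring steps is the property that makes such training compatible with hard connectivity limits of neuromorphic hardware.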
Analyzing and Disentangling Interleaved Interrupt-driven IoT Programs
In the Internet of Things (IoT) community, the Wireless Sensor Network (WSN) is a key technique for enabling ubiquitous sensing of environments and providing reliable services to applications. WSN programs, typically interrupt-driven, implement their functionality via the collaboration of Interrupt Procedure Instances (IPIs, namely executions of interrupt processing logic). However, due to the complicated concurrency model of WSN programs, the IPIs are intricately interleaved and program behaviours are hard to predict from the source code. Thus, to improve the software quality of WSN programs, it is important to disentangle the interleaved executions and develop various IPI-based program analysis techniques, both offline and online. As the common foundation of those techniques, a generic, efficient and real-time algorithm to identify IPIs is urgently needed, yet the existing instance-identification approach cannot satisfy these requirements. In this paper, we first formally define the concept of an IPI. Next, we propose a generic IPI-identification algorithm and prove its correctness, real-time behavior and efficiency. We also conduct comparison experiments to illustrate that our algorithm is more efficient than the existing one in terms of both time and space. As the theoretical analyses and empirical studies show, our algorithm provides the groundwork for IPI-based analyses of WSN programs in IoT environments.
1
0
0
0
0
0
Absence of replica symmetry breaking in the transverse and longitudinal random field Ising model
It is proved that replica symmetry is not broken in the transverse and longitudinal random field Ising model. In this model, the variance of spin overlap of any component vanishes in any dimension almost everywhere in the coupling constant space in the infinite volume limit. The weak Fortuin-Kasteleyn-Ginibre property in this model and the Ghirlanda-Guerra identities in artificial models in a path integral representation based on the Lie-Trotter-Suzuki formula enable us to extend Chatterjee's proof for the random field Ising model to the quantum model.
0
1
0
0
0
0
Specialization of Generic Array Accesses After Inlining
We have implemented an optimization that specializes type-generic array accesses after inlining of polymorphic functions in the native-code OCaml compiler. Polymorphic array operations (read and write) in OCaml require runtime type dispatch because of ad hoc memory representations of integer and float arrays. This dispatch cannot be removed even after monomorphization by inlining because the intermediate language is mostly untyped. We therefore extended it with explicit type application in the style of System F (while keeping implicit type abstraction by means of unique identifiers for type variables). Our optimization has achieved up to 21% speed-up of numerical programs.
1
0
0
0
0
0
Exact tensor completion with sum-of-squares
We obtain the first polynomial-time algorithm for exact tensor completion that improves over the bound implied by reduction to matrix completion. The algorithm recovers an unknown 3-tensor with $r$ incoherent, orthogonal components in $\mathbb R^n$ from $r\cdot \tilde O(n^{1.5})$ randomly observed entries of the tensor. This bound improves over the previous best one of $r\cdot \tilde O(n^{2})$ by reduction to exact matrix completion. Our bound also matches the best known results for the easier problem of approximate tensor completion (Barak & Moitra, 2015). Our algorithm and analysis extends seminal results for exact matrix completion (Candes & Recht, 2009) to the tensor setting via the sum-of-squares method. The main technical challenge is to show that a small number of randomly chosen monomials are enough to construct a degree-3 polynomial with precisely planted orthogonal global optima over the sphere and that this fact can be certified within the sum-of-squares proof system.
1
0
0
1
0
0
Outlier Cluster Formation in Spectral Clustering
Outlier detection and cluster number estimation are important issues for clustering real data. This paper focuses on spectral clustering, a time-tested clustering method, and reveals its important properties related to outliers. The highlights of this paper are the following two mathematical observations: first, spectral clustering's intrinsic property of outlier cluster formation, and second, the singularity of an outlier cluster with a valid cluster number. Based on these observations, we designed a function that evaluates clustering and outlier detection results. In experiments, we prepared two scenarios: face clustering in a photo album and person re-identification in a camera network. We confirmed that the proposed method detects outliers and estimates the number of clusters properly in both problems. Our method outperforms state-of-the-art methods in both the 128-dimensional sparse space for face clustering and the 4,096-dimensional non-sparse space for person re-identification.
1
0
0
0
0
0
Stored Electromagnetic Field Energies in General Materials
The most general expressions of the stored energies for time-harmonic electromagnetic fields are derived from the time-domain Poynting theorem, and are valuable in characterizing the energy storage and transport properties of complex media. A new energy conservation law for time-harmonic electromagnetic fields, which involves the derived general expressions of the stored energies, is introduced. In contrast to the well-established Poynting theorem for time-harmonic fields, the real part of the new energy conservation law gives an equation for the sum of the stored electric and magnetic field energies; the imaginary part involves an equation related to the difference between the dissipated electric and magnetic field energies. In a lossless isotropic and homogeneous medium, the new energy conservation law has a clear physical implication: the stored electromagnetic field energy of a radiating system enclosed by a surface is equal to the total field energy inside the surface minus the field energy flowing out of the surface.
0
1
0
0
0
0
From Relational Data to Graphs: Inferring Significant Links using Generalized Hypergeometric Ensembles
The inference of network topologies from relational data is an important problem in data analysis. Exemplary applications include the reconstruction of social ties from data on human interactions, the inference of gene co-expression networks from DNA microarray data, or the learning of semantic relationships based on co-occurrences of words in documents. Solving these problems requires techniques to infer significant links in noisy relational data. In this short paper, we propose a new statistical modeling framework to address this challenge. It builds on generalized hypergeometric ensembles, a class of generative stochastic models that give rise to analytically tractable probability spaces of directed, multi-edge graphs. We show how this framework can be used to assess the significance of links in noisy relational data. We illustrate our method in two data sets capturing spatio-temporal proximity relations between actors in a social system. The results show that our analytical framework provides a new approach to infer significant links from relational data, with interesting perspectives for the mining of data on social systems.
1
1
0
1
0
0
A Relaxed Kačanov Iteration for the $p$-Poisson Problem
In this paper, we introduce an iterative linearization scheme that allows us to approximate the weak solution of the $p$-Poisson problem \begin{align*} -\operatorname{div}(|\nabla u|^{p-2}\nabla u) &= f\quad\text{in }\Omega, \\ u &= 0\quad\text{on }\partial\Omega \end{align*} for $1 < p \leq 2$. The algorithm can be interpreted as a relaxed Kačanov iteration. We prove that the algorithm converges at least with an algebraic rate.
0
0
1
0
0
0
Explosive Percolation on Directed Networks Due to Monotonic Flow of Activity
An important class of real-world networks has directed edges and, in addition, some rank ordering on the nodes, for instance the "popularity" of users in online social networks. Yet nearly all research related to explosive percolation has been restricted to undirected networks. Furthermore, information on such rank-ordered networks typically flows from higher-ranked to lower-ranked individuals, as in follower relations, replies and retweets on Twitter. Here we introduce a simple percolation process on an ordered, directed network where edges are added monotonically with respect to the rank ordering. We show with a numerical approach that the emergence of a dominant strongly connected component appears to be discontinuous. Large-scale connectivity occurs at very high density compared with most percolation processes, and this holds not just for the strongly connected component structure but for the weakly connected component structure as well. We present analysis with branching processes which explains this unusual behavior and gives basic intuition for the underlying mechanisms. We also show that before the emergence of a dominant strongly connected component, multiple giant strongly connected components may exist simultaneously. By adding a competitive percolation rule with a small bias to link users of similar rank, we show that this leads to the formation of two distinct components, one of high-ranked users and one of low-ranked users, with little flow between the two.
1
1
0
0
0
0
Experimental observation of self-excited co-rotating multiple vortices in a dusty plasma with inhomogeneous plasma background
We report an experimental observation of multiple co-rotating vortices in an extended dust column in the background of a non-uniform diffused plasma. An inductively coupled RF discharge is initiated in a background of argon gas in the source region and is later found to diffuse into the main experimental chamber. A secondary DC glow discharge plasma is produced to introduce the dust particles into the plasma. These micron-sized polydisperse dust particles get charged in the plasma environment, are transported by the ambipolar electric field of the diffused plasma, and are found to be confined in a potential well, where the resultant electric field of the diffused plasma (ambipolar E-field) and of the glass wall charging (sheath E-field) holds the micron-sized particles against gravity. Multiple co-rotating (anti-clockwise) dust vortices are observed in the dust cloud for a particular discharge condition. A transition from multiple vortices to a single dust vortex is observed when the input RF power is lowered. The occurrence of these vortices is explained on the basis of the charge gradient of the dust particles, which is orthogonal to the ion drag force. The charge gradient is a consequence of the plasma inhomogeneity along the dust cloud length. The detailed nature of and reason for the multiple vortices are still under investigation through further experiments; however, a preliminary qualitative understanding is discussed based on the characteristic scale length of a dust vortex. There is a characteristic size of the vortex in the dusty plasma, so that multiple vortices can form in an extended dusty plasma with an inhomogeneous plasma background. The experimental results on the vortex motion of particles are compared with a theoretical model and show some agreement.
0
1
0
0
0
0
Analysis of Computational Science Papers from ICCS 2001-2016 using Topic Modeling and Graph Theory
This paper presents results of topic modeling and network models of topics using the International Conference on Computational Science (ICCS) corpus, which contains domain-specific (computational science) papers over sixteen years (a total of 5695 papers). We discuss the topical structure of ICCS, how these topics evolve over time in response to the topicality of various problems, technologies and methods, and how all these topics relate to one another. This analysis illustrates multidisciplinary research and collaborations among scientific communities, by constructing static and dynamic networks from the topic modeling results and the authors' keywords. The results of this study give insights into the past and future trends of core discussion topics in computational science. We used the Non-negative Matrix Factorization topic modeling algorithm to discover topics, and labeled and grouped the results hierarchically.
1
0
0
0
0
0
A maximal Boolean sublattice that is not the range of a Banaschewski function
We construct a countable bounded sublattice of the lattice of all subspaces of a vector space with two non-isomorphic maximal Boolean sublattices. We represent one of them as the range of a Banaschewski function and we prove that this is not the case for the other. Hereby we solve a problem of F. Wehrung.
0
0
1
0
0
0
Bayesian inference for stationary data on finite state spaces
In this work the issue of Bayesian inference for stationary data is addressed. Therefore, a parametrization of a statistically suitable subspace of the shift-ergodic probability measures on a Cartesian product of some finite state space is given using an inverse limit construction. Moreover, an explicit model for the prior is given by taking into account an additional step in the usual stepwise sampling scheme of the data. An update to the posterior is defined by exploiting this augmented sampling scheme; its model step is updated using a measurement of the empirical distances between the model classes.
0
0
1
1
0
0
Why is solar cycle 24 an inefficient producer of high-energy particle events?
The aim of this study is to investigate the reason for the low productivity of high-energy SEPs in the present solar cycle. We employ scaling laws derived from diffusive shock acceleration theory and simulation studies including proton-generated upstream Alfvén waves to find out how the changes observed in the long-term average properties of the erupting and ambient coronal and/or solar wind plasma would affect the ability of shocks to accelerate particles to the highest energies. Provided that self-generated turbulence dominates particle transport around coronal shocks, it is found that the most crucial factors controlling the diffusive shock acceleration process are the number density of seed particles and the plasma density of the ambient medium. Assuming that suprathermal populations provide a fraction of the particles injected into shock acceleration in the corona, we show that the lack of the most energetic particle events as well as the lack of low charge-to-mass-ratio ion species in the present cycle can be understood as a result of the reduction of the average coronal plasma and suprathermal densities in the present cycle relative to the previous one.
0
1
0
0
0
0
E2M2: Energy Efficient Mobility Management in Dense Small Cells with Mobile Edge Computing
Merging mobile edge computing with the dense deployment of small cell base stations promises enormous benefits such as real-proximity, ultra-low-latency access to cloud functionalities. However, the envisioned integration creates many new challenges, and one of the most significant is mobility management, which is becoming a key bottleneck to the overall system performance. Simply applying existing solutions leads to poor performance due to the highly overlapped coverage areas of multiple base stations in the proximity of the user and the co-provisioning of radio access and computing services. In this paper, we develop a novel user-centric mobility management scheme, leveraging Lyapunov optimization and multi-armed bandit theories, in order to maximize the edge computation performance for the user while keeping the user's communication energy consumption below a constraint. The proposed scheme effectively handles the uncertainties present at multiple levels in the system and provides both short-term and long-term performance guarantees. Simulation results show that our proposed scheme can significantly improve the computation performance (compared to the state of the art) while satisfying the communication energy constraint.
1
0
0
0
0
0
Big Data in HEP: A comprehensive use case study
Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems, collectively called Big Data technologies, have emerged to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches and promise a fresh look at the analysis of very large datasets, and could potentially reduce the time-to-physics with increased interactivity. In this talk, we present an active LHC Run 2 analysis, searching for dark matter with the CMS detector, as a testbed for Big Data technologies. We directly compare the traditional NTuple-based analysis with an equivalent analysis using Apache Spark on the Hadoop ecosystem and beyond. In both cases, we start the analysis with the official experiment data formats and produce publication-quality physics plots. We will discuss the advantages and disadvantages of each approach and give an outlook on further studies needed.
1
0
0
0
0
0
Criticality & Deep Learning II: Momentum Renormalisation Group
Guided by critical systems found in nature, we develop a novel mechanism consisting of inhomogeneous polynomial regularisation via which we can induce scale invariance in deep learning systems. Technically, we map our deep learning (DL) setup to a genuine field theory, on which we act with the Renormalisation Group (RG) in momentum space and produce the flow equations of the couplings; these are translated to constraints and consequently interpreted as "critical regularisation" conditions in the optimiser. The resulting equations hence prove to be sufficient conditions for, and serve as an elegant and simple mechanism to induce, scale invariance in any deep learning setup.
1
0
0
0
0
0
The Price of Diversity in Assignment Problems
We introduce and analyze an extension to the matching problem on a weighted bipartite graph: Assignment with Type Constraints. The two parts of the graph are partitioned into subsets called types and blocks; we seek a matching with the largest sum of weights under the constraint that there is a pre-specified cap on the number of vertices matched in every type-block pair. Our primary motivation stems from the public housing program of Singapore, accounting for over 70% of its residential real estate. To promote ethnic diversity within its housing projects, Singapore imposes ethnicity quotas: each new housing development comprises blocks of flats and each ethnicity-based group in the population must not own more than a certain percentage of flats in a block. Other domains using similar hard capacity constraints include matching prospective students to schools or medical residents to hospitals. Limiting agents' choices for ensuring diversity in this manner naturally entails some welfare loss. One of our goals is to study the trade-off between diversity and social welfare in such settings. We first show that, while the classic assignment program is polynomial-time computable, adding diversity constraints makes it computationally intractable; however, we identify a $\tfrac{1}{2}$-approximation algorithm, as well as reasonable assumptions on the weights that permit poly-time algorithms. Next, we provide two upper bounds on the price of diversity -- a measure of the loss in welfare incurred by imposing diversity constraints -- as functions of natural problem parameters. We conclude the paper with simulations based on publicly available data from two diversity-constrained allocation problems -- Singapore Public Housing and Chicago School Choice -- which shed light on how the constrained maximization as well as lottery-based variants perform in practice.
1
0
0
0
0
0
X-ray spectral analyses of AGNs from the 7Ms Chandra Deep Field-South survey: the distribution, variability, and evolution of AGN's obscuration
We present a detailed spectral analysis of the brightest Active Galactic Nuclei (AGN) identified in the 7Ms Chandra Deep Field South (CDF-S) survey over a time span of 16 years. Using a model of an intrinsically absorbed power-law plus reflection, with possible soft excess and narrow Fe K$\alpha$ line, we perform a systematic X-ray spectral analysis, both on the total 7Ms exposure and in four different periods with lengths of 2-21 months. With this approach, we not only present the power-law slopes, column densities $N_H$, observed fluxes, and absorption-corrected 2-10~keV luminosities $L_X$ for our sample of AGNs, but also identify significant spectral variabilities among them on time scales of years. We find that the $N_H$ variabilities can be ascribed to two different types of mechanisms, either flux-driven or flux-independent. We also find that the correlation between the narrow Fe line EW and $N_H$ can be well explained by the continuum suppression with increasing $N_H$. Accounting for the sample incompleteness and bias, we measure the intrinsic distribution of $N_H$ for the CDF-S AGN population and present re-selected subsamples which are complete with respect to $N_H$. The $N_H$-complete subsamples enable us to decouple the dependences of $N_H$ on $L_X$ and on redshift. Combining our data with that from C-COSMOS, we confirm the anti-correlation between the average $N_H$ and $L_X$ of AGN, and find a significant increase of the AGN obscured fraction with redshift at any luminosity. The obscured fraction can be described as $f_{obscured}\thickapprox 0.42\ (1+z)^{0.60}$.
0
1
0
0
0
0
Muon spin relaxation and inelastic neutron scattering investigations of all-in/all-out antiferromagnet Nd2Hf2O7
Nd2Hf2O7, belonging to the family of geometrically frustrated cubic rare-earth pyrochlore oxides, was recently identified to order antiferromagnetically below T_N = 0.55 K with an all-in/all-out arrangement of Nd3+ moments, however with a much reduced ordered-state moment. Herein we investigate the spin dynamics and crystal field states of Nd2Hf2O7 using muon spin relaxation (muSR) and inelastic neutron scattering (INS) measurements. Our muSR study confirms the long-range magnetic ordering and shows evidence for coexisting persistent dynamic spin fluctuations deep inside the ordered state down to 42 mK. The INS data show the crystal electric field (CEF) excitations due to transitions both within the ground state multiplet and to the first excited state multiplet. The INS data are analyzed by a model based on the CEF, and the crystal field states are determined. Strong Ising-type anisotropy is inferred from the ground state wavefunction. The CEF parameters indicate the CEF-split Kramers doublet ground state of Nd3+ to be consistent with a dipolar-octupolar character.
0
1
0
0
0
0
Modeling and Soft-fault Diagnosis of Underwater Thrusters with Recurrent Neural Networks
Noncritical soft faults and model deviations are a challenge for Fault Detection and Diagnosis (FDD) of resident Autonomous Underwater Vehicles (AUVs). Such systems may suffer faster performance degradation due to permanent exposure to the marine environment, and constant monitoring of component conditions is required to ensure their reliability. This work presents an evaluation of Recurrent Neural Networks (RNNs) for a data-driven fault detection and diagnosis scheme for underwater thrusters with empirical data. The nominal behavior of the thruster was modeled using the measured control input, voltage, rotational speed, and current signals. We evaluated the performance of fault classification using all the measured signals compared to using the computed residuals from the nominal model as features.
1
0
0
1
0
0
Epidemic Spreading on Activity-Driven Networks with Attractiveness
We study SIS epidemic spreading processes unfolding on a recent generalisation of the activity-driven modelling framework. In this model of time-varying networks each node is described by two variables: activity and attractiveness. The first describes the propensity to form connections; the second defines the propensity to attract them. We derive analytically the epidemic threshold, considering the timescales driving the evolution of contacts and of the contagion as comparable. The solutions are general and hold for any joint distribution of activity and attractiveness. The theoretical picture is confirmed via large-scale numerical simulations performed considering heterogeneous distributions and different correlations between the two variables. We find that heterogeneous distributions of attractiveness alter the contagion process. In particular, in the case of uncorrelated or positive correlations between the two variables, heterogeneous attractiveness facilitates the spreading. On the contrary, negative correlations between activity and attractiveness hamper the spreading. The results presented contribute to the understanding of the dynamical properties of time-varying networks and their effects on contagion phenomena unfolding on their fabric.
1
1
0
0
0
0
A generalized family of anisotropic compact object in general relativity
We present a model for an anisotropic compact star under Einstein's general theory of relativity. In the study a 4-dimensional spacetime has been considered, embedded into a 5-dimensional flat metric so that the spherically symmetric metric is of class 1 when the condition $e^{\lambda}=\left(\,1+C\,e^{\nu} \,{\nu'}^2\,\right)$ is satisfied ($\lambda$ and $\nu$ being the metric potentials and $C$ a constant). A set of solutions of the field equations is found, depending on the index $n$ involved in the physical parameters. The interior solutions have been matched smoothly at the boundary of the spherical distribution to the exterior Schwarzschild solution, which necessarily provides the values of the unknown constants. We have chosen the values of $n$ as $n=2$ and $n$=10 to 20000, for which interesting and physically viable results can be obtained. The numerical values of the parameters and arbitrary constants for different compact stars are assumed in the graphical plots and tables as follows: (i) LMC X-4: $a=0.0075$, $b=0.000821$ for $n=2$ and $a=0.0075$, $nb=0.00164$ for $n\ge 10$; (ii) SMC X-1: $a=0.00681$, $b=0.00078$ for $n=2$, and $a=0.00681$, $nb=0.00159$ for $n \ge 10$. The investigation of the physical features of the model covers several astrophysical issues: (i) the regularity behavior of the star at the centre, (ii) the well-behaved condition for the velocity of sound, (iii) energy conditions, (iv) stability of the system via three techniques, namely the adiabatic index, Herrera's cracking concept and the TOV equation, (v) total mass, effective mass and compactification factor, and (vi) surface redshift. Specific numerical values for the compact star candidates LMC X-4 and SMC X-1 are calculated for the central and surface densities as well as the central pressure, to compare the model values with actual observational data.
0
1
0
0
0
0
Automated capture and delivery of assistive task guidance with an eyewear computer: The GlaciAR system
In this paper we describe and evaluate a mixed reality system that aims to augment users in task guidance applications by combining automated and unsupervised information collection with minimally invasive video guides. The result is a self-contained system that we call GlaciAR (Glass-enabled Contextual Interactions for Augmented Reality), that operates by extracting contextual interactions from observing users performing actions. GlaciAR is able to i) automatically determine moments of relevance based on a head motion attention model, ii) automatically produce video guidance information, iii) trigger these video guides based on an object detection method, iv) learn without supervision from observing multiple users and v) operate fully on-board a current eyewear computer (Google Glass). We describe the components of GlaciAR together with evaluations on how users are able to use the system to achieve three tasks. We see this work as a first step toward the development of systems that aim to scale up the notoriously difficult authoring problem in guidance systems and where people's natural abilities are enhanced via minimally invasive visual guidance.
1
0
0
0
0
0
V2X Meets NOMA: Non-Orthogonal Multiple Access for 5G Enabled Vehicular Networks
Benefiting from its widely deployed infrastructure, the LTE network has recently been considered a promising candidate to support vehicle-to-everything (V2X) services. However, with a massive number of devices accessing the V2X network in the future, the conventional OFDM-based LTE network faces congestion issues due to the low efficiency of orthogonal access, resulting in significant access delay and posing a great challenge especially to safety-critical applications. The non-orthogonal multiple access (NOMA) technique has been well recognized as an effective solution for future 5G cellular networks to provide broadband communications and massive connectivity. In this article, we investigate the applicability of NOMA to supporting cellular V2X services with low latency and high reliability. Starting with a basic V2X unicast system, a novel NOMA-based scheme is proposed to tackle the technical hurdles in designing highly spectrally efficient scheduling and resource allocation schemes in the ultra-dense topology. We then extend it to a more general V2X broadcasting system. Other NOMA-based extended V2X applications and some open issues are also discussed.
1
0
0
0
0
0
Theoretical Foundation of Co-Training and Disagreement-Based Algorithms
Disagreement-based approaches generate multiple classifiers and exploit the disagreement among them with unlabeled data to improve learning performance. Co-training is a representative paradigm of this kind, which trains two classifiers separately on two sufficient and redundant views; for applications where there is only one view, several successful variants of co-training with two different classifiers on single-view data instead of two views have been proposed. For these disagreement-based approaches, several important issues remain unsolved. In this article we present theoretical analyses to address these issues, which provides a theoretical foundation for co-training and disagreement-based approaches.
1
0
0
1
0
0
Schrödinger's Man
What if someone built a "box" that applies quantum superposition not just to quantum bits in the microscopic but also to macroscopic everyday "objects", such as Schrödinger's cat or a human being? If that were possible, and if the different "copies" of a man could exploit quantum interference to synchronize and collapse into their preferred state, then one (or they?) could in a sense choose their future, win the lottery, break codes and other security devices, and become king of the world, or actually of the many-worlds. We set up the plot-line of a new episode of Black Mirror to reflect on what might await us if one were able to build such a technology.
1
0
0
0
0
0
High-Resolution Multispectral Dataset for Semantic Segmentation
Unmanned aircraft have decreased the cost required to collect remote sensing imagery, which has enabled researchers to collect high-spatial resolution data from multiple sensor modalities more frequently and easily. The increase in data will push the need for semantic segmentation frameworks that are able to classify non-RGB imagery, but this type of algorithmic development requires an increase in publicly available benchmark datasets with class labels. In this paper, we introduce a high-resolution multispectral dataset with image labels. This new benchmark dataset has been pre-split into training/testing folds in order to standardize evaluation and continue to push state-of-the-art classification frameworks for non-RGB imagery.
1
0
0
0
0
0
A Methodology for the Selection of Requirement Elicitation Techniques
In this paper, we present an approach to selecting a subset of requirement elicitation techniques for an optimal result in the requirement elicitation process. Our approach consists of three steps. First, we identify various attributes in three important dimensions, namely the project, the people, and the process of software development, that can influence the outcome of an elicitation process. Second, we construct three P matrices (3PM), one for each dimension, that show the relation between the elicitation techniques and the three dimensions of a software project. Third, we provide mapping criteria and use them in the selection of a subset of elicitation techniques. We demonstrate the applicability of the proposed approach using case studies to evaluate it and to provide the contextual knowledge needed for selecting requirement elicitation techniques.
1
0
0
0
0
0
Nonclassical Light Generation from III-V and Group-IV Solid-State Cavity Quantum Systems
In this chapter, we present the state-of-the-art in the generation of nonclassical states of light using semiconductor cavity quantum electrodynamics (QED) platforms. Our focus is on the photon blockade effects that enable the generation of indistinguishable photon streams with high purity and efficiency. Starting with the leading platform of InGaAs quantum dots in optical nanocavities, we review the physics of a single quantum emitter strongly coupled to a cavity. Furthermore, we propose a complete model for photon blockade and tunneling in III-V quantum dot cavity QED systems. Turning toward quantum emitters with small inhomogeneous broadening, we propose a direction for novel experiments for nonclassical light generation based on group-IV color-center systems. We present a model of a multi-emitter cavity QED platform, which features richer dressed-states ladder structures, and show how it can offer opportunities for studying new regimes of high-quality photon blockade.
0
1
0
0
0
0
INtERAcT: Interaction Network Inference from Vector Representations of Words
In recent years, the number of biomedical publications has grown steadily, resulting in a rich source of untapped new knowledge. However, most biomedical facts are not readily available; they are buried in the form of unstructured text, and hence their exploitation requires the time-consuming manual curation of published articles. Here we present INtERAcT, a novel approach to extract protein-protein interactions from a corpus of biomedical articles related to a broad range of scientific domains in a completely unsupervised way. INtERAcT exploits vector representations of words, computed on a corpus of domain-specific knowledge, and implements a new metric that estimates an interaction score between two molecules in the space where the corresponding words are embedded. We demonstrate the power of INtERAcT by reconstructing the molecular pathways associated with 10 different cancer types using a corpus of disease-specific articles for each cancer type. We evaluate INtERAcT using the STRING database as a benchmark, and show that our metric outperforms currently adopted approaches for similarity computation at the task of identifying known molecular interactions in all studied cancer types. Furthermore, our approach does not require text annotation, manual curation or the definition of semantic rules based on expert knowledge, and hence it can be easily and efficiently applied to different scientific domains. Our findings suggest that INtERAcT may increase our capability to summarize the understanding of a specific disease using the published literature in an automated and completely unsupervised fashion.
0
0
0
0
1
0
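The kind of embedding-space similarity that INtERAcT improves upon — a plain cosine score between the word vectors of two protein names — can be sketched as below. The embeddings here are random stand-ins for word2vec vectors trained on a disease-specific corpus, and the gene names are only illustrative; the paper's own metric is a different, more elaborate construction:

```python
import numpy as np

def cosine_score(u, v):
    """Baseline interaction score: cosine similarity of word embeddings."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embeddings standing in for word2vec vectors trained on a
# disease-specific corpus (names and vectors are illustrative).
rng = np.random.default_rng(0)
emb = {name: rng.normal(size=50) for name in ["TP53", "MDM2", "ACTB"]}

score = cosine_score(emb["TP53"], emb["MDM2"])
assert -1.0 <= score <= 1.0
print(round(score, 3))
```

In an unsupervised pipeline of this kind, pairs scoring above a threshold would be proposed as candidate interactions and benchmarked against a reference such as STRING.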
Distributed Deep Transfer Learning by Basic Probability Assignment
Transfer learning is a popular practice in deep neural networks, but fine-tuning a large number of parameters is a hard task due to the complex wiring of neurons between splitting layers and the imbalanced distributions of data in the pretrained and transferred domains. The reconstruction of the original wiring for the target domain is a heavy burden due to the size of the interconnections across neurons. We propose a distributed scheme that tunes the convolutional filters individually while backpropagating them jointly by means of basic probability assignment. Some of the most recent advances in evidence theory show that, in a vast variety of imbalanced regimes, optimizing proper objective functions derived from contingency matrices prevents biases towards high-prior class distributions. Therefore, the original filters get gradually transferred based on their individual contributions to the overall performance on the target domain. This largely reduces the expected complexity of transfer learning while substantially improving precision. Our experiments on standard benchmarks and scenarios confirm the consistent improvement of our distributed deep transfer learning strategy.
1
0
0
1
0
0
Parameter Learning and Change Detection Using a Particle Filter With Accelerated Adaptation
This paper presents the construction of a particle filter, which incorporates elements inspired by genetic algorithms, in order to achieve accelerated adaptation of the estimated posterior distribution to changes in model parameters. Specifically, the filter is designed for the situation in online sequential filtering where incoming data no longer match the posterior inferred from data up to the current point in time. The examples considered encompass parameter regime shifts and stochastic volatility. The filter adapts to regime shifts extremely rapidly and delivers a clear heuristic for distinguishing between regime shifts and stochastic volatility, even though the model dynamics assumed by the filter exhibit neither of those features.
0
0
0
1
0
1
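The core mechanism described above — a bootstrap particle filter whose resampling (selection) step is followed by a genetic-style mutation so the particle cloud can migrate quickly after a regime shift — can be sketched on a toy mean-shift problem. This is a minimal illustration, not the paper's filter; the observation model, shift time and jitter scale are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Observations from y_t ~ N(theta, 1), with a regime shift in theta at t=50.
theta_true = np.concatenate([np.zeros(50), 5.0 * np.ones(50)])
y = theta_true + rng.normal(size=100)

n = 500
particles = rng.normal(0.0, 2.0, size=n)   # prior draws for theta
for yt in y:
    # Weight by likelihood of the new observation.
    logw = -0.5 * (yt - particles) ** 2
    w = np.exp(logw - logw.max()); w /= w.sum()
    # Resample (selection step)...
    particles = rng.choice(particles, size=n, p=w)
    # ...then jitter the survivors (a genetic-style mutation) so the cloud
    # can migrate quickly when the data no longer match the posterior.
    particles += rng.normal(0.0, 0.3, size=n)

est = particles.mean()
print(round(est, 2))   # should sit near the post-shift value of 5
```

A plain bootstrap filter without the mutation step would collapse onto the pre-shift value and adapt far more slowly, which is the degeneracy the accelerated-adaptation design targets.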
Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation
Enabling robots to autonomously navigate complex environments is essential for real-world deployment. Prior methods approach this problem by having the robot maintain an internal map of the world, and then use a localization and planning method to navigate through the internal map. However, these approaches often include a variety of assumptions, are computationally intensive, and do not learn from failures. In contrast, learning-based methods improve as the robot acts in the environment, but are difficult to deploy in the real-world due to their high sample complexity. To address the need to learn complex policies with few samples, we propose a generalized computation graph that subsumes value-based model-free methods and model-based methods, with specific instantiations interpolating between model-free and model-based. We then instantiate this graph to form a navigation model that learns from raw images and is sample efficient. Our simulated car experiments explore the design decisions of our navigation model, and show our approach outperforms single-step and $N$-step double Q-learning. We also evaluate our approach on a real-world RC car and show it can learn to navigate through a complex indoor environment with a few hours of fully autonomous, self-supervised training. Videos of the experiments and code can be found at github.com/gkahn13/gcg
1
0
0
0
0
0
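One concrete instantiation mentioned above, the N-step double Q-learning target, can be written out directly; the computation graph generalizes exactly this kind of backup. The numbers below are toy values and the function is a sketch under the standard double-Q formulation, not code from the paper:

```python
import numpy as np

def n_step_double_q_target(rewards, q_online, q_target, gamma=0.99):
    """N-step double Q-learning backup: sum discounted rewards over the
    horizon, then bootstrap with the target network evaluated at the
    online network's argmax action (the 'double' part)."""
    n = len(rewards)
    ret = sum(gamma ** k * r for k, r in enumerate(rewards))
    a_star = int(np.argmax(q_online))           # action chosen by online net
    return ret + gamma ** n * q_target[a_star]  # valued by target net

# Toy numbers: 3-step horizon, 2 actions at the bootstrap state.
target = n_step_double_q_target(
    rewards=[1.0, 0.0, 1.0],
    q_online=np.array([0.2, 0.8]),   # online net prefers action 1
    q_target=np.array([0.5, 0.4]),   # target net's value for action 1
    gamma=0.5,
)
print(round(target, 6))   # 1 + 0.5*0 + 0.25*1 + 0.125*0.4 = 1.3
```

Setting N to the episode horizon recovers a model-free Monte Carlo return, while N=1 gives standard one-step double Q-learning; intermediate N interpolates between the two, which is the spirit of the generalized graph.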
Magnetization reversal by superconducting current in $φ_0$ Josephson junctions
We study magnetization reversal in a $\varphi_0$ Josephson junction with direct coupling between the magnetic moment and the Josephson current. Our simulations of the magnetic moment dynamics show that full magnetization reversal can be realized by applying an electric current pulse. We propose different protocols for full magnetization reversal based on varying the junction and pulse parameters, in particular the electric current pulse amplitude, the magnetization damping and the spin-orbit interaction. We discuss experiments which can probe the magnetization reversal in $\varphi_0$-junctions.
0
1
0
0
0
0
Higher Order Accurate Space-Time Schemes for Computational Astrophysics -- Part I -- Finite Volume Methods
As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high-accuracy schemes, hence the need for a specialized review of higher order schemes for computational astrophysics. The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of traditional second order finite volume schemes, which are already familiar to most computational astrophysicists. DG schemes, on the other hand, evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger. Time-dependent astrophysical codes need to be accurate in space and time. This is realized with the help of SSP-RK (strong stability preserving Runge-Kutta) schemes and ADER (Arbitrary DERivative in space and time) schemes. The most popular approaches to SSP-RK and ADER schemes are also described. The style of this review is to assume that readers have a basic understanding of hyperbolic systems and one-dimensional Riemann solvers. Such an understanding can be acquired from a sequence of prepackaged lectures available from this http URL. We then build on this understanding to give the reader a practical introduction to the schemes described here. The emphasis is on computer-implementable ideas, not necessarily on the underlying theory, because it was felt that this would be most interesting to most computational astrophysicists.
0
1
0
0
0
0
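The SSP-RK time stepping mentioned above is compact enough to sketch directly. Below is the classical third-order SSP Runge-Kutta scheme in Shu-Osher form, exercised on a scalar toy ODE; in a real code the operator `L` would be the WENO or DG spatial discretization of the flux divergence:

```python
import numpy as np

def ssp_rk3_step(u, dt, L):
    """One step of the classical third-order SSP Runge-Kutta scheme
    (Shu-Osher form): a convex combination of forward-Euler stages,
    which is what makes it strong stability preserving."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

# Toy check on u' = -u (exact solution exp(-t)); here L is just -u.
u, dt = 1.0, 0.1
for _ in range(10):
    u = ssp_rk3_step(u, dt, lambda v: -v)
print(round(u, 6))   # close to exp(-1) ≈ 0.367879
```

Each stage is a forward-Euler update combined with nonnegative weights, so any stability property (e.g. total variation stability) that holds for forward Euler under a CFL restriction carries over to the full step.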
Building Robust Deep Neural Networks for Road Sign Detection
Deep neural networks are built with generalization beyond the training set in mind, using techniques such as regularization, early stopping and dropout, but they are rarely designed to be resilient to adversarial examples. As deep neural networks become more prevalent in mission-critical and real-time systems, miscreants have started to attack them by intentionally causing them to misclassify an object of one type as another. This can be catastrophic in scenarios where the classification produced by a deep neural network leads to a fatal decision by a machine. In this work, we used the GTSRB dataset to craft adversarial samples with the Fast Gradient Sign Method and the Jacobian Saliency Method, used those samples to attack another deep convolutional neural network, and made the attacked network more resilient to adversarial attacks through Defensive Distillation and Adversarial Training.
1
0
0
1
0
0
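The Fast Gradient Sign Method named above is simple enough to show in full on a toy model. This is a minimal sketch on a hand-built logistic classifier, not the paper's convolutional network; the weights and input are illustrative:

```python
import numpy as np

def fgsm(x, grad_wrt_x, eps):
    """Fast Gradient Sign Method: perturb the input in the direction of
    the sign of the loss gradient, bounded by eps in the L-infinity norm."""
    return x + eps * np.sign(grad_wrt_x)

# Toy logistic classifier p(y=1|x) = sigmoid(w.x); for cross-entropy loss
# with true label y=1, the gradient of the loss w.r.t. the input is (p-1)*w.
w = np.array([2.0, -1.0, 0.5])
x = np.array([0.1, 0.2, 0.3])
p = 1.0 / (1.0 + np.exp(-w @ x))
grad = (p - 1.0) * w

x_adv = fgsm(x, grad, eps=0.3)
p_adv = 1.0 / (1.0 + np.exp(-w @ x_adv))
print(p_adv < p)   # the perturbation lowers the true-class probability
```

Adversarial training then simply mixes such perturbed inputs, with their correct labels, back into the training set so the network learns to resist them.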
Statman's Hierarchy Theorem
In the Simply Typed $\lambda$-calculus Statman investigates the reducibility relation $\leq_{\beta\eta}$ between types: for $A,B \in \mathbb{T}^0$, types freely generated using $\rightarrow$ and a single ground type $0$, define $A \leq_{\beta\eta} B$ if there exists a $\lambda$-definable injection from the closed terms of type $A$ into those of type $B$. Unexpectedly, the induced partial order is the (linear) well-ordering (of order type) $\omega + 4$. In the proof a finer relation $\leq_{h}$ is used, where the above injection is required to be a Böhm transformation, and an (a posteriori) coarser relation $\leq_{h^+}$, requiring a finite family of Böhm transformations that is jointly injective. We present this result in a self-contained, syntactic, constructive and simplified manner. En route similar results for $\leq_h$ (order type $\omega + 5$) and $\leq_{h^+}$ (order type $8$) are obtained. Five of the equivalence classes of $\leq_{h^+}$ correspond to canonical term models of Statman, one to the trivial term model collapsing all elements of the same type, and one does not even form a model by the lack of closed terms of many types.
1
0
0
0
0
0
Anisotropic Dzyaloshinskii-Moriya Interaction in ultra-thin epitaxial Au/Co/W(110)
We have used Brillouin Light Scattering spectroscopy to independently determine the in-plane Magneto-Crystalline Anisotropy and the Dzyaloshinskii-Moriya Interaction (DMI) in out-of-plane magnetized Au/Co/W(110). We found that the DMI strength is 2-3 times larger along the bcc$[\bar{1}10]$ than along the bcc$[001]$ direction. We use analytical considerations to illustrate the relationship between the crystal symmetry of the stack and the anisotropy of microscopic DMI. Such an anisotropic DMI is the first step to realize isolated elliptical skyrmions or anti-skyrmions in thin film systems with $C_{2v}$ symmetry.
0
1
0
0
0
0
Modeling and Simulation of Robotic Finger Powered by Nylon Artificial Muscles - Equations with Simulink Model
This paper presents a detailed model of a three-link robotic finger actuated by nylon artificial muscles, together with a Simulink model that can be used for numerical study of the finger. The robotic hand prototype was demonstrated in a recent publication: Wu, L., Jung de Andrade, M., Saharan, L., Rome, R., Baughman, R., and Tadesse, Y., 2017, "Compact and Low-cost Humanoid Hand Powered by Nylon Artificial Muscles," Bioinspiration & Biomimetics, 12(2). The robotic hand is a 3D-printed, lightweight and compact hand actuated by silver-coated nylon muscles, often called twisted and coiled polymer (TCP) muscles. TCP muscles are thermal actuators that contract when heated, and they are attracting attention for applications in robotics. The purpose of this paper is to present the modeling equations, derived using the Euler-Lagrange approach, in a form suitable for implementation in a Simulink model.
1
0
0
0
0
0
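The flavor of the Euler-Lagrange model described above can be conveyed with a drastic one-link simplification: a single joint driven by a constant muscle torque against gravity and viscous damping, integrated with explicit Euler. All parameter values are illustrative assumptions, not the paper's identified three-link parameters, and the thermal TCP-muscle dynamics are replaced by a constant torque input:

```python
import numpy as np

# One-link simplification of the Euler-Lagrange finger model:
#   I * theta'' = tau_muscle - m*g*l_c*cos(theta) - b*theta'
# Parameters are illustrative, not the paper's identified values.
I, m, g, lc, b = 1e-4, 0.01, 9.81, 0.02, 1e-3
tau_muscle = 5e-3        # constant TCP-muscle torque (toy input)

theta, omega, dt = 0.0, 0.0, 1e-3
for _ in range(2000):    # 2 s of simulated motion, explicit Euler
    alpha = (tau_muscle - m * g * lc * np.cos(theta) - b * omega) / I
    omega += dt * alpha
    theta += dt * omega

print(theta > 0.0)       # the joint flexes under the muscle torque
```

A Simulink implementation would wire the same torque-balance equation into integrator blocks, with the full three-link model adding coupled inertia terms between the phalanges.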
Beyond Log-concavity: Provable Guarantees for Sampling Multi-modal Distributions using Simulated Tempering Langevin Monte Carlo
A key task in Bayesian statistics is sampling from distributions that are only specified up to a partition function (i.e., constant of proportionality). However, without any assumptions, sampling (even approximately) can be #P-hard, and few works have provided "beyond worst-case" guarantees for such settings. For log-concave distributions, classical results going back to Bakry and Émery (1985) show that natural continuous-time Markov chains called Langevin diffusions mix in polynomial time. The most salient feature of log-concavity violated in practice is uni-modality: commonly, the distributions we wish to sample from are multi-modal. In the presence of multiple deep and well-separated modes, Langevin diffusion suffers from torpid mixing. We address this problem by combining Langevin diffusion with simulated tempering. The result is a Markov chain that mixes more rapidly by transitioning between different temperatures of the distribution. We analyze this Markov chain for the canonical multi-modal distribution: a mixture of gaussians (of equal variance). The algorithm based on our Markov chain provably samples from distributions that are close to mixtures of gaussians, given access to the gradient of the log-pdf. For the analysis, we use a spectral decomposition theorem for graphs (Gharan and Trevisan, 2014) and a Markov chain decomposition technique (Madras and Randall, 2002).
1
0
0
1
0
0
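The combination described above — Langevin moves within a temperature level plus stochastic jumps along a temperature ladder — can be sketched on a 1D mixture of two Gaussians. This is a minimal single-chain sketch: the swap rule omits the partition-function (pseudo-prior) weights a full simulated-tempering chain would include, and the ladder, step size and seed are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def logp(x):
    """Log-density (up to a constant) of an equal mixture of N(-4,1), N(4,1)."""
    return np.logaddexp(-0.5 * (x + 4) ** 2, -0.5 * (x - 4) ** 2)

def grad_logp(x, h=1e-5):
    return (logp(x + h) - logp(x - h)) / (2 * h)

betas = [1.0, 0.3, 0.1]      # inverse-temperature ladder
x, k, step = -4.0, 0, 0.05   # state, current ladder level, Langevin step size
samples = []
for _ in range(20000):
    # Unadjusted Langevin move targeting p(x)^beta.
    x += step * betas[k] * grad_logp(x) + np.sqrt(2 * step) * rng.normal()
    # Simulated-tempering swap: propose a neighboring temperature.
    j = min(max(k + rng.choice([-1, 1]), 0), len(betas) - 1)
    if np.log(rng.uniform()) < (betas[j] - betas[k]) * logp(x):
        k = j
    if k == 0:               # keep only samples at the target temperature
        samples.append(x)

frac_right = np.mean(np.array(samples) > 0)
print(round(frac_right, 2))  # typically near 0.5: both modes are visited
```

Plain Langevin diffusion at beta = 1 would stay trapped in the starting mode near -4 for exponentially long; the excursions to flat, high-temperature levels are what let the chain cross between the deep, well-separated modes.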